diff --git a/spaces/1doemePnordwo/upscale/README.md b/spaces/1doemePnordwo/upscale/README.md deleted file mode 100644 index f20bf57b8796bd879e46f2126d7be6a057aaa2c8..0000000000000000000000000000000000000000 --- a/spaces/1doemePnordwo/upscale/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: UPSCALE -emoji: 📷 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: cvsys/upscale ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CRACK X-Force 2019.zip.md b/spaces/1gistliPinn/ChatGPT4/Examples/CRACK X-Force 2019.zip.md deleted file mode 100644 index 6283ce1672ba583474083542bba398b3a02d6c07..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/CRACK X-Force 2019.zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

CRACK X-Force 2019.zip


Download Ziphttps://imgfil.com/2uy17d



-
-Listen to Xforce Keygen PowerMill 2019 64 Bit Download and 164 more episodes by FBX 2018 32bit Activation Code Zip. File, free! ... 2009 64 ... 4d29de3e1b
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android TV 12 ISO Everything You Need to Know Before You Download.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android TV 12 ISO Everything You Need to Know Before You Download.md deleted file mode 100644 index 237e4fdf147f7db2544be38e58491b73782f72b5..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android TV 12 ISO Everything You Need to Know Before You Download.md +++ /dev/null @@ -1,147 +0,0 @@ - -

How to Download Android TV 12 ISO and Why You Should Try It

-

Android TV is a smart TV platform that runs on the Android operating system. It allows you to access a variety of apps, games, and streaming services on your TV. Android TV also supports Google Assistant, Chromecast, and other Google features.

-

Android TV 12 is the latest version of the platform, based on the Android 12 codebase. It brings a lot of new features and improvements to enhance your TV experience. In this article, we will show you how to download Android TV 12 ISO and why you should try it.

-

download android tv 12 iso


Download File >>> https://urlin.us/2uSUgP



-

Android TV 12 Features

-

Android TV 12 comes with several exciting new features and enhancements. Here are some of the highlights:

-

Native 4K Rendering

-

Android TV has always supported 4K content, but the user interface was rendered in 1080p. With Android TV 12, you can now enjoy a crisp and clear UI in native 4K resolution, if your device supports it. This will make the text, icons, and animations look sharper and smoother on your screen.

-

Refresh Rate Switching

-

Android TV 12 also supports dynamic refresh rate switching, which means it can automatically adjust the refresh rate of your display to match the content you are watching. This will reduce motion judder and improve the smoothness of the video playback. You can enable this feature in the Display & Sound settings.

-

Privacy Indicators and Toggles

-

If your Android TV has a camera or a microphone, you might be concerned about your privacy. Android TV 12 addresses this issue by adding privacy indicators and toggles. Whenever an app uses your camera or microphone, you will see a bright green icon on the top corner of your screen. You can also block the access to these sensors for all apps from the settings menu.

-

Quick Connect for Wi-Fi

-

Setting up your Wi-Fi connection on your Android TV can be a hassle, especially if you have a long or complex password. Android TV 12 makes this process easier with Quick Connect. This feature allows you to scan a QR code on your screen with your phone and enter the password there. This way, you don't have to use the remote to type in the password.

-

Tweaked Design and Animations

-

Android TV 12 also brings some minor changes to the design and animations of the UI. The home screen now has a more refined look with background blurs and smoother transitions. The settings menu also has a new layout with larger icons and text. The boot animation has also been updated with a new logo and colors.

-

How to download android tv 12 iso for ADT-3 Developer Kit
-Download android tv 12 iso with 4K UI support and dynamic refresh rate
-Android tv 12 iso download link and installation guide
-Download android tv 12 iso for Treble-compliant devices
-Android tv 12 iso features and benefits
-Where to download android tv 12 iso for free
-Download android tv 12 iso with improved HDR and surround sound support
-Android tv 12 iso system requirements and compatibility
-Download android tv 12 iso with Android 12-style UI and background blurs
-Android tv 12 iso review and feedback
-Download android tv 12 iso with Google Play support and updates
-Android tv 12 iso vs Android TV 11 comparison
-Download android tv 12 iso with enhanced security and privacy features
-Android tv 12 iso troubleshooting and tips
-Download android tv 12 iso with new remote control app and voice assistant
-Android tv 12 iso best apps and games
-Download android tv 12 iso with faster performance and smoother animations
-Android tv 12 iso customization and settings
-Download android tv 12 iso with built-in Chromecast and Google TV integration
-Android tv 12 iso FAQs and answers
-Download android tv 12 iso with support for external storage and USB devices
-Android tv 12 iso developer options and tools
-Download android tv 12 iso with new accessibility features and options
-Android tv 12 iso keyboard and mouse support
-Download android tv 12 iso with multi-user and guest mode support
-Android tv 12 iso parental controls and restrictions
-Download android tv 12 iso with new notification panel and quick settings
-Android tv 12 iso network and connectivity options
-Download android tv 12 iso with new media player and audio effects
-Android tv 12 iso backup and restore options
-Download android tv 12 iso with new wallpapers and themes
-Android tv 12 iso screen mirroring and casting options
-Download android tv 12 iso with new widgets and shortcuts
-Android tv 12 iso sleep mode and power saving options
-Download android tv 12 iso with new sound modes and profiles
-Android tv 12 iso Bluetooth and wireless options
-Download android tv 12 iso with new languages and locales support
-Android tv 12 iso date and time options
-Download android tv 12 iso with new gesture navigation and touch controls
-Android tv 12 iso display calibration and adjustment options

-

Android TV 12 Compatibility

-

Before you download Android TV 12 ISO, you need to make sure that your device is compatible with it. Here are some things to consider:

-

Supported Devices

-

Android TV 12 is currently only available for developers who have an ADT-3 developer device. This is a dongle that runs on Android TV and it from the Google Store. If you have a different device, such as a smart TV, a set-top box, or a streaming stick, you will have to wait for the official release of Android TV 12, which is expected later this year.

How to Check Your Device Compatibility

-

If you are not sure whether your device is compatible with Android TV 12, you can check it by following these steps:

-
    -
  1. Go to the Settings menu on your Android TV.
  2. -
  3. Select Device Preferences.
  4. -
  5. Select About.
  6. -
  7. Look for the Build number and check if it starts with RVC or SVP. If it does, your device is compatible with Android TV 12. If it starts with QTS or QSR, your device is not compatible.
  8. -
-

Android TV 12 Installation

-

If you have an ADT-3 developer device and you want to install Android TV 12 on it, you have two options: using the Android Flash Tool or using the system image. Here are the steps for each method:

-

Requirements

-

Before you proceed with the installation, you need to have the following requirements:

- -

Using Android Flash Tool

-

The Android Flash Tool is a web-based tool that allows you to flash Android TV 12 on your ADT-3 device without downloading any files. Here are the steps to use it:

-
    -
  1. Go to the Android Flash Tool website on your computer.
  2. -
  3. Allow the website to access your USB devices.
  4. -
  5. Connect your ADT-3 device to your computer using the USB cable.
  6. -
  7. Select your device from the list and click Add Device.
  8. -
  9. Select the Android TV 12 build from the list and click Install.
  10. -
  11. Follow the instructions on the screen and wait for the installation to complete.
  12. -
  13. Disconnect your ADT-3 device from your computer and reboot it.
  14. -
-

Using System Image

-

The system image is a file that contains the Android TV 12 software for your ADT-3 device. You can download it from the Android Developers website and flash it manually using a command-line tool. Here are the steps to use it:

-
    -
  1. Download the system image file for your ADT-3 device from the Android Developers website and unzip it on your computer.
  2. -
  3. Install the Android SDK Platform-Tools on your computer and add them to your PATH environment variable.
  4. -
  5. Enable Developer Options and USB Debugging on your ADT-3 device. To do this, go to Settings > Device Preferences > About > Build and tap it seven times. Then go back to Settings > Device Preferences > Developer Options and turn on USB Debugging.
  6. -
  7. Connect your ADT-3 device to your computer using the USB cable.
  8. -
  9. Open a terminal or command prompt window on your computer and navigate to the folder where you unzipped the system image file.
  10. -
  11. Type adb reboot bootloader and press Enter. This will reboot your ADT-3 device into bootloader mode.
  12. -
  13. Type fastboot devices and press Enter. This will show you a list of connected devices. Make sure your ADT-3 device is listed.
  14. -
  15. Type flash-all.bat (for Windows) or ./flash-all.sh (for Mac OS or Linux) and press Enter. This will flash Android TV 12 on your ADT-3 device.
  16. -
  17. Wait for the flashing process to finish and disconnect your ADT-3 device from your computer.
  18. -
  19. Reboot your ADT-3 device and enjoy Android TV 12.
  20. -
-

Conclusion

-

Android TV 12 is a major update for the smart TV platform that brings many new features and improvements. If you have an ADT-3 developer device, you can download Android TV 12 ISO and install it using either the Android Flash Tool or the system image. If you have a different device, you will have to wait for the official release of Android TV 12, which is expected later this year. We hope this article helped you learn how to download Android TV 12 ISO and why you should try it. If you have any questions or feedback, please let us know in the comments below.

-

FAQs

-

What is the difference between Android TV and Google TV?

-

Android TV and Google TV are both smart TV platforms that run on the Android operating system. However, Google TV is a newer version that has a different user interface and features. Google TV is more personalized and integrated with Google services, such as Google Photos, YouTube, and Google Assistant. Google TV also supports a wider range of apps and devices than Android TV.

-

How can I update my Android TV to Android 12?

-

If you have a compatible device, you can update your Android TV to Android 12 by following these steps:

-
    -
  1. Go to the Settings menu on your Android TV.
  2. -
  3. Select Device Preferences.
  4. -
  5. Select System Update.
  6. -
  7. Check for updates and download the latest version of Android 12.
  8. -
  9. Install the update and reboot your device.
  10. -
-

How can I enable 4K UI on my Android TV?

-

If you have a 4K-capable device and display, you can enable 4K UI on your Android TV by following these steps:

-
    -
  1. Go to the Settings menu on your Android TV.
  2. -
  3. Select Device Preferences.
  4. -
  5. Select Display & Sound.
  6. -
  7. Select Resolution.
  8. -
  9. Select 4K (2160p).
  10. -
-

How can I block the camera and microphone on my Android TV?

-

If you want to block the access to the camera and microphone on your Android TV, you can do so by following these steps:

-
    -
  1. Go to the Settings menu on your Android TV.
  2. -
  3. Select Device Preferences.
  4. -
  5. Select Privacy.
  6. -
  7. Select Camera or Microphone.
  8. -
  9. Turn off the toggle for Allow apps to access your camera or microphone.
  10. -
-

How can I use Quick Connect to set up my Wi-Fi on my Android TV?

-

If you want to use Quick Connect to set up your Wi-Fi on your Android TV, you need to have a phone with the Google Home app installed. Then, you can follow these steps:

-
    -
  1. Go to the Settings menu on your Android TV.
  2. -
  3. Select Network & Internet.
  4. -
  5. Select Add network.
  6. -
  7. Select Quick Connect.
  8. -
  9. Scan the QR code on your screen with your phone using the Google Home app.
  10. -
  11. Enter your Wi-Fi password on your phone and tap Connect.
  12. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawlhalla for Mac How to Download and Play the Free-to-Play Platform Fighter.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawlhalla for Mac How to Download and Play the Free-to-Play Platform Fighter.md deleted file mode 100644 index e802184e37318d6e0d5b4390f629c8b550dcb003..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawlhalla for Mac How to Download and Play the Free-to-Play Platform Fighter.md +++ /dev/null @@ -1,109 +0,0 @@ - -

How to Download and Play Brawlhalla on Mac

-

If you are looking for a fun and exciting fighting game that you can play on your Mac, you might want to check out Brawlhalla. Brawlhalla is a free 2D platform fighting game that supports up to 8 players online or local, with full cross-play for PC, PS5, PS4, Xbox Series X|S, Xbox One, Nintendo Switch, iOS, and Android. In this article, we will tell you what Brawlhalla is, how to download it on your Mac, and some tips and tricks to improve your gameplay.

-

What is Brawlhalla?

-

Brawlhalla is a game developed by Blue Mammoth Games and published by Ubisoft. It was released in 2017 and has since gained a huge fan base of over 100 million players. Brawlhalla is inspired by the likes of Super Smash Bros. and features cartoonish graphics and simple controls. Here are some of the main features of Brawlhalla:

-

brawlhalla download mac


Download File » https://urlin.us/2uT2dl



-

A free 2D platform fighting game with cross-play support

-

Brawlhalla is completely free to play and does not have any pay-to-win elements. You can unlock all the characters (called Legends) by playing the game or buying them with in-game currency (called Gold). You can also buy cosmetic items (called Skins) with real money or another in-game currency (called Mammoth Coins). Brawlhalla also supports cross-play across all platforms, meaning you can play with your friends no matter what device they use.

-

Features over 50 unique characters and weapons

-

Brawlhalla has a diverse roster of over 50 Legends, each with their own stats, abilities, and personalities. You can choose from historical warriors, mythical creatures, original characters, and even crossover characters from other franchises like Lara Croft, Shovel Knight, The Walking Dead, Ben 10, Steven Universe, WWE, Hellboy, Adventure Time, Rayman, and more. Each Legend has two weapons that they can use in combat, ranging from swords, axes, hammers, bows, guns, spears, gauntlets, scythes, katars, cannons, orbs, greatswords, rocket lances, blasters, daggers, and more. You can switch between your weapons by picking them up from the stage or throwing them at your opponents.

-

Offers various game modes and events

-

Brawlhalla has a variety of game modes that you can enjoy solo or with others. You can play casual free-for-alls or team battles with up to 8 players online or local. You can also play ranked matches to climb the ladder and earn rewards. You can also invite your friends to a private room or join custom games created by other players. Brawlhalla also has weekly rotations of different game modes like Strikeout, Bubble Tag, Kung Foot, Snowbrawl, Bombsketball, Morph, Horde Mode, and more. Additionally, Brawlhalla hosts seasonal events that offer exclusive skins, colors, avatars, and other items.

-

How to Download Brawlhalla on Mac

-

If you want to play Brawlhalla on your Mac, you need to meet the following requirements:

-

Requirements for Mac OS

- -

If your Mac meets these requirements, you can download Brawlhalla through Steam, which is a digital distribution platform for games and software. Here are the steps to download Brawlhalla through Steam:

-

Steps to download Brawlhalla through Steam

-
    -
  1. Go to the Steam website and click on the "Install Steam" button. This will download the Steam installer on your Mac.
  2. -
  3. Open the Steam installer and follow the instructions to install Steam on your Mac.
  4. -
  5. Launch Steam and log in with your Steam account. If you don't have a Steam account, you can create one for free.
  6. -
  7. In the Steam app, go to the "Store" tab and search for "Brawlhalla" in the search bar.
  8. -
  9. Click on the "Brawlhalla" game and then click on the "Play Game" button. This will add Brawlhalla to your Steam library and start downloading it on your Mac.
  10. -
  11. Once the download is complete, you can launch Brawlhalla from your Steam library and start playing.
  12. -
-

Alternative ways to play Brawlhalla on Mac

-

If you don't want to use Steam or if your Mac does not meet the requirements, you can still play Brawlhalla on your Mac using other methods. Here are some alternative ways to play Brawlhalla on Mac:

-

brawlhalla mac os 64bit free download
-brawlhalla cross-play platform fighter mac
-brawlhalla epic games store mac download
-brawlhalla mac steam download guide
-brawlhalla mac system requirements and specs
-brawlhalla mac controller support and settings
-brawlhalla mac keyboard and mouse tips
-brawlhalla mac gameplay and review
-brawlhalla mac online multiplayer modes
-brawlhalla mac offline single player modes
-brawlhalla mac custom game rooms and maps
-brawlhalla mac ranked matches and leaderboards
-brawlhalla mac tournaments and events
-brawlhalla mac patch notes and updates
-brawlhalla mac news and announcements
-brawlhalla mac skins and cosmetics
-brawlhalla mac legends and weapons
-brawlhalla mac combos and techniques
-brawlhalla mac tips and tricks for beginners
-brawlhalla mac advanced strategies and guides
-brawlhalla mac best legends and tier list
-brawlhalla mac best weapons and loadouts
-brawlhalla mac best game modes and settings
-brawlhalla mac best custom maps and mods
-brawlhalla mac best skins and cosmetics to buy
-brawlhalla mac free gold and mammoth coins
-brawlhalla mac free codes and giveaways
-brawlhalla mac free crossover events and collaborations
-brawlhalla mac free battle pass and rewards
-brawlhalla mac free community colors and avatars
-brawlhalla mac how to unlock all legends
-brawlhalla mac how to level up fast
-brawlhalla mac how to improve your skills
-brawlhalla mac how to win more matches
-brawlhalla mac how to play with friends online
-brawlhalla mac how to join a clan or create one
-brawlhalla mac how to report a bug or issue
-brawlhalla mac how to contact support or feedback
-brawlhalla mac how to stream or record your gameplay
-brawlhalla mac how to watch replays or highlights
-brawlhalla mac comparison with other fighting games
-brawlhalla mac history and development story
-brawlhalla mac fun facts and trivia
-brawlhalla mac fan art and memes
-brawlhalla mac fan fiction and lore
-brawlhalla mac fan videos and podcasts

- -

Tips and Tricks for Brawlhalla

-

Brawlhalla is a game that requires skill, strategy, and practice to master. Here are some tips and tricks that can help you improve your gameplay and have more fun in Brawlhalla:

-

Improve your movement, recovery, and dodging skills

-

Movement is one of the most important aspects of Brawlhalla, as it determines how you position yourself, attack, defend, and survive. You should learn how to use your jumps, dashes, fast falls, wall jumps, gravity cancels, chase dodges, and recovery moves effectively. You should also learn how to dodge your opponent's attacks and punish them accordingly. You can use different types of dodges like spot dodge, directional dodge, speed dodge, and chain dodge depending on the situation.

-

Experiment with different characters and weapons

-

Brawlhalla has a lot of variety in terms of characters and weapons, so you should try them all out and find out which ones suit your playstyle and preference. You should also learn the strengths, weaknesses, combos, strings, signatures, and matchups of each character and weapon. You can use the Brawlhalla Wiki or Brawlmance to get more information about the game's mechanics and statistics.

-

Practice in training mode and watch pro players

-

Brawlhalla has a training mode that allows you to practice your skills against a dummy or a bot. You can customize the settings of the training mode to suit your needs. You can also watch pro players stream or upload videos of their gameplay on platforms like Twitch or YouTube. You can learn from their strategies, techniques, tips, and mistakes.

-

Conclusion

-

Brawlhalla is a fun and exciting fighting game that you can play on your Mac for free. You can download it through Steam or use other methods if you prefer. You can also improve your gameplay by following some tips and tricks that we have shared in this article. We hope you enjoy playing Brawlhalla on your Mac and have a blast with your friends online or offline.

-

FAQs

-

Here are some frequently asked questions about Brawlhalla and their answers:

-

Is Brawlhalla free?

-

Yes, Brawlhalla is free to play and does not have any pay-to-win elements. You can unlock all the characters by playing the game or buying them with in-game currency. You can also buy cosmetic items with real money or another in-game currency.

-

Is Brawlhalla cross-platform?

-

Yes, Brawlhalla supports cross-play across all platforms, including PC, PS5, PS4, Xbox Series X|S, Xbox One, Nintendo Switch, iOS, and Android. You can play with your friends no matter what device they use.

-

How many players can play Brawlhalla?

-

Brawlhalla supports up to 8 players online or local in various game modes. You can play casual free-for-alls or team battles, ranked matches, custom games, or weekly rotations of different game modes.

-

How do I change my controls in Brawlhalla?

-

You can change your controls in Brawlhalla by going to the "Settings" menu and then the "Controls" tab. You can customize your keyboard or controller settings to your liking. You can also change your mouse sensitivity and aim assist options.

-

How do I get better at Brawlhalla?

-

You can get better at Brawlhalla by practicing your movement, recovery, and dodging skills, experimenting with different characters and weapons, practicing in training mode and watching pro players, and learning from your mistakes and feedback.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Naija Ludo Pro APK and Play the Classic Dice and Race Game with Friends.md b/spaces/1phancelerku/anime-remove-background/Download Naija Ludo Pro APK and Play the Classic Dice and Race Game with Friends.md deleted file mode 100644 index a7ff8cfaca1cfc08b2abeea2f426c5122b3effc7..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Naija Ludo Pro APK and Play the Classic Dice and Race Game with Friends.md +++ /dev/null @@ -1,128 +0,0 @@ - -

Naija Ludo Pro APK: A Fun and Exciting Board Game for Everyone

-

Do you love playing board games with your friends and family? Do you want to experience a classic dice and race game with a Nigerian twist? If yes, then you should try naija ludo pro apk, a professional board game that is made for professionals.

-

naija ludo pro apk


Download Zip ✦✦✦ https://jinyurl.com/2uNObE



-

Naija ludo pro apk is an android game that you can download from APKCombo or Google Play Store. It is based on the popular board game Ludo, which originated from India and became famous around the world. Naija ludo pro apk has many features that make it more fun and challenging than other ludo games. Some of these features are:

- -

All these features are accessible through options. You can also adjust the sound, music, vibration, and language settings according to your liking. Naija ludo pro apk is a game that will keep you entertained for hours.

-

History of Ludo Game

-

Ludo is a game that has a long and rich history. It is believed that it evolved from an ancient Indian game called Pachisi, which was created in the sixth century CE. The earliest evidence of this game's evolution in India is the depiction of boards on the caves of Ellora, a UNESCO World Heritage Site in Maharashtra. The original version of Pachisi was also described in the Indian epic Mahabharata, in which Shakuni used cursed dice to beat the Pandavas, leading to a series of events that resulted in the Kurukshetra War.

-

naija ludo pro game download
-naija ludo pro online multiplayer
-naija ludo pro apk free download
-naija ludo pro app for android
-naija ludo pro latest version
-naija ludo pro board game
-naija ludo pro dice and race
-naija ludo pro mod apk
-naija ludo pro review
-naija ludo pro features
-naija ludo pro gameplay
-naija ludo pro price
-naija ludo pro tips and tricks
-naija ludo pro hack
-naija ludo pro cheats
-naija ludo pro settings
-naija ludo pro rules
-naija ludo pro strategy
-naija ludo pro how to play
-naija ludo pro best board
-naija ludo pro difficulty level
-naija ludo pro speed control
-naija ludo pro barrier option
-naija ludo pro safe-house option
-naija ludo pro one die or two dice option
-naija ludo pro remove piece option
-naija ludo pro play again option
-naija ludo pro supported languages
-naija ludo pro content rating
-naija ludo pro apk size
-naija ludo pro developer
-naija ludo pro category
-naija ludo pro google play id
-naija ludo pro installs
-naija ludo pro update date
-naija ludo pro trailer video
-naija ludo pro screenshots
-naija ludo pro ratings and reviews
-naija ludo pro similar games
-naija ludo pro alternatives
-naija ludo pro vs classic Ludo
-naija Ludo Pro Bluetooth multiplayer
-NAIJA LUDO PRO visual hand
-NAIJA LUDO PRO piece capture
-NAIJA LUDO PRO net energy gain
-NAIJA LUDO PRO mini sun
-NAIJA LUDO PRO 100 million degrees
-NAIJA LUDO PRO holy grail experiment
-NAIJA LUDO PRO Korea Institute of Fusion Energy

-

Pachisi was modified by different cultures and regions over time, giving rise to various versions of the game. Some of these versions are Chaupar, Chausar, Chopad, Chatush Pada,

Rules of Ludo Game

-

Ludo is a game that can be played by two to four players, without partnerships. The objective of the game is to race your four tokens from start to finish according to the rolls of a single die. The game has some basic rules that you need to follow in order to play it properly. Here are the rules of ludo game:

- -

Benefits of Playing Ludo Game

-

Ludo is not only a fun and exciting board game, but also a beneficial one for your health and well-being. Playing ludo can improve your brain function, give you pleasure and relieve stress, lower your blood pressure and boost your immunity, and more. Here are some of the benefits of playing ludo game:

-

Tips and Tricks for Playing Ludo Game

-

Ludo is a game that requires both luck and skill. You need to roll the dice well, but you also need to use your brain to make the best moves. Here are some tips and tricks for playing ludo game that can help you win more often:

- -

Conclusion

-

Ludo is a game that has been enjoyed by millions of people for centuries. It is a game that combines luck and skill, fun and challenge, joy and laughter. It is a game that can be played by anyone, anywhere, anytime.

-

Naija ludo pro apk is a game that takes ludo to the next level. It is a game that offers more features, more options, more boards, more levels, more fun. It is a game that lets you play with your friends or family online or offline.

-

If you are looking for a professional board game that is made for professionals, then naija ludo pro apk is the game for you. Download it today from APKCombo or Google Play Store and enjoy the game of ludo like never before.

-

FAQs

-

Here are some of the frequently asked questions about naija ludo pro apk:

-
    -
  1. Where can I download naija ludo pro apk?
  2. -

    You can download naija ludo pro apk from APKCombo or Google Play Store. These are the official and trusted sources for downloading the game. You can also scan the QR code on the game's website to download it directly to your device.

    -
  3. Is naija ludo pro apk safe and secure?
  4. -

    Yes, naija ludo pro apk is safe and secure to use. It does not contain any viruses, malware, or spyware that can harm your device or data. It also does not require any unnecessary permissions or access to your personal information. It is a game that respects your privacy and security.

    -
  5. Can I play naija ludo pro apk online with other players?
  6. -

    Yes, you can play naija ludo pro apk online with other players from around the world. You can either join a random match or create a private room and invite your friends or family to join. You can also chat with your opponents and send them emojis during the game.

    -
  7. Can I customize the board and pieces in naija ludo pro apk?
  8. -

    Yes, you can customize the board and pieces in naija ludo pro apk according to your preference. You can choose among three different boards: classic, modern, and Nigerian. You can also choose among four different sets of pieces: standard, premium, deluxe, and royal. You can also change the colour of your pieces if you want.

    -
  9. What are the differences between naija ludo pro apk and other ludo games?
  10. -

    Naija ludo pro apk is a game that has many differences from other ludo games. Some of these differences are:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Treasure of Montezuma 4 and Experience the Ultimate Match-3 Adventure.md b/spaces/1phancelerku/anime-remove-background/Download Treasure of Montezuma 4 and Experience the Ultimate Match-3 Adventure.md deleted file mode 100644 index e0bd5d42b961c7ea52fdd4166917e0424afec66b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Treasure of Montezuma 4 and Experience the Ultimate Match-3 Adventure.md +++ /dev/null @@ -1,122 +0,0 @@ - -

    How to Download Treasure of Montezuma 4 and Enjoy a Thrilling Puzzle Adventure

    -

    If you are looking for a new and exciting puzzle game to play, you should definitely check out Treasure of Montezuma 4. This is the fourth installment of the popular series that has captivated millions of players around the world. In this article, we will tell you what Treasure of Montezuma 4 is, why you should download it, and how to do it from different platforms. Read on and get ready to embark on an amazing journey through the ancient Aztec civilization.

    -

    download treasure of montezuma 4


    Downloadhttps://jinyurl.com/2uNMkr



    -

    What is Treasure of Montezuma 4?

    -

    Treasure of Montezuma 4 is a tile-matching puzzle game that combines elements of adventure, mystery, and magic. You play as Anna, an archaeologist who travels to an Aztec ruin to uncover an ancient secret. Along the way, you will encounter various challenges and surprises that will keep you hooked for hours.

    -

    A brief introduction to the game and its features

    -

    The game has three modes: Story Mode, Quest Mode, and Puzzle Mode. In Story Mode, you will follow Anna's story as she explores the ruin and faces an epic boss battle. In Quest Mode, you will complete different tasks and earn rewards. In Puzzle Mode, you will solve tricky puzzles with limited moves.

    -

    The game also has 98 levels in Story Mode and 69 levels in Quest Mode, each with different goals and obstacles. You will need to match three or more tiles of the same color to clear them from the board and create powerful combos. You will also collect crystals and coins that you can use to upgrade your character and build your own Ziggurat.

    -

    download treasure of montezuma 4 full version
    -download treasure of montezuma 4 for pc
    -download treasure of montezuma 4 free online
    -download treasure of montezuma 4 apk
    -download treasure of montezuma 4 mod
    -download treasure of montezuma 4 crack
    -download treasure of montezuma 4 game
    -download treasure of montezuma 4 android
    -download treasure of montezuma 4 ios
    -download treasure of montezuma 4 mac
    -download treasure of montezuma 4 windows 10
    -download treasure of montezuma 4 steam
    -download treasure of montezuma 4 torrent
    -download treasure of montezuma 4 cheats
    -download treasure of montezuma 4 tips
    -download treasure of montezuma 4 walkthrough
    -download treasure of montezuma 4 guide
    -download treasure of montezuma 4 review
    -download treasure of montezuma 4 gameplay
    -download treasure of montezuma 4 trailer
    -download treasure of montezuma 4 latest version
    -download treasure of montezuma 4 update
    -download treasure of montezuma 4 patch
    -download treasure of montezuma 4 serial key
    -download treasure of montezuma 4 license key
    -download treasure of montezuma 4 activation key
    -download treasure of montezuma 4 registration key
    -download treasure of montezuma 4 product key
    -download treasure of montezuma 4 cd key
    -download treasure of montezuma 4 keygen
    -download treasure of montezuma 4 generator
    -download treasure of montezuma 4 hack
    -download treasure of montezuma 4 unlimited lives
    -download treasure of montezuma 4 unlimited coins
    -download treasure of montezuma 4 unlimited gems
    -download treasure of montezuma 4 unlimited boosters
    -download treasure of montezuma 4 unlimited time
    -download treasure of montezuma 4 no ads
    -download treasure of montezuma 4 premium
    -download treasure of montezuma 4 pro
    -download treasure of montezuma 4 deluxe
    -download treasure of montezuma 4 gold edition
    -download treasure of montezuma 4 collector's edition
    -download treasure of montezuma 4 special edition
    -download treasure of montezuma 4 ultimate edition
    -how to download treasure of montezuma 4
    -where to download treasure of montezuma 4
    -why to download treasure of montezuma 4

    -

    Moreover, the game features seven powerful totems and eight unique bonuses that will help you in your quest. The totems are ancient gods that have special abilities, such as creating explosions, swapping tiles, or freezing time. The bonuses are items that you can activate during the game, such as hammers, bombs, or lightning bolts.

    -

    The game also has stunning graphics and sound effects that create an immersive atmosphere. You will enjoy the colorful animations, the realistic backgrounds, and the authentic music. You will also learn interesting facts about the Aztec culture and history as you play.

    -

    Why should you download Treasure of Montezuma 4?

    -

    Treasure of Montezuma 4 is not just another puzzle game. It is a game that offers many benefits for players of all ages and preferences. Here are some of them:

    -

    The benefits of playing the game, such as fun, challenge, and learning

    -

    Fun: How the game offers a variety of modes, levels, and special effects

    -

    One of the main reasons to download Treasure of Montezuma 4 is that it is fun. The game offers a variety of modes, levels, and special effects that make it entertaining and engaging. You will never get bored with this game because there is always something new and exciting to discover. Whether you want to follow Anna's story, complete quests, or solve puzzles , you will find something that suits your mood and taste.

    -

    Challenge: How the game tests your skills, strategy, and speed

    -

    Another reason to download Treasure of Montezuma 4 is that it is challenging. The game tests your skills, strategy, and speed in different ways. You will need to think fast and act faster to clear the board and achieve the goals. You will also need to plan ahead and use the totems and bonuses wisely to overcome the obstacles. The game has different difficulty levels, from easy to hard, so you can adjust the challenge according to your preference and ability.

    -

    Learning: How the game teaches you about the Aztec culture and history

    -

    A third reason to download Treasure of Montezuma 4 is that it is educational. The game teaches you about the Aztec culture and history in a fun and interactive way. You will learn about the Aztec gods, symbols, rituals, and architecture as you play. You will also discover the secrets of the Ziggurat, a massive pyramid that was built by the Aztecs to honor their gods. The game has a built-in encyclopedia that provides more information and facts about the topics covered in the game.

    -

    How to download Treasure of Montezuma 4?

    -

    Now that you know what Treasure of Montezuma 4 is and why you should download it, you might be wondering how to do it. Well, the good news is that downloading Treasure of Montezuma 4 is easy and convenient. You can download the game from different platforms, such as Steam, PlayStation, and GameHouse. Here are the steps to do it from each platform:

    -

    The steps to download the game from different platforms, such as Steam, PlayStation, and GameHouse

    -

    Steam: How to buy the game for a discounted price and install it on your PC

    -

    If you want to download Treasure of Montezuma 4 from Steam, you will need to have a Steam account and a compatible PC. You can create a Steam account for free by visiting https://store.steampowered.com/join/. Once you have an account, you can buy the game for a discounted price of $2.99 (regular price $9.99) by visiting https://store.steampowered.com/app/347400/The_Treasures_of_Montezuma_4/. After you buy the game, you can install it on your PC by following these steps:

    -
      -
    1. Open Steam and log in with your account.
    2. -
    3. Go to Library and find Treasure of Montezuma 4 in your list of games.
    4. -
    5. Click on Install and choose a location for the game files.
    6. -
    7. Wait for the installation to finish and click on Play.
    8. -
    -

    Congratulations! You have successfully downloaded Treasure of Montezuma 4 from Steam. Enjoy!

    -

    PlayStation: How to purchase the game from the PlayStation Store and play it on your PS4 or PS5

    -

    If you want to download Treasure of Montezuma 4 from PlayStation, you will need to have a PlayStation account and a PS4 or PS5 console. You can create a PlayStation account for free by visiting https://www.playstation.com/en-us/support/account/create-account/. Once you have an account, you can purchase the game for $9.99 by visiting https://store.playstation.com/en-us/product/UP4151-CUSA01975_00-TREASUREMONTEZUM. After you purchase the game, you can download it on your PS4 or PS5 by following these steps:

    -
      -
    1. Turn on your PS4 or PS5 and log in with your account.
    2. -
    3. Go to Library and find Treasure of Montezuma 4 in your list of games.
    4. -
    5. Select Download and wait for the download to finish.
    6. -
    7. Select Start to launch the game.
    8. -
    -

    Congratulations! You have successfully downloaded Treasure of Montezuma 4 from PlayStation. Enjoy!

    -

    GameHouse: How to sign up for a free trial and access thousands of games, including Treasure of Montezuma 4

    -

    If you want to download Treasure of Montezuma 4 from GameHouse, you will need to sign up for a free trial and access thousands of games, including Treasure of Montezuma 4. GameHouse is a website that offers unlimited access to over 2,500 games for a monthly fee of $10.99. However, you can try it for free for 14 days by visiting https://www.gamehouse.com/. Once you sign up for a free trial, you can download Treasure of Montezuma 4 by following these steps:

    -
      -
    1. Open GameHouse and log in with your account.
    2. -
    3. Go to Puzzle Games and find Treasure of Montezuma 4 in the list of games.
    4. -
    5. Select Play Now and wait for the game to load.
    6. -
    7. Select Full Screen to enjoy the game in full screen mode.
    8. -
    -

    Congratulations! You have successfully downloaded Treasure of Montezuma 4 from GameHouse. Enjoy!

    -

    Conclusion

    -

    Treasure of Montezuma 4 is a fantastic puzzle game that will keep you entertained and challenged for hours. It has three modes, 98 levels, seven totems, eight bonuses, and stunning graphics and sound effects. It also teaches you about the Aztec culture and history in a fun and interactive way. You can download the game from Steam, PlayStation, or GameHouse, depending on your preference and device. Don't miss this opportunity to experience a thrilling puzzle adventure. Download Treasure of Montezuma 4 today and discover the secrets of the Ziggurat!

    -

    FAQs

    -

    Here are some frequently asked questions about Treasure of Montezuma 4:

    -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/style.css b/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/viewer.py b/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/viewer.py deleted file mode 100644 index d2326c38205c6eaddb4f567e3b088329187af258..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/viewer.py +++ /dev/null @@ -1,1160 +0,0 @@ -"""A pyglet-based interactive 3D scene viewer. -""" -import copy -import os -import sys -from threading import Thread, RLock -import time - -import imageio -import numpy as np -import OpenGL -import trimesh - -try: - from Tkinter import Tk, tkFileDialog as filedialog -except Exception: - try: - from tkinter import Tk, filedialog as filedialog - except Exception: - pass - -from .constants import (TARGET_OPEN_GL_MAJOR, TARGET_OPEN_GL_MINOR, - MIN_OPEN_GL_MAJOR, MIN_OPEN_GL_MINOR, - TEXT_PADDING, DEFAULT_SCENE_SCALE, - DEFAULT_Z_FAR, DEFAULT_Z_NEAR, RenderFlags, TextAlign) -from .light import DirectionalLight -from .node import Node -from .camera import PerspectiveCamera, OrthographicCamera, IntrinsicsCamera -from .trackball import Trackball -from .renderer import Renderer -from .mesh import Mesh - -import pyglet -from pyglet import clock -pyglet.options['shadow_window'] = False - - -class Viewer(pyglet.window.Window): - """An interactive viewer for 3D scenes. - - The viewer's camera is separate from the scene's, but will take on - the parameters of the scene's main view camera and start in the same pose. - If the scene does not have a camera, a suitable default will be provided. - - Parameters - ---------- - scene : :class:`Scene` - The scene to visualize. - viewport_size : (2,) int - The width and height of the initial viewing window. - render_flags : dict - A set of flags for rendering the scene. Described in the note below. - viewer_flags : dict - A set of flags for controlling the viewer's behavior. - Described in the note below. - registered_keys : dict - A map from ASCII key characters to tuples containing: - - - A function to be called whenever the key is pressed, - whose first argument will be the viewer itself. - - (Optionally) A list of additional positional arguments - to be passed to the function. - - (Optionally) A dict of keyword arguments to be passed - to the function. - - kwargs : dict - Any keyword arguments left over will be interpreted as belonging to - either the :attr:`.Viewer.render_flags` or :attr:`.Viewer.viewer_flags` - dictionaries. Those flag sets will be updated appropriately. - - Note - ---- - The basic commands for moving about the scene are given as follows: - - - **Rotating about the scene**: Hold the left mouse button and - drag the cursor. - - **Rotating about the view axis**: Hold ``CTRL`` and the left mouse - button and drag the cursor. - - **Panning**: - - - Hold SHIFT, then hold the left mouse button and drag the cursor, or - - Hold the middle mouse button and drag the cursor. - - - **Zooming**: - - - Scroll the mouse wheel, or - - Hold the right mouse button and drag the cursor. - - Other keyboard commands are as follows: - - - ``a``: Toggles rotational animation mode. - - ``c``: Toggles backface culling. - - ``f``: Toggles fullscreen mode. - - ``h``: Toggles shadow rendering. - - ``i``: Toggles axis display mode - (no axes, world axis, mesh axes, all axes). - - ``l``: Toggles lighting mode - (scene lighting, Raymond lighting, or direct lighting). - - ``m``: Toggles face normal visualization. - - ``n``: Toggles vertex normal visualization. - - ``o``: Toggles orthographic mode. - - ``q``: Quits the viewer. - - ``r``: Starts recording a GIF, and pressing again stops recording - and opens a file dialog. - - ``s``: Opens a file dialog to save the current view as an image. - - ``w``: Toggles wireframe mode - (scene default, flip wireframes, all wireframe, or all solid). - - ``z``: Resets the camera to the initial view. - - Note - ---- - The valid keys for ``render_flags`` are as follows: - - - ``flip_wireframe``: `bool`, If `True`, all objects will have their - wireframe modes flipped from what their material indicates. - Defaults to `False`. - - ``all_wireframe``: `bool`, If `True`, all objects will be rendered - in wireframe mode. Defaults to `False`. - - ``all_solid``: `bool`, If `True`, all objects will be rendered in - solid mode. Defaults to `False`. - - ``shadows``: `bool`, If `True`, shadows will be rendered. - Defaults to `False`. - - ``vertex_normals``: `bool`, If `True`, vertex normals will be - rendered as blue lines. Defaults to `False`. - - ``face_normals``: `bool`, If `True`, face normals will be rendered as - blue lines. Defaults to `False`. - - ``cull_faces``: `bool`, If `True`, backfaces will be culled. - Defaults to `True`. - - ``point_size`` : float, The point size in pixels. Defaults to 1px. - - Note - ---- - The valid keys for ``viewer_flags`` are as follows: - - - ``rotate``: `bool`, If `True`, the scene's camera will rotate - about an axis. Defaults to `False`. - - ``rotate_rate``: `float`, The rate of rotation in radians per second. - Defaults to `PI / 3.0`. - - ``rotate_axis``: `(3,) float`, The axis in world coordinates to rotate - about. Defaults to ``[0,0,1]``. - - ``view_center``: `(3,) float`, The position to rotate the scene about. - Defaults to the scene's centroid. - - ``use_raymond_lighting``: `bool`, If `True`, an additional set of three - directional lights that move with the camera will be added to the scene. - Defaults to `False`. - - ``use_direct_lighting``: `bool`, If `True`, an additional directional - light that moves with the camera and points out of it will be added to - the scene. Defaults to `False`. - - ``lighting_intensity``: `float`, The overall intensity of the - viewer's additional lights (when they're in use). Defaults to 3.0. - - ``use_perspective_cam``: `bool`, If `True`, a perspective camera will - be used. Otherwise, an orthographic camera is used. Defaults to `True`. - - ``save_directory``: `str`, A directory to open the file dialogs in. - Defaults to `None`. - - ``window_title``: `str`, A title for the viewer's application window. - Defaults to `"Scene Viewer"`. - - ``refresh_rate``: `float`, A refresh rate for rendering, in Hertz. - Defaults to `30.0`. - - ``fullscreen``: `bool`, Whether to make viewer fullscreen. - Defaults to `False`. - - ``show_world_axis``: `bool`, Whether to show the world axis. - Defaults to `False`. - - ``show_mesh_axes``: `bool`, Whether to show the individual mesh axes. - Defaults to `False`. - - ``caption``: `list of dict`, Text caption(s) to display on the viewer. - Defaults to `None`. - - Note - ---- - Animation can be accomplished by running the viewer with ``run_in_thread`` - enabled. Then, just run a loop in your main thread, updating the scene as - needed. Before updating the scene, be sure to acquire the - :attr:`.Viewer.render_lock`, and release it when your update is done. - """ - - def __init__(self, scene, viewport_size=None, - render_flags=None, viewer_flags=None, - registered_keys=None, run_in_thread=False, - auto_start=True, - **kwargs): - - ####################################################################### - # Save attributes and flags - ####################################################################### - if viewport_size is None: - viewport_size = (640, 480) - self._scene = scene - self._viewport_size = viewport_size - self._render_lock = RLock() - self._is_active = False - self._should_close = False - self._run_in_thread = run_in_thread - self._auto_start = auto_start - - self._default_render_flags = { - 'flip_wireframe': False, - 'all_wireframe': False, - 'all_solid': False, - 'shadows': False, - 'vertex_normals': False, - 'face_normals': False, - 'cull_faces': True, - 'point_size': 1.0, - } - self._default_viewer_flags = { - 'mouse_pressed': False, - 'rotate': False, - 'rotate_rate': np.pi / 3.0, - 'rotate_axis': np.array([0.0, 0.0, 1.0]), - 'view_center': None, - 'record': False, - 'use_raymond_lighting': False, - 'use_direct_lighting': False, - 'lighting_intensity': 3.0, - 'use_perspective_cam': True, - 'save_directory': None, - 'window_title': 'Scene Viewer', - 'refresh_rate': 30.0, - 'fullscreen': False, - 'show_world_axis': False, - 'show_mesh_axes': False, - 'caption': None - } - self._render_flags = self._default_render_flags.copy() - self._viewer_flags = self._default_viewer_flags.copy() - self._viewer_flags['rotate_axis'] = ( - self._default_viewer_flags['rotate_axis'].copy() - ) - - if render_flags is not None: - self._render_flags.update(render_flags) - if viewer_flags is not None: - self._viewer_flags.update(viewer_flags) - - for key in kwargs: - if key in self.render_flags: - self._render_flags[key] = kwargs[key] - elif key in self.viewer_flags: - self._viewer_flags[key] = kwargs[key] - - # TODO MAC OS BUG FOR SHADOWS - if sys.platform == 'darwin': - self._render_flags['shadows'] = False - - self._registered_keys = {} - if registered_keys is not None: - self._registered_keys = { - ord(k.lower()): registered_keys[k] for k in registered_keys - } - - ####################################################################### - # Save internal settings - ####################################################################### - - # Set up caption stuff - self._message_text = None - self._ticks_till_fade = 2.0 / 3.0 * self.viewer_flags['refresh_rate'] - self._message_opac = 1.0 + self._ticks_till_fade - - # Set up raymond lights and direct lights - self._raymond_lights = self._create_raymond_lights() - self._direct_light = self._create_direct_light() - - # Set up axes - self._axes = {} - self._axis_mesh = Mesh.from_trimesh( - trimesh.creation.axis(origin_size=0.1, axis_radius=0.05, - axis_length=1.0), smooth=False) - if self.viewer_flags['show_world_axis']: - self._set_axes(world=self.viewer_flags['show_world_axis'], - mesh=self.viewer_flags['show_mesh_axes']) - - ####################################################################### - # Set up camera node - ####################################################################### - self._camera_node = None - self._prior_main_camera_node = None - self._default_camera_pose = None - self._default_persp_cam = None - self._default_orth_cam = None - self._trackball = None - self._saved_frames = [] - - # Extract main camera from scene and set up our mirrored copy - znear = None - zfar = None - if scene.main_camera_node is not None: - n = scene.main_camera_node - camera = copy.copy(n.camera) - if isinstance(camera, (PerspectiveCamera, IntrinsicsCamera)): - self._default_persp_cam = camera - znear = camera.znear - zfar = camera.zfar - elif isinstance(camera, OrthographicCamera): - self._default_orth_cam = camera - znear = camera.znear - zfar = camera.zfar - self._default_camera_pose = scene.get_pose(scene.main_camera_node) - self._prior_main_camera_node = n - - # Set defaults as needed - if zfar is None: - zfar = max(scene.scale * 10.0, DEFAULT_Z_FAR) - if znear is None or znear == 0: - if scene.scale == 0: - znear = DEFAULT_Z_NEAR - else: - znear = min(scene.scale / 10.0, DEFAULT_Z_NEAR) - - if self._default_persp_cam is None: - self._default_persp_cam = PerspectiveCamera( - yfov=np.pi / 3.0, znear=znear, zfar=zfar - ) - if self._default_orth_cam is None: - xmag = ymag = scene.scale - if scene.scale == 0: - xmag = ymag = 1.0 - self._default_orth_cam = OrthographicCamera( - xmag=xmag, ymag=ymag, - znear=znear, - zfar=zfar - ) - if self._default_camera_pose is None: - self._default_camera_pose = self._compute_initial_camera_pose() - - # Pick camera - if self.viewer_flags['use_perspective_cam']: - camera = self._default_persp_cam - else: - camera = self._default_orth_cam - - self._camera_node = Node( - matrix=self._default_camera_pose, camera=camera - ) - scene.add_node(self._camera_node) - scene.main_camera_node = self._camera_node - self._reset_view() - - ####################################################################### - # Initialize OpenGL context and renderer - ####################################################################### - self._renderer = Renderer( - self._viewport_size[0], self._viewport_size[1], - self.render_flags['point_size'] - ) - self._is_active = True - - if self.run_in_thread: - self._thread = Thread(target=self._init_and_start_app) - self._thread.start() - else: - if auto_start: - self._init_and_start_app() - - def start(self): - self._init_and_start_app() - - @property - def scene(self): - """:class:`.Scene` : The scene being visualized. - """ - return self._scene - - @property - def viewport_size(self): - """(2,) int : The width and height of the viewing window. - """ - return self._viewport_size - - @property - def render_lock(self): - """:class:`threading.RLock` : If acquired, prevents the viewer from - rendering until released. - - Run :meth:`.Viewer.render_lock.acquire` before making updates to - the scene in a different thread, and run - :meth:`.Viewer.render_lock.release` once you're done to let the viewer - continue. - """ - return self._render_lock - - @property - def is_active(self): - """bool : `True` if the viewer is active, or `False` if it has - been closed. - """ - return self._is_active - - @property - def run_in_thread(self): - """bool : Whether the viewer was run in a separate thread. - """ - return self._run_in_thread - - @property - def render_flags(self): - """dict : Flags for controlling the renderer's behavior. - - - ``flip_wireframe``: `bool`, If `True`, all objects will have their - wireframe modes flipped from what their material indicates. - Defaults to `False`. - - ``all_wireframe``: `bool`, If `True`, all objects will be rendered - in wireframe mode. Defaults to `False`. - - ``all_solid``: `bool`, If `True`, all objects will be rendered in - solid mode. Defaults to `False`. - - ``shadows``: `bool`, If `True`, shadows will be rendered. - Defaults to `False`. - - ``vertex_normals``: `bool`, If `True`, vertex normals will be - rendered as blue lines. Defaults to `False`. - - ``face_normals``: `bool`, If `True`, face normals will be rendered as - blue lines. Defaults to `False`. - - ``cull_faces``: `bool`, If `True`, backfaces will be culled. - Defaults to `True`. - - ``point_size`` : float, The point size in pixels. Defaults to 1px. - - """ - return self._render_flags - - @render_flags.setter - def render_flags(self, value): - self._render_flags = value - - @property - def viewer_flags(self): - """dict : Flags for controlling the viewer's behavior. - - The valid keys for ``viewer_flags`` are as follows: - - - ``rotate``: `bool`, If `True`, the scene's camera will rotate - about an axis. Defaults to `False`. - - ``rotate_rate``: `float`, The rate of rotation in radians per second. - Defaults to `PI / 3.0`. - - ``rotate_axis``: `(3,) float`, The axis in world coordinates to - rotate about. Defaults to ``[0,0,1]``. - - ``view_center``: `(3,) float`, The position to rotate the scene - about. Defaults to the scene's centroid. - - ``use_raymond_lighting``: `bool`, If `True`, an additional set of - three directional lights that move with the camera will be added to - the scene. Defaults to `False`. - - ``use_direct_lighting``: `bool`, If `True`, an additional directional - light that moves with the camera and points out of it will be - added to the scene. Defaults to `False`. - - ``lighting_intensity``: `float`, The overall intensity of the - viewer's additional lights (when they're in use). Defaults to 3.0. - - ``use_perspective_cam``: `bool`, If `True`, a perspective camera will - be used. Otherwise, an orthographic camera is used. Defaults to - `True`. - - ``save_directory``: `str`, A directory to open the file dialogs in. - Defaults to `None`. - - ``window_title``: `str`, A title for the viewer's application window. - Defaults to `"Scene Viewer"`. - - ``refresh_rate``: `float`, A refresh rate for rendering, in Hertz. - Defaults to `30.0`. - - ``fullscreen``: `bool`, Whether to make viewer fullscreen. - Defaults to `False`. - - ``show_world_axis``: `bool`, Whether to show the world axis. - Defaults to `False`. - - ``show_mesh_axes``: `bool`, Whether to show the individual mesh axes. - Defaults to `False`. - - ``caption``: `list of dict`, Text caption(s) to display on - the viewer. Defaults to `None`. - - """ - return self._viewer_flags - - @viewer_flags.setter - def viewer_flags(self, value): - self._viewer_flags = value - - @property - def registered_keys(self): - """dict : Map from ASCII key character to a handler function. - - This is a map from ASCII key characters to tuples containing: - - - A function to be called whenever the key is pressed, - whose first argument will be the viewer itself. - - (Optionally) A list of additional positional arguments - to be passed to the function. - - (Optionally) A dict of keyword arguments to be passed - to the function. - - """ - return self._registered_keys - - @registered_keys.setter - def registered_keys(self, value): - self._registered_keys = value - - def close_external(self): - """Close the viewer from another thread. - - This function will wait for the actual close, so you immediately - manipulate the scene afterwards. - """ - self._should_close = True - while self.is_active: - time.sleep(1.0 / self.viewer_flags['refresh_rate']) - - def save_gif(self, filename=None): - """Save the stored GIF frames to a file. - - To use this asynchronously, run the viewer with the ``record`` - flag and the ``run_in_thread`` flags set. - Kill the viewer after your desired time with - :meth:`.Viewer.close_external`, and then call :meth:`.Viewer.save_gif`. - - Parameters - ---------- - filename : str - The file to save the GIF to. If not specified, - a file dialog will be opened to ask the user where - to save the GIF file. - """ - if filename is None: - filename = self._get_save_filename(['gif', 'all']) - if filename is not None: - self.viewer_flags['save_directory'] = os.path.dirname(filename) - imageio.mimwrite(filename, self._saved_frames, - fps=self.viewer_flags['refresh_rate'], - palettesize=128, subrectangles=True) - self._saved_frames = [] - - def on_close(self): - """Exit the event loop when the window is closed. - """ - # Remove our camera and restore the prior one - if self._camera_node is not None: - self.scene.remove_node(self._camera_node) - if self._prior_main_camera_node is not None: - self.scene.main_camera_node = self._prior_main_camera_node - - # Delete any lighting nodes that we've attached - if self.viewer_flags['use_raymond_lighting']: - for n in self._raymond_lights: - if self.scene.has_node(n): - self.scene.remove_node(n) - if self.viewer_flags['use_direct_lighting']: - if self.scene.has_node(self._direct_light): - self.scene.remove_node(self._direct_light) - - # Delete any axis nodes that we've attached - self._remove_axes() - - # Delete renderer - if self._renderer is not None: - self._renderer.delete() - self._renderer = None - - # Force clean-up of OpenGL context data - try: - OpenGL.contextdata.cleanupContext() - self.close() - except Exception: - pass - finally: - self._is_active = False - super(Viewer, self).on_close() - pyglet.app.exit() - - def on_draw(self): - """Redraw the scene into the viewing window. - """ - if self._renderer is None: - return - - if self.run_in_thread or not self._auto_start: - self.render_lock.acquire() - - # Make OpenGL context current - self.switch_to() - - # Render the scene - self.clear() - self._render() - - if self._message_text is not None: - self._renderer.render_text( - self._message_text, - self.viewport_size[0] - TEXT_PADDING, - TEXT_PADDING, - font_pt=20, - color=np.array([0.1, 0.7, 0.2, - np.clip(self._message_opac, 0.0, 1.0)]), - align=TextAlign.BOTTOM_RIGHT - ) - - if self.viewer_flags['caption'] is not None: - for caption in self.viewer_flags['caption']: - xpos, ypos = self._location_to_x_y(caption['location']) - self._renderer.render_text( - caption['text'], - xpos, - ypos, - font_name=caption['font_name'], - font_pt=caption['font_pt'], - color=caption['color'], - scale=caption['scale'], - align=caption['location'] - ) - - if self.run_in_thread or not self._auto_start: - self.render_lock.release() - - def on_resize(self, width, height): - """Resize the camera and trackball when the window is resized. - """ - if self._renderer is None: - return - - self._viewport_size = (width, height) - self._trackball.resize(self._viewport_size) - self._renderer.viewport_width = self._viewport_size[0] - self._renderer.viewport_height = self._viewport_size[1] - self.on_draw() - - def on_mouse_press(self, x, y, buttons, modifiers): - """Record an initial mouse press. - """ - self._trackball.set_state(Trackball.STATE_ROTATE) - if (buttons == pyglet.window.mouse.LEFT): - ctrl = (modifiers & pyglet.window.key.MOD_CTRL) - shift = (modifiers & pyglet.window.key.MOD_SHIFT) - if (ctrl and shift): - self._trackball.set_state(Trackball.STATE_ZOOM) - elif ctrl: - self._trackball.set_state(Trackball.STATE_ROLL) - elif shift: - self._trackball.set_state(Trackball.STATE_PAN) - elif (buttons == pyglet.window.mouse.MIDDLE): - self._trackball.set_state(Trackball.STATE_PAN) - elif (buttons == pyglet.window.mouse.RIGHT): - self._trackball.set_state(Trackball.STATE_ZOOM) - - self._trackball.down(np.array([x, y])) - - # Stop animating while using the mouse - self.viewer_flags['mouse_pressed'] = True - - def on_mouse_drag(self, x, y, dx, dy, buttons, modifiers): - """Record a mouse drag. - """ - self._trackball.drag(np.array([x, y])) - - def on_mouse_release(self, x, y, button, modifiers): - """Record a mouse release. - """ - self.viewer_flags['mouse_pressed'] = False - - def on_mouse_scroll(self, x, y, dx, dy): - """Record a mouse scroll. - """ - if self.viewer_flags['use_perspective_cam']: - self._trackball.scroll(dy) - else: - spfc = 0.95 - spbc = 1.0 / 0.95 - sf = 1.0 - if dy > 0: - sf = spfc * dy - elif dy < 0: - sf = - spbc * dy - - c = self._camera_node.camera - xmag = max(c.xmag * sf, 1e-8) - ymag = max(c.ymag * sf, 1e-8 * c.ymag / c.xmag) - c.xmag = xmag - c.ymag = ymag - - def on_key_press(self, symbol, modifiers): - """Record a key press. - """ - # First, check for registered key callbacks - if symbol in self.registered_keys: - tup = self.registered_keys[symbol] - callback = None - args = [] - kwargs = {} - if not isinstance(tup, (list, tuple, np.ndarray)): - callback = tup - else: - callback = tup[0] - if len(tup) == 2: - args = tup[1] - if len(tup) == 3: - kwargs = tup[2] - callback(self, *args, **kwargs) - return - - # Otherwise, use default key functions - - # A causes the frame to rotate - self._message_text = None - if symbol == pyglet.window.key.A: - self.viewer_flags['rotate'] = not self.viewer_flags['rotate'] - if self.viewer_flags['rotate']: - self._message_text = 'Rotation On' - else: - self._message_text = 'Rotation Off' - - # C toggles backface culling - elif symbol == pyglet.window.key.C: - self.render_flags['cull_faces'] = ( - not self.render_flags['cull_faces'] - ) - if self.render_flags['cull_faces']: - self._message_text = 'Cull Faces On' - else: - self._message_text = 'Cull Faces Off' - - # F toggles face normals - elif symbol == pyglet.window.key.F: - self.viewer_flags['fullscreen'] = ( - not self.viewer_flags['fullscreen'] - ) - self.set_fullscreen(self.viewer_flags['fullscreen']) - self.activate() - if self.viewer_flags['fullscreen']: - self._message_text = 'Fullscreen On' - else: - self._message_text = 'Fullscreen Off' - - # S toggles shadows - elif symbol == pyglet.window.key.H and sys.platform != 'darwin': - self.render_flags['shadows'] = not self.render_flags['shadows'] - if self.render_flags['shadows']: - self._message_text = 'Shadows On' - else: - self._message_text = 'Shadows Off' - - elif symbol == pyglet.window.key.I: - if (self.viewer_flags['show_world_axis'] and not - self.viewer_flags['show_mesh_axes']): - self.viewer_flags['show_world_axis'] = False - self.viewer_flags['show_mesh_axes'] = True - self._set_axes(False, True) - self._message_text = 'Mesh Axes On' - elif (not self.viewer_flags['show_world_axis'] and - self.viewer_flags['show_mesh_axes']): - self.viewer_flags['show_world_axis'] = True - self.viewer_flags['show_mesh_axes'] = True - self._set_axes(True, True) - self._message_text = 'All Axes On' - elif (self.viewer_flags['show_world_axis'] and - self.viewer_flags['show_mesh_axes']): - self.viewer_flags['show_world_axis'] = False - self.viewer_flags['show_mesh_axes'] = False - self._set_axes(False, False) - self._message_text = 'All Axes Off' - else: - self.viewer_flags['show_world_axis'] = True - self.viewer_flags['show_mesh_axes'] = False - self._set_axes(True, False) - self._message_text = 'World Axis On' - - # L toggles the lighting mode - elif symbol == pyglet.window.key.L: - if self.viewer_flags['use_raymond_lighting']: - self.viewer_flags['use_raymond_lighting'] = False - self.viewer_flags['use_direct_lighting'] = True - self._message_text = 'Direct Lighting' - elif self.viewer_flags['use_direct_lighting']: - self.viewer_flags['use_raymond_lighting'] = False - self.viewer_flags['use_direct_lighting'] = False - self._message_text = 'Default Lighting' - else: - self.viewer_flags['use_raymond_lighting'] = True - self.viewer_flags['use_direct_lighting'] = False - self._message_text = 'Raymond Lighting' - - # M toggles face normals - elif symbol == pyglet.window.key.M: - self.render_flags['face_normals'] = ( - not self.render_flags['face_normals'] - ) - if self.render_flags['face_normals']: - self._message_text = 'Face Normals On' - else: - self._message_text = 'Face Normals Off' - - # N toggles vertex normals - elif symbol == pyglet.window.key.N: - self.render_flags['vertex_normals'] = ( - not self.render_flags['vertex_normals'] - ) - if self.render_flags['vertex_normals']: - self._message_text = 'Vert Normals On' - else: - self._message_text = 'Vert Normals Off' - - # O toggles orthographic camera mode - elif symbol == pyglet.window.key.O: - self.viewer_flags['use_perspective_cam'] = ( - not self.viewer_flags['use_perspective_cam'] - ) - if self.viewer_flags['use_perspective_cam']: - camera = self._default_persp_cam - self._message_text = 'Perspective View' - else: - camera = self._default_orth_cam - self._message_text = 'Orthographic View' - - cam_pose = self._camera_node.matrix.copy() - cam_node = Node(matrix=cam_pose, camera=camera) - self.scene.remove_node(self._camera_node) - self.scene.add_node(cam_node) - self.scene.main_camera_node = cam_node - self._camera_node = cam_node - - # Q quits the viewer - elif symbol == pyglet.window.key.Q: - self.on_close() - - # R starts recording frames - elif symbol == pyglet.window.key.R: - if self.viewer_flags['record']: - self.save_gif() - self.set_caption(self.viewer_flags['window_title']) - else: - self.set_caption( - '{} (RECORDING)'.format(self.viewer_flags['window_title']) - ) - self.viewer_flags['record'] = not self.viewer_flags['record'] - - # S saves the current frame as an image - elif symbol == pyglet.window.key.S: - self._save_image() - - # W toggles through wireframe modes - elif symbol == pyglet.window.key.W: - if self.render_flags['flip_wireframe']: - self.render_flags['flip_wireframe'] = False - self.render_flags['all_wireframe'] = True - self.render_flags['all_solid'] = False - self._message_text = 'All Wireframe' - elif self.render_flags['all_wireframe']: - self.render_flags['flip_wireframe'] = False - self.render_flags['all_wireframe'] = False - self.render_flags['all_solid'] = True - self._message_text = 'All Solid' - elif self.render_flags['all_solid']: - self.render_flags['flip_wireframe'] = False - self.render_flags['all_wireframe'] = False - self.render_flags['all_solid'] = False - self._message_text = 'Default Wireframe' - else: - self.render_flags['flip_wireframe'] = True - self.render_flags['all_wireframe'] = False - self.render_flags['all_solid'] = False - self._message_text = 'Flip Wireframe' - - # Z resets the camera viewpoint - elif symbol == pyglet.window.key.Z: - self._reset_view() - - if self._message_text is not None: - self._message_opac = 1.0 + self._ticks_till_fade - - @staticmethod - def _time_event(dt, self): - """The timer callback. - """ - # Don't run old dead events after we've already closed - if not self._is_active: - return - - if self.viewer_flags['record']: - self._record() - if (self.viewer_flags['rotate'] and not - self.viewer_flags['mouse_pressed']): - self._rotate() - - # Manage message opacity - if self._message_text is not None: - if self._message_opac > 1.0: - self._message_opac -= 1.0 - else: - self._message_opac *= 0.90 - if self._message_opac < 0.05: - self._message_opac = 1.0 + self._ticks_till_fade - self._message_text = None - - if self._should_close: - self.on_close() - else: - self.on_draw() - - def _reset_view(self): - """Reset the view to a good initial state. - - The view is initially along the positive x-axis at a - sufficient distance from the scene. - """ - scale = self.scene.scale - if scale == 0.0: - scale = DEFAULT_SCENE_SCALE - centroid = self.scene.centroid - - if self.viewer_flags['view_center'] is not None: - centroid = self.viewer_flags['view_center'] - - self._camera_node.matrix = self._default_camera_pose - self._trackball = Trackball( - self._default_camera_pose, self.viewport_size, scale, centroid - ) - - def _get_save_filename(self, file_exts): - file_types = { - 'png': ('png files', '*.png'), - 'jpg': ('jpeg files', '*.jpg'), - 'gif': ('gif files', '*.gif'), - 'all': ('all files', '*'), - } - filetypes = [file_types[x] for x in file_exts] - try: - root = Tk() - save_dir = self.viewer_flags['save_directory'] - if save_dir is None: - save_dir = os.getcwd() - filename = filedialog.asksaveasfilename( - initialdir=save_dir, title='Select file save location', - filetypes=filetypes - ) - except Exception: - return None - - root.destroy() - if filename == (): - return None - return filename - - def _save_image(self): - filename = self._get_save_filename(['png', 'jpg', 'gif', 'all']) - if filename is not None: - self.viewer_flags['save_directory'] = os.path.dirname(filename) - imageio.imwrite(filename, self._renderer.read_color_buf()) - - def _record(self): - """Save another frame for the GIF. - """ - data = self._renderer.read_color_buf() - if not np.all(data == 0.0): - self._saved_frames.append(data) - - def _rotate(self): - """Animate the scene by rotating the camera. - """ - az = (self.viewer_flags['rotate_rate'] / - self.viewer_flags['refresh_rate']) - self._trackball.rotate(az, self.viewer_flags['rotate_axis']) - - def _render(self): - """Render the scene into the framebuffer and flip. - """ - scene = self.scene - self._camera_node.matrix = self._trackball.pose.copy() - - # Set lighting - vli = self.viewer_flags['lighting_intensity'] - if self.viewer_flags['use_raymond_lighting']: - for n in self._raymond_lights: - n.light.intensity = vli / 3.0 - if not self.scene.has_node(n): - scene.add_node(n, parent_node=self._camera_node) - else: - self._direct_light.light.intensity = vli - for n in self._raymond_lights: - if self.scene.has_node(n): - self.scene.remove_node(n) - - if self.viewer_flags['use_direct_lighting']: - if not self.scene.has_node(self._direct_light): - scene.add_node( - self._direct_light, parent_node=self._camera_node - ) - elif self.scene.has_node(self._direct_light): - self.scene.remove_node(self._direct_light) - - flags = RenderFlags.NONE - if self.render_flags['flip_wireframe']: - flags |= RenderFlags.FLIP_WIREFRAME - elif self.render_flags['all_wireframe']: - flags |= RenderFlags.ALL_WIREFRAME - elif self.render_flags['all_solid']: - flags |= RenderFlags.ALL_SOLID - - if self.render_flags['shadows']: - flags |= RenderFlags.SHADOWS_DIRECTIONAL | RenderFlags.SHADOWS_SPOT - if self.render_flags['vertex_normals']: - flags |= RenderFlags.VERTEX_NORMALS - if self.render_flags['face_normals']: - flags |= RenderFlags.FACE_NORMALS - if not self.render_flags['cull_faces']: - flags |= RenderFlags.SKIP_CULL_FACES - - self._renderer.render(self.scene, flags) - - def _init_and_start_app(self): - # Try multiple configs starting with target OpenGL version - # and multisampling and removing these options if exception - # Note: multisampling not available on all hardware - from pyglet.gl import Config - confs = [Config(sample_buffers=1, samples=4, - depth_size=24, - double_buffer=True, - major_version=TARGET_OPEN_GL_MAJOR, - minor_version=TARGET_OPEN_GL_MINOR), - Config(depth_size=24, - double_buffer=True, - major_version=TARGET_OPEN_GL_MAJOR, - minor_version=TARGET_OPEN_GL_MINOR), - Config(sample_buffers=1, samples=4, - depth_size=24, - double_buffer=True, - major_version=MIN_OPEN_GL_MAJOR, - minor_version=MIN_OPEN_GL_MINOR), - Config(depth_size=24, - double_buffer=True, - major_version=MIN_OPEN_GL_MAJOR, - minor_version=MIN_OPEN_GL_MINOR)] - for conf in confs: - try: - super(Viewer, self).__init__(config=conf, resizable=True, - width=self._viewport_size[0], - height=self._viewport_size[1]) - break - except pyglet.window.NoSuchConfigException: - pass - - if not self.context: - raise ValueError('Unable to initialize an OpenGL 3+ context') - clock.schedule_interval( - Viewer._time_event, 1.0 / self.viewer_flags['refresh_rate'], self - ) - self.switch_to() - self.set_caption(self.viewer_flags['window_title']) - pyglet.app.run() - - def _compute_initial_camera_pose(self): - centroid = self.scene.centroid - if self.viewer_flags['view_center'] is not None: - centroid = self.viewer_flags['view_center'] - scale = self.scene.scale - if scale == 0.0: - scale = DEFAULT_SCENE_SCALE - - s2 = 1.0 / np.sqrt(2.0) - cp = np.eye(4) - cp[:3,:3] = np.array([ - [0.0, -s2, s2], - [1.0, 0.0, 0.0], - [0.0, s2, s2] - ]) - hfov = np.pi / 6.0 - dist = scale / (2.0 * np.tan(hfov)) - cp[:3,3] = dist * np.array([1.0, 0.0, 1.0]) + centroid - - return cp - - def _create_raymond_lights(self): - thetas = np.pi * np.array([1.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0]) - phis = np.pi * np.array([0.0, 2.0 / 3.0, 4.0 / 3.0]) - - nodes = [] - - for phi, theta in zip(phis, thetas): - xp = np.sin(theta) * np.cos(phi) - yp = np.sin(theta) * np.sin(phi) - zp = np.cos(theta) - - z = np.array([xp, yp, zp]) - z = z / np.linalg.norm(z) - x = np.array([-z[1], z[0], 0.0]) - if np.linalg.norm(x) == 0: - x = np.array([1.0, 0.0, 0.0]) - x = x / np.linalg.norm(x) - y = np.cross(z, x) - - matrix = np.eye(4) - matrix[:3,:3] = np.c_[x,y,z] - nodes.append(Node( - light=DirectionalLight(color=np.ones(3), intensity=1.0), - matrix=matrix - )) - - return nodes - - def _create_direct_light(self): - light = DirectionalLight(color=np.ones(3), intensity=1.0) - n = Node(light=light, matrix=np.eye(4)) - return n - - def _set_axes(self, world, mesh): - scale = self.scene.scale - if world: - if 'scene' not in self._axes: - n = Node(mesh=self._axis_mesh, scale=np.ones(3) * scale * 0.3) - self.scene.add_node(n) - self._axes['scene'] = n - else: - if 'scene' in self._axes: - self.scene.remove_node(self._axes['scene']) - self._axes.pop('scene') - - if mesh: - old_nodes = [] - existing_axes = set([self._axes[k] for k in self._axes]) - for node in self.scene.mesh_nodes: - if node not in existing_axes: - old_nodes.append(node) - - for node in old_nodes: - if node in self._axes: - continue - n = Node( - mesh=self._axis_mesh, - scale=np.ones(3) * node.mesh.scale * 0.5 - ) - self.scene.add_node(n, parent_node=node) - self._axes[node] = n - else: - to_remove = set() - for main_node in self._axes: - if main_node in self.scene.mesh_nodes: - self.scene.remove_node(self._axes[main_node]) - to_remove.add(main_node) - for main_node in to_remove: - self._axes.pop(main_node) - - def _remove_axes(self): - for main_node in self._axes: - axis_node = self._axes[main_node] - self.scene.remove_node(axis_node) - self._axes = {} - - def _location_to_x_y(self, location): - if location == TextAlign.CENTER: - return (self.viewport_size[0] / 2.0, self.viewport_size[1] / 2.0) - elif location == TextAlign.CENTER_LEFT: - return (TEXT_PADDING, self.viewport_size[1] / 2.0) - elif location == TextAlign.CENTER_RIGHT: - return (self.viewport_size[0] - TEXT_PADDING, - self.viewport_size[1] / 2.0) - elif location == TextAlign.BOTTOM_LEFT: - return (TEXT_PADDING, TEXT_PADDING) - elif location == TextAlign.BOTTOM_RIGHT: - return (self.viewport_size[0] - TEXT_PADDING, TEXT_PADDING) - elif location == TextAlign.BOTTOM_CENTER: - return (self.viewport_size[0] / 2.0, TEXT_PADDING) - elif location == TextAlign.TOP_LEFT: - return (TEXT_PADDING, self.viewport_size[1] - TEXT_PADDING) - elif location == TextAlign.TOP_RIGHT: - return (self.viewport_size[0] - TEXT_PADDING, - self.viewport_size[1] - TEXT_PADDING) - elif location == TextAlign.TOP_CENTER: - return (self.viewport_size[0] / 2.0, - self.viewport_size[1] - TEXT_PADDING) - - -__all__ = ['Viewer'] diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/hifigan/stft_loss.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/hifigan/stft_loss.py deleted file mode 100644 index e47447455341e5725d6f82ded66dc08b5d2b1cc5..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/hifigan/stft_loss.py +++ /dev/null @@ -1,136 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""STFT-based Loss modules.""" - -import torch -import torch.nn.functional as F - - -def stft(x, fft_size, hop_size, win_length, window): - """Perform STFT and convert to magnitude spectrogram. - Args: - x (Tensor): Input signal tensor (B, T). - fft_size (int): FFT size. - hop_size (int): Hop size. - win_length (int): Window length. - window (str): Window function type. - Returns: - Tensor: Magnitude spectrogram (B, #frames, fft_size // 2 + 1). - """ - x_stft = torch.stft(x, fft_size, hop_size, win_length, window) - real = x_stft[..., 0] - imag = x_stft[..., 1] - - # NOTE(kan-bayashi): clamp is needed to avoid nan or inf - return torch.sqrt(torch.clamp(real ** 2 + imag ** 2, min=1e-7)).transpose(2, 1) - - -class SpectralConvergengeLoss(torch.nn.Module): - """Spectral convergence loss module.""" - - def __init__(self): - """Initilize spectral convergence loss module.""" - super(SpectralConvergengeLoss, self).__init__() - - def forward(self, x_mag, y_mag): - """Calculate forward propagation. - Args: - x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - Returns: - Tensor: Spectral convergence loss value. - """ - return torch.norm(y_mag - x_mag, p="fro") / torch.norm(y_mag, p="fro") - - -class LogSTFTMagnitudeLoss(torch.nn.Module): - """Log STFT magnitude loss module.""" - - def __init__(self): - """Initilize los STFT magnitude loss module.""" - super(LogSTFTMagnitudeLoss, self).__init__() - - def forward(self, x_mag, y_mag): - """Calculate forward propagation. - Args: - x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - Returns: - Tensor: Log STFT magnitude loss value. - """ - return F.l1_loss(torch.log(y_mag), torch.log(x_mag)) - - -class STFTLoss(torch.nn.Module): - """STFT loss module.""" - - def __init__(self, fft_size=1024, shift_size=120, win_length=600, window="hann_window"): - """Initialize STFT loss module.""" - super(STFTLoss, self).__init__() - self.fft_size = fft_size - self.shift_size = shift_size - self.win_length = win_length - self.window = getattr(torch, window)(win_length) - self.spectral_convergenge_loss = SpectralConvergengeLoss() - self.log_stft_magnitude_loss = LogSTFTMagnitudeLoss() - - def forward(self, x, y): - """Calculate forward propagation. - Args: - x (Tensor): Predicted signal (B, T). - y (Tensor): Groundtruth signal (B, T). - Returns: - Tensor: Spectral convergence loss value. - Tensor: Log STFT magnitude loss value. - """ - x_mag = stft(x, self.fft_size, self.shift_size, self.win_length, self.window.to(x.get_device())) - y_mag = stft(y, self.fft_size, self.shift_size, self.win_length, self.window.to(x.get_device())) - sc_loss = self.spectral_convergenge_loss(x_mag, y_mag) - mag_loss = self.log_stft_magnitude_loss(x_mag, y_mag) - - return sc_loss, mag_loss - - -class MultiResolutionSTFTLoss(torch.nn.Module): - """Multi resolution STFT loss module.""" - - def __init__(self, - fft_sizes=[1024, 2048, 512], - hop_sizes=[120, 240, 50], - win_lengths=[600, 1200, 240], - window="hann_window"): - """Initialize Multi resolution STFT loss module. - Args: - fft_sizes (list): List of FFT sizes. - hop_sizes (list): List of hop sizes. - win_lengths (list): List of window lengths. - window (str): Window function type. - """ - super(MultiResolutionSTFTLoss, self).__init__() - assert len(fft_sizes) == len(hop_sizes) == len(win_lengths) - self.stft_losses = torch.nn.ModuleList() - for fs, ss, wl in zip(fft_sizes, hop_sizes, win_lengths): - self.stft_losses += [STFTLoss(fs, ss, wl, window)] - - def forward(self, x, y): - """Calculate forward propagation. - Args: - x (Tensor): Predicted signal (B, T). - y (Tensor): Groundtruth signal (B, T). - Returns: - Tensor: Multi resolution spectral convergence loss value. - Tensor: Multi resolution log STFT magnitude loss value. - """ - sc_loss = 0.0 - mag_loss = 0.0 - for f in self.stft_losses: - sc_l, mag_l = f(x, y) - sc_loss += sc_l - mag_loss += mag_l - sc_loss /= len(self.stft_losses) - mag_loss /= len(self.stft_losses) - - return sc_loss, mag_loss \ No newline at end of file diff --git a/spaces/AIWaves/Debate/src/agents/__init__.py b/spaces/AIWaves/Debate/src/agents/__init__.py deleted file mode 100644 index 69b468b54240b0a357eac1ba7573971cf65b412c..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Debate/src/agents/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .evolve import * -from .SOP import * -from .State import * -from .utils import * \ No newline at end of file diff --git a/spaces/Abhaykoul/HelpingAI-t2/README.md b/spaces/Abhaykoul/HelpingAI-t2/README.md deleted file mode 100644 index 1f10f08cb9cbceae47bae18ecbd2887d92104033..0000000000000000000000000000000000000000 --- a/spaces/Abhaykoul/HelpingAI-t2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: HelpingAI T2 -emoji: ⚡ -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/helper.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/helper.py deleted file mode 100644 index 5a9a93293ae3fb3ff11c8dcbd6fa2c68bd2f3bb7..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/helper.py +++ /dev/null @@ -1,77 +0,0 @@ -from __future__ import annotations - -import asyncio -import sys -from asyncio import AbstractEventLoop -from os import path -from typing import Dict, List -import browser_cookie3 - -# Change event loop policy on windows -if sys.platform == 'win32': - if isinstance( - asyncio.get_event_loop_policy(), asyncio.WindowsProactorEventLoopPolicy - ): - asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) - -# Local Cookie Storage -_cookies: Dict[str, Dict[str, str]] = {} - -# If event loop is already running, handle nested event loops -# If "nest_asyncio" is installed, patch the event loop. -def get_event_loop() -> AbstractEventLoop: - try: - asyncio.get_running_loop() - except RuntimeError: - try: - return asyncio.get_event_loop() - except RuntimeError: - asyncio.set_event_loop(asyncio.new_event_loop()) - return asyncio.get_event_loop() - try: - event_loop = asyncio.get_event_loop() - if not hasattr(event_loop.__class__, "_nest_patched"): - import nest_asyncio - nest_asyncio.apply(event_loop) - return event_loop - except ImportError: - raise RuntimeError( - 'Use "create_async" instead of "create" function in a running event loop. Or install the "nest_asyncio" package.' - ) - - -# Load cookies for a domain from all supported browsers. -# Cache the results in the "_cookies" variable. -def get_cookies(cookie_domain: str) -> Dict[str, str]: - if cookie_domain not in _cookies: - _cookies[cookie_domain] = {} - try: - for cookie in browser_cookie3.load(cookie_domain): - _cookies[cookie_domain][cookie.name] = cookie.value - except: - pass - return _cookies[cookie_domain] - - -def format_prompt(messages: List[Dict[str, str]], add_special_tokens=False) -> str: - if add_special_tokens or len(messages) > 1: - formatted = "\n".join( - [ - "%s: %s" % ((message["role"]).capitalize(), message["content"]) - for message in messages - ] - ) - return f"{formatted}\nAssistant:" - else: - return messages[0]["content"] - - -def get_browser(user_data_dir: str = None): - from undetected_chromedriver import Chrome - from platformdirs import user_config_dir - - if not user_data_dir: - user_data_dir = user_config_dir("g4f") - user_data_dir = path.join(user_data_dir, "Default") - - return Chrome(user_data_dir=user_data_dir) \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/rotate-plugin.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/rotate-plugin.d.ts deleted file mode 100644 index 609db44d262e411679d95ad1db5aa29db93ef05d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/rotate-plugin.d.ts +++ /dev/null @@ -1,9 +0,0 @@ -import Rotate from './rotate'; - -export default class RotatePlugin extends Phaser.Plugins.BasePlugin { - add( - gameObject: Phaser.GameObjects.GameObject, - config?: Rotate.IConfig - ): Rotate; - -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/Bejeweled.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/Bejeweled.js deleted file mode 100644 index 84758f7f7909b7d70c4c655d0379ae413df511c4..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/Bejeweled.js +++ /dev/null @@ -1,82 +0,0 @@ -import ComponentBase from '../../plugins/utils/componentbase/ComponentBase.js'; -import MainState from './states/MainState.js'; -import Board from './board/Board.js'; -import Input from './input/Input.js'; -import WaitEvents from '../../plugins/waitevents.js'; -import InputMethods from './methods/InputMethods.js'; -import BoardMethods from './methods/BoardMethods.js'; -import WaitEventMethods from './methods/WaitEventMethods.js'; -import DataManagerMethods from '../../plugins/utils/data/DataManagerMethods.js'; - - -const GetValue = Phaser.Utils.Objects.GetValue; - -class Bejeweled extends ComponentBase { - constructor(scene, config) { - super(scene, config); - // this.scene - - var rexBoardKey = GetValue(config, 'rexBoard', 'rexBoard'); - this.rexBoard = scene[rexBoardKey]; - - this.board = new Board(this, config); - - var defaultInput = GetValue(config, 'input', true); - if (defaultInput) { - this.input = new Input(this, config); - } else { - this.input = undefined; - } - - this.waitEvents = new WaitEvents(); - - this.mainState = new MainState(this, config); - - this.boot(); - } - - boot() { - this.scene.events.once('shutdown', this.destroy, this); - } - - shutdown(fromScene) { - super.shutdown(fromScene); - - if (this.input) { - this.input.destroy(); - } - this.board.destroy(); - this.mainState.destroy(); - this.waitEvents.destroy(); - - this.destroyDataManager(); - - this.board = undefined; - this.mainState = undefined; - this.input = undefined; - this.waitEvents = undefined; - - return this; - } - - destroy(fromScene) { - this.emit('destroy'); - super.destroy(fromScene); - return this; - } - - start() { - this.mainState.goto('START'); - return this; - } -} - -Object.assign( - Bejeweled.prototype, - InputMethods, - BoardMethods, - WaitEventMethods, - DataManagerMethods -); - -export default Bejeweled; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/InputText.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/InputText.d.ts deleted file mode 100644 index 0312cf859572c6636998ad00a7a48a04d6505bb3..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/InputText.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import InputText from '../../../plugins/inputtext'; -export default InputText; \ No newline at end of file diff --git a/spaces/AkitoP/umamusume_bert_vits2/text/symbols.py b/spaces/AkitoP/umamusume_bert_vits2/text/symbols.py deleted file mode 100644 index 6151147fd5c61037d5c11797bb2f585694c598a6..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/text/symbols.py +++ /dev/null @@ -1,188 +0,0 @@ -punctuation = ["!", "?", "…", ",", ".", "'", "-"] -pu_symbols = punctuation + ["SP", "UNK"] -pad = "_" - -# chinese -zh_symbols = [ - "E", - "En", - "a", - "ai", - "an", - "ang", - "ao", - "b", - "c", - "ch", - "d", - "e", - "ei", - "en", - "eng", - "er", - "f", - "g", - "h", - "i", - "i0", - "ia", - "ian", - "iang", - "iao", - "ie", - "in", - "ing", - "iong", - "ir", - "iu", - "j", - "k", - "l", - "m", - "n", - "o", - "ong", - "ou", - "p", - "q", - "r", - "s", - "sh", - "t", - "u", - "ua", - "uai", - "uan", - "uang", - "ui", - "un", - "uo", - "v", - "van", - "ve", - "vn", - "w", - "x", - "y", - "z", - "zh", - "AA", - "EE", - "OO", -] -num_zh_tones = 6 - -# japanese -ja_symbols = [ - "N", - "a", - "a:", - "b", - "by", - "ch", - "d", - "dy", - "e", - "e:", - "f", - "g", - "gy", - "h", - "hy", - "i", - "i:", - "j", - "k", - "ky", - "m", - "my", - "n", - "ny", - "o", - "o:", - "p", - "py", - "q", - "r", - "ry", - "s", - "sh", - "t", - "ts", - "ty", - "u", - "u:", - "w", - "y", - "z", - "zy", - # ":" -] -num_ja_tones = 1 - -# English -en_symbols = [ - "aa", - "ae", - "ah", - "ao", - "aw", - "ay", - "b", - "ch", - "d", - "dh", - "eh", - "er", - "ey", - "f", - "g", - "hh", - "ih", - "iy", - "jh", - "k", - "l", - "m", - "n", - "ng", - "ow", - "oy", - "p", - "r", - "s", - "sh", - "t", - "th", - "uh", - "uw", - "V", - "w", - "y", - "z", - "zh", -] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = {"ZH": 0, "JP": 1, "EN": 2} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - "ZH": 0, - "JP": num_zh_tones, - "EN": num_zh_tones + num_ja_tones, -} - -if __name__ == "__main__": - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a & b)) diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/NLP/UTF8.pm b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/NLP/UTF8.pm deleted file mode 100644 index b28cb4dede3b84f45aeade2e24f240e3a39e7cc1..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/NLP/UTF8.pm +++ /dev/null @@ -1,1404 +0,0 @@ -################################################################ -# # -# UTF8 # -# # -################################################################ - -package NLP::UTF8; - -use NLP::utilities; -$util = NLP::utilities; - -%empty_ht = (); - -sub new { - local($caller) = @_; - - my $object = {}; - my $class = ref( $caller ) || $caller; - bless($object, $class); - return $object; -} - -sub unicode_string2string { -# input: string that might contain unicode sequences such as "U+0627" -# output: string in pure utf-8 - local($caller,$s) = @_; - - my $pre; - my $unicode; - my $post; - my $r1; - my $r2; - my $r3; - - ($pre,$unicode,$post) = ($s =~ /^(.*)(?:U\+|\\u)([0-9A-Fa-f][0-9A-Fa-f][0-9A-Fa-f][0-9A-Fa-f])(.*)$/); - return $s unless defined($post); - $r1 = $caller->unicode_string2string($pre); - $r2 = $caller->unicode_hex_string2string($unicode); - $r3 = $caller->unicode_string2string($post); - $result = $r1 . $r2 . $r3; - return $result; -} - -sub unicode_hex_string2string { -# input: "0627" (interpreted as hex code) -# output: utf-8 string for Arabic letter alef - local($caller,$unicode) = @_; - return "" unless defined($unicode); - my $d = hex($unicode); - return $caller->unicode2string($d); -} - -sub unicode2string { -# input: non-neg integer, e.g. 0x627 -# output: utf-8 string for Arabic letter alef - local($caller,$d) = @_; - return "" unless defined($d) && $d >= 0; - return sprintf("%c",$d) if $d <= 0x7F; - - my $lastbyte1 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c",$d | 0xC0, $lastbyte1) if $d <= 0x1F; - - my $lastbyte2 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c",$d | 0xE0, $lastbyte2, $lastbyte1) if $d <= 0xF; - - my $lastbyte3 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c",$d | 0xF0, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x7; - - my $lastbyte4 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c%c",$d | 0xF8, $lastbyte4, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x3; - - my $lastbyte5 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c%c%c",$d | 0xFC, $lastbyte5, $lastbyte4, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x1; - return ""; # bad input -} - -sub html2utf8 { - local($caller, $string) = @_; - - return $string unless $string =~ /\&\#\d{3,5};/; - - my $prev = ""; - my $s = $string; - while ($s ne $prev) { - $prev = $s; - ($pre,$d,$post) = ($s =~ /^(.*)\&\#(\d+);(.*)$/); - if (defined($d) && ((($d >= 160) && ($d <= 255)) - || (($d >= 1500) && ($d <= 1699)) - || (($d >= 19968) && ($d <= 40879)))) { - $html_code = "\&\#" . $d . ";"; - $utf8_code = $caller->unicode2string($d); - $s =~ s/$html_code/$utf8_code/; - } - } - return $s; -} - -sub xhtml2utf8 { - local($caller, $string) = @_; - - return $string unless $string =~ /\&\#x[0-9a-fA-F]{2,5};/; - - my $prev = ""; - my $s = $string; - while ($s ne $prev) { - $prev = $s; - if (($pre, $html_code, $x, $post) = ($s =~ /^(.*)(\&\#x([0-9a-fA-F]{2,5});)(.*)$/)) { - $utf8_code = $caller->unicode_hex_string2string($x); - $s =~ s/$html_code/$utf8_code/; - } - } - return $s; -} - -sub utf8_marker { - return sprintf("%c%c%c\n", 0xEF, 0xBB, 0xBF); -} - -sub enforcer { -# input: string that might not conform to utf-8 -# output: string in pure utf-8, with a few "smart replacements" and possibly "?" - local($caller,$s,$no_repair) = @_; - - my $ascii; - my $utf8; - my $rest; - - return $s if $s =~ /^[\x00-\x7F]*$/; - - $no_repair = 0 unless defined($no_repair); - $orig = $s; - $result = ""; - - while ($s ne "") { - ($ascii,$rest) = ($s =~ /^([\x00-\x7F]+)(.*)$/); - if (defined($ascii)) { - $result .= $ascii; - $s = $rest; - next; - } - ($utf8,$rest) = ($s =~ /^([\xC0-\xDF][\x80-\xBF])(.*)$/); - ($utf8,$rest) = ($s =~ /^([\xE0-\xEF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - ($utf8,$rest) = ($s =~ /^([\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - ($utf8,$rest) = ($s =~ /^([\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - if (defined($utf8)) { - $result .= $utf8; - $s = $rest; - next; - } - ($c,$rest) = ($s =~ /^(.)(.*)$/); - if (defined($c)) { - if ($no_repair) { $result .= "?"; } - elsif ($c =~ /\x85/) { $result .= "..."; } - elsif ($c =~ /\x91/) { $result .= "'"; } - elsif ($c =~ /\x92/) { $result .= "'"; } - elsif ($c =~ /\x93/) { $result .= $caller->unicode2string(0x201C); } - elsif ($c =~ /\x94/) { $result .= $caller->unicode2string(0x201D); } - elsif ($c =~ /[\xC0-\xFF]/) { - $c2 = $c; - $c2 =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c2"; - } else { - $result .= "?"; - } - $s = $rest; - next; - } - $s = ""; - } - $result .= "\n" if ($orig =~ /\n$/) && ! ($result =~ /\n$/); - return $result; -} - -sub split_into_utf8_characters { -# input: utf8 string -# output: list of sub-strings, each representing a utf8 character - local($caller,$string,$group_control, *ht) = @_; - - @characters = (); - $end_of_token_p_string = ""; - $skipped_bytes = ""; - $group_control = "" unless defined($group_control); - $group_ascii_numbers = ($group_control =~ /ASCII numbers/); - $group_ascii_spaces = ($group_control =~ /ASCII spaces/); - $group_ascii_punct = ($group_control =~ /ASCII punct/); - $group_ascii_chars = ($group_control =~ /ASCII chars/); - $group_xml_chars = ($group_control =~ /XML chars/); - $group_xml_tags = ($group_control =~ /XML tags/); - $return_only_chars = ($group_control =~ /return only chars/); - $return_trailing_whitespaces = ($group_control =~ /return trailing whitespaces/); - if ($group_control =~ /ASCII all/) { - $group_ascii_numbers = 1; - $group_ascii_spaces = 1; - $group_ascii_chars = 1; - $group_ascii_punct = 1; - } - if ($group_control =~ /(XML chars and tags|XML tags and chars)/) { - $group_xml_chars = 1; - $group_xml_tags = 1; - } - $orig_string = $string; - $string .= " "; - while ($string =~ /\S/) { - # one-character UTF-8 = ASCII - if ($string =~ /^[\x00-\x7F]/) { - if ($group_xml_chars - && (($dec_unicode, $rest) = ($string =~ /^&#(\d+);(.*)$/s)) - && ($utf8_char = $caller->unicode2string($dec_unicode))) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_chars - && (($hex_unicode, $rest) = ($string =~ /^&#x([0-9a-f]{1,6});(.*)$/is)) - && ($utf8_char = $caller->unicode_hex_string2string($hex_unicode))) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_chars - && (($html_entity_name, $rest) = ($string =~ /^&([a-z]{1,6});(.*)$/is)) - && ($dec_unicode = $ht{HTML_ENTITY_NAME_TO_DECUNICODE}->{$html_entity_name}) - && ($utf8_char = $caller->unicode2string($dec_unicode)) - ) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_tags - && (($tag, $rest) = ($string =~ /^(<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>)(.*)$/s))) { - push(@characters, $tag); - $string = $rest; - } elsif ($group_ascii_numbers && ($string =~ /^[12]\d\d\d\.[01]?\d.[0-3]?\d([^0-9].*)?$/)) { - ($date) = ($string =~ /^(\d\d\d\d\.\d?\d.\d?\d)([^0-9].*)?$/); - push(@characters,$date); - $string = substr($string, length($date)); - } elsif ($group_ascii_numbers && ($string =~ /^\d/)) { - ($number) = ($string =~ /^(\d+(,\d\d\d)*(\.\d+)?)/); - push(@characters,$number); - $string = substr($string, length($number)); - } elsif ($group_ascii_spaces && ($string =~ /^(\s+)/)) { - ($space) = ($string =~ /^(\s+)/); - $string = substr($string, length($space)); - } elsif ($group_ascii_punct && (($punct_seq) = ($string =~ /^(-+|\.+|[:,%()"])/))) { - push(@characters,$punct_seq); - $string = substr($string, length($punct_seq)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^(\$[A-Z]*|[A-Z]{1,3}\$)/))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($abbrev) = ($string =~ /^((?:Jan|Feb|Febr|Mar|Apr|Jun|Jul|Aug|Sep|Sept|Oct|Nov|Dec|Mr|Mrs|Dr|a.m|p.m)\.)/))) { - push(@characters,$abbrev); - $string = substr($string, length($abbrev)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^(second|minute|hour|day|week|month|year|inch|foot|yard|meter|kilometer|mile)-(?:long|old)/i))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^(zero|one|two|three|four|five|six|seven|eight|nine|ten|eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen|eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty|ninety|hundred|thousand|million|billion|trillion)-/i))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^([a-zA-Z]+)(?:[ ,;%?|()"]|'s |' |\. |\d+[:hms][0-9 ])/))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && ($string =~ /^([\x21-\x27\x2A-\x7E]+)/)) { # exclude () - ($ascii) = ($string =~ /^([\x21-\x27\x2A-\x7E]+)/); # ASCII black-characters - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } elsif ($group_ascii_chars && ($string =~ /^([\x21-\x7E]+)/)) { - ($ascii) = ($string =~ /^([\x21-\x7E]+)/); # ASCII black-characters - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } elsif ($group_ascii_chars && ($string =~ /^([\x00-\x7F]+)/)) { - ($ascii) = ($string =~ /^([\x00-\x7F]+)/); - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } else { - push(@characters,substr($string, 0, 1)); - $string = substr($string, 1); - } - - # two-character UTF-8 - } elsif ($string =~ /^[\xC0-\xDF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 2)); - $string = substr($string, 2); - - # three-character UTF-8 - } elsif ($string =~ /^[\xE0-\xEF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 3)); - $string = substr($string, 3); - - # four-character UTF-8 - } elsif ($string =~ /^[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 4)); - $string = substr($string, 4); - - # five-character UTF-8 - } elsif ($string =~ /^[\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 5)); - $string = substr($string, 5); - - # six-character UTF-8 - } elsif ($string =~ /^[\xFC-\xFD][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 6)); - $string = substr($string, 6); - - # not a UTF-8 character - } else { - $skipped_bytes .= substr($string, 0, 1); - $string = substr($string, 1); - } - - $end_of_token_p_string .= ($string =~ /^\S/) ? "0" : "1" - if $#characters >= length($end_of_token_p_string); - } - $string =~ s/ $//; # remove previously added space, but keep original spaces - if ($return_trailing_whitespaces) { - while ($string =~ /^[ \t]/) { - push(@characters,substr($string, 0, 1)); - $string = substr($string, 1); - } - push(@characters, "\n") if $orig_string =~ /\n$/; - } - return ($return_only_chars) ? @characters : ($skipped_bytes, $end_of_token_p_string, @characters); -} - -sub max_substring_info { - local($caller,$s1,$s2,$info_type) = @_; - - ($skipped_bytes1, $end_of_token_p_string1, @char_list1) = $caller->split_into_utf8_characters($s1, "", *empty_ht); - ($skipped_bytes2, $end_of_token_p_string2, @char_list2) = $caller->split_into_utf8_characters($s2, "", *empty_ht); - return 0 if $skipped_bytes1 || $skipped_bytes2; - - $best_substring_start1 = 0; - $best_substring_start2 = 0; - $best_substring_length = 0; - - foreach $start_pos2 ((0 .. $#char_list2)) { - last if $start_pos2 + $best_substring_length > $#char_list2; - foreach $start_pos1 ((0 .. $#char_list1)) { - last if $start_pos1 + $best_substring_length > $#char_list1; - $matching_length = 0; - while (($start_pos1 + $matching_length <= $#char_list1) - && ($start_pos2 + $matching_length <= $#char_list2) - && ($char_list1[$start_pos1+$matching_length] eq $char_list2[$start_pos2+$matching_length])) { - $matching_length++; - } - if ($matching_length > $best_substring_length) { - $best_substring_length = $matching_length; - $best_substring_start1 = $start_pos1; - $best_substring_start2 = $start_pos2; - } - } - } - if ($info_type =~ /^max-ratio1$/) { - $length1 = $#char_list1 + 1; - return ($length1 > 0) ? ($best_substring_length / $length1) : 0; - } elsif ($info_type =~ /^max-ratio2$/) { - $length2 = $#char_list2 + 1; - return ($length2 > 0) ? ($best_substring_length / $length2) : 0; - } elsif ($info_type =~ /^substring$/) { - return join("", @char_list1[$best_substring_start1 .. $best_substring_start1+$best_substring_length-1]); - } else { - $length1 = $#char_list1 + 1; - $length2 = $#char_list2 + 1; - $info = "s1=$s1;s2=$s2"; - $info .= ";best_substring_length=$best_substring_length"; - $info .= ";best_substring_start1=$best_substring_start1"; - $info .= ";best_substring_start2=$best_substring_start2"; - $info .= ";length1=$length1"; - $info .= ";length2=$length2"; - return $info; - } -} - -sub n_shared_chars_at_start { - local($caller,$s1,$s2) = @_; - - my $n = 0; - while (($s1 ne "") && ($s2 ne "")) { - ($c1, $rest1) = ($s1 =~ /^(.[\x80-\xBF]*)(.*)$/); - ($c2, $rest2) = ($s2 =~ /^(.[\x80-\xBF]*)(.*)$/); - if ($c1 eq $c2) { - $n++; - $s1 = $rest1; - $s2 = $rest2; - } else { - last; - } - } - return $n; -} - -sub char_length { - local($caller,$string,$byte_offset) = @_; - - my $char = ($byte_offset) ? substr($string, $byte_offset) : $string; - return 1 if $char =~ /^[\x00-\x7F]/; - return 2 if $char =~ /^[\xC0-\xDF]/; - return 3 if $char =~ /^[\xE0-\xEF]/; - return 4 if $char =~ /^[\xF0-\xF7]/; - return 5 if $char =~ /^[\xF8-\xFB]/; - return 6 if $char =~ /^[\xFC-\xFD]/; - return 0; -} - -sub length_in_utf8_chars { - local($caller,$s) = @_; - - $s =~ s/[\x80-\xBF]//g; - $s =~ s/[\x00-\x7F\xC0-\xFF]/c/g; - return length($s); -} - -sub byte_length_of_n_chars { - local($caller,$char_length,$string,$byte_offset,$undef_return_value) = @_; - - $byte_offset = 0 unless defined($byte_offset); - $undef_return_value = -1 unless defined($undef_return_value); - my $result = 0; - my $len; - foreach $i ((1 .. $char_length)) { - $len = $caller->char_length($string,($byte_offset+$result)); - return $undef_return_value unless $len; - $result += $len; - } - return $result; -} - -sub replace_non_ASCII_bytes { - local($caller,$string,$replacement) = @_; - - $replacement = "HEX" unless defined($replacement); - if ($replacement =~ /^(Unicode|U\+4|\\u|HEX)$/) { - $new_string = ""; - while (($pre,$utf8_char, $post) = ($string =~ /^([\x09\x0A\x20-\x7E]*)([\x00-\x08\x0B-\x1F\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]|[\xF8-\xFF][\x80-\xBF]+|[\x80-\xBF])(.*)$/s)) { - if ($replacement =~ /Unicode/) { - $new_string .= $pre . "utf8_to_unicode($utf8_char)) . ">"; - } elsif ($replacement =~ /\\u/) { - $new_string .= $pre . "\\u" . (uc sprintf("%04x", $caller->utf8_to_unicode($utf8_char))); - } elsif ($replacement =~ /U\+4/) { - $new_string .= $pre . "utf8_to_4hex_unicode($utf8_char)) . ">"; - } else { - $new_string .= $pre . "utf8_to_hex($utf8_char) . ">"; - } - $string = $post; - } - $new_string .= $string; - } else { - $new_string = $string; - $new_string =~ s/[\x80-\xFF]/$replacement/g; - } - return $new_string; -} - -sub valid_utf8_string_p { - local($caller,$string) = @_; - - return $string =~ /^(?:[\x09\x0A\x20-\x7E]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])*$/; -} - -sub valid_utf8_string_incl_ascii_control_p { - local($caller,$string) = @_; - - return $string =~ /^(?:[\x00-\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])*$/; -} - -sub utf8_to_hex { - local($caller,$s) = @_; - - $hex = ""; - foreach $i ((0 .. length($s)-1)) { - $hex .= uc sprintf("%2.2x",ord(substr($s, $i, 1))); - } - return $hex; -} - -sub hex_to_utf8 { - local($caller,$s) = @_; - # surface string \xE2\x80\xBA to UTF8 - - my $utf8 = ""; - while (($hex, $rest) = ($s =~ /^(?:\\x)?([0-9A-Fa-f]{2,2})(.*)$/)) { - $utf8 .= sprintf("%c", hex($hex)); - $s = $rest; - } - return $utf8; -} - -sub utf8_to_4hex_unicode { - local($caller,$s) = @_; - - return sprintf("%4.4x", $caller->utf8_to_unicode($s)); -} - -sub utf8_to_unicode { - local($caller,$s) = @_; - - $unicode = 0; - foreach $i ((0 .. length($s)-1)) { - $c = substr($s, $i, 1); - if ($c =~ /^[\x80-\xBF]$/) { - $unicode = $unicode * 64 + (ord($c) & 0x3F); - } elsif ($c =~ /^[\xC0-\xDF]$/) { - $unicode = $unicode * 32 + (ord($c) & 0x1F); - } elsif ($c =~ /^[\xE0-\xEF]$/) { - $unicode = $unicode * 16 + (ord($c) & 0x0F); - } elsif ($c =~ /^[\xF0-\xF7]$/) { - $unicode = $unicode * 8 + (ord($c) & 0x07); - } elsif ($c =~ /^[\xF8-\xFB]$/) { - $unicode = $unicode * 4 + (ord($c) & 0x03); - } elsif ($c =~ /^[\xFC-\xFD]$/) { - $unicode = $unicode * 2 + (ord($c) & 0x01); - } - } - return $unicode; -} - -sub charhex { - local($caller,$string) = @_; - - my $result = ""; - while ($string ne "") { - $char = substr($string, 0, 1); - $string = substr($string, 1); - if ($char =~ /^[ -~]$/) { - $result .= $char; - } else { - $hex = sprintf("%2.2x",ord($char)); - $hex =~ tr/a-f/A-F/; - $result .= ""; - } - } - return $result; -} - -sub windows1252_to_utf8 { - local($caller,$s, $norm_to_ascii_p, $preserve_potential_utf8s_p) = @_; - - return $s if $s =~ /^[\x00-\x7F]*$/; # all ASCII - - $norm_to_ascii_p = 1 unless defined($norm_to_ascii_p); - $preserve_potential_utf8s_p = 1 unless defined($preserve_potential_utf8s_p); - my $result = ""; - my $c = ""; - while ($s ne "") { - $n_bytes = 1; - if ($s =~ /^[\x00-\x7F]/) { - $result .= substr($s, 0, 1); # ASCII - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xC0-\xDF][\x80-\xBF]/)) { - $result .= substr($s, 0, 2); # valid 2-byte UTF8 - $n_bytes = 2; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xE0-\xEF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 3); # valid 3-byte UTF8 - $n_bytes = 3; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 4); # valid 4-byte UTF8 - $n_bytes = 4; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 5); # valid 5-byte UTF8 - $n_bytes = 5; - } elsif ($s =~ /^[\xA0-\xBF]/) { - $c = substr($s, 0, 1); - $result .= "\xC2$c"; - } elsif ($s =~ /^[\xC0-\xFF]/) { - $c = substr($s, 0, 1); - $c =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c"; - } elsif ($s =~ /^\x80/) { - $result .= "\xE2\x82\xAC"; # Euro sign - } elsif ($s =~ /^\x82/) { - $result .= "\xE2\x80\x9A"; # single low quotation mark - } elsif ($s =~ /^\x83/) { - $result .= "\xC6\x92"; # Latin small letter f with hook - } elsif ($s =~ /^\x84/) { - $result .= "\xE2\x80\x9E"; # double low quotation mark - } elsif ($s =~ /^\x85/) { - $result .= ($norm_to_ascii_p) ? "..." : "\xE2\x80\xA6"; # horizontal ellipsis (three dots) - } elsif ($s =~ /^\x86/) { - $result .= "\xE2\x80\xA0"; # dagger - } elsif ($s =~ /^\x87/) { - $result .= "\xE2\x80\xA1"; # double dagger - } elsif ($s =~ /^\x88/) { - $result .= "\xCB\x86"; # circumflex - } elsif ($s =~ /^\x89/) { - $result .= "\xE2\x80\xB0"; # per mille sign - } elsif ($s =~ /^\x8A/) { - $result .= "\xC5\xA0"; # Latin capital letter S with caron - } elsif ($s =~ /^\x8B/) { - $result .= "\xE2\x80\xB9"; # single left-pointing angle quotation mark - } elsif ($s =~ /^\x8C/) { - $result .= "\xC5\x92"; # OE ligature - } elsif ($s =~ /^\x8E/) { - $result .= "\xC5\xBD"; # Latin capital letter Z with caron - } elsif ($s =~ /^\x91/) { - $result .= ($norm_to_ascii_p) ? "`" : "\xE2\x80\x98"; # left single quotation mark - } elsif ($s =~ /^\x92/) { - $result .= ($norm_to_ascii_p) ? "'" : "\xE2\x80\x99"; # right single quotation mark - } elsif ($s =~ /^\x93/) { - $result .= "\xE2\x80\x9C"; # left double quotation mark - } elsif ($s =~ /^\x94/) { - $result .= "\xE2\x80\x9D"; # right double quotation mark - } elsif ($s =~ /^\x95/) { - $result .= "\xE2\x80\xA2"; # bullet - } elsif ($s =~ /^\x96/) { - $result .= ($norm_to_ascii_p) ? "-" : "\xE2\x80\x93"; # n dash - } elsif ($s =~ /^\x97/) { - $result .= ($norm_to_ascii_p) ? "-" : "\xE2\x80\x94"; # m dash - } elsif ($s =~ /^\x98/) { - $result .= ($norm_to_ascii_p) ? "~" : "\xCB\x9C"; # small tilde - } elsif ($s =~ /^\x99/) { - $result .= "\xE2\x84\xA2"; # trade mark sign - } elsif ($s =~ /^\x9A/) { - $result .= "\xC5\xA1"; # Latin small letter s with caron - } elsif ($s =~ /^\x9B/) { - $result .= "\xE2\x80\xBA"; # single right-pointing angle quotation mark - } elsif ($s =~ /^\x9C/) { - $result .= "\xC5\x93"; # oe ligature - } elsif ($s =~ /^\x9E/) { - $result .= "\xC5\xBE"; # Latin small letter z with caron - } elsif ($s =~ /^\x9F/) { - $result .= "\xC5\xB8"; # Latin capital letter Y with diaeresis - } else { - $result .= "?"; - } - $s = substr($s, $n_bytes); - } - return $result; -} - -sub delete_weird_stuff { - local($caller, $s) = @_; - - # delete control chacters (except tab and linefeed), zero-width characters, byte order mark, - # directional marks, join marks, variation selectors, Arabic tatweel - $s =~ s/([\x00-\x08\x0B-\x1F\x7F]|\xC2[\x80-\x9F]|\xD9\x80|\xE2\x80[\x8B-\x8F]|\xEF\xB8[\x80-\x8F]|\xEF\xBB\xBF|\xF3\xA0[\x84-\x87][\x80-\xBF])//g; - return $s; -} - -sub number_of_utf8_character { - local($caller, $s) = @_; - - $s2 = $s; - $s2 =~ s/[\x80-\xBF]//g; - return length($s2); -} - -sub cap_letter_reg_exp { - # includes A-Z and other Latin-based capital letters with accents, umlauts and other decorations etc. - return "[A-Z]|\xC3[\x80-\x96\x98-\x9E]|\xC4[\x80\x82\x84\x86\x88\x8A\x8C\x8E\x90\x94\x964\x98\x9A\x9C\x9E\xA0\xA2\xA4\xA6\xA8\xAA\xAC\xAE\xB0\xB2\xB4\xB6\xB9\xBB\xBD\xBF]|\xC5[\x81\x83\x85\x87\x8A\x8C\x8E\x90\x92\x96\x98\x9A\x9C\x9E\xA0\xA2\xA4\xA6\xA8\xAA\xAC\xB0\xB2\xB4\xB6\xB8\xB9\xBB\xBD]"; -} - -sub regex_extended_case_expansion { - local($caller, $s) = @_; - - if ($s =~ /\xC3/) { - $s =~ s/\xC3\xA0/\xC3\[\x80\xA0\]/g; - $s =~ s/\xC3\xA1/\xC3\[\x81\xA1\]/g; - $s =~ s/\xC3\xA2/\xC3\[\x82\xA2\]/g; - $s =~ s/\xC3\xA3/\xC3\[\x83\xA3\]/g; - $s =~ s/\xC3\xA4/\xC3\[\x84\xA4\]/g; - $s =~ s/\xC3\xA5/\xC3\[\x85\xA5\]/g; - $s =~ s/\xC3\xA6/\xC3\[\x86\xA6\]/g; - $s =~ s/\xC3\xA7/\xC3\[\x87\xA7\]/g; - $s =~ s/\xC3\xA8/\xC3\[\x88\xA8\]/g; - $s =~ s/\xC3\xA9/\xC3\[\x89\xA9\]/g; - $s =~ s/\xC3\xAA/\xC3\[\x8A\xAA\]/g; - $s =~ s/\xC3\xAB/\xC3\[\x8B\xAB\]/g; - $s =~ s/\xC3\xAC/\xC3\[\x8C\xAC\]/g; - $s =~ s/\xC3\xAD/\xC3\[\x8D\xAD\]/g; - $s =~ s/\xC3\xAE/\xC3\[\x8E\xAE\]/g; - $s =~ s/\xC3\xAF/\xC3\[\x8F\xAF\]/g; - $s =~ s/\xC3\xB0/\xC3\[\x90\xB0\]/g; - $s =~ s/\xC3\xB1/\xC3\[\x91\xB1\]/g; - $s =~ s/\xC3\xB2/\xC3\[\x92\xB2\]/g; - $s =~ s/\xC3\xB3/\xC3\[\x93\xB3\]/g; - $s =~ s/\xC3\xB4/\xC3\[\x94\xB4\]/g; - $s =~ s/\xC3\xB5/\xC3\[\x95\xB5\]/g; - $s =~ s/\xC3\xB6/\xC3\[\x96\xB6\]/g; - $s =~ s/\xC3\xB8/\xC3\[\x98\xB8\]/g; - $s =~ s/\xC3\xB9/\xC3\[\x99\xB9\]/g; - $s =~ s/\xC3\xBA/\xC3\[\x9A\xBA\]/g; - $s =~ s/\xC3\xBB/\xC3\[\x9B\xBB\]/g; - $s =~ s/\xC3\xBC/\xC3\[\x9C\xBC\]/g; - $s =~ s/\xC3\xBD/\xC3\[\x9D\xBD\]/g; - $s =~ s/\xC3\xBE/\xC3\[\x9E\xBE\]/g; - } - if ($s =~ /\xC5/) { - $s =~ s/\xC5\x91/\xC5\[\x90\x91\]/g; - $s =~ s/\xC5\xA1/\xC5\[\xA0\xA1\]/g; - $s =~ s/\xC5\xB1/\xC5\[\xB0\xB1\]/g; - } - - return $s; -} - -sub extended_lower_case { - local($caller, $s) = @_; - - $s =~ tr/A-Z/a-z/; - - # Latin-1 - if ($s =~ /\xC3[\x80-\x9F]/) { - $s =~ s/À/à/g; - $s =~ s/Á/á/g; - $s =~ s/Â/â/g; - $s =~ s/Ã/ã/g; - $s =~ s/Ä/ä/g; - $s =~ s/Å/å/g; - $s =~ s/Æ/æ/g; - $s =~ s/Ç/ç/g; - $s =~ s/È/è/g; - $s =~ s/É/é/g; - $s =~ s/Ê/ê/g; - $s =~ s/Ë/ë/g; - $s =~ s/Ì/ì/g; - $s =~ s/Í/í/g; - $s =~ s/Î/î/g; - $s =~ s/Ï/ï/g; - $s =~ s/Ð/ð/g; - $s =~ s/Ñ/ñ/g; - $s =~ s/Ò/ò/g; - $s =~ s/Ó/ó/g; - $s =~ s/Ô/ô/g; - $s =~ s/Õ/õ/g; - $s =~ s/Ö/ö/g; - $s =~ s/Ø/ø/g; - $s =~ s/Ù/ù/g; - $s =~ s/Ú/ú/g; - $s =~ s/Û/û/g; - $s =~ s/Ü/ü/g; - $s =~ s/Ý/ý/g; - $s =~ s/Þ/þ/g; - } - # Latin Extended-A - if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) { - $s =~ s/Ā/ā/g; - $s =~ s/Ă/ă/g; - $s =~ s/Ą/ą/g; - $s =~ s/Ć/ć/g; - $s =~ s/Ĉ/ĉ/g; - $s =~ s/Ċ/ċ/g; - $s =~ s/Č/č/g; - $s =~ s/Ď/ď/g; - $s =~ s/Đ/đ/g; - $s =~ s/Ē/ē/g; - $s =~ s/Ĕ/ĕ/g; - $s =~ s/Ė/ė/g; - $s =~ s/Ę/ę/g; - $s =~ s/Ě/ě/g; - $s =~ s/Ĝ/ĝ/g; - $s =~ s/Ğ/ğ/g; - $s =~ s/Ġ/ġ/g; - $s =~ s/Ģ/ģ/g; - $s =~ s/Ĥ/ĥ/g; - $s =~ s/Ħ/ħ/g; - $s =~ s/Ĩ/ĩ/g; - $s =~ s/Ī/ī/g; - $s =~ s/Ĭ/ĭ/g; - $s =~ s/Į/į/g; - $s =~ s/İ/ı/g; - $s =~ s/IJ/ij/g; - $s =~ s/Ĵ/ĵ/g; - $s =~ s/Ķ/ķ/g; - $s =~ s/Ĺ/ĺ/g; - $s =~ s/Ļ/ļ/g; - $s =~ s/Ľ/ľ/g; - $s =~ s/Ŀ/ŀ/g; - $s =~ s/Ł/ł/g; - $s =~ s/Ń/ń/g; - $s =~ s/Ņ/ņ/g; - $s =~ s/Ň/ň/g; - $s =~ s/Ŋ/ŋ/g; - $s =~ s/Ō/ō/g; - $s =~ s/Ŏ/ŏ/g; - $s =~ s/Ő/ő/g; - $s =~ s/Œ/œ/g; - $s =~ s/Ŕ/ŕ/g; - $s =~ s/Ŗ/ŗ/g; - $s =~ s/Ř/ř/g; - $s =~ s/Ś/ś/g; - $s =~ s/Ŝ/ŝ/g; - $s =~ s/Ş/ş/g; - $s =~ s/Š/š/g; - $s =~ s/Ţ/ţ/g; - $s =~ s/Ť/ť/g; - $s =~ s/Ŧ/ŧ/g; - $s =~ s/Ũ/ũ/g; - $s =~ s/Ū/ū/g; - $s =~ s/Ŭ/ŭ/g; - $s =~ s/Ů/ů/g; - $s =~ s/Ű/ű/g; - $s =~ s/Ų/ų/g; - $s =~ s/Ŵ/ŵ/g; - $s =~ s/Ŷ/ŷ/g; - $s =~ s/Ź/ź/g; - $s =~ s/Ż/ż/g; - $s =~ s/Ž/ž/g; - } - # Greek letters - if ($s =~ /\xCE[\x86-\xAB]/) { - $s =~ s/Α/α/g; - $s =~ s/Β/β/g; - $s =~ s/Γ/γ/g; - $s =~ s/Δ/δ/g; - $s =~ s/Ε/ε/g; - $s =~ s/Ζ/ζ/g; - $s =~ s/Η/η/g; - $s =~ s/Θ/θ/g; - $s =~ s/Ι/ι/g; - $s =~ s/Κ/κ/g; - $s =~ s/Λ/λ/g; - $s =~ s/Μ/μ/g; - $s =~ s/Ν/ν/g; - $s =~ s/Ξ/ξ/g; - $s =~ s/Ο/ο/g; - $s =~ s/Π/π/g; - $s =~ s/Ρ/ρ/g; - $s =~ s/Σ/σ/g; - $s =~ s/Τ/τ/g; - $s =~ s/Υ/υ/g; - $s =~ s/Φ/φ/g; - $s =~ s/Χ/χ/g; - $s =~ s/Ψ/ψ/g; - $s =~ s/Ω/ω/g; - $s =~ s/Ϊ/ϊ/g; - $s =~ s/Ϋ/ϋ/g; - $s =~ s/Ά/ά/g; - $s =~ s/Έ/έ/g; - $s =~ s/Ή/ή/g; - $s =~ s/Ί/ί/g; - $s =~ s/Ό/ό/g; - $s =~ s/Ύ/ύ/g; - $s =~ s/Ώ/ώ/g; - } - # Cyrillic letters - if ($s =~ /\xD0[\x80-\xAF]/) { - $s =~ s/А/а/g; - $s =~ s/Б/б/g; - $s =~ s/В/в/g; - $s =~ s/Г/г/g; - $s =~ s/Д/д/g; - $s =~ s/Е/е/g; - $s =~ s/Ж/ж/g; - $s =~ s/З/з/g; - $s =~ s/И/и/g; - $s =~ s/Й/й/g; - $s =~ s/К/к/g; - $s =~ s/Л/л/g; - $s =~ s/М/м/g; - $s =~ s/Н/н/g; - $s =~ s/О/о/g; - $s =~ s/П/п/g; - $s =~ s/Р/р/g; - $s =~ s/С/с/g; - $s =~ s/Т/т/g; - $s =~ s/У/у/g; - $s =~ s/Ф/ф/g; - $s =~ s/Х/х/g; - $s =~ s/Ц/ц/g; - $s =~ s/Ч/ч/g; - $s =~ s/Ш/ш/g; - $s =~ s/Щ/щ/g; - $s =~ s/Ъ/ъ/g; - $s =~ s/Ы/ы/g; - $s =~ s/Ь/ь/g; - $s =~ s/Э/э/g; - $s =~ s/Ю/ю/g; - $s =~ s/Я/я/g; - $s =~ s/Ѐ/ѐ/g; - $s =~ s/Ё/ё/g; - $s =~ s/Ђ/ђ/g; - $s =~ s/Ѓ/ѓ/g; - $s =~ s/Є/є/g; - $s =~ s/Ѕ/ѕ/g; - $s =~ s/І/і/g; - $s =~ s/Ї/ї/g; - $s =~ s/Ј/ј/g; - $s =~ s/Љ/љ/g; - $s =~ s/Њ/њ/g; - $s =~ s/Ћ/ћ/g; - $s =~ s/Ќ/ќ/g; - $s =~ s/Ѝ/ѝ/g; - $s =~ s/Ў/ў/g; - $s =~ s/Џ/џ/g; - } - # Fullwidth A-Z - if ($s =~ /\xEF\xBC[\xA1-\xBA]/) { - $s =~ s/A/a/g; - $s =~ s/B/b/g; - $s =~ s/C/c/g; - $s =~ s/D/d/g; - $s =~ s/E/e/g; - $s =~ s/F/f/g; - $s =~ s/G/g/g; - $s =~ s/H/h/g; - $s =~ s/I/i/g; - $s =~ s/J/j/g; - $s =~ s/K/k/g; - $s =~ s/L/l/g; - $s =~ s/M/m/g; - $s =~ s/N/n/g; - $s =~ s/O/o/g; - $s =~ s/P/p/g; - $s =~ s/Q/q/g; - $s =~ s/R/r/g; - $s =~ s/S/s/g; - $s =~ s/T/t/g; - $s =~ s/U/u/g; - $s =~ s/V/v/g; - $s =~ s/W/w/g; - $s =~ s/X/x/g; - $s =~ s/Y/y/g; - $s =~ s/Z/z/g; - } - - return $s; -} - -sub extended_upper_case { - local($caller, $s) = @_; - - $s =~ tr/a-z/A-Z/; - return $s unless $s =~ /[\xC3-\xC5][\x80-\xBF]/; - - $s =~ s/\xC3\xA0/\xC3\x80/g; - $s =~ s/\xC3\xA1/\xC3\x81/g; - $s =~ s/\xC3\xA2/\xC3\x82/g; - $s =~ s/\xC3\xA3/\xC3\x83/g; - $s =~ s/\xC3\xA4/\xC3\x84/g; - $s =~ s/\xC3\xA5/\xC3\x85/g; - $s =~ s/\xC3\xA6/\xC3\x86/g; - $s =~ s/\xC3\xA7/\xC3\x87/g; - $s =~ s/\xC3\xA8/\xC3\x88/g; - $s =~ s/\xC3\xA9/\xC3\x89/g; - $s =~ s/\xC3\xAA/\xC3\x8A/g; - $s =~ s/\xC3\xAB/\xC3\x8B/g; - $s =~ s/\xC3\xAC/\xC3\x8C/g; - $s =~ s/\xC3\xAD/\xC3\x8D/g; - $s =~ s/\xC3\xAE/\xC3\x8E/g; - $s =~ s/\xC3\xAF/\xC3\x8F/g; - $s =~ s/\xC3\xB0/\xC3\x90/g; - $s =~ s/\xC3\xB1/\xC3\x91/g; - $s =~ s/\xC3\xB2/\xC3\x92/g; - $s =~ s/\xC3\xB3/\xC3\x93/g; - $s =~ s/\xC3\xB4/\xC3\x94/g; - $s =~ s/\xC3\xB5/\xC3\x95/g; - $s =~ s/\xC3\xB6/\xC3\x96/g; - $s =~ s/\xC3\xB8/\xC3\x98/g; - $s =~ s/\xC3\xB9/\xC3\x99/g; - $s =~ s/\xC3\xBA/\xC3\x9A/g; - $s =~ s/\xC3\xBB/\xC3\x9B/g; - $s =~ s/\xC3\xBC/\xC3\x9C/g; - $s =~ s/\xC3\xBD/\xC3\x9D/g; - $s =~ s/\xC3\xBE/\xC3\x9E/g; - - $s =~ s/\xC5\x91/\xC5\x90/g; - $s =~ s/\xC5\xA1/\xC5\xA0/g; - $s =~ s/\xC5\xB1/\xC5\xB0/g; - return $s unless $s =~ /[\xC3-\xC5][\x80-\xBF]/; - - return $s; -} - -sub extended_first_upper_case { - local($caller, $s) = @_; - - if (($first_char, $rest) = ($s =~ /^([\x00-\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF])(.*)$/)) { - return $caller->extended_upper_case($first_char) . $rest; - } else { - return $s; - } -} - -sub repair_doubly_converted_utf8_strings { - local($caller, $s) = @_; - - if ($s =~ /\xC3[\x82-\x85]\xC2[\x80-\xBF]/) { - $s =~ s/\xC3\x82\xC2([\x80-\xBF])/\xC2$1/g; - $s =~ s/\xC3\x83\xC2([\x80-\xBF])/\xC3$1/g; - $s =~ s/\xC3\x84\xC2([\x80-\xBF])/\xC4$1/g; - $s =~ s/\xC3\x85\xC2([\x80-\xBF])/\xC5$1/g; - } - return $s; -} - -sub repair_misconverted_windows_to_utf8_strings { - local($caller, $s) = @_; - - # correcting conversions of UTF8 using Latin1-to-UTF converter - if ($s =~ /\xC3\xA2\xC2\x80\xC2[\x90-\xEF]/) { - my $result = ""; - while (($pre,$last_c,$post) = ($s =~ /^(.*?)\xC3\xA2\xC2\x80\xC2([\x90-\xEF])(.*)$/s)) { - $result .= "$pre\xE2\x80$last_c"; - $s = $post; - } - $result .= $s; - $s = $result; - } - # correcting conversions of Windows1252-to-UTF8 using Latin1-to-UTF converter - if ($s =~ /\xC2[\x80-\x9F]/) { - my $result = ""; - while (($pre,$c_windows,$post) = ($s =~ /^(.*?)\xC2([\x80-\x9F])(.*)$/s)) { - $c_utf8 = $caller->windows1252_to_utf8($c_windows, 0); - $result .= ($c_utf8 eq "?") ? ($pre . "\xC2" . $c_windows) : "$pre$c_utf8"; - $s = $post; - } - $result .= $s; - $s = $result; - } - if ($s =~ /\xC3/) { - $s =~ s/\xC3\xA2\xE2\x80\x9A\xC2\xAC/\xE2\x82\xAC/g; # x80 -> Euro sign - # x81 codepoint undefined in Windows 1252 - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\xA1/\xE2\x80\x9A/g; # x82 -> single low-9 quotation mark - $s =~ s/\xC3\x86\xE2\x80\x99/\xC6\x92/g; # x83 -> Latin small letter f with hook - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\xBE/\xE2\x80\x9E/g; # x84 -> double low-9 quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA6/\xE2\x80\xA6/g; # x85 -> horizontal ellipsis - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA0/\xE2\x80\xA0/g; # x86 -> dagger - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA1/\xE2\x80\xA1/g; # x87 -> double dagger - $s =~ s/\xC3\x8B\xE2\x80\xA0/\xCB\x86/g; # x88 -> modifier letter circumflex accent - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xB0/\xE2\x80\xB0/g; # x89 -> per mille sign - $s =~ s/\xC3\x85\xC2\xA0/\xC5\xA0/g; # x8A -> Latin capital letter S with caron - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xB9/\xE2\x80\xB9/g; # x8B -> single left-pointing angle quotation mark - $s =~ s/\xC3\x85\xE2\x80\x99/\xC5\x92/g; # x8C -> Latin capital ligature OE - # x8D codepoint undefined in Windows 1252 - $s =~ s/\xC3\x85\xC2\xBD/\xC5\xBD/g; # x8E -> Latin capital letter Z with caron - # x8F codepoint undefined in Windows 1252 - # x90 codepoint undefined in Windows 1252 - $s =~ s/\xC3\xA2\xE2\x82\xAC\xCB\x9C/\xE2\x80\x98/g; # x91 a-circumflex+euro+small tilde -> left single quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x84\xA2/\xE2\x80\x99/g; # x92 a-circumflex+euro+trademark -> right single quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\x93/\xE2\x80\x9C/g; # x93 a-circumflex+euro+Latin small ligature oe -> left double quotation mark - # x94 maps through undefined intermediate code point - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA2/\xE2\x80\xA2/g; # x95 a-circumflex+euro+cent sign -> bullet - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x80\x9C/\xE2\x80\x93/g; # x96 a-circumflex+euro+left double quotation mark -> en dash - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x80\x9D/\xE2\x80\x94/g; # x97 a-circumflex+euro+right double quotation mark -> em dash - $s =~ s/\xC3\x8B\xC5\x93/\xCB\x9C/g; # x98 Latin capital e diaeresis+Latin small ligature oe -> small tilde - $s =~ s/\xC3\xA2\xE2\x80\x9E\xC2\xA2/\xE2\x84\xA2/g; # x99 -> trade mark sign - $s =~ s/\xC3\x85\xC2\xA1/\xC5\xA1/g; # x9A -> Latin small letter s with caron - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xBA/\xE2\x80\xBA/g; # x9B -> single right-pointing angle quotation mark - $s =~ s/\xC3\x85\xE2\x80\x9C/\xC5\x93/g; # x9C -> Latin small ligature oe - # x9D codepoint undefined in Windows 1252 - $s =~ s/\xC3\x85\xC2\xBE/\xC5\xBE/g; # x9E -> Latin small letter z with caron - $s =~ s/\xC3\x85\xC2\xB8/\xC5\xB8/g; # x9F -> Latin capital letter Y with diaeresis - $s =~ s/\xC3\xAF\xC2\xBF\xC2\xBD/\xEF\xBF\xBD/g; # replacement character - } - - return $s; -} - -sub latin1_to_utf { - local($caller, $s) = @_; - - my $result = ""; - while (($pre,$c,$post) = ($s =~ /^(.*?)([\x80-\xFF])(.*)$/s)) { - $result .= $pre; - if ($c =~ /^[\x80-\xBF]$/) { - $result .= "\xC2$c"; - } elsif ($c =~ /^[\xC0-\xFF]$/) { - $c =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c"; - } - $s = $post; - } - $result .= $s; - return $result; -} - -sub character_type_is_letter_type { - local($caller, $char_type) = @_; - - return ($char_type =~ /\b((CJK|hiragana|kana|katakana)\s+character|diacritic|letter|syllable)\b/); -} - -sub character_type { - local($caller, $c) = @_; - - if ($c =~ /^[\x00-\x7F]/) { - return "XML tag" if $c =~ /^<.*>$/; - return "ASCII Latin letter" if $c =~ /^[a-z]$/i; - return "ASCII digit" if $c =~ /^[0-9]$/i; - return "ASCII whitespace" if $c =~ /^[\x09-\x0D\x20]$/; - return "ASCII control-character" if $c =~ /^[\x00-\x1F\x7F]$/; - return "ASCII currency" if $c eq "\$"; - return "ASCII punctuation"; - } elsif ($c =~ /^[\xC0-\xDF]/) { - return "non-UTF8 (invalid)" unless $c =~ /^[\xC0-\xDF][\x80-\xBF]$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /[\xC0-\xC1]/; - return "non-ASCII control-character" if $c =~ /\xC2[\x80-\x9F]/; - return "non-ASCII whitespace" if $c =~ /\xC2\xA0/; - return "non-ASCII currency" if $c =~ /\xC2[\xA2-\xA5]/; - return "fraction" if $c =~ /\xC2[\xBC-\xBE]/; # NEW - return "superscript digit" if $c =~ /\xC2[\xB2\xB3\xB9]/; - return "non-ASCII Latin letter" if $c =~ /\xC2\xB5/; # micro sign - return "non-ASCII punctuation" if $c =~ /\xC2[\xA0-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xC3[\x97\xB7]/; - return "non-ASCII Latin letter" if $c =~ /\xC3[\x80-\xBF]/; - return "Latin ligature letter" if $c =~ /\xC4[\xB2\xB3]/; - return "Latin ligature letter" if $c =~ /\xC5[\x92\x93]/; - return "non-ASCII Latin letter" if $c =~ /[\xC4-\xC8]/; - return "non-ASCII Latin letter" if $c =~ /\xC9[\x80-\x8F]/; - return "IPA" if $c =~ /\xC9[\x90-\xBF]/; - return "IPA" if $c =~ /\xCA[\x80-\xBF]/; - return "IPA" if $c =~ /\xCB[\x80-\xBF]/; - return "combining-diacritic" if $c =~ /\xCC[\x80-\xBF]/; - return "combining-diacritic" if $c =~ /\xCD[\x80-\xAF]/; - return "Greek punctuation" if $c =~ /\xCD[\xBE]/; # Greek question mark - return "Greek punctuation" if $c =~ /\xCE[\x87]/; # Greek semicolon - return "Greek letter" if $c =~ /\xCD[\xB0-\xBF]/; - return "Greek letter" if $c =~ /\xCE/; - return "Greek letter" if $c =~ /\xCF[\x80-\xA1\xB3\xB7\xB8\xBA\xBB]/; - return "Coptic letter" if $c =~ /\xCF[\xA2-\xAF]/; - return "Cyrillic letter" if $c =~ /[\xD0-\xD3]/; - return "Cyrillic letter" if $c =~ /\xD4[\x80-\xAF]/; - return "Armenian punctuation" if $c =~ /\xD5[\x9A-\x9F]/; - return "Armenian punctuation" if $c =~ /\xD6[\x89-\x8F]/; - return "Armenian letter" if $c =~ /\xD4[\xB0-\xBF]/; - return "Armenian letter" if $c =~ /\xD5/; - return "Armenian letter" if $c =~ /\xD6[\x80-\x8F]/; - return "Hebrew accent" if $c =~ /\xD6[\x91-\xAE]/; - return "Hebrew punctuation" if $c =~ /\xD6\xBE/; - return "Hebrew punctuation" if $c =~ /\xD7[\x80\x83\x86\xB3\xB4]/; - return "Hebrew point" if $c =~ /\xD6[\xB0-\xBF]/; - return "Hebrew point" if $c =~ /\xD7[\x81\x82\x87]/; - return "Hebrew letter" if $c =~ /\xD7[\x90-\xB2]/; - return "other Hebrew" if $c =~ /\xD6[\x90-\xBF]/; - return "other Hebrew" if $c =~ /\xD7/; - return "Arabic currency" if $c =~ /\xD8\x8B/; # Afghani sign - return "Arabic punctuation" if $c =~ /\xD8[\x89-\x8D\x9B\x9E\x9F]/; - return "Arabic punctuation" if $c =~ /\xD9[\xAA-\xAD]/; - return "Arabic punctuation" if $c =~ /\xDB[\x94]/; - return "Arabic tatweel" if $c =~ /\xD9\x80/; - return "Arabic letter" if $c =~ /\xD8[\xA0-\xBF]/; - return "Arabic letter" if $c =~ /\xD9[\x81-\x9F]/; - return "Arabic letter" if $c =~ /\xD9[\xAE-\xBF]/; - return "Arabic letter" if $c =~ /\xDA[\x80-\xBF]/; - return "Arabic letter" if $c =~ /\xDB[\x80-\x95]/; - return "Arabic Indic digit" if $c =~ /\xD9[\xA0-\xA9]/; - return "Arabic Indic digit" if $c =~ /\xDB[\xB0-\xB9]/; - return "other Arabic" if $c =~ /[\xD8-\xDB]/; - return "Syriac punctuation" if $c =~ /\xDC[\x80-\x8F]/; - return "Syriac letter" if $c =~ /\xDC[\x90-\xAF]/; - return "Syriac diacritic" if $c =~ /\xDC[\xB0-\xBF]/; - return "Syriac diacritic" if $c =~ /\xDD[\x80-\x8A]/; - return "Thaana letter" if $c =~ /\xDE/; - } elsif ($c =~ /^[\xE0-\xEF]/) { - return "non-UTF8 (invalid)" unless $c =~ /^[\xE0-\xEF][\x80-\xBF]{2,2}$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /\xE0[\x80-\x9F]/; - return "Arabic letter" if $c =~ /\xE0\xA2[\xA0-\xBF]/; # extended letters - return "other Arabic" if $c =~ /\xE0\xA3/; # extended characters - return "Devanagari punctuation" if $c =~ /\xE0\xA5[\xA4\xA5]/; # danda, double danda - return "Devanagari digit" if $c =~ /\xE0\xA5[\xA6-\xAF]/; - return "Devanagari letter" if $c =~ /\xE0[\xA4-\xA5]/; - return "Bengali digit" if $c =~ /\xE0\xA7[\xA6-\xAF]/; - return "Bengali currency" if $c =~ /\xE0\xA7[\xB2-\xB9]/; - return "Bengali letter" if $c =~ /\xE0[\xA6-\xA7]/; - return "Gurmukhi digit" if $c =~ /\xE0\xA9[\xA6-\xAF]/; - return "Gurmukhi letter" if $c =~ /\xE0[\xA8-\xA9]/; - return "Gujarati digit" if $c =~ /\xE0\xAB[\xA6-\xAF]/; - return "Gujarati letter" if $c =~ /\xE0[\xAA-\xAB]/; - return "Oriya digit" if $c =~ /\xE0\xAD[\xA6-\xAF]/; - return "Oriya fraction" if $c =~ /\xE0\xAD[\xB2-\xB7]/; - return "Oriya letter" if $c =~ /\xE0[\xAC-\xAD]/; - return "Tamil digit" if $c =~ /\xE0\xAF[\xA6-\xAF]/; - return "Tamil number" if $c =~ /\xE0\xAF[\xB0-\xB2]/; # number (10, 100, 1000) - return "Tamil letter" if $c =~ /\xE0[\xAE-\xAF]/; - return "Telegu digit" if $c =~ /\xE0\xB1[\xA6-\xAF]/; - return "Telegu fraction" if $c =~ /\xE0\xB1[\xB8-\xBE]/; - return "Telegu letter" if $c =~ /\xE0[\xB0-\xB1]/; - return "Kannada digit" if $c =~ /\xE0\xB3[\xA6-\xAF]/; - return "Kannada letter" if $c =~ /\xE0[\xB2-\xB3]/; - return "Malayalam digit" if $c =~ /\xE0\xB5[\x98-\x9E\xA6-\xB8]/; - return "Malayalam punctuation" if $c =~ /\xE0\xB5\xB9/; # date mark - return "Malayalam letter" if $c =~ /\xE0[\xB4-\xB5]/; - return "Sinhala digit" if $c =~ /\xE0\xB7[\xA6-\xAF]/; - return "Sinhala punctuation" if $c =~ /\xE0\xB7\xB4/; - return "Sinhala letter" if $c =~ /\xE0[\xB6-\xB7]/; - return "Thai currency" if $c =~ /\xE0\xB8\xBF/; - return "Thai digit" if $c =~ /\xE0\xB9[\x90-\x99]/; - return "Thai character" if $c =~ /\xE0[\xB8-\xB9]/; - return "Lao punctuation" if $c =~ /\xE0\xBA\xAF/; # Lao ellipsis - return "Lao digit" if $c =~ /\xE0\xBB[\x90-\x99]/; - return "Lao character" if $c =~ /\xE0[\xBA-\xBB]/; - return "Tibetan punctuation" if $c =~ /\xE0\xBC[\x81-\x94]/; - return "Tibetan sign" if $c =~ /\xE0\xBC[\x95-\x9F]/; - return "Tibetan digit" if $c =~ /\xE0\xBC[\xA0-\xB3]/; - return "Tibetan punctuation" if $c =~ /\xE0\xBC[\xB4-\xBD]/; - return "Tibetan letter" if $c =~ /\xE0[\xBC-\xBF]/; - return "Myanmar digit" if $c =~ /\xE1\x81[\x80-\x89]/; - return "Myanmar digit" if $c =~ /\xE1\x82[\x90-\x99]/; # Myanmar Shan digits - return "Myanmar punctuation" if $c =~ /\xE1\x81[\x8A-\x8B]/; - return "Myanmar letter" if $c =~ /\xE1[\x80-\x81]/; - return "Myanmar letter" if $c =~ /\xE1\x82[\x80-\x9F]/; - return "Georgian punctuation" if $c =~ /\xE1\x83\xBB/; - return "Georgian letter" if $c =~ /\xE1\x82[\xA0-\xBF]/; - return "Georgian letter" if $c =~ /\xE1\x83/; - return "Georgian letter" if $c =~ /\xE1\xB2[\x90-\xBF]/; # Georgian Mtavruli capital letters - return "Georgian letter" if $c =~ /\xE2\xB4[\x80-\xAF]/; # Georgian small letters (Khutsuri) - return "Korean Hangul letter" if $c =~ /\xE1[\x84-\x87]/; - return "Ethiopic punctuation" if $c =~ /\xE1\x8D[\xA0-\xA8]/; - return "Ethiopic digit" if $c =~ /\xE1\x8D[\xA9-\xB1]/; - return "Ethiopic number" if $c =~ /\xE1\x8D[\xB2-\xBC]/; - return "Ethiopic syllable" if $c =~ /\xE1[\x88-\x8D]/; - return "Cherokee letter" if $c =~ /\xE1\x8E[\xA0-\xBF]/; - return "Cherokee letter" if $c =~ /\xE1\x8F/; - return "Canadian punctuation" if $c =~ /\xE1\x90\x80/; # Canadian Syllabics hyphen - return "Canadian punctuation" if $c =~ /\xE1\x99\xAE/; # Canadian Syllabics full stop - return "Canadian syllable" if $c =~ /\xE1[\x90-\x99]/; - return "Canadian syllable" if $c =~ /\xE1\xA2[\xB0-\xBF]/; - return "Canadian syllable" if $c =~ /\xE1\xA3/; - return "Ogham whitespace" if $c =~ /\xE1\x9A\x80/; - return "Ogham letter" if $c =~ /\xE1\x9A[\x81-\x9A]/; - return "Ogham punctuation" if $c =~ /\xE1\x9A[\x9B-\x9C]/; - return "Runic punctuation" if $c =~ /\xE1\x9B[\xAB-\xAD]/; - return "Runic letter" if $c =~ /\xE1\x9A[\xA0-\xBF]/; - return "Runic letter" if $c =~ /\xE1\x9B/; - return "Khmer currency" if $c =~ /\xE1\x9F\x9B/; - return "Khmer digit" if $c =~ /\xE1\x9F[\xA0-\xA9]/; - return "Khmer letter" if $c =~ /\xE1[\x9E-\x9F]/; - return "Mongolian punctuation" if $c =~ /\xE1\xA0[\x80-\x8A]/; - return "Mongolian digit" if $c =~ /\xE1\xA0[\x90-\x99]/; - return "Mongolian letter" if $c =~ /\xE1[\xA0-\xA1]/; - return "Mongolian letter" if $c =~ /\xE1\xA2[\x80-\xAF]/; - return "Buginese letter" if $c =~ /\xE1\xA8[\x80-\x9B]/; - return "Buginese punctuation" if $c =~ /\xE1\xA8[\x9E-\x9F]/; - return "Balinese letter" if $c =~ /\xE1\xAC/; - return "Balinese letter" if $c =~ /\xE1\xAD[\x80-\x8F]/; - return "Balinese digit" if $c =~ /\xE1\xAD[\x90-\x99]/; - return "Balinese puncutation" if $c =~ /\xE1\xAD[\x9A-\xA0]/; - return "Balinese symbol" if $c =~ /\xE1\xAD[\xA1-\xBF]/; - return "Sundanese digit" if $c =~ /\xE1\xAE[\xB0-\xB9]/; - return "Sundanese letter" if $c =~ /\xE1\xAE/; - return "Cyrillic letter" if $c =~ /\xE1\xB2[\x80-\x8F]/; - return "Sundanese punctuation" if $c =~ /\xE1\xB3[\x80-\x8F]/; - return "IPA" if $c =~ /\xE1[\xB4-\xB6]/; - return "non-ASCII Latin letter" if $c =~ /\xE1[\xB8-\xBB]/; - return "Greek letter" if $c =~ /\xE1[\xBC-\xBF]/; - return "non-ASCII whitespace" if $c =~ /\xE2\x80[\x80-\x8A\xAF]/; - return "zero-width space" if $c =~ /\xE2\x80\x8B/; - return "zero-width non-space" if $c =~ /\xE2\x80\x8C/; - return "zero-width joiner" if $c =~ /\xE2\x80\x8D/; - return "directional mark" if $c =~ /\xE2\x80[\x8E-\x8F\xAA-\xAE]/; - return "non-ASCII punctuation" if $c =~ /\xE2\x80[\x90-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xE2\x81[\x80-\x9E]/; - return "superscript letter" if $c =~ /\xE2\x81[\xB1\xBF]/; - return "superscript digit" if $c =~ /\xE2\x81[\xB0-\xB9]/; - return "superscript punctuation" if $c =~ /\xE2\x81[\xBA-\xBE]/; - return "subscript digit" if $c =~ /\xE2\x82[\x80-\x89]/; - return "subscript punctuation" if $c =~ /\xE2\x82[\x8A-\x8E]/; - return "non-ASCII currency" if $c =~ /\xE2\x82[\xA0-\xBF]/; - return "letterlike symbol" if $c =~ /\xE2\x84/; - return "letterlike symbol" if $c =~ /\xE2\x85[\x80-\x8F]/; - return "fraction" if $c =~ /\xE2\x85[\x90-\x9E]/; # NEW - return "Roman number" if $c =~ /\xE2\x85[\xA0-\xBF]/; # NEW - return "arrow symbol" if $c =~ /\xE2\x86[\x90-\xBF]/; - return "arrow symbol" if $c =~ /\xE2\x87/; - return "mathematical operator" if $c =~ /\xE2[\x88-\x8B]/; - return "technical symbol" if $c =~ /\xE2[\x8C-\x8F]/; - return "enclosed alphanumeric" if $c =~ /\xE2\x91[\xA0-\xBF]/; - return "enclosed alphanumeric" if $c =~ /\xE2[\x92-\x93]/; - return "box drawing" if $c =~ /\xE2[\x94-\x95]/; - return "geometric shape" if $c =~ /\xE2\x96[\xA0-\xBF]/; - return "geometric shape" if $c =~ /\xE2\x97/; - return "pictograph" if $c =~ /\xE2[\x98-\x9E]/; - return "arrow symbol" if $c =~ /\xE2\xAC[\x80-\x91\xB0-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAC[\x92-\xAF]/; - return "arrow symbol" if $c =~ /\xE2\xAD[\x80-\x8F\x9A-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAD[\x90-\x99]/; - return "arrow symbol" if $c =~ /\xE2\xAE[\x80-\xB9]/; - return "geometric shape" if $c =~ /\xE2\xAE[\xBA-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAF[\x80-\x88\x8A-\x8F]/; - return "symbol" if $c =~ /\xE2[\xAC-\xAF]/; - return "Coptic fraction" if $c =~ /\xE2\xB3\xBD/; - return "Coptic punctuation" if $c =~ /\xE2\xB3[\xB9-\xBF]/; - return "Coptic letter" if $c =~ /\xE2[\xB2-\xB3]/; - return "Georgian letter" if $c =~ /\xE2\xB4[\x80-\xAF]/; - return "Tifinagh punctuation" if $c =~ /\xE2\xB5\xB0/; - return "Tifinagh letter" if $c =~ /\xE2\xB4[\xB0-\xBF]/; - return "Tifinagh letter" if $c =~ /\xE2\xB5/; - return "Ethiopic syllable" if $c =~ /\xE2\xB6/; - return "Ethiopic syllable" if $c =~ /\xE2\xB7[\x80-\x9F]/; - return "non-ASCII punctuation" if $c =~ /\xE3\x80[\x80-\x91\x94-\x9F\xB0\xBB-\xBD]/; - return "symbol" if $c =~ /\xE3\x80[\x91\x92\xA0\xB6\xB7]/; - return "Japanese hiragana character" if $c =~ /\xE3\x81/; - return "Japanese hiragana character" if $c =~ /\xE3\x82[\x80-\x9F]/; - return "Japanese katakana character" if $c =~ /\xE3\x82[\xA0-\xBF]/; - return "Japanese katakana character" if $c =~ /\xE3\x83/; - return "Bopomofo letter" if $c =~ /\xE3\x84[\x80-\xAF]/; - return "Korean Hangul letter" if $c =~ /\xE3\x84[\xB0-\xBF]/; - return "Korean Hangul letter" if $c =~ /\xE3\x85/; - return "Korean Hangul letter" if $c =~ /\xE3\x86[\x80-\x8F]/; - return "Bopomofo letter" if $c =~ /\xE3\x86[\xA0-\xBF]/; - return "CJK stroke" if $c =~ /\xE3\x87[\x80-\xAF]/; - return "Japanese kana character" if $c =~ /\xE3\x87[\xB0-\xBF]/; - return "CJK symbol" if $c =~ /\xE3[\x88-\x8B]/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8D[\xB1-\xBA]/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8E/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8F[\x80-\x9F\xBF]/; - return "CJK character" if $c =~ /\xE4[\xB8-\xBF]/; - return "CJK character" if $c =~ /[\xE5-\xE9]/; - return "Yi syllable" if $c =~ /\xEA[\x80-\x92]/; - return "Lisu letter" if $c =~ /\xEA\x93[\x90-\xBD]/; - return "Lisu punctuation" if $c =~ /\xEA\x93[\xBE-\xBF]/; - return "Cyrillic letter" if $c =~ /\xEA\x99/; - return "Cyrillic letter" if $c =~ /\xEA\x9A[\x80-\x9F]/; - return "modifier tone" if $c =~ /\xEA\x9C[\x80-\xA1]/; - return "Javanese punctuation" if $c =~ /\xEA\xA7[\x81-\x8D\x9E-\x9F]/; - return "Javanese digit" if $c =~ /\xEA\xA7[\x90-\x99]/; - return "Javanese letter" if $c =~ /\xEA\xA6/; - return "Javanese letter" if $c =~ /\xEA\xA7[\x80-\x9F]/; - return "Ethiopic syllable" if $c =~ /\xEA\xAC[\x80-\xAF]/; - return "Cherokee letter" if $c =~ /\xEA\xAD[\xB0-\xBF]/; - return "Cherokee letter" if $c =~ /\xEA\xAE/; - return "Meetai Mayek digit" if $c =~ /\xEA\xAF[\xB0-\xB9]/; - return "Meetai Mayek letter" if $c =~ /\xEA\xAF/; - return "Korean Hangul syllable" if $c =~ /\xEA[\xB0-\xBF]/; - return "Korean Hangul syllable" if $c =~ /[\xEB-\xEC]/; - return "Korean Hangul syllable" if $c =~ /\xED[\x80-\x9E]/; - return "Klingon letter" if $c =~ /\xEF\xA3[\x90-\xA9]/; - return "Klingon digit" if $c =~ /\xEF\xA3[\xB0-\xB9]/; - return "Klingon punctuation" if $c =~ /\xEF\xA3[\xBD-\xBE]/; - return "Klingon symbol" if $c =~ /\xEF\xA3\xBF/; - return "private use character" if $c =~ /\xEE/; - return "Latin typographic ligature" if $c =~ /\xEF\xAC[\x80-\x86]/; - return "Hebrew presentation letter" if $c =~ /\xEF\xAC[\x9D-\xBF]/; - return "Hebrew presentation letter" if $c =~ /\xEF\xAD[\x80-\x8F]/; - return "Arabic presentation letter" if $c =~ /\xEF\xAD[\x90-\xBF]/; - return "Arabic presentation letter" if $c =~ /\xEF[\xAE-\xB7]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB8[\x90-\x99]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB8[\xB0-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB9[\x80-\xAB]/; - return "Arabic presentation letter" if $c =~ /\xEF\xB9[\xB0-\xBF]/; - return "Arabic presentation letter" if $c =~ /\xEF\xBA/; - return "Arabic presentation letter" if $c =~ /\xEF\xBB[\x80-\xBC]/; - return "byte-order mark/zero-width no-break space" if $c eq "\xEF\xBB\xBF"; - return "fullwidth currency" if $c =~ /\xEF\xBC\x84/; - return "fullwidth digit" if $c =~ /\xEF\xBC[\x90-\x99]/; - return "fullwidth Latin letter" if $c =~ /\xEF\xBC[\xA1-\xBA]/; - return "fullwidth Latin letter" if $c =~ /\xEF\xBD[\x81-\x9A]/; - return "fullwidth punctuation" if $c =~ /\xEF\xBC/; - return "fullwidth punctuation" if $c =~ /\xEF\xBD[\x9B-\xA4]/; - return "halfwidth Japanese punctuation" if $c =~ /\xEF\xBD[\xA1-\xA4]/; - return "halfwidth Japanese katakana character" if $c =~ /\xEF\xBD[\xA5-\xBF]/; - return "halfwidth Japanese katakana character" if $c =~ /\xEF\xBE[\x80-\x9F]/; - return "fullwidth currency" if $c =~ /\xEF\xBF[\xA0-\xA6]/; - return "replacement character" if $c eq "\xEF\xBF\xBD"; - } elsif ($c =~ /[\xF0-\xF7]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xF0-\xF7][\x80-\xBF]{3,3}$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /\xF0[\x80-\x8F]/; - return "Linear B syllable" if $c =~ /\xF0\x90\x80/; - return "Linear B syllable" if $c =~ /\xF0\x90\x81[\x80-\x8F]/; - return "Linear B symbol" if $c =~ /\xF0\x90\x81[\x90-\x9F]/; - return "Linear B ideogram" if $c =~ /\xF0\x90[\x82-\x83]/; - return "Gothic letter" if $c =~ /\xF0\x90\x8C[\xB0-\xBF]/; - return "Gothic letter" if $c =~ /\xF0\x90\x8D[\x80-\x8F]/; - return "Phoenician letter" if $c =~ /\xF0\x90\xA4[\x80-\x95]/; - return "Phoenician number" if $c =~ /\xF0\x90\xA4[\x96-\x9B]/; - return "Phoenician punctuation" if $c =~ /\xF0\x90\xA4\x9F/; # word separator - return "Old Hungarian number" if $c =~ /\xF0\x90\xB3[\xBA-\xBF]/; - return "Old Hungarian letter" if $c =~ /\xF0\x90[\xB2-\xB3]/; - return "Cuneiform digit" if $c =~ /\xF0\x92\x90/; # numberic sign - return "Cuneiform digit" if $c =~ /\xF0\x92\x91[\x80-\xAF]/; # numberic sign - return "Cuneiform punctuation" if $c =~ /\xF0\x92\x91[\xB0-\xBF]/; - return "Cuneiform sign" if $c =~ /\xF0\x92[\x80-\x95]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x81\xA8/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x82[\xAD-\xB6]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x86[\x90\xBC-\xBF]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x87[\x80-\x84]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8D[\xA2-\xAB]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8E[\x86-\x92]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8F[\xBA-\xBF]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x90[\x80-\x83]/; - return "Egyptian hieroglyph" if $c =~ /\xF0\x93[\x80-\x90]/; - return "enclosed alphanumeric" if $c =~ /\xF0\x9F[\x84-\x87]/; - return "Mahjong symbol" if $c =~ /\xF0\x9F\x80[\x80-\xAF]/; - return "Domino symbol" if $c =~ /\xF0\x9F\x80[\xB0-\xBF]/; - return "Domino symbol" if $c =~ /\xF0\x9F\x81/; - return "Domino symbol" if $c =~ /\xF0\x9F\x82[\x80-\x9F]/; - return "Playing card symbol" if $c =~ /\xF0\x9F\x82[\xA0-\xBF]/; - return "Playing card symbol" if $c =~ /\xF0\x9F\x83/; - return "CJK symbol" if $c =~ /\xF0\x9F[\x88-\x8B]/; - return "pictograph" if $c =~ /\xF0\x9F[\x8C-\x9B]/; - return "geometric shape" if $c =~ /\xF0\x9F[\x9E-\x9F]/; - return "non-ASCII punctuation" if $c =~ /\xF0\x9F[\xA0-\xA3]/; - return "pictograph" if $c =~ /\xF0\x9F[\xA4-\xAB]/; - return "CJK character" if $c =~ /\xF0[\xA0-\xAF]/; - return "tag" if $c =~ /\xF3\xA0[\x80-\x81]/; - return "variation selector" if $c =~ /\xF3\xA0[\x84-\x87]/; - return "private use character" if $c =~ /\xF3[\xB0-\xBF]/; - return "private use character" if $c =~ /\xF4[\x80-\x8F]/; - # ... - } elsif ($c =~ /[\xF8-\xFB]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xF8-\xFB][\x80-\xBF]{4,4}$/; - } elsif ($c =~ /[\xFC-\xFD]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xFC-\xFD][\x80-\xBF]{5,5}$/; - } elsif ($c =~ /\xFE/) { - return "non-UTF8 (invalid)" unless $c =~ /\xFE][\x80-\xBF]{6,6}$/; - } else { - return "non-UTF8 (invalid)"; - } - return "other character"; -} - -1; - - diff --git a/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/js/chunk-vendors.cd7b5e68.js b/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/js/chunk-vendors.cd7b5e68.js deleted file mode 100644 index 4fbef84ee16fcc6287d4c440225563de539d8612..0000000000000000000000000000000000000000 --- a/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/js/chunk-vendors.cd7b5e68.js +++ /dev/null @@ -1,71 +0,0 @@ -/*! - -========================================================= -* Vue Notus - v1.1.0 based on Tailwind Starter Kit by Creative Tim -========================================================= - -* Product Page: https://www.creative-tim.com/product/vue-notus -* Copyright 2021 Creative Tim (https://www.creative-tim.com) -* Licensed under MIT (https://github.com/creativetimofficial/vue-notus/blob/main/LICENSE.md) - -* Tailwind Starter Kit Page: https://www.creative-tim.com/learning-lab/tailwind-starter-kit/presentation - -* Coded by Creative Tim - -========================================================= - -* The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - -*/ -(self["webpackChunkvue_notus"]=self["webpackChunkvue_notus"]||[]).push([[998],{7543:function(e,t,n){"use strict";var r=n(1947);t.Z=o;var i=r(n(9649)),s=r(n(8317));function o(){return{install:function(e){e.vMdParser.use(i.default),e.use((0,s.default)())}}}},9649:function(e,t,n){"use strict";var r=n(1947);t.__esModule=!0,t["default"]=s;var i=r(n(2960));function s(e){e.extendMarkdown((function(e){e.use(i.default)}))}},8317:function(e,t,n){"use strict";var r=n(1947);t.__esModule=!0,t["default"]=l;var i=r(n(640));function s(e){return e.classList.contains("v-md-copy-code-btn")}function o(e){return e.classList.contains("v-md-pre-wrapper")?e:o(e.parentNode)}function a(e){var t="v-md-editor-preview";return e.classList.contains(t)?e:e.querySelector("."+t)}function l(){return{install:function(e){e.mixins||(e.mixins=[]),e.mixins.push({emits:["copy-code-success"],mounted:function(){var e=this;this.$nextTick((function(){var t=a(e.$el);t.addEventListener("click",e.handleCopyCodeClick)}))},beforeUnmount:function(){var e=a(this.$el);e.removeEventListener("click",this.handleCopyCodeClick)},methods:{handleCopyCodeClick:function(e){var t=e.target;if(s(t)){var n=o(t.parentNode);if(n){var r=n.querySelector("code").innerText;(0,i.default)(r),this.$emit("copy-code-success",r)}}}}})}}}},1233:function(e,t){"use strict";function n(e,t){e.insert((function(){var e=":",n=":";return{text:""+e+t+n}}))}t.__esModule=!0,t["default"]=n},7988:function(e,t,n){"use strict";var r=n(1947);t.__esModule=!0,t["default"]=o;var i=r(n(326)),s=r(n(1233));function o(e){var t=e.emojiJson,n=e.parser;return function(e){var r=void 0===e?{}:e,o=r.name,a=void 0===o?"emoji":o,l=r.icon,c=void 0===l?"v-md-icon-emoji":l,u=r.text,d=r.title,h=void 0===d?function(e){return e.langConfig.emoji}:d,p=r.customEmoji,f=(0,i.default)({commandName:a,title:h,text:u,icon:c,emojiJson:t});return{install:function(e){"v-md-editor"===e.name&&(e.command(a,s.default),e.toolbar(a,f),e.lang.add({"zh-CN":{emoji:"插入emoji表情"},"en-US":{emoji:"Insert emoji"}})),e.vMdParser.use(n,{customEmoji:p})}}}}},8043:function(e,t,n){"use strict";var r=n(1947);t.Z=void 0;var i=r(n(2676)),s=r(n(7988)),o=r(n(8741)),a=(0,s.default)({emojiJson:i.default,parser:o.default});t.Z=a},3225:function(e,t){"use strict";function n(e){return function(t,n){void 0===n&&(n={}),t.extendMarkdown((function(t){t.use(e),n.customEmoji&&(t.renderer.rules.emoji=function(e,t){return''})}))}}t.__esModule=!0,t["default"]=n},8741:function(e,t,n){"use strict";var r=n(1947);t.__esModule=!0,t["default"]=void 0;var i=r(n(6308)),s=r(n(3225)),o=(0,s.default)(i.default);t["default"]=o},326:function(e,t){"use strict";function n(e,t){return Object.keys(e).map((function(n){return{name:n,text:e[n],class:"v-md-emoji-panel-item",action:function(e){e.execCommand(t,n)}}}))}function r(e){var t=e.commandName,r=e.emojiJson,i=e.text,s=e.title,o=e.icon;return{title:s,icon:o,text:i,menus:{mode:"panel",items:n(r,t)}}}t.__esModule=!0,t.generatorMenuItems=n,t["default"]=r},5245:function(e,t,n){"use strict";var r=n(1947);t.Z=void 0;var i=r(n(7763)),s=r(n(9975)),o=(0,i.default)(s.default);t.Z=o},7763:function(e,t){"use strict";function n(e){return function(t){return{install:function(n){n.vMdParser.use(e,t)}}}}t.__esModule=!0,t["default"]=n},9975:function(e,t,n){"use strict";var r=n(1947);t.__esModule=!0,t["default"]=void 0;var i=r(n(8106)),s="undefined"===typeof window;s||window.katex||console.error("Please import resources katex from cdn");var o=(0,i.default)(s?null:window.katex);t["default"]=o},8106:function(e,t,n){"use strict";var r=n(1947);t.__esModule=!0,t["default"]=o;var i=r(n(9514)),s=r(n(6325));function o(e){return function(t,n){t.extendMarkdown((function(t){e&&t.use(s.default,(0,i.default)({},n,{katex:e}))}))}}},3375:function(e,t,n){"use strict";var r=n(1947);t.Z=void 0;var i=r(n(7307)),s="undefined"===typeof window;s||window.mermaid||console.error("Please import resources mermaid from cdn");var o=(0,i.default)(s?null:window.mermaid);t.Z=o},7307:function(e,t,n){"use strict";var r=n(1947);t.__esModule=!0,t["default"]=u;var i=r(n(4684)),s=r(n(4569)),o=r(n(1812)),a=n(1166),l=n(7060);function c(e){var t="v-md-editor-preview";return e.classList.contains(t)?e:e.querySelector("."+t)}function u(e){function t(){return n.apply(this,arguments)}function n(){return n=(0,s.default)(i.default.mark((function t(){var n,r,s;return i.default.wrap((function(t){while(1)switch(t.prev=t.next){case 0:if(l.inBrowser){t.next=2;break}return t.abrupt("return");case 2:return t.next=4,this.$nextTick();case 4:if(n=c(this.$el),r=n.querySelectorAll(".v-md-mermaid"),r.length){t.next=8;break}return t.abrupt("return");case 8:s=!1,r.forEach((function(t){try{s=e.parse(t.innerText)}catch(n){n.str||console.log(n)}s&&e.init(null,t)}));case 10:case"end":return t.stop()}}),t,this)}))),n.apply(this,arguments)}return function(n){var r=void 0===n?{}:n,i=r.mermaidInitializeOptions,s=void 0===i?{}:i,l={altFontFamily:"sans-serif",flowchart:{htmlLabels:!0,useMaxWidth:!0},fontFamily:"sans-serif",gantt:{leftPadding:75,rightPadding:20},securityLevel:"loose",sequence:{boxMargin:8,diagramMarginX:8,diagramMarginY:8,useMaxWidth:!0},startOnLoad:!1};return(0,a.deepAssign)(l,s),{install:function(n){n.vMdParser.use(o.default),n.mixins||(n.mixins=[]);var r={created:function(){e.initialize(l)},watch:{html:{immediate:!0,handler:t}}};"v-md-editor"===n.name?n.Preview.mixins.push(r):n.mixins.push(r)}}}}},1812:function(e,t,n){"use strict";var r=n(1947);t.__esModule=!0,t["default"]=s;var i=r(n(3596));function s(e){e.extendMarkdown((function(e){e&&e.use(i.default)}))}},2104:function(e,t,n){(function(t,r){e.exports=r(n(821))})("undefined"!==typeof self&&self,(function(e){return function(e){var t={};function n(r){if(t[r])return t[r].exports;var i=t[r]={i:r,l:!1,exports:{}};return e[r].call(i.exports,i,i.exports,n),i.l=!0,i.exports}return n.m=e,n.c=t,n.d=function(e,t,r){n.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:r})},n.r=function(e){"undefined"!==typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},n.t=function(e,t){if(1&t&&(e=n(e)),8&t)return e;if(4&t&&"object"===typeof e&&e&&e.__esModule)return e;var r=Object.create(null);if(n.r(r),Object.defineProperty(r,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var i in e)n.d(r,i,function(t){return e[t]}.bind(null,i));return r},n.n=function(e){var t=e&&e.__esModule?function(){return e["default"]}:function(){return e};return n.d(t,"a",t),t},n.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},n.p="",n(n.s=9)}([function(t,n){t.exports=e},,function(e,t,n){"use strict";n.d(t,"f",(function(){return i})),n.d(t,"a",(function(){return o})),n.d(t,"c",(function(){return a})),n.d(t,"d",(function(){return l})),n.d(t,"e",(function(){return c})),n.d(t,"b",(function(){return u}));var r=Object.prototype.toString,i=function(e){return"[object Object]"===r.call(e)};function s(e,t){return Object.keys(t).forEach((function(n){e[n]=t[n]})),e}function o(e){for(var t={},n=0;nn?"down":"up",c=o/100*(r-n),u=function e(){n+=c,"down"===l&&n>=r||"up"===l&&n<=r?(i(r),window.cancelAnimationFrame(t),a&&window.requestAnimationFrame(a)):(i(n),window.requestAnimationFrame(e))};window.requestAnimationFrame(u)}function s(e){var t=e.scrollTarget,n=e.scrollToTop,s=e.percent,o=void 0===s?10:s,a=e.onScrollEnd,l=Object(r["a"])(t);i({currentScrollTop:l,scrollToTop:n,scrollFn:function(e){return Object(r["b"])(t,e)},percent:o,onScrollEnd:a})}},function(e,t,n){"use strict";n.r(t);var r=n(0);function i(e,t,n,i,s,o){return Object(r["openBlock"])(),Object(r["createBlock"])("div",{class:"v-md-editor-preview",style:{tabSize:e.tabSize,"-moz-tab-size":e.tabSize,"-o-tab-size":e.tabSize},onClick:t[1]||(t[1]=function(){return e.handlePreviewClick.apply(e,arguments)})},[Object(r["createVNode"])("div",{class:[e.previewClass],innerHTML:e.html},null,10,["innerHTML"])],4)}var s=n(7),o=n(13),a=function(){function e(){this.lang=new o["a"]}var t=e.prototype;return t.defaultMarkdownLoader=function(e){return e},t.use=function(e,t){return"function"===typeof e?e(this,t):e.install(this,t),this},t.theme=function(e){this.themeConfig=e},t.extendMarkdown=function(e){if(!this.themeConfig)return console.error("Please use theme before using plugins");var t=this.themeConfig.markdownParser;e(t)},t.parse=function(e){var t,n=this.themeConfig.markdownParser,r=(null==n||null==(t=n.render)?void 0:t.bind(n))||this.defaultMarkdownLoader;return"function"===typeof r&&r!==this.defaultMarkdownLoader||console.error("Please configure your markdown parser"),r(e)},e}(),l=n(16),c={name:"v-md-preview",mixins:[l["a"]],props:{text:{type:String,default:""},theme:Object,beforeChange:Function},emits:["change"],data:function(){return{html:""}},watch:{text:function(){this.handleTextChange()},langConfig:function(){this.handleTextChange()}},computed:{vMdParser:function(){return this.$options.vMdParser},previewClass:function(){return this.vMdParser.themeConfig.previewClass},langConfig:function(){return this.vMdParser.lang.langConfig}},created:function(){this.handleTextChange()},methods:{handleTextChange:function(){var e=this,t=function(t){e.html=s["a"].process(e.$options.vMdParser.parse(t)),e.$emit("change",t,e.html)};this.beforeChange?this.beforeChange(this.text,t):t(this.text)}}},u=new a;u.lang.config=Object(r["reactive"])(u.lang.config),c.vMdParser=new a;var d=c;d.render=i;var h=d,p=(n(18),"2.3.15"),f=function(e){e.component(h.name,h)};h.version=p,h.install=f,h.xss=s["a"],h.use=function(e,t){return"function"===typeof e?e(h,t):e.install(h,t),h};t["default"]=h},,function(e,t,n){var r=n(19),i=n(22),s=n(26);function o(e,t){var n=new s(t);return n.process(e)}for(var a in t=e.exports=o,t.filterXSS=o,t.FilterXSS=s,r)t[a]=r[a];for(var a in i)t[a]=i[a];function l(){return"undefined"!==typeof self&&"undefined"!==typeof DedicatedWorkerGlobalScope&&self instanceof DedicatedWorkerGlobalScope}"undefined"!==typeof window&&(window.filterXSS=e.exports),l()&&(self.filterXSS=e.exports)},,function(e,t,n){"use strict";n.d(t,"a",(function(){return a}));var r=n(2),i=Object.prototype.hasOwnProperty;function s(e,t,n){var s=t[n];void 0!==s&&null!==s&&(i.call(e,n)&&Object(r["f"])(s)?e[n]=o(Object(e[n]),t[n]):e[n]=s)}function o(e,t){return Object.keys(t).forEach((function(n){s(e,t,n)})),e}var a=function(){function e(e){void 0===e&&(e={}),this.config={lang:"zh-CN",langConfig:{"zh-CN":{}}},this.options=e}var t=e.prototype;return t.use=function(e,t){var n;this.config.lang=e,this.add((n={},n[e]=t,n)),this.options.afterUse&&this.options.afterUse(e,t)},t.add=function(e){void 0===e&&(e={}),o(this.config.langConfig,e)},e}()},function(e,t,n){var r=n(20),i=n(24);function s(e,t){var n=new i(t);return n.process(e)}for(var o in t=e.exports=s,t.FilterCSS=i,r)t[o]=r[o];"undefined"!==typeof window&&(window.filterCSS=e.exports)},function(e,t){e.exports={indexOf:function(e,t){var n,r;if(Array.prototype.indexOf)return e.indexOf(t);for(n=0,r=e.length;n/g,m=/"/g,b=/"/g,_=/&#([a-zA-Z0-9]*);?/gim,y=/:?/gim,v=/&newline;?/gim,E=/((j\s*a\s*v\s*a|v\s*b|l\s*i\s*v\s*e)\s*s\s*c\s*r\s*i\s*p\s*t\s*|m\s*o\s*c\s*h\s*a)\:/gi,x=/e\s*x\s*p\s*r\s*e\s*s\s*s\s*i\s*o\s*n\s*\(.*/gi,S=/u\s*r\s*l\s*\(.*/gi;function w(e){return e.replace(m,""")}function T(e){return e.replace(b,'"')}function A(e){return e.replace(_,(function(e,t){return"x"===t[0]||"X"===t[0]?String.fromCharCode(parseInt(t.substr(1),16)):String.fromCharCode(parseInt(t,10))}))}function C(e){return e.replace(y,":").replace(v," ")}function I(e){for(var t="",n=0,r=e.length;n/g;function D(e){var t=e.split("");return t=t.filter((function(e){var t=e.charCodeAt(0);return 127!==t&&(!(t<=31)||(10===t||13===t))})),t.join("")}t.whiteList=o(),t.getDefaultWhiteList=o,t.onTag=l,t.onIgnoreTag=c,t.onTagAttr=u,t.onIgnoreTagAttr=d,t.safeAttrValue=p,t.escapeHtml=h,t.escapeQuote=w,t.unescapeQuote=T,t.escapeHtmlEntities=A,t.escapeDangerHtml5Entities=C,t.clearNonPrintableCharacter=I,t.friendlyAttrValue=R,t.escapeAttrValue=k,t.onIgnoreTagStripAll=P,t.StripTagBody=O,t.stripCommentTag=N,t.stripBlankChar=D,t.cssFilter=a,t.getDefaultCSSWhiteList=i},function(e,t){function n(){var e={"align-content":!1,"align-items":!1,"align-self":!1,"alignment-adjust":!1,"alignment-baseline":!1,all:!1,"anchor-point":!1,animation:!1,"animation-delay":!1,"animation-direction":!1,"animation-duration":!1,"animation-fill-mode":!1,"animation-iteration-count":!1,"animation-name":!1,"animation-play-state":!1,"animation-timing-function":!1,azimuth:!1,"backface-visibility":!1,background:!0,"background-attachment":!0,"background-clip":!0,"background-color":!0,"background-image":!0,"background-origin":!0,"background-position":!0,"background-repeat":!0,"background-size":!0,"baseline-shift":!1,binding:!1,bleed:!1,"bookmark-label":!1,"bookmark-level":!1,"bookmark-state":!1,border:!0,"border-bottom":!0,"border-bottom-color":!0,"border-bottom-left-radius":!0,"border-bottom-right-radius":!0,"border-bottom-style":!0,"border-bottom-width":!0,"border-collapse":!0,"border-color":!0,"border-image":!0,"border-image-outset":!0,"border-image-repeat":!0,"border-image-slice":!0,"border-image-source":!0,"border-image-width":!0,"border-left":!0,"border-left-color":!0,"border-left-style":!0,"border-left-width":!0,"border-radius":!0,"border-right":!0,"border-right-color":!0,"border-right-style":!0,"border-right-width":!0,"border-spacing":!0,"border-style":!0,"border-top":!0,"border-top-color":!0,"border-top-left-radius":!0,"border-top-right-radius":!0,"border-top-style":!0,"border-top-width":!0,"border-width":!0,bottom:!1,"box-decoration-break":!0,"box-shadow":!0,"box-sizing":!0,"box-snap":!0,"box-suppress":!0,"break-after":!0,"break-before":!0,"break-inside":!0,"caption-side":!1,chains:!1,clear:!0,clip:!1,"clip-path":!1,"clip-rule":!1,color:!0,"color-interpolation-filters":!0,"column-count":!1,"column-fill":!1,"column-gap":!1,"column-rule":!1,"column-rule-color":!1,"column-rule-style":!1,"column-rule-width":!1,"column-span":!1,"column-width":!1,columns:!1,contain:!1,content:!1,"counter-increment":!1,"counter-reset":!1,"counter-set":!1,crop:!1,cue:!1,"cue-after":!1,"cue-before":!1,cursor:!1,direction:!1,display:!0,"display-inside":!0,"display-list":!0,"display-outside":!0,"dominant-baseline":!1,elevation:!1,"empty-cells":!1,filter:!1,flex:!1,"flex-basis":!1,"flex-direction":!1,"flex-flow":!1,"flex-grow":!1,"flex-shrink":!1,"flex-wrap":!1,float:!1,"float-offset":!1,"flood-color":!1,"flood-opacity":!1,"flow-from":!1,"flow-into":!1,font:!0,"font-family":!0,"font-feature-settings":!0,"font-kerning":!0,"font-language-override":!0,"font-size":!0,"font-size-adjust":!0,"font-stretch":!0,"font-style":!0,"font-synthesis":!0,"font-variant":!0,"font-variant-alternates":!0,"font-variant-caps":!0,"font-variant-east-asian":!0,"font-variant-ligatures":!0,"font-variant-numeric":!0,"font-variant-position":!0,"font-weight":!0,grid:!1,"grid-area":!1,"grid-auto-columns":!1,"grid-auto-flow":!1,"grid-auto-rows":!1,"grid-column":!1,"grid-column-end":!1,"grid-column-start":!1,"grid-row":!1,"grid-row-end":!1,"grid-row-start":!1,"grid-template":!1,"grid-template-areas":!1,"grid-template-columns":!1,"grid-template-rows":!1,"hanging-punctuation":!1,height:!0,hyphens:!1,icon:!1,"image-orientation":!1,"image-resolution":!1,"ime-mode":!1,"initial-letters":!1,"inline-box-align":!1,"justify-content":!1,"justify-items":!1,"justify-self":!1,left:!1,"letter-spacing":!0,"lighting-color":!0,"line-box-contain":!1,"line-break":!1,"line-grid":!1,"line-height":!1,"line-snap":!1,"line-stacking":!1,"line-stacking-ruby":!1,"line-stacking-shift":!1,"line-stacking-strategy":!1,"list-style":!0,"list-style-image":!0,"list-style-position":!0,"list-style-type":!0,margin:!0,"margin-bottom":!0,"margin-left":!0,"margin-right":!0,"margin-top":!0,"marker-offset":!1,"marker-side":!1,marks:!1,mask:!1,"mask-box":!1,"mask-box-outset":!1,"mask-box-repeat":!1,"mask-box-slice":!1,"mask-box-source":!1,"mask-box-width":!1,"mask-clip":!1,"mask-image":!1,"mask-origin":!1,"mask-position":!1,"mask-repeat":!1,"mask-size":!1,"mask-source-type":!1,"mask-type":!1,"max-height":!0,"max-lines":!1,"max-width":!0,"min-height":!0,"min-width":!0,"move-to":!1,"nav-down":!1,"nav-index":!1,"nav-left":!1,"nav-right":!1,"nav-up":!1,"object-fit":!1,"object-position":!1,opacity:!1,order:!1,orphans:!1,outline:!1,"outline-color":!1,"outline-offset":!1,"outline-style":!1,"outline-width":!1,overflow:!1,"overflow-wrap":!1,"overflow-x":!1,"overflow-y":!1,padding:!0,"padding-bottom":!0,"padding-left":!0,"padding-right":!0,"padding-top":!0,page:!1,"page-break-after":!1,"page-break-before":!1,"page-break-inside":!1,"page-policy":!1,pause:!1,"pause-after":!1,"pause-before":!1,perspective:!1,"perspective-origin":!1,pitch:!1,"pitch-range":!1,"play-during":!1,position:!1,"presentation-level":!1,quotes:!1,"region-fragment":!1,resize:!1,rest:!1,"rest-after":!1,"rest-before":!1,richness:!1,right:!1,rotation:!1,"rotation-point":!1,"ruby-align":!1,"ruby-merge":!1,"ruby-position":!1,"shape-image-threshold":!1,"shape-outside":!1,"shape-margin":!1,size:!1,speak:!1,"speak-as":!1,"speak-header":!1,"speak-numeral":!1,"speak-punctuation":!1,"speech-rate":!1,stress:!1,"string-set":!1,"tab-size":!1,"table-layout":!1,"text-align":!0,"text-align-last":!0,"text-combine-upright":!0,"text-decoration":!0,"text-decoration-color":!0,"text-decoration-line":!0,"text-decoration-skip":!0,"text-decoration-style":!0,"text-emphasis":!0,"text-emphasis-color":!0,"text-emphasis-position":!0,"text-emphasis-style":!0,"text-height":!0,"text-indent":!0,"text-justify":!0,"text-orientation":!0,"text-overflow":!0,"text-shadow":!0,"text-space-collapse":!0,"text-transform":!0,"text-underline-position":!0,"text-wrap":!0,top:!1,transform:!1,"transform-origin":!1,"transform-style":!1,transition:!1,"transition-delay":!1,"transition-duration":!1,"transition-property":!1,"transition-timing-function":!1,"unicode-bidi":!1,"vertical-align":!1,visibility:!1,"voice-balance":!1,"voice-duration":!1,"voice-family":!1,"voice-pitch":!1,"voice-range":!1,"voice-rate":!1,"voice-stress":!1,"voice-volume":!1,volume:!1,"white-space":!1,widows:!1,width:!0,"will-change":!1,"word-break":!0,"word-spacing":!0,"word-wrap":!0,"wrap-flow":!1,"wrap-through":!1,"writing-mode":!1,"z-index":!1};return e}function r(e,t,n){}function i(e,t,n){}var s=/javascript\s*\:/gim;function o(e,t){return s.test(t)?"":t}t.whiteList=n(),t.getDefaultWhiteList=n,t.onAttr=r,t.onIgnoreAttr=i,t.safeAttrValue=o},function(e,t){e.exports={indexOf:function(e,t){var n,r;if(Array.prototype.indexOf)return e.indexOf(t);for(n=0,r=e.length;n"===p){r+=n(e.slice(o,a)),h=e.slice(a,c+1),d=i(h),r+=t(a,r.length,d,h,s(h)),o=c+1,a=!1;continue}if('"'===p||"'"===p){var f=1,g=e.charAt(c-f);while(""===g.trim()||"="===g){if("="===g){l=p;continue e}g=e.charAt(c-++f)}}}else if(p===l){l=!1;continue}}return o0;t--){var n=e[t];if(" "!==n)return"="===n?t:-1}}function d(e){return'"'===e[0]&&'"'===e[e.length-1]||"'"===e[0]&&"'"===e[e.length-1]}function h(e){return d(e)?e.substr(1,e.length-2):e}t.parseTag=o,t.parseAttr=l},,function(e,t,n){var r=n(20),i=n(25);n(21);function s(e){return void 0===e||null===e}function o(e){var t={};for(var n in e)t[n]=e[n];return t}function a(e){e=o(e||{}),e.whiteList=e.whiteList||r.whiteList,e.onAttr=e.onAttr||r.onAttr,e.onIgnoreAttr=e.onIgnoreAttr||r.onIgnoreAttr,e.safeAttrValue=e.safeAttrValue||r.safeAttrValue,this.options=e}a.prototype.process=function(e){if(e=e||"",e=e.toString(),!e)return"";var t=this,n=t.options,r=n.whiteList,o=n.onAttr,a=n.onIgnoreAttr,l=n.safeAttrValue,c=i(e,(function(e,t,n,i,c){var u=r[n],d=!1;if(!0===u?d=u:"function"===typeof u?d=u(i):u instanceof RegExp&&(d=u.test(i)),!0!==d&&(d=!1),i=l(n,i),i){var h={position:t,sourcePosition:e,source:c,isWhite:d};if(d){var p=o(n,i,h);return s(p)?n+":"+i:p}p=a(n,i,h);return s(p)?void 0:p}}));return c},e.exports=a},function(e,t,n){var r=n(21);function i(e,t){e=r.trimRight(e),";"!==e[e.length-1]&&(e+=";");var n=e.length,i=!1,s=0,o=0,a="";function l(){if(!i){var n=r.trim(e.slice(s,o)),l=n.indexOf(":");if(-1!==l){var c=r.trim(n.slice(0,l)),u=r.trim(n.slice(l+1));if(c){var d=t(s,a.length,c,u,n);d&&(a+=d+"; ")}}}s=o+1}for(;o";var y=u(i),v=r[n],E=a(y.html,(function(e,t){var r=-1!==l.indexOf(v,e),i=h(n,e,t,r);if(!c(i))return i;if(r)return t=f(n,e,t,m),t?e+'="'+t+'"':e;i=p(n,e,t,r);return c(i)?void 0:i}));i="<"+n;return E&&(i+=" "+E),y.closing&&(i+=" /"),i+=">",i}_=d(n,i,b);return c(_)?g(i):_}),g);return b&&(_=b.remove(_)),_},e.exports=h}])["default"]}))},1986:function(e){!function(t,n){e.exports=n()}("undefined"!=typeof self&&self,(function(){return function(e){var t={};function n(r){if(t[r])return t[r].exports;var i=t[r]={i:r,l:!1,exports:{}};return e[r].call(i.exports,i,i.exports,n),i.l=!0,i.exports}return n.m=e,n.c=t,n.d=function(e,t,r){n.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:r})},n.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},n.t=function(e,t){if(1&t&&(e=n(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var r=Object.create(null);if(n.r(r),Object.defineProperty(r,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var i in e)n.d(r,i,function(t){return e[t]}.bind(null,i));return r},n.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return n.d(t,"a",t),t},n.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},n.p="",n(n.s=86)}([function(e,t,n){"use strict";var r=Object.prototype.hasOwnProperty;function i(e,t){return r.call(e,t)}function s(e){return!(e>=55296&&e<=57343)&&!(e>=64976&&e<=65007)&&65535!=(65535&e)&&65534!=(65535&e)&&!(e>=0&&e<=8)&&11!==e&&!(e>=14&&e<=31)&&!(e>=127&&e<=159)&&!(e>1114111)}function o(e){if(e>65535){var t=55296+((e-=65536)>>10),n=56320+(1023&e);return String.fromCharCode(t,n)}return String.fromCharCode(e)}var a=/\\([!"#$%&'()*+,\-.\/:;<=>?@[\\\]^_`{|}~])/g,l=new RegExp(a.source+"|"+/&([a-z#][a-z0-9]{1,31});/gi.source,"gi"),c=/^#((?:x[a-f0-9]{1,8}|[0-9]{1,8}))/i,u=n(7),d=/[&<>"]/,h=/[&<>"]/g,p={"&":"&","<":"<",">":">",'"':"""};function f(e){return p[e]}var g=/[.?*+^$[\]\\(){}|-]/g,m=n(3);t.lib={},t.lib.mdurl=n(8),t.lib.ucmicro=n(26),t.assign=function(e){var t=Array.prototype.slice.call(arguments,1);return t.forEach((function(t){if(t){if("object"!=typeof t)throw new TypeError(t+"must be object");Object.keys(t).forEach((function(n){e[n]=t[n]}))}})),e},t.isString=function(e){return"[object String]"===function(e){return Object.prototype.toString.call(e)}(e)},t.has=i,t.unescapeMd=function(e){return e.indexOf("\\")<0?e:e.replace(a,"$1")},t.unescapeAll=function(e){return e.indexOf("\\")<0&&e.indexOf("&")<0?e:e.replace(l,(function(e,t,n){return t||function(e,t){var n=0;return i(u,t)?u[t]:35===t.charCodeAt(0)&&c.test(t)&&s(n="x"===t[1].toLowerCase()?parseInt(t.slice(2),16):parseInt(t.slice(1),10))?o(n):e}(e,n)}))},t.isValidEntityCode=s,t.fromCodePoint=o,t.escapeHtml=function(e){return d.test(e)?e.replace(h,f):e},t.arrayReplaceAt=function(e,t,n){return[].concat(e.slice(0,t),n,e.slice(t+1))},t.isSpace=function(e){switch(e){case 9:case 32:return!0}return!1},t.isWhiteSpace=function(e){if(e>=8192&&e<=8202)return!0;switch(e){case 9:case 10:case 11:case 12:case 13:case 32:case 160:case 5760:case 8239:case 8287:case 12288:return!0}return!1},t.isMdAsciiPunct=function(e){switch(e){case 33:case 34:case 35:case 36:case 37:case 38:case 39:case 40:case 41:case 42:case 43:case 44:case 45:case 46:case 47:case 58:case 59:case 60:case 61:case 62:case 63:case 64:case 91:case 92:case 93:case 94:case 95:case 96:case 123:case 124:case 125:case 126:return!0;default:return!1}},t.isPunctChar=function(e){return m.test(e)},t.escapeRE=function(e){return e.replace(g,"\\$&")},t.normalizeReference=function(e){return e=e.trim().replace(/\s+/g," "),"Ṿ"==="ẞ".toLowerCase()&&(e=e.replace(/ẞ/g,"ß")),e.toLowerCase().toUpperCase()}},function(e,t,n){"use strict";function r(){return(r=Object.assign||function(e){for(var t=1;t'+r+""}}t.b=function(){var e=new i.a;return e.set({html:!0,breaks:!0,linkify:!1,typographer:!0}),e}},function(e,t){e.exports=/[!-#%-\*,-\/:;\?@\[-\]_\{\}\xA1\xA7\xAB\xB6\xB7\xBB\xBF\u037E\u0387\u055A-\u055F\u0589\u058A\u05BE\u05C0\u05C3\u05C6\u05F3\u05F4\u0609\u060A\u060C\u060D\u061B\u061E\u061F\u066A-\u066D\u06D4\u0700-\u070D\u07F7-\u07F9\u0830-\u083E\u085E\u0964\u0965\u0970\u09FD\u0A76\u0AF0\u0C84\u0DF4\u0E4F\u0E5A\u0E5B\u0F04-\u0F12\u0F14\u0F3A-\u0F3D\u0F85\u0FD0-\u0FD4\u0FD9\u0FDA\u104A-\u104F\u10FB\u1360-\u1368\u1400\u166D\u166E\u169B\u169C\u16EB-\u16ED\u1735\u1736\u17D4-\u17D6\u17D8-\u17DA\u1800-\u180A\u1944\u1945\u1A1E\u1A1F\u1AA0-\u1AA6\u1AA8-\u1AAD\u1B5A-\u1B60\u1BFC-\u1BFF\u1C3B-\u1C3F\u1C7E\u1C7F\u1CC0-\u1CC7\u1CD3\u2010-\u2027\u2030-\u2043\u2045-\u2051\u2053-\u205E\u207D\u207E\u208D\u208E\u2308-\u230B\u2329\u232A\u2768-\u2775\u27C5\u27C6\u27E6-\u27EF\u2983-\u2998\u29D8-\u29DB\u29FC\u29FD\u2CF9-\u2CFC\u2CFE\u2CFF\u2D70\u2E00-\u2E2E\u2E30-\u2E4E\u3001-\u3003\u3008-\u3011\u3014-\u301F\u3030\u303D\u30A0\u30FB\uA4FE\uA4FF\uA60D-\uA60F\uA673\uA67E\uA6F2-\uA6F7\uA874-\uA877\uA8CE\uA8CF\uA8F8-\uA8FA\uA8FC\uA92E\uA92F\uA95F\uA9C1-\uA9CD\uA9DE\uA9DF\uAA5C-\uAA5F\uAADE\uAADF\uAAF0\uAAF1\uABEB\uFD3E\uFD3F\uFE10-\uFE19\uFE30-\uFE52\uFE54-\uFE61\uFE63\uFE68\uFE6A\uFE6B\uFF01-\uFF03\uFF05-\uFF0A\uFF0C-\uFF0F\uFF1A\uFF1B\uFF1F\uFF20\uFF3B-\uFF3D\uFF3F\uFF5B\uFF5D\uFF5F-\uFF65]|\uD800[\uDD00-\uDD02\uDF9F\uDFD0]|\uD801\uDD6F|\uD802[\uDC57\uDD1F\uDD3F\uDE50-\uDE58\uDE7F\uDEF0-\uDEF6\uDF39-\uDF3F\uDF99-\uDF9C]|\uD803[\uDF55-\uDF59]|\uD804[\uDC47-\uDC4D\uDCBB\uDCBC\uDCBE-\uDCC1\uDD40-\uDD43\uDD74\uDD75\uDDC5-\uDDC8\uDDCD\uDDDB\uDDDD-\uDDDF\uDE38-\uDE3D\uDEA9]|\uD805[\uDC4B-\uDC4F\uDC5B\uDC5D\uDCC6\uDDC1-\uDDD7\uDE41-\uDE43\uDE60-\uDE6C\uDF3C-\uDF3E]|\uD806[\uDC3B\uDE3F-\uDE46\uDE9A-\uDE9C\uDE9E-\uDEA2]|\uD807[\uDC41-\uDC45\uDC70\uDC71\uDEF7\uDEF8]|\uD809[\uDC70-\uDC74]|\uD81A[\uDE6E\uDE6F\uDEF5\uDF37-\uDF3B\uDF44]|\uD81B[\uDE97-\uDE9A]|\uD82F\uDC9F|\uD836[\uDE87-\uDE8B]|\uD83A[\uDD5E\uDD5F]/},function(e,t,n){"use strict";function r(){this.__rules__=[],this.__cache__=null}r.prototype.__find__=function(e){for(var t=0;t=0&&(n=this.attrs[t][1]),n},r.prototype.attrJoin=function(e,t){var n=this.attrIndex(e);n<0?this.attrPush([e,t]):this.attrs[n][1]=this.attrs[n][1]+" "+t},e.exports=r},function(e,t,n){"use strict";const r=/[\u0000-\u001f]/g,i=/[\s~`!@#$%^&*()\-_+=[\]{}|\\;:"'“”‘’–—<>,.?/]+/g,s=/[\u0300-\u036F]/g;e.exports=function(e){return e.normalize("NFKD").replace(s,"").replace(r,"").replace(i,"-").replace(/\-{2,}/g,"-").replace(/^\-+|\-+$/g,"").replace(/^(\d)/,"_$1").toLowerCase()}},function(e,t,n){"use strict";e.exports=n(21)},function(e,t,n){"use strict";e.exports.encode=n(22),e.exports.decode=n(23),e.exports.format=n(24),e.exports.parse=n(25)},function(e,t){e.exports=/[\0-\uD7FF\uE000-\uFFFF]|[\uD800-\uDBFF][\uDC00-\uDFFF]|[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?:[^\uD800-\uDBFF]|^)[\uDC00-\uDFFF]/},function(e,t){e.exports=/[\0-\x1F\x7F-\x9F]/},function(e,t){e.exports=/[ \xA0\u1680\u2000-\u200A\u2028\u2029\u202F\u205F\u3000]/},function(e,t,n){"use strict";var r="<[A-Za-z][A-Za-z0-9\\-]*(?:\\s+[a-zA-Z_:][a-zA-Z0-9:._-]*(?:\\s*=\\s*(?:[^\"'=<>`\\x00-\\x20]+|'[^']*'|\"[^\"]*\"))?)*\\s*\\/?>",i="<\\/[A-Za-z][A-Za-z0-9\\-]*\\s*>",s=new RegExp("^(?:"+r+"|"+i+"|\x3c!----\x3e|\x3c!--(?:-?[^>-])(?:-?[^-])*--\x3e|<[?][\\s\\S]*?[?]>|]*>|)"),o=new RegExp("^(?:"+r+"|"+i+")");e.exports.HTML_TAG_RE=s,e.exports.HTML_OPEN_CLOSE_TAG_RE=o},function(e,t,n){"use strict";function r(e,t){var n,r,i,s,o,a=[],l=t.length;for(n=0;n=0;n--)95!==(r=t[n]).marker&&42!==r.marker||-1!==r.end&&(i=t[r.end],a=n>0&&t[n-1].end===r.end+1&&t[n-1].marker===r.marker&&t[n-1].token===r.token-1&&t[r.end+1].token===i.token+1,o=String.fromCharCode(r.marker),(s=e.tokens[r.token]).type=a?"strong_open":"em_open",s.tag=a?"strong":"em",s.nesting=1,s.markup=a?o+o:o,s.content="",(s=e.tokens[i.token]).type=a?"strong_close":"em_close",s.tag=a?"strong":"em",s.nesting=-1,s.markup=a?o+o:o,s.content="",a&&(e.tokens[t[n-1].token].content="",e.tokens[t[r.end+1].token].content="",n--))}e.exports.tokenize=function(e,t){var n,r,i=e.pos,s=e.src.charCodeAt(i);if(t)return!1;if(95!==s&&42!==s)return!1;for(r=e.scanDelims(e.pos,42===s),n=0;n=0?u[d]:u[u.length+d]);var u,d;if(void 0===c)return r;for(let h in n)if("shift"!==h&&"position"!==h){if(void 0===c[h])return r;if("children"===h&&o(n.children)){if(0===c.children.length)return r;let e,t=n.children,i=c.children;if(t.every((e=>void 0!==e.position))){if(e=t.every((e=>s(i,e.position,e).match)),e){let e=l(t).position;r.j=e>=0?e:i.length+e}}else for(let n=0;ns(i,n,e).match)),e){r.j=n;break}if(!1===e)return r}else switch(typeof n[h]){case"boolean":case"number":case"string":if(c[h]!==n[h])return r;break;case"function":if(!n[h](c[h]))return r;break;case"object":if(a(n[h])){if(!1===n[h].every((e=>e(c[h]))))return r;break}default:throw new Error(`Unknown type of pattern test (key: ${h}). Test should be of type boolean, number, string, function or array of functions.`)}}return r.match=!0,r}function o(e){return Array.isArray(e)&&e.length&&e.every((e=>"object"==typeof e))}function a(e){return Array.isArray(e)&&e.length&&e.every((e=>"function"==typeof e))}function l(e){return e.slice(-1)[0]||{}}e.exports=function(e,t){let n=Object.assign({},i);n=Object.assign(n,t);const o=r(n);e.core.ruler.before("linkify","curly_attributes",(function(e){let t=e.tokens;for(let n=0;n{let r=s(t,n,e);return null!==r.j&&(i=r.j),r.match}))&&(r.transform(t,n,i),"inline attributes"!==r.name&&"inline nesting 0"!==r.name||e--)}}))}},function(e,t,n){"use strict";e.exports=n(20)},function(e,t,n){"use strict";n.r(t),n.d(t,"default",(function(){return g}));var r=n(1),i=n(15),s=n.n(i),o=function(e,t){var n=(void 0===t?{}:t).lineMarkup,r=void 0===n?"data-line":n,i=function(e,t,n,r,i){return i.renderToken(e,t,n)};function s(e){return function(t,n,i,s,o){var a=t[n];return a.attrPush([r,a.map[0]+1]),e(t,n,i,s,o)}}function o(e){return function(t,n,i,s,o){var a=e(t,n,i,s,o),l=t[n].map[0]+1;return"
    '+a+"
    "}}var a={table_open:s,blockquote_open:s,bullet_list_open:s,ordered_list_open:s,reference_open:s,heading_open:s,lheading_open:s,paragraph_open:s,hr:s,html_block:o,code_block:o,fence:o};Object.keys(a).forEach((function(t){var n=e.renderer.rules[t]||i;e.renderer.rules[t]=a[t](n)}))},a=function(e,t){void 0===t&&(t={});var n=t.getMarks;n&&e.core.ruler.push("anchor",(function(e){var t={},r=e.tokens;r.filter((function(e){return"heading_open"===e.type})).forEach((function(e){var i=r[r.indexOf(e)+1].content,s=Number(e.tag.substr(1));t[i]=i in t?Number(t[i])+1:"";var o=n(i,s,t[i]);o&&o.forEach((function(t){var n=t.attr,r=t.value;e.attrPush([n,r])}))}))}))},l={includeLevel:[2,3],containerClass:"table-of-contents",listClass:"table-of-content-list",listItemClass:"table-of-content-list-item",markerPattern:/^\[\[toc\]\]/im,listType:"ul",getAnchorAttrs:function(){return[]},format:void 0,forceFullToc:!1,containerHeaderHtml:void 0,containerFooterHtml:void 0,transformLink:void 0},c=function(e,t){var n,i=Object(r.a)({},l,t),s=i.markerPattern;function o(e,t,n){for(var r,s,a=[],l="",c=t.length,u=e;ur){l+=(s=o(u,t,n))[1],u=s[0];continue}if(p'+a.join("")+""];p==r&&(l+="",a.push(l))}else r=p;var f=h.children.reduce((function(e,t){return e+t.content}),""),g=h.content,m=n[g]=g in n?Number(n[g])+1:"",b=i.getAnchorAttrs(g,p,m);l='
  11. \n ",l+=f,l+="",u++}else u++}return l+=""===l?"":"
  12. ",a.push(l),[u,"<"+i.listType+' class="'+i.listClass+'">'+a.join("")+""]}e.renderer.rules.toc_open=function(e,t){var n='
    ';return i.containerHeaderHtml&&(n+=i.containerHeaderHtml),n},e.renderer.rules.toc_close=function(e,t){var n="";return i.containerFooterHtml&&(n=i.containerFooterHtml),n+"
    "},e.renderer.rules.toc_body=function(e,t){var r={};if(i.forceFullToc){for(var s="",a=0,l=n&&n.tokens&&n.tokens.length;a\x3c!--afterbegin--\x3e'+l+"\x3c!--beforeend--\x3e\x3c!--afterend--\x3e"}},s=e.renderer.rules,o=s.fence,a=s.code_block;e.renderer.rules.fence=i(o),e.renderer.rules.code_block=i(a)},d=function(e,t){var n=t.externalAttrs,r=t.openLinkIcon,i=t.openLinkIconClass,s=!1;e.renderer.rules.link_open=function(e,t,r,i,o){var a=e[t],l=a.attrIndex("href");if(l>=0){var c=a.attrs[l][1];/^https?:/.test(c)&&(Object.keys(n).forEach((function(e){a.attrSet(e,n[e])})),/_blank/i.test(n.target)&&(s=!0))}return o.renderToken(e,t,r)},e.renderer.rules.link_close=function(e,t,n,o,a){return s&&(s=!1,r)?i?''+a.renderToken(e,t,n):''+a.renderToken(e,t,n):a.renderToken(e,t,n)}},h=n(6),p=n.n(h),f=n(2);function g(e){var t=void 0===e?{}:e,n=t.toc,i=t.link,l=t.attrs,h=Object(f.b)();return h.use(d,Object(r.a)({externalAttrs:{target:"_blank"}},i)).use(u,{getWrapperClass:function(e){return"v-md-pre-wrapper v-md-pre-wrapper-"+e}}).use(s.a,Object(r.a)({leftDelimiter:"{{{",rightDelimiter:"}}}"},l,{allowedAttributes:["width","height"].concat(null==l?void 0:l.allowedAttributes)})).use(a,{getMarks:function(e,t,n){return[{attr:"data-v-md-heading",value:p()(e)+(n?"-"+n:"")}]}}).use(c,Object(r.a)({listClass:"v-md-toc",listItemClass:"v-md-toc-item",getAnchorAttrs:function(e,t,n){return[{attr:"data-v-md-anchor",value:p()(e)+(n?"-"+n:"")}]}},n)).use(o,{lineMarkup:"data-v-md-line"}),{previewClass:"markdown-body",extend:function(e){e(h)},markdownParser:h}}},function(e,t,n){"use strict";const r=n(19);function i(e){return e.slice(-1)[0]}e.exports=e=>{const t=new RegExp("^ {0,3}[-*_]{3,} ?"+r.escapeRegExp(e.leftDelimiter)+"[^"+r.escapeRegExp(e.rightDelimiter)+"]");return[{name:"fenced code blocks",tests:[{shift:0,block:!0,info:r.hasDelimiters("end",e)}],transform:(t,n)=>{let i=t[n],s=i.info.lastIndexOf(e.leftDelimiter),o=r.getAttrs(i.info,s,e);r.addAttrs(o,i),i.info=r.removeDelimiter(i.info,e)}},{name:"inline nesting 0",tests:[{shift:0,type:"inline",children:[{shift:-1,type:e=>"image"===e||"code_inline"===e},{shift:0,type:"text",content:r.hasDelimiters("start",e)}]}],transform:(t,n,i)=>{let s=t[n].children[i],o=s.content.indexOf(e.rightDelimiter),a=t[n].children[i-1],l=r.getAttrs(s.content,0,e);r.addAttrs(l,a),s.content.length===o+e.rightDelimiter.length?t[n].children.splice(i,1):s.content=s.content.slice(o+e.rightDelimiter.length)}},{name:"tables",tests:[{shift:0,type:"table_close"},{shift:1,type:"paragraph_open"},{shift:2,type:"inline",content:r.hasDelimiters("only",e)}],transform:(t,n)=>{let i=t[n+2],s=r.getMatchingOpeningToken(t,n),o=r.getAttrs(i.content,0,e);r.addAttrs(o,s),t.splice(n+1,3)}},{name:"inline attributes",tests:[{shift:0,type:"inline",children:[{shift:-1,nesting:-1},{shift:0,type:"text",content:r.hasDelimiters("start",e)}]}],transform:(t,n,i)=>{let s=t[n].children[i],o=s.content,a=r.getAttrs(o,0,e),l=r.getMatchingOpeningToken(t[n].children,i-1);r.addAttrs(a,l),s.content=o.slice(o.indexOf(e.rightDelimiter)+e.rightDelimiter.length)}},{name:"list softbreak",tests:[{shift:-2,type:"list_item_open"},{shift:0,type:"inline",children:[{position:-2,type:"softbreak"},{position:-1,type:"text",content:r.hasDelimiters("only",e)}]}],transform:(t,n,i)=>{let s=t[n].children[i].content,o=r.getAttrs(s,0,e),a=n-2;for(;t[a-1]&&"ordered_list_open"!==t[a-1].type&&"bullet_list_open"!==t[a-1].type;)a--;r.addAttrs(o,t[a-1]),t[n].children=t[n].children.slice(0,-2)}},{name:"list double softbreak",tests:[{shift:0,type:e=>"bullet_list_close"===e||"ordered_list_close"===e},{shift:1,type:"paragraph_open"},{shift:2,type:"inline",content:r.hasDelimiters("only",e),children:e=>1===e.length},{shift:3,type:"paragraph_close"}],transform:(t,n)=>{let i=t[n+2].content,s=r.getAttrs(i,0,e),o=r.getMatchingOpeningToken(t,n);r.addAttrs(s,o),t.splice(n+1,3)}},{name:"list item end",tests:[{shift:-2,type:"list_item_open"},{shift:0,type:"inline",children:[{position:-1,type:"text",content:r.hasDelimiters("end",e)}]}],transform:(t,n,s)=>{let o=t[n].children[s],a=o.content,l=r.getAttrs(a,a.lastIndexOf(e.leftDelimiter),e);r.addAttrs(l,t[n-2]);let c=a.slice(0,a.lastIndexOf(e.leftDelimiter));o.content=" "!==i(c)?c:c.slice(0,-1)}},{name:"\n{.a} softbreak then curly in start",tests:[{shift:0,type:"inline",children:[{position:-2,type:"softbreak"},{position:-1,type:"text",content:r.hasDelimiters("only",e)}]}],transform:(t,n,i)=>{let s=t[n].children[i],o=r.getAttrs(s.content,0,e),a=n+1;for(;t[a+1]&&-1===t[a+1].nesting;)a++;let l=r.getMatchingOpeningToken(t,a);r.addAttrs(o,l),t[n].children=t[n].children.slice(0,-2)}},{name:"horizontal rule",tests:[{shift:0,type:"paragraph_open"},{shift:1,type:"inline",children:e=>1===e.length,content:e=>null!==e.match(t)},{shift:2,type:"paragraph_close"}],transform:(t,n)=>{let i=t[n];i.type="hr",i.tag="hr",i.nesting=0;let s=t[n+1].content,o=s.lastIndexOf(e.leftDelimiter);i.attrs=r.getAttrs(s,o,e),i.markup=s,t.splice(n+1,2)}},{name:"end of block",tests:[{shift:0,type:"inline",children:[{position:-1,content:r.hasDelimiters("end",e),type:e=>"code_inline"!==e}]}],transform:(t,n,s)=>{let o=t[n].children[s],a=o.content,l=r.getAttrs(a,a.lastIndexOf(e.leftDelimiter),e),c=n+1;for(;t[c+1]&&-1===t[c+1].nesting;)c++;let u=r.getMatchingOpeningToken(t,c);r.addAttrs(l,u);let d=a.slice(0,a.lastIndexOf(e.leftDelimiter));o.content=" "!==i(d)?d:d.slice(0,-1)}}]}},function(e,t,n){"use strict";function r(e){return e.replace(/[-/\\^$*+?.()|[\]{}]/g,"\\$&")}t.getAttrs=function(e,t,n){const r=/[^\t\n\f />"'=]/,i=[];let s="",o="",a=!0,l=!1;for(let c=t+n.leftDelimiter.length;c=a+1:e.length>=a}(n.substring(r,i+t.rightDelimiter.length))}},t.removeDelimiter=function(e,t){const n=r(t.leftDelimiter),i=r(t.rightDelimiter);let s=new RegExp("[ \\n]?"+n+"[^"+n+i+"]+"+i+"$"),o=e.search(s);return-1!==o?e.slice(0,o):e},t.escapeRegExp=r,t.getMatchingOpeningToken=function(e,t){if("softbreak"===e[t].type)return!1;if(0===e[t].nesting)return e[t];let n=e[t].level,r=e[t].type.replace("_close","_open");for(;t>=0;--t)if(e[t].type===r&&e[t].level===n)return e[t]};let i=/[&<>"]/,s=/[&<>"]/g,o={"&":"&","<":"<",">":">",'"':"""};function a(e){return o[e]}t.escapeHtml=function(e){return i.test(e)?e.replace(s,a):e}},function(e,t,n){"use strict";var r=n(0),i=n(28),s=n(32),o=n(33),a=n(41),l=n(55),c=n(68),u=n(8),d=n(70),h={default:n(73),zero:n(74),commonmark:n(75)},p=/^(vbscript|javascript|file|data):/,f=/^data:image\/(gif|png|jpeg|webp);/;function g(e){var t=e.trim().toLowerCase();return!p.test(t)||!!f.test(t)}var m=["http:","https:","mailto:"];function b(e){var t=u.parse(e,!0);if(t.hostname&&(!t.protocol||m.indexOf(t.protocol)>=0))try{t.hostname=d.toASCII(t.hostname)}catch(e){}return u.encode(u.format(t))}function _(e){var t=u.parse(e,!0);if(t.hostname&&(!t.protocol||m.indexOf(t.protocol)>=0))try{t.hostname=d.toUnicode(t.hostname)}catch(e){}return u.decode(u.format(t),u.decode.defaultChars+"%")}function y(e,t){if(!(this instanceof y))return new y(e,t);t||r.isString(e)||(t=e||{},e="default"),this.inline=new l,this.block=new a,this.core=new o,this.renderer=new s,this.linkify=new c,this.validateLink=g,this.normalizeLink=b,this.normalizeLinkText=_,this.utils=r,this.helpers=r.assign({},i),this.options={},this.configure(e),t&&this.set(t)}y.prototype.set=function(e){return r.assign(this.options,e),this},y.prototype.configure=function(e){var t,n=this;if(r.isString(e)&&!(e=h[t=e]))throw new Error('Wrong `markdown-it` preset "'+t+'", check name');if(!e)throw new Error("Wrong `markdown-it` preset, can't be empty");return e.options&&n.set(e.options),e.components&&Object.keys(e.components).forEach((function(t){e.components[t].rules&&n[t].ruler.enableOnly(e.components[t].rules),e.components[t].rules2&&n[t].ruler2.enableOnly(e.components[t].rules2)})),this},y.prototype.enable=function(e,t){var n=[];Array.isArray(e)||(e=[e]),["core","block","inline"].forEach((function(t){n=n.concat(this[t].ruler.enable(e,!0))}),this),n=n.concat(this.inline.ruler2.enable(e,!0));var r=e.filter((function(e){return n.indexOf(e)<0}));if(r.length&&!t)throw new Error("MarkdownIt. Failed to enable unknown rule(s): "+r);return this},y.prototype.disable=function(e,t){var n=[];Array.isArray(e)||(e=[e]),["core","block","inline"].forEach((function(t){n=n.concat(this[t].ruler.disable(e,!0))}),this),n=n.concat(this.inline.ruler2.disable(e,!0));var r=e.filter((function(e){return n.indexOf(e)<0}));if(r.length&&!t)throw new Error("MarkdownIt. Failed to disable unknown rule(s): "+r);return this},y.prototype.use=function(e){var t=[this].concat(Array.prototype.slice.call(arguments,1));return e.apply(e,t),this},y.prototype.parse=function(e,t){if("string"!=typeof e)throw new Error("Input data should be a String");var n=new this.core.State(e,this,t);return this.core.process(n),n.tokens},y.prototype.render=function(e,t){return t=t||{},this.renderer.render(this.parse(e,t),this.options,t)},y.prototype.parseInline=function(e,t){var n=new this.core.State(e,this,t);return n.inlineMode=!0,this.core.process(n),n.tokens},y.prototype.renderInline=function(e,t){return t=t||{},this.renderer.render(this.parseInline(e,t),this.options,t)},e.exports=y},function(e){e.exports=JSON.parse('{"Aacute":"Á","aacute":"á","Abreve":"Ă","abreve":"ă","ac":"∾","acd":"∿","acE":"∾̳","Acirc":"Â","acirc":"â","acute":"´","Acy":"А","acy":"а","AElig":"Æ","aelig":"æ","af":"⁡","Afr":"𝔄","afr":"𝔞","Agrave":"À","agrave":"à","alefsym":"ℵ","aleph":"ℵ","Alpha":"Α","alpha":"α","Amacr":"Ā","amacr":"ā","amalg":"⨿","amp":"&","AMP":"&","andand":"⩕","And":"⩓","and":"∧","andd":"⩜","andslope":"⩘","andv":"⩚","ang":"∠","ange":"⦤","angle":"∠","angmsdaa":"⦨","angmsdab":"⦩","angmsdac":"⦪","angmsdad":"⦫","angmsdae":"⦬","angmsdaf":"⦭","angmsdag":"⦮","angmsdah":"⦯","angmsd":"∡","angrt":"∟","angrtvb":"⊾","angrtvbd":"⦝","angsph":"∢","angst":"Å","angzarr":"⍼","Aogon":"Ą","aogon":"ą","Aopf":"𝔸","aopf":"𝕒","apacir":"⩯","ap":"≈","apE":"⩰","ape":"≊","apid":"≋","apos":"\'","ApplyFunction":"⁡","approx":"≈","approxeq":"≊","Aring":"Å","aring":"å","Ascr":"𝒜","ascr":"𝒶","Assign":"≔","ast":"*","asymp":"≈","asympeq":"≍","Atilde":"Ã","atilde":"ã","Auml":"Ä","auml":"ä","awconint":"∳","awint":"⨑","backcong":"≌","backepsilon":"϶","backprime":"‵","backsim":"∽","backsimeq":"⋍","Backslash":"∖","Barv":"⫧","barvee":"⊽","barwed":"⌅","Barwed":"⌆","barwedge":"⌅","bbrk":"⎵","bbrktbrk":"⎶","bcong":"≌","Bcy":"Б","bcy":"б","bdquo":"„","becaus":"∵","because":"∵","Because":"∵","bemptyv":"⦰","bepsi":"϶","bernou":"ℬ","Bernoullis":"ℬ","Beta":"Β","beta":"β","beth":"ℶ","between":"≬","Bfr":"𝔅","bfr":"𝔟","bigcap":"⋂","bigcirc":"◯","bigcup":"⋃","bigodot":"⨀","bigoplus":"⨁","bigotimes":"⨂","bigsqcup":"⨆","bigstar":"★","bigtriangledown":"▽","bigtriangleup":"△","biguplus":"⨄","bigvee":"⋁","bigwedge":"⋀","bkarow":"⤍","blacklozenge":"⧫","blacksquare":"▪","blacktriangle":"▴","blacktriangledown":"▾","blacktriangleleft":"◂","blacktriangleright":"▸","blank":"␣","blk12":"▒","blk14":"░","blk34":"▓","block":"█","bne":"=⃥","bnequiv":"≡⃥","bNot":"⫭","bnot":"⌐","Bopf":"𝔹","bopf":"𝕓","bot":"⊥","bottom":"⊥","bowtie":"⋈","boxbox":"⧉","boxdl":"┐","boxdL":"╕","boxDl":"╖","boxDL":"╗","boxdr":"┌","boxdR":"╒","boxDr":"╓","boxDR":"╔","boxh":"─","boxH":"═","boxhd":"┬","boxHd":"╤","boxhD":"╥","boxHD":"╦","boxhu":"┴","boxHu":"╧","boxhU":"╨","boxHU":"╩","boxminus":"⊟","boxplus":"⊞","boxtimes":"⊠","boxul":"┘","boxuL":"╛","boxUl":"╜","boxUL":"╝","boxur":"└","boxuR":"╘","boxUr":"╙","boxUR":"╚","boxv":"│","boxV":"║","boxvh":"┼","boxvH":"╪","boxVh":"╫","boxVH":"╬","boxvl":"┤","boxvL":"╡","boxVl":"╢","boxVL":"╣","boxvr":"├","boxvR":"╞","boxVr":"╟","boxVR":"╠","bprime":"‵","breve":"˘","Breve":"˘","brvbar":"¦","bscr":"𝒷","Bscr":"ℬ","bsemi":"⁏","bsim":"∽","bsime":"⋍","bsolb":"⧅","bsol":"\\\\","bsolhsub":"⟈","bull":"•","bullet":"•","bump":"≎","bumpE":"⪮","bumpe":"≏","Bumpeq":"≎","bumpeq":"≏","Cacute":"Ć","cacute":"ć","capand":"⩄","capbrcup":"⩉","capcap":"⩋","cap":"∩","Cap":"⋒","capcup":"⩇","capdot":"⩀","CapitalDifferentialD":"ⅅ","caps":"∩︀","caret":"⁁","caron":"ˇ","Cayleys":"ℭ","ccaps":"⩍","Ccaron":"Č","ccaron":"č","Ccedil":"Ç","ccedil":"ç","Ccirc":"Ĉ","ccirc":"ĉ","Cconint":"∰","ccups":"⩌","ccupssm":"⩐","Cdot":"Ċ","cdot":"ċ","cedil":"¸","Cedilla":"¸","cemptyv":"⦲","cent":"¢","centerdot":"·","CenterDot":"·","cfr":"𝔠","Cfr":"ℭ","CHcy":"Ч","chcy":"ч","check":"✓","checkmark":"✓","Chi":"Χ","chi":"χ","circ":"ˆ","circeq":"≗","circlearrowleft":"↺","circlearrowright":"↻","circledast":"⊛","circledcirc":"⊚","circleddash":"⊝","CircleDot":"⊙","circledR":"®","circledS":"Ⓢ","CircleMinus":"⊖","CirclePlus":"⊕","CircleTimes":"⊗","cir":"○","cirE":"⧃","cire":"≗","cirfnint":"⨐","cirmid":"⫯","cirscir":"⧂","ClockwiseContourIntegral":"∲","CloseCurlyDoubleQuote":"”","CloseCurlyQuote":"’","clubs":"♣","clubsuit":"♣","colon":":","Colon":"∷","Colone":"⩴","colone":"≔","coloneq":"≔","comma":",","commat":"@","comp":"∁","compfn":"∘","complement":"∁","complexes":"ℂ","cong":"≅","congdot":"⩭","Congruent":"≡","conint":"∮","Conint":"∯","ContourIntegral":"∮","copf":"𝕔","Copf":"ℂ","coprod":"∐","Coproduct":"∐","copy":"©","COPY":"©","copysr":"℗","CounterClockwiseContourIntegral":"∳","crarr":"↵","cross":"✗","Cross":"⨯","Cscr":"𝒞","cscr":"𝒸","csub":"⫏","csube":"⫑","csup":"⫐","csupe":"⫒","ctdot":"⋯","cudarrl":"⤸","cudarrr":"⤵","cuepr":"⋞","cuesc":"⋟","cularr":"↶","cularrp":"⤽","cupbrcap":"⩈","cupcap":"⩆","CupCap":"≍","cup":"∪","Cup":"⋓","cupcup":"⩊","cupdot":"⊍","cupor":"⩅","cups":"∪︀","curarr":"↷","curarrm":"⤼","curlyeqprec":"⋞","curlyeqsucc":"⋟","curlyvee":"⋎","curlywedge":"⋏","curren":"¤","curvearrowleft":"↶","curvearrowright":"↷","cuvee":"⋎","cuwed":"⋏","cwconint":"∲","cwint":"∱","cylcty":"⌭","dagger":"†","Dagger":"‡","daleth":"ℸ","darr":"↓","Darr":"↡","dArr":"⇓","dash":"‐","Dashv":"⫤","dashv":"⊣","dbkarow":"⤏","dblac":"˝","Dcaron":"Ď","dcaron":"ď","Dcy":"Д","dcy":"д","ddagger":"‡","ddarr":"⇊","DD":"ⅅ","dd":"ⅆ","DDotrahd":"⤑","ddotseq":"⩷","deg":"°","Del":"∇","Delta":"Δ","delta":"δ","demptyv":"⦱","dfisht":"⥿","Dfr":"𝔇","dfr":"𝔡","dHar":"⥥","dharl":"⇃","dharr":"⇂","DiacriticalAcute":"´","DiacriticalDot":"˙","DiacriticalDoubleAcute":"˝","DiacriticalGrave":"`","DiacriticalTilde":"˜","diam":"⋄","diamond":"⋄","Diamond":"⋄","diamondsuit":"♦","diams":"♦","die":"¨","DifferentialD":"ⅆ","digamma":"ϝ","disin":"⋲","div":"÷","divide":"÷","divideontimes":"⋇","divonx":"⋇","DJcy":"Ђ","djcy":"ђ","dlcorn":"⌞","dlcrop":"⌍","dollar":"$","Dopf":"𝔻","dopf":"𝕕","Dot":"¨","dot":"˙","DotDot":"⃜","doteq":"≐","doteqdot":"≑","DotEqual":"≐","dotminus":"∸","dotplus":"∔","dotsquare":"⊡","doublebarwedge":"⌆","DoubleContourIntegral":"∯","DoubleDot":"¨","DoubleDownArrow":"⇓","DoubleLeftArrow":"⇐","DoubleLeftRightArrow":"⇔","DoubleLeftTee":"⫤","DoubleLongLeftArrow":"⟸","DoubleLongLeftRightArrow":"⟺","DoubleLongRightArrow":"⟹","DoubleRightArrow":"⇒","DoubleRightTee":"⊨","DoubleUpArrow":"⇑","DoubleUpDownArrow":"⇕","DoubleVerticalBar":"∥","DownArrowBar":"⤓","downarrow":"↓","DownArrow":"↓","Downarrow":"⇓","DownArrowUpArrow":"⇵","DownBreve":"̑","downdownarrows":"⇊","downharpoonleft":"⇃","downharpoonright":"⇂","DownLeftRightVector":"⥐","DownLeftTeeVector":"⥞","DownLeftVectorBar":"⥖","DownLeftVector":"↽","DownRightTeeVector":"⥟","DownRightVectorBar":"⥗","DownRightVector":"⇁","DownTeeArrow":"↧","DownTee":"⊤","drbkarow":"⤐","drcorn":"⌟","drcrop":"⌌","Dscr":"𝒟","dscr":"𝒹","DScy":"Ѕ","dscy":"ѕ","dsol":"⧶","Dstrok":"Đ","dstrok":"đ","dtdot":"⋱","dtri":"▿","dtrif":"▾","duarr":"⇵","duhar":"⥯","dwangle":"⦦","DZcy":"Џ","dzcy":"џ","dzigrarr":"⟿","Eacute":"É","eacute":"é","easter":"⩮","Ecaron":"Ě","ecaron":"ě","Ecirc":"Ê","ecirc":"ê","ecir":"≖","ecolon":"≕","Ecy":"Э","ecy":"э","eDDot":"⩷","Edot":"Ė","edot":"ė","eDot":"≑","ee":"ⅇ","efDot":"≒","Efr":"𝔈","efr":"𝔢","eg":"⪚","Egrave":"È","egrave":"è","egs":"⪖","egsdot":"⪘","el":"⪙","Element":"∈","elinters":"⏧","ell":"ℓ","els":"⪕","elsdot":"⪗","Emacr":"Ē","emacr":"ē","empty":"∅","emptyset":"∅","EmptySmallSquare":"◻","emptyv":"∅","EmptyVerySmallSquare":"▫","emsp13":" ","emsp14":" ","emsp":" ","ENG":"Ŋ","eng":"ŋ","ensp":" ","Eogon":"Ę","eogon":"ę","Eopf":"𝔼","eopf":"𝕖","epar":"⋕","eparsl":"⧣","eplus":"⩱","epsi":"ε","Epsilon":"Ε","epsilon":"ε","epsiv":"ϵ","eqcirc":"≖","eqcolon":"≕","eqsim":"≂","eqslantgtr":"⪖","eqslantless":"⪕","Equal":"⩵","equals":"=","EqualTilde":"≂","equest":"≟","Equilibrium":"⇌","equiv":"≡","equivDD":"⩸","eqvparsl":"⧥","erarr":"⥱","erDot":"≓","escr":"ℯ","Escr":"ℰ","esdot":"≐","Esim":"⩳","esim":"≂","Eta":"Η","eta":"η","ETH":"Ð","eth":"ð","Euml":"Ë","euml":"ë","euro":"€","excl":"!","exist":"∃","Exists":"∃","expectation":"ℰ","exponentiale":"ⅇ","ExponentialE":"ⅇ","fallingdotseq":"≒","Fcy":"Ф","fcy":"ф","female":"♀","ffilig":"ffi","fflig":"ff","ffllig":"ffl","Ffr":"𝔉","ffr":"𝔣","filig":"fi","FilledSmallSquare":"◼","FilledVerySmallSquare":"▪","fjlig":"fj","flat":"♭","fllig":"fl","fltns":"▱","fnof":"ƒ","Fopf":"𝔽","fopf":"𝕗","forall":"∀","ForAll":"∀","fork":"⋔","forkv":"⫙","Fouriertrf":"ℱ","fpartint":"⨍","frac12":"½","frac13":"⅓","frac14":"¼","frac15":"⅕","frac16":"⅙","frac18":"⅛","frac23":"⅔","frac25":"⅖","frac34":"¾","frac35":"⅗","frac38":"⅜","frac45":"⅘","frac56":"⅚","frac58":"⅝","frac78":"⅞","frasl":"⁄","frown":"⌢","fscr":"𝒻","Fscr":"ℱ","gacute":"ǵ","Gamma":"Γ","gamma":"γ","Gammad":"Ϝ","gammad":"ϝ","gap":"⪆","Gbreve":"Ğ","gbreve":"ğ","Gcedil":"Ģ","Gcirc":"Ĝ","gcirc":"ĝ","Gcy":"Г","gcy":"г","Gdot":"Ġ","gdot":"ġ","ge":"≥","gE":"≧","gEl":"⪌","gel":"⋛","geq":"≥","geqq":"≧","geqslant":"⩾","gescc":"⪩","ges":"⩾","gesdot":"⪀","gesdoto":"⪂","gesdotol":"⪄","gesl":"⋛︀","gesles":"⪔","Gfr":"𝔊","gfr":"𝔤","gg":"≫","Gg":"⋙","ggg":"⋙","gimel":"ℷ","GJcy":"Ѓ","gjcy":"ѓ","gla":"⪥","gl":"≷","glE":"⪒","glj":"⪤","gnap":"⪊","gnapprox":"⪊","gne":"⪈","gnE":"≩","gneq":"⪈","gneqq":"≩","gnsim":"⋧","Gopf":"𝔾","gopf":"𝕘","grave":"`","GreaterEqual":"≥","GreaterEqualLess":"⋛","GreaterFullEqual":"≧","GreaterGreater":"⪢","GreaterLess":"≷","GreaterSlantEqual":"⩾","GreaterTilde":"≳","Gscr":"𝒢","gscr":"ℊ","gsim":"≳","gsime":"⪎","gsiml":"⪐","gtcc":"⪧","gtcir":"⩺","gt":">","GT":">","Gt":"≫","gtdot":"⋗","gtlPar":"⦕","gtquest":"⩼","gtrapprox":"⪆","gtrarr":"⥸","gtrdot":"⋗","gtreqless":"⋛","gtreqqless":"⪌","gtrless":"≷","gtrsim":"≳","gvertneqq":"≩︀","gvnE":"≩︀","Hacek":"ˇ","hairsp":" ","half":"½","hamilt":"ℋ","HARDcy":"Ъ","hardcy":"ъ","harrcir":"⥈","harr":"↔","hArr":"⇔","harrw":"↭","Hat":"^","hbar":"ℏ","Hcirc":"Ĥ","hcirc":"ĥ","hearts":"♥","heartsuit":"♥","hellip":"…","hercon":"⊹","hfr":"𝔥","Hfr":"ℌ","HilbertSpace":"ℋ","hksearow":"⤥","hkswarow":"⤦","hoarr":"⇿","homtht":"∻","hookleftarrow":"↩","hookrightarrow":"↪","hopf":"𝕙","Hopf":"ℍ","horbar":"―","HorizontalLine":"─","hscr":"𝒽","Hscr":"ℋ","hslash":"ℏ","Hstrok":"Ħ","hstrok":"ħ","HumpDownHump":"≎","HumpEqual":"≏","hybull":"⁃","hyphen":"‐","Iacute":"Í","iacute":"í","ic":"⁣","Icirc":"Î","icirc":"î","Icy":"И","icy":"и","Idot":"İ","IEcy":"Е","iecy":"е","iexcl":"¡","iff":"⇔","ifr":"𝔦","Ifr":"ℑ","Igrave":"Ì","igrave":"ì","ii":"ⅈ","iiiint":"⨌","iiint":"∭","iinfin":"⧜","iiota":"℩","IJlig":"IJ","ijlig":"ij","Imacr":"Ī","imacr":"ī","image":"ℑ","ImaginaryI":"ⅈ","imagline":"ℐ","imagpart":"ℑ","imath":"ı","Im":"ℑ","imof":"⊷","imped":"Ƶ","Implies":"⇒","incare":"℅","in":"∈","infin":"∞","infintie":"⧝","inodot":"ı","intcal":"⊺","int":"∫","Int":"∬","integers":"ℤ","Integral":"∫","intercal":"⊺","Intersection":"⋂","intlarhk":"⨗","intprod":"⨼","InvisibleComma":"⁣","InvisibleTimes":"⁢","IOcy":"Ё","iocy":"ё","Iogon":"Į","iogon":"į","Iopf":"𝕀","iopf":"𝕚","Iota":"Ι","iota":"ι","iprod":"⨼","iquest":"¿","iscr":"𝒾","Iscr":"ℐ","isin":"∈","isindot":"⋵","isinE":"⋹","isins":"⋴","isinsv":"⋳","isinv":"∈","it":"⁢","Itilde":"Ĩ","itilde":"ĩ","Iukcy":"І","iukcy":"і","Iuml":"Ï","iuml":"ï","Jcirc":"Ĵ","jcirc":"ĵ","Jcy":"Й","jcy":"й","Jfr":"𝔍","jfr":"𝔧","jmath":"ȷ","Jopf":"𝕁","jopf":"𝕛","Jscr":"𝒥","jscr":"𝒿","Jsercy":"Ј","jsercy":"ј","Jukcy":"Є","jukcy":"є","Kappa":"Κ","kappa":"κ","kappav":"ϰ","Kcedil":"Ķ","kcedil":"ķ","Kcy":"К","kcy":"к","Kfr":"𝔎","kfr":"𝔨","kgreen":"ĸ","KHcy":"Х","khcy":"х","KJcy":"Ќ","kjcy":"ќ","Kopf":"𝕂","kopf":"𝕜","Kscr":"𝒦","kscr":"𝓀","lAarr":"⇚","Lacute":"Ĺ","lacute":"ĺ","laemptyv":"⦴","lagran":"ℒ","Lambda":"Λ","lambda":"λ","lang":"⟨","Lang":"⟪","langd":"⦑","langle":"⟨","lap":"⪅","Laplacetrf":"ℒ","laquo":"«","larrb":"⇤","larrbfs":"⤟","larr":"←","Larr":"↞","lArr":"⇐","larrfs":"⤝","larrhk":"↩","larrlp":"↫","larrpl":"⤹","larrsim":"⥳","larrtl":"↢","latail":"⤙","lAtail":"⤛","lat":"⪫","late":"⪭","lates":"⪭︀","lbarr":"⤌","lBarr":"⤎","lbbrk":"❲","lbrace":"{","lbrack":"[","lbrke":"⦋","lbrksld":"⦏","lbrkslu":"⦍","Lcaron":"Ľ","lcaron":"ľ","Lcedil":"Ļ","lcedil":"ļ","lceil":"⌈","lcub":"{","Lcy":"Л","lcy":"л","ldca":"⤶","ldquo":"“","ldquor":"„","ldrdhar":"⥧","ldrushar":"⥋","ldsh":"↲","le":"≤","lE":"≦","LeftAngleBracket":"⟨","LeftArrowBar":"⇤","leftarrow":"←","LeftArrow":"←","Leftarrow":"⇐","LeftArrowRightArrow":"⇆","leftarrowtail":"↢","LeftCeiling":"⌈","LeftDoubleBracket":"⟦","LeftDownTeeVector":"⥡","LeftDownVectorBar":"⥙","LeftDownVector":"⇃","LeftFloor":"⌊","leftharpoondown":"↽","leftharpoonup":"↼","leftleftarrows":"⇇","leftrightarrow":"↔","LeftRightArrow":"↔","Leftrightarrow":"⇔","leftrightarrows":"⇆","leftrightharpoons":"⇋","leftrightsquigarrow":"↭","LeftRightVector":"⥎","LeftTeeArrow":"↤","LeftTee":"⊣","LeftTeeVector":"⥚","leftthreetimes":"⋋","LeftTriangleBar":"⧏","LeftTriangle":"⊲","LeftTriangleEqual":"⊴","LeftUpDownVector":"⥑","LeftUpTeeVector":"⥠","LeftUpVectorBar":"⥘","LeftUpVector":"↿","LeftVectorBar":"⥒","LeftVector":"↼","lEg":"⪋","leg":"⋚","leq":"≤","leqq":"≦","leqslant":"⩽","lescc":"⪨","les":"⩽","lesdot":"⩿","lesdoto":"⪁","lesdotor":"⪃","lesg":"⋚︀","lesges":"⪓","lessapprox":"⪅","lessdot":"⋖","lesseqgtr":"⋚","lesseqqgtr":"⪋","LessEqualGreater":"⋚","LessFullEqual":"≦","LessGreater":"≶","lessgtr":"≶","LessLess":"⪡","lesssim":"≲","LessSlantEqual":"⩽","LessTilde":"≲","lfisht":"⥼","lfloor":"⌊","Lfr":"𝔏","lfr":"𝔩","lg":"≶","lgE":"⪑","lHar":"⥢","lhard":"↽","lharu":"↼","lharul":"⥪","lhblk":"▄","LJcy":"Љ","ljcy":"љ","llarr":"⇇","ll":"≪","Ll":"⋘","llcorner":"⌞","Lleftarrow":"⇚","llhard":"⥫","lltri":"◺","Lmidot":"Ŀ","lmidot":"ŀ","lmoustache":"⎰","lmoust":"⎰","lnap":"⪉","lnapprox":"⪉","lne":"⪇","lnE":"≨","lneq":"⪇","lneqq":"≨","lnsim":"⋦","loang":"⟬","loarr":"⇽","lobrk":"⟦","longleftarrow":"⟵","LongLeftArrow":"⟵","Longleftarrow":"⟸","longleftrightarrow":"⟷","LongLeftRightArrow":"⟷","Longleftrightarrow":"⟺","longmapsto":"⟼","longrightarrow":"⟶","LongRightArrow":"⟶","Longrightarrow":"⟹","looparrowleft":"↫","looparrowright":"↬","lopar":"⦅","Lopf":"𝕃","lopf":"𝕝","loplus":"⨭","lotimes":"⨴","lowast":"∗","lowbar":"_","LowerLeftArrow":"↙","LowerRightArrow":"↘","loz":"◊","lozenge":"◊","lozf":"⧫","lpar":"(","lparlt":"⦓","lrarr":"⇆","lrcorner":"⌟","lrhar":"⇋","lrhard":"⥭","lrm":"‎","lrtri":"⊿","lsaquo":"‹","lscr":"𝓁","Lscr":"ℒ","lsh":"↰","Lsh":"↰","lsim":"≲","lsime":"⪍","lsimg":"⪏","lsqb":"[","lsquo":"‘","lsquor":"‚","Lstrok":"Ł","lstrok":"ł","ltcc":"⪦","ltcir":"⩹","lt":"<","LT":"<","Lt":"≪","ltdot":"⋖","lthree":"⋋","ltimes":"⋉","ltlarr":"⥶","ltquest":"⩻","ltri":"◃","ltrie":"⊴","ltrif":"◂","ltrPar":"⦖","lurdshar":"⥊","luruhar":"⥦","lvertneqq":"≨︀","lvnE":"≨︀","macr":"¯","male":"♂","malt":"✠","maltese":"✠","Map":"⤅","map":"↦","mapsto":"↦","mapstodown":"↧","mapstoleft":"↤","mapstoup":"↥","marker":"▮","mcomma":"⨩","Mcy":"М","mcy":"м","mdash":"—","mDDot":"∺","measuredangle":"∡","MediumSpace":" ","Mellintrf":"ℳ","Mfr":"𝔐","mfr":"𝔪","mho":"℧","micro":"µ","midast":"*","midcir":"⫰","mid":"∣","middot":"·","minusb":"⊟","minus":"−","minusd":"∸","minusdu":"⨪","MinusPlus":"∓","mlcp":"⫛","mldr":"…","mnplus":"∓","models":"⊧","Mopf":"𝕄","mopf":"𝕞","mp":"∓","mscr":"𝓂","Mscr":"ℳ","mstpos":"∾","Mu":"Μ","mu":"μ","multimap":"⊸","mumap":"⊸","nabla":"∇","Nacute":"Ń","nacute":"ń","nang":"∠⃒","nap":"≉","napE":"⩰̸","napid":"≋̸","napos":"ʼn","napprox":"≉","natural":"♮","naturals":"ℕ","natur":"♮","nbsp":" ","nbump":"≎̸","nbumpe":"≏̸","ncap":"⩃","Ncaron":"Ň","ncaron":"ň","Ncedil":"Ņ","ncedil":"ņ","ncong":"≇","ncongdot":"⩭̸","ncup":"⩂","Ncy":"Н","ncy":"н","ndash":"–","nearhk":"⤤","nearr":"↗","neArr":"⇗","nearrow":"↗","ne":"≠","nedot":"≐̸","NegativeMediumSpace":"​","NegativeThickSpace":"​","NegativeThinSpace":"​","NegativeVeryThinSpace":"​","nequiv":"≢","nesear":"⤨","nesim":"≂̸","NestedGreaterGreater":"≫","NestedLessLess":"≪","NewLine":"\\n","nexist":"∄","nexists":"∄","Nfr":"𝔑","nfr":"𝔫","ngE":"≧̸","nge":"≱","ngeq":"≱","ngeqq":"≧̸","ngeqslant":"⩾̸","nges":"⩾̸","nGg":"⋙̸","ngsim":"≵","nGt":"≫⃒","ngt":"≯","ngtr":"≯","nGtv":"≫̸","nharr":"↮","nhArr":"⇎","nhpar":"⫲","ni":"∋","nis":"⋼","nisd":"⋺","niv":"∋","NJcy":"Њ","njcy":"њ","nlarr":"↚","nlArr":"⇍","nldr":"‥","nlE":"≦̸","nle":"≰","nleftarrow":"↚","nLeftarrow":"⇍","nleftrightarrow":"↮","nLeftrightarrow":"⇎","nleq":"≰","nleqq":"≦̸","nleqslant":"⩽̸","nles":"⩽̸","nless":"≮","nLl":"⋘̸","nlsim":"≴","nLt":"≪⃒","nlt":"≮","nltri":"⋪","nltrie":"⋬","nLtv":"≪̸","nmid":"∤","NoBreak":"⁠","NonBreakingSpace":" ","nopf":"𝕟","Nopf":"ℕ","Not":"⫬","not":"¬","NotCongruent":"≢","NotCupCap":"≭","NotDoubleVerticalBar":"∦","NotElement":"∉","NotEqual":"≠","NotEqualTilde":"≂̸","NotExists":"∄","NotGreater":"≯","NotGreaterEqual":"≱","NotGreaterFullEqual":"≧̸","NotGreaterGreater":"≫̸","NotGreaterLess":"≹","NotGreaterSlantEqual":"⩾̸","NotGreaterTilde":"≵","NotHumpDownHump":"≎̸","NotHumpEqual":"≏̸","notin":"∉","notindot":"⋵̸","notinE":"⋹̸","notinva":"∉","notinvb":"⋷","notinvc":"⋶","NotLeftTriangleBar":"⧏̸","NotLeftTriangle":"⋪","NotLeftTriangleEqual":"⋬","NotLess":"≮","NotLessEqual":"≰","NotLessGreater":"≸","NotLessLess":"≪̸","NotLessSlantEqual":"⩽̸","NotLessTilde":"≴","NotNestedGreaterGreater":"⪢̸","NotNestedLessLess":"⪡̸","notni":"∌","notniva":"∌","notnivb":"⋾","notnivc":"⋽","NotPrecedes":"⊀","NotPrecedesEqual":"⪯̸","NotPrecedesSlantEqual":"⋠","NotReverseElement":"∌","NotRightTriangleBar":"⧐̸","NotRightTriangle":"⋫","NotRightTriangleEqual":"⋭","NotSquareSubset":"⊏̸","NotSquareSubsetEqual":"⋢","NotSquareSuperset":"⊐̸","NotSquareSupersetEqual":"⋣","NotSubset":"⊂⃒","NotSubsetEqual":"⊈","NotSucceeds":"⊁","NotSucceedsEqual":"⪰̸","NotSucceedsSlantEqual":"⋡","NotSucceedsTilde":"≿̸","NotSuperset":"⊃⃒","NotSupersetEqual":"⊉","NotTilde":"≁","NotTildeEqual":"≄","NotTildeFullEqual":"≇","NotTildeTilde":"≉","NotVerticalBar":"∤","nparallel":"∦","npar":"∦","nparsl":"⫽⃥","npart":"∂̸","npolint":"⨔","npr":"⊀","nprcue":"⋠","nprec":"⊀","npreceq":"⪯̸","npre":"⪯̸","nrarrc":"⤳̸","nrarr":"↛","nrArr":"⇏","nrarrw":"↝̸","nrightarrow":"↛","nRightarrow":"⇏","nrtri":"⋫","nrtrie":"⋭","nsc":"⊁","nsccue":"⋡","nsce":"⪰̸","Nscr":"𝒩","nscr":"𝓃","nshortmid":"∤","nshortparallel":"∦","nsim":"≁","nsime":"≄","nsimeq":"≄","nsmid":"∤","nspar":"∦","nsqsube":"⋢","nsqsupe":"⋣","nsub":"⊄","nsubE":"⫅̸","nsube":"⊈","nsubset":"⊂⃒","nsubseteq":"⊈","nsubseteqq":"⫅̸","nsucc":"⊁","nsucceq":"⪰̸","nsup":"⊅","nsupE":"⫆̸","nsupe":"⊉","nsupset":"⊃⃒","nsupseteq":"⊉","nsupseteqq":"⫆̸","ntgl":"≹","Ntilde":"Ñ","ntilde":"ñ","ntlg":"≸","ntriangleleft":"⋪","ntrianglelefteq":"⋬","ntriangleright":"⋫","ntrianglerighteq":"⋭","Nu":"Ν","nu":"ν","num":"#","numero":"№","numsp":" ","nvap":"≍⃒","nvdash":"⊬","nvDash":"⊭","nVdash":"⊮","nVDash":"⊯","nvge":"≥⃒","nvgt":">⃒","nvHarr":"⤄","nvinfin":"⧞","nvlArr":"⤂","nvle":"≤⃒","nvlt":"<⃒","nvltrie":"⊴⃒","nvrArr":"⤃","nvrtrie":"⊵⃒","nvsim":"∼⃒","nwarhk":"⤣","nwarr":"↖","nwArr":"⇖","nwarrow":"↖","nwnear":"⤧","Oacute":"Ó","oacute":"ó","oast":"⊛","Ocirc":"Ô","ocirc":"ô","ocir":"⊚","Ocy":"О","ocy":"о","odash":"⊝","Odblac":"Ő","odblac":"ő","odiv":"⨸","odot":"⊙","odsold":"⦼","OElig":"Œ","oelig":"œ","ofcir":"⦿","Ofr":"𝔒","ofr":"𝔬","ogon":"˛","Ograve":"Ò","ograve":"ò","ogt":"⧁","ohbar":"⦵","ohm":"Ω","oint":"∮","olarr":"↺","olcir":"⦾","olcross":"⦻","oline":"‾","olt":"⧀","Omacr":"Ō","omacr":"ō","Omega":"Ω","omega":"ω","Omicron":"Ο","omicron":"ο","omid":"⦶","ominus":"⊖","Oopf":"𝕆","oopf":"𝕠","opar":"⦷","OpenCurlyDoubleQuote":"“","OpenCurlyQuote":"‘","operp":"⦹","oplus":"⊕","orarr":"↻","Or":"⩔","or":"∨","ord":"⩝","order":"ℴ","orderof":"ℴ","ordf":"ª","ordm":"º","origof":"⊶","oror":"⩖","orslope":"⩗","orv":"⩛","oS":"Ⓢ","Oscr":"𝒪","oscr":"ℴ","Oslash":"Ø","oslash":"ø","osol":"⊘","Otilde":"Õ","otilde":"õ","otimesas":"⨶","Otimes":"⨷","otimes":"⊗","Ouml":"Ö","ouml":"ö","ovbar":"⌽","OverBar":"‾","OverBrace":"⏞","OverBracket":"⎴","OverParenthesis":"⏜","para":"¶","parallel":"∥","par":"∥","parsim":"⫳","parsl":"⫽","part":"∂","PartialD":"∂","Pcy":"П","pcy":"п","percnt":"%","period":".","permil":"‰","perp":"⊥","pertenk":"‱","Pfr":"𝔓","pfr":"𝔭","Phi":"Φ","phi":"φ","phiv":"ϕ","phmmat":"ℳ","phone":"☎","Pi":"Π","pi":"π","pitchfork":"⋔","piv":"ϖ","planck":"ℏ","planckh":"ℎ","plankv":"ℏ","plusacir":"⨣","plusb":"⊞","pluscir":"⨢","plus":"+","plusdo":"∔","plusdu":"⨥","pluse":"⩲","PlusMinus":"±","plusmn":"±","plussim":"⨦","plustwo":"⨧","pm":"±","Poincareplane":"ℌ","pointint":"⨕","popf":"𝕡","Popf":"ℙ","pound":"£","prap":"⪷","Pr":"⪻","pr":"≺","prcue":"≼","precapprox":"⪷","prec":"≺","preccurlyeq":"≼","Precedes":"≺","PrecedesEqual":"⪯","PrecedesSlantEqual":"≼","PrecedesTilde":"≾","preceq":"⪯","precnapprox":"⪹","precneqq":"⪵","precnsim":"⋨","pre":"⪯","prE":"⪳","precsim":"≾","prime":"′","Prime":"″","primes":"ℙ","prnap":"⪹","prnE":"⪵","prnsim":"⋨","prod":"∏","Product":"∏","profalar":"⌮","profline":"⌒","profsurf":"⌓","prop":"∝","Proportional":"∝","Proportion":"∷","propto":"∝","prsim":"≾","prurel":"⊰","Pscr":"𝒫","pscr":"𝓅","Psi":"Ψ","psi":"ψ","puncsp":" ","Qfr":"𝔔","qfr":"𝔮","qint":"⨌","qopf":"𝕢","Qopf":"ℚ","qprime":"⁗","Qscr":"𝒬","qscr":"𝓆","quaternions":"ℍ","quatint":"⨖","quest":"?","questeq":"≟","quot":"\\"","QUOT":"\\"","rAarr":"⇛","race":"∽̱","Racute":"Ŕ","racute":"ŕ","radic":"√","raemptyv":"⦳","rang":"⟩","Rang":"⟫","rangd":"⦒","range":"⦥","rangle":"⟩","raquo":"»","rarrap":"⥵","rarrb":"⇥","rarrbfs":"⤠","rarrc":"⤳","rarr":"→","Rarr":"↠","rArr":"⇒","rarrfs":"⤞","rarrhk":"↪","rarrlp":"↬","rarrpl":"⥅","rarrsim":"⥴","Rarrtl":"⤖","rarrtl":"↣","rarrw":"↝","ratail":"⤚","rAtail":"⤜","ratio":"∶","rationals":"ℚ","rbarr":"⤍","rBarr":"⤏","RBarr":"⤐","rbbrk":"❳","rbrace":"}","rbrack":"]","rbrke":"⦌","rbrksld":"⦎","rbrkslu":"⦐","Rcaron":"Ř","rcaron":"ř","Rcedil":"Ŗ","rcedil":"ŗ","rceil":"⌉","rcub":"}","Rcy":"Р","rcy":"р","rdca":"⤷","rdldhar":"⥩","rdquo":"”","rdquor":"”","rdsh":"↳","real":"ℜ","realine":"ℛ","realpart":"ℜ","reals":"ℝ","Re":"ℜ","rect":"▭","reg":"®","REG":"®","ReverseElement":"∋","ReverseEquilibrium":"⇋","ReverseUpEquilibrium":"⥯","rfisht":"⥽","rfloor":"⌋","rfr":"𝔯","Rfr":"ℜ","rHar":"⥤","rhard":"⇁","rharu":"⇀","rharul":"⥬","Rho":"Ρ","rho":"ρ","rhov":"ϱ","RightAngleBracket":"⟩","RightArrowBar":"⇥","rightarrow":"→","RightArrow":"→","Rightarrow":"⇒","RightArrowLeftArrow":"⇄","rightarrowtail":"↣","RightCeiling":"⌉","RightDoubleBracket":"⟧","RightDownTeeVector":"⥝","RightDownVectorBar":"⥕","RightDownVector":"⇂","RightFloor":"⌋","rightharpoondown":"⇁","rightharpoonup":"⇀","rightleftarrows":"⇄","rightleftharpoons":"⇌","rightrightarrows":"⇉","rightsquigarrow":"↝","RightTeeArrow":"↦","RightTee":"⊢","RightTeeVector":"⥛","rightthreetimes":"⋌","RightTriangleBar":"⧐","RightTriangle":"⊳","RightTriangleEqual":"⊵","RightUpDownVector":"⥏","RightUpTeeVector":"⥜","RightUpVectorBar":"⥔","RightUpVector":"↾","RightVectorBar":"⥓","RightVector":"⇀","ring":"˚","risingdotseq":"≓","rlarr":"⇄","rlhar":"⇌","rlm":"‏","rmoustache":"⎱","rmoust":"⎱","rnmid":"⫮","roang":"⟭","roarr":"⇾","robrk":"⟧","ropar":"⦆","ropf":"𝕣","Ropf":"ℝ","roplus":"⨮","rotimes":"⨵","RoundImplies":"⥰","rpar":")","rpargt":"⦔","rppolint":"⨒","rrarr":"⇉","Rrightarrow":"⇛","rsaquo":"›","rscr":"𝓇","Rscr":"ℛ","rsh":"↱","Rsh":"↱","rsqb":"]","rsquo":"’","rsquor":"’","rthree":"⋌","rtimes":"⋊","rtri":"▹","rtrie":"⊵","rtrif":"▸","rtriltri":"⧎","RuleDelayed":"⧴","ruluhar":"⥨","rx":"℞","Sacute":"Ś","sacute":"ś","sbquo":"‚","scap":"⪸","Scaron":"Š","scaron":"š","Sc":"⪼","sc":"≻","sccue":"≽","sce":"⪰","scE":"⪴","Scedil":"Ş","scedil":"ş","Scirc":"Ŝ","scirc":"ŝ","scnap":"⪺","scnE":"⪶","scnsim":"⋩","scpolint":"⨓","scsim":"≿","Scy":"С","scy":"с","sdotb":"⊡","sdot":"⋅","sdote":"⩦","searhk":"⤥","searr":"↘","seArr":"⇘","searrow":"↘","sect":"§","semi":";","seswar":"⤩","setminus":"∖","setmn":"∖","sext":"✶","Sfr":"𝔖","sfr":"𝔰","sfrown":"⌢","sharp":"♯","SHCHcy":"Щ","shchcy":"щ","SHcy":"Ш","shcy":"ш","ShortDownArrow":"↓","ShortLeftArrow":"←","shortmid":"∣","shortparallel":"∥","ShortRightArrow":"→","ShortUpArrow":"↑","shy":"­","Sigma":"Σ","sigma":"σ","sigmaf":"ς","sigmav":"ς","sim":"∼","simdot":"⩪","sime":"≃","simeq":"≃","simg":"⪞","simgE":"⪠","siml":"⪝","simlE":"⪟","simne":"≆","simplus":"⨤","simrarr":"⥲","slarr":"←","SmallCircle":"∘","smallsetminus":"∖","smashp":"⨳","smeparsl":"⧤","smid":"∣","smile":"⌣","smt":"⪪","smte":"⪬","smtes":"⪬︀","SOFTcy":"Ь","softcy":"ь","solbar":"⌿","solb":"⧄","sol":"/","Sopf":"𝕊","sopf":"𝕤","spades":"♠","spadesuit":"♠","spar":"∥","sqcap":"⊓","sqcaps":"⊓︀","sqcup":"⊔","sqcups":"⊔︀","Sqrt":"√","sqsub":"⊏","sqsube":"⊑","sqsubset":"⊏","sqsubseteq":"⊑","sqsup":"⊐","sqsupe":"⊒","sqsupset":"⊐","sqsupseteq":"⊒","square":"□","Square":"□","SquareIntersection":"⊓","SquareSubset":"⊏","SquareSubsetEqual":"⊑","SquareSuperset":"⊐","SquareSupersetEqual":"⊒","SquareUnion":"⊔","squarf":"▪","squ":"□","squf":"▪","srarr":"→","Sscr":"𝒮","sscr":"𝓈","ssetmn":"∖","ssmile":"⌣","sstarf":"⋆","Star":"⋆","star":"☆","starf":"★","straightepsilon":"ϵ","straightphi":"ϕ","strns":"¯","sub":"⊂","Sub":"⋐","subdot":"⪽","subE":"⫅","sube":"⊆","subedot":"⫃","submult":"⫁","subnE":"⫋","subne":"⊊","subplus":"⪿","subrarr":"⥹","subset":"⊂","Subset":"⋐","subseteq":"⊆","subseteqq":"⫅","SubsetEqual":"⊆","subsetneq":"⊊","subsetneqq":"⫋","subsim":"⫇","subsub":"⫕","subsup":"⫓","succapprox":"⪸","succ":"≻","succcurlyeq":"≽","Succeeds":"≻","SucceedsEqual":"⪰","SucceedsSlantEqual":"≽","SucceedsTilde":"≿","succeq":"⪰","succnapprox":"⪺","succneqq":"⪶","succnsim":"⋩","succsim":"≿","SuchThat":"∋","sum":"∑","Sum":"∑","sung":"♪","sup1":"¹","sup2":"²","sup3":"³","sup":"⊃","Sup":"⋑","supdot":"⪾","supdsub":"⫘","supE":"⫆","supe":"⊇","supedot":"⫄","Superset":"⊃","SupersetEqual":"⊇","suphsol":"⟉","suphsub":"⫗","suplarr":"⥻","supmult":"⫂","supnE":"⫌","supne":"⊋","supplus":"⫀","supset":"⊃","Supset":"⋑","supseteq":"⊇","supseteqq":"⫆","supsetneq":"⊋","supsetneqq":"⫌","supsim":"⫈","supsub":"⫔","supsup":"⫖","swarhk":"⤦","swarr":"↙","swArr":"⇙","swarrow":"↙","swnwar":"⤪","szlig":"ß","Tab":"\\t","target":"⌖","Tau":"Τ","tau":"τ","tbrk":"⎴","Tcaron":"Ť","tcaron":"ť","Tcedil":"Ţ","tcedil":"ţ","Tcy":"Т","tcy":"т","tdot":"⃛","telrec":"⌕","Tfr":"𝔗","tfr":"𝔱","there4":"∴","therefore":"∴","Therefore":"∴","Theta":"Θ","theta":"θ","thetasym":"ϑ","thetav":"ϑ","thickapprox":"≈","thicksim":"∼","ThickSpace":"  ","ThinSpace":" ","thinsp":" ","thkap":"≈","thksim":"∼","THORN":"Þ","thorn":"þ","tilde":"˜","Tilde":"∼","TildeEqual":"≃","TildeFullEqual":"≅","TildeTilde":"≈","timesbar":"⨱","timesb":"⊠","times":"×","timesd":"⨰","tint":"∭","toea":"⤨","topbot":"⌶","topcir":"⫱","top":"⊤","Topf":"𝕋","topf":"𝕥","topfork":"⫚","tosa":"⤩","tprime":"‴","trade":"™","TRADE":"™","triangle":"▵","triangledown":"▿","triangleleft":"◃","trianglelefteq":"⊴","triangleq":"≜","triangleright":"▹","trianglerighteq":"⊵","tridot":"◬","trie":"≜","triminus":"⨺","TripleDot":"⃛","triplus":"⨹","trisb":"⧍","tritime":"⨻","trpezium":"⏢","Tscr":"𝒯","tscr":"𝓉","TScy":"Ц","tscy":"ц","TSHcy":"Ћ","tshcy":"ћ","Tstrok":"Ŧ","tstrok":"ŧ","twixt":"≬","twoheadleftarrow":"↞","twoheadrightarrow":"↠","Uacute":"Ú","uacute":"ú","uarr":"↑","Uarr":"↟","uArr":"⇑","Uarrocir":"⥉","Ubrcy":"Ў","ubrcy":"ў","Ubreve":"Ŭ","ubreve":"ŭ","Ucirc":"Û","ucirc":"û","Ucy":"У","ucy":"у","udarr":"⇅","Udblac":"Ű","udblac":"ű","udhar":"⥮","ufisht":"⥾","Ufr":"𝔘","ufr":"𝔲","Ugrave":"Ù","ugrave":"ù","uHar":"⥣","uharl":"↿","uharr":"↾","uhblk":"▀","ulcorn":"⌜","ulcorner":"⌜","ulcrop":"⌏","ultri":"◸","Umacr":"Ū","umacr":"ū","uml":"¨","UnderBar":"_","UnderBrace":"⏟","UnderBracket":"⎵","UnderParenthesis":"⏝","Union":"⋃","UnionPlus":"⊎","Uogon":"Ų","uogon":"ų","Uopf":"𝕌","uopf":"𝕦","UpArrowBar":"⤒","uparrow":"↑","UpArrow":"↑","Uparrow":"⇑","UpArrowDownArrow":"⇅","updownarrow":"↕","UpDownArrow":"↕","Updownarrow":"⇕","UpEquilibrium":"⥮","upharpoonleft":"↿","upharpoonright":"↾","uplus":"⊎","UpperLeftArrow":"↖","UpperRightArrow":"↗","upsi":"υ","Upsi":"ϒ","upsih":"ϒ","Upsilon":"Υ","upsilon":"υ","UpTeeArrow":"↥","UpTee":"⊥","upuparrows":"⇈","urcorn":"⌝","urcorner":"⌝","urcrop":"⌎","Uring":"Ů","uring":"ů","urtri":"◹","Uscr":"𝒰","uscr":"𝓊","utdot":"⋰","Utilde":"Ũ","utilde":"ũ","utri":"▵","utrif":"▴","uuarr":"⇈","Uuml":"Ü","uuml":"ü","uwangle":"⦧","vangrt":"⦜","varepsilon":"ϵ","varkappa":"ϰ","varnothing":"∅","varphi":"ϕ","varpi":"ϖ","varpropto":"∝","varr":"↕","vArr":"⇕","varrho":"ϱ","varsigma":"ς","varsubsetneq":"⊊︀","varsubsetneqq":"⫋︀","varsupsetneq":"⊋︀","varsupsetneqq":"⫌︀","vartheta":"ϑ","vartriangleleft":"⊲","vartriangleright":"⊳","vBar":"⫨","Vbar":"⫫","vBarv":"⫩","Vcy":"В","vcy":"в","vdash":"⊢","vDash":"⊨","Vdash":"⊩","VDash":"⊫","Vdashl":"⫦","veebar":"⊻","vee":"∨","Vee":"⋁","veeeq":"≚","vellip":"⋮","verbar":"|","Verbar":"‖","vert":"|","Vert":"‖","VerticalBar":"∣","VerticalLine":"|","VerticalSeparator":"❘","VerticalTilde":"≀","VeryThinSpace":" ","Vfr":"𝔙","vfr":"𝔳","vltri":"⊲","vnsub":"⊂⃒","vnsup":"⊃⃒","Vopf":"𝕍","vopf":"𝕧","vprop":"∝","vrtri":"⊳","Vscr":"𝒱","vscr":"𝓋","vsubnE":"⫋︀","vsubne":"⊊︀","vsupnE":"⫌︀","vsupne":"⊋︀","Vvdash":"⊪","vzigzag":"⦚","Wcirc":"Ŵ","wcirc":"ŵ","wedbar":"⩟","wedge":"∧","Wedge":"⋀","wedgeq":"≙","weierp":"℘","Wfr":"𝔚","wfr":"𝔴","Wopf":"𝕎","wopf":"𝕨","wp":"℘","wr":"≀","wreath":"≀","Wscr":"𝒲","wscr":"𝓌","xcap":"⋂","xcirc":"◯","xcup":"⋃","xdtri":"▽","Xfr":"𝔛","xfr":"𝔵","xharr":"⟷","xhArr":"⟺","Xi":"Ξ","xi":"ξ","xlarr":"⟵","xlArr":"⟸","xmap":"⟼","xnis":"⋻","xodot":"⨀","Xopf":"𝕏","xopf":"𝕩","xoplus":"⨁","xotime":"⨂","xrarr":"⟶","xrArr":"⟹","Xscr":"𝒳","xscr":"𝓍","xsqcup":"⨆","xuplus":"⨄","xutri":"△","xvee":"⋁","xwedge":"⋀","Yacute":"Ý","yacute":"ý","YAcy":"Я","yacy":"я","Ycirc":"Ŷ","ycirc":"ŷ","Ycy":"Ы","ycy":"ы","yen":"¥","Yfr":"𝔜","yfr":"𝔶","YIcy":"Ї","yicy":"ї","Yopf":"𝕐","yopf":"𝕪","Yscr":"𝒴","yscr":"𝓎","YUcy":"Ю","yucy":"ю","yuml":"ÿ","Yuml":"Ÿ","Zacute":"Ź","zacute":"ź","Zcaron":"Ž","zcaron":"ž","Zcy":"З","zcy":"з","Zdot":"Ż","zdot":"ż","zeetrf":"ℨ","ZeroWidthSpace":"​","Zeta":"Ζ","zeta":"ζ","zfr":"𝔷","Zfr":"ℨ","ZHcy":"Ж","zhcy":"ж","zigrarr":"⇝","zopf":"𝕫","Zopf":"ℤ","Zscr":"𝒵","zscr":"𝓏","zwj":"‍","zwnj":"‌"}')},function(e,t,n){"use strict";var r={};function i(e,t,n){var s,o,a,l,c,u="";for("string"!=typeof t&&(n=t,t=i.defaultChars),void 0===n&&(n=!0),c=function(e){var t,n,i=r[e];if(i)return i;for(i=r[e]=[],t=0;t<128;t++)n=String.fromCharCode(t),/^[0-9a-z]$/i.test(n)?i.push(n):i.push("%"+("0"+t.toString(16).toUpperCase()).slice(-2));for(t=0;t=55296&&a<=57343){if(a>=55296&&a<=56319&&s+1=56320&&l<=57343){u+=encodeURIComponent(e[s]+e[s+1]),s++;continue}u+="%EF%BF%BD"}else u+=encodeURIComponent(e[s]);return u}i.defaultChars=";/?:@&=+$,-_.!~*'()#",i.componentChars="-_.!~*'()",e.exports=i},function(e,t,n){"use strict";var r={};function i(e,t){var n;return"string"!=typeof t&&(t=i.defaultChars),n=function(e){var t,n,i=r[e];if(i)return i;for(i=r[e]=[],t=0;t<128;t++)n=String.fromCharCode(t),i.push(n);for(t=0;t=55296&&l<=57343?"���":String.fromCharCode(l),t+=6):240==(248&i)&&t+91114111?c+="����":(l-=65536,c+=String.fromCharCode(55296+(l>>10),56320+(1023&l))),t+=9):c+="�";return c}))}i.defaultChars=";/?:@&=+$,#",i.componentChars="",e.exports=i},function(e,t,n){"use strict";e.exports=function(e){var t="";return t+=e.protocol||"",t+=e.slashes?"//":"",t+=e.auth?e.auth+"@":"",e.hostname&&-1!==e.hostname.indexOf(":")?t+="["+e.hostname+"]":t+=e.hostname||"",t+=e.port?":"+e.port:"",t+=e.pathname||"",t+=e.search||"",t+(e.hash||"")}},function(e,t,n){"use strict";function r(){this.protocol=null,this.slashes=null,this.auth=null,this.port=null,this.hostname=null,this.hash=null,this.search=null,this.pathname=null}var i=/^([a-z0-9.+-]+:)/i,s=/:[0-9]*$/,o=/^(\/\/?(?!\/)[^\?\s]*)(\?[^\s]*)?$/,a=["{","}","|","\\","^","`"].concat(["<",">",'"',"`"," ","\r","\n","\t"]),l=["'"].concat(a),c=["%","/","?",";","#"].concat(l),u=["/","?","#"],d=/^[+a-z0-9A-Z_-]{0,63}$/,h=/^([+a-z0-9A-Z_-]{0,63})(.*)$/,p={javascript:!0,"javascript:":!0},f={http:!0,https:!0,ftp:!0,gopher:!0,file:!0,"http:":!0,"https:":!0,"ftp:":!0,"gopher:":!0,"file:":!0};r.prototype.parse=function(e,t){var n,r,s,a,l,g=e;if(g=g.trim(),!t&&1===e.split("#").length){var m=o.exec(g);if(m)return this.pathname=m[1],m[2]&&(this.search=m[2]),this}var b=i.exec(g);if(b&&(s=(b=b[0]).toLowerCase(),this.protocol=b,g=g.substr(b.length)),(t||b||g.match(/^\/\/[^@\/]+@[^@\/]+/))&&(!(l="//"===g.substr(0,2))||b&&p[b]||(g=g.substr(2),this.slashes=!0)),!p[b]&&(l||b&&!f[b])){var _,y,v=-1;for(n=0;n127?T+="x":T+=w[A];if(!T.match(d)){var I=S.slice(0,n),R=S.slice(n+1),k=w.match(h);k&&(I.push(k[1]),R.unshift(k[2])),R.length&&(g=R.join(".")+g),this.hostname=I.join(".");break}}}}this.hostname.length>255&&(this.hostname=""),x&&(this.hostname=this.hostname.substr(1,this.hostname.length-2))}var P=g.indexOf("#");-1!==P&&(this.hash=g.substr(P),g=g.slice(0,P));var O=g.indexOf("?");return-1!==O&&(this.search=g.substr(O),g=g.slice(0,O)),g&&(this.pathname=g),f[s]&&this.hostname&&!this.pathname&&(this.pathname=""),this},r.prototype.parseHost=function(e){var t=s.exec(e);t&&(":"!==(t=t[0])&&(this.port=t.substr(1)),e=e.substr(0,e.length-t.length)),e&&(this.hostname=e)},e.exports=function(e,t){if(e&&e instanceof r)return e;var n=new r;return n.parse(e,t),n}},function(e,t,n){"use strict";t.Any=n(9),t.Cc=n(10),t.Cf=n(27),t.P=n(3),t.Z=n(11)},function(e,t){e.exports=/[\xAD\u0600-\u0605\u061C\u06DD\u070F\u08E2\u180E\u200B-\u200F\u202A-\u202E\u2060-\u2064\u2066-\u206F\uFEFF\uFFF9-\uFFFB]|\uD804[\uDCBD\uDCCD]|\uD82F[\uDCA0-\uDCA3]|\uD834[\uDD73-\uDD7A]|\uDB40[\uDC01\uDC20-\uDC7F]/},function(e,t,n){"use strict";t.parseLinkLabel=n(29),t.parseLinkDestination=n(30),t.parseLinkTitle=n(31)},function(e,t,n){"use strict";e.exports=function(e,t,n){var r,i,s,o,a=-1,l=e.posMax,c=e.pos;for(e.pos=t+1,r=1;e.pos32)return a;if(41===i){if(0===s)break;s--}t++}return o===t||0!==s||(a.str=r(e.slice(o,t)),a.lines=0,a.pos=t,a.ok=!0),a}},function(e,t,n){"use strict";var r=n(0).unescapeAll;e.exports=function(e,t,n){var i,s,o=0,a=t,l={ok:!1,pos:0,lines:0,str:""};if(t>=n)return l;if(34!==(s=e.charCodeAt(t))&&39!==s&&40!==s)return l;for(t++,40===s&&(s=41);t"+s(e[t].content)+""},o.code_block=function(e,t,n,r,i){var o=e[t];return""+s(e[t].content)+"\n"},o.fence=function(e,t,n,r,o){var a,l,c,u,d,h=e[t],p=h.info?i(h.info).trim():"",f="",g="";return p&&(f=(c=p.split(/(\s+)/g))[0],g=c.slice(2).join("")),0===(a=n.highlight&&n.highlight(h.content,f,g)||s(h.content)).indexOf(""+a+"\n"):"
    "+a+"
    \n"},o.image=function(e,t,n,r,i){var s=e[t];return s.attrs[s.attrIndex("alt")][1]=i.renderInlineAsText(s.children,n,r),i.renderToken(e,t,n)},o.hardbreak=function(e,t,n){return n.xhtmlOut?"
    \n":"
    \n"},o.softbreak=function(e,t,n){return n.breaks?n.xhtmlOut?"
    \n":"
    \n":"\n"},o.text=function(e,t){return s(e[t].content)},o.html_block=function(e,t){return e[t].content},o.html_inline=function(e,t){return e[t].content},a.prototype.renderAttrs=function(e){var t,n,r;if(!e.attrs)return"";for(r="",t=0,n=e.attrs.length;t\n":">")},a.prototype.renderInline=function(e,t,n){for(var r,i="",s=this.rules,o=0,a=e.length;o/i.test(e)}e.exports=function(e){var t,n,s,o,a,l,c,u,d,h,p,f,g,m,b,_,y,v,E=e.tokens;if(e.md.options.linkify)for(n=0,s=E.length;n=0;t--)if("link_close"!==(l=o[t]).type){if("html_inline"===l.type&&(v=l.content,/^\s]/i.test(v)&&g>0&&g--,i(l.content)&&g++),!(g>0)&&"text"===l.type&&e.md.linkify.test(l.content)){for(d=l.content,y=e.md.linkify.match(d),c=[],f=l.level,p=0,u=0;up&&((a=new e.Token("text","",0)).content=d.slice(p,h),a.level=f,c.push(a)),(a=new e.Token("link_open","a",1)).attrs=[["href",b]],a.level=f++,a.markup="linkify",a.info="auto",c.push(a),(a=new e.Token("text","",0)).content=_,a.level=f,c.push(a),(a=new e.Token("link_close","a",-1)).level=--f,a.markup="linkify",a.info="auto",c.push(a),p=y[u].lastIndex);p=0;t--)"text"!==(n=e[t]).type||r||(n.content=n.content.replace(s,a)),"link_open"===n.type&&"auto"===n.info&&r--,"link_close"===n.type&&"auto"===n.info&&r++}function c(e){var t,n,i=0;for(t=e.length-1;t>=0;t--)"text"!==(n=e[t]).type||i||r.test(n.content)&&(n.content=n.content.replace(/\+-/g,"±").replace(/\.{2,}/g,"…").replace(/([?!])…/g,"$1..").replace(/([?!]){4,}/g,"$1$1$1").replace(/,{2,}/g,",").replace(/(^|[^-])---(?=[^-]|$)/gm,"$1—").replace(/(^|\s)--(?=\s|$)/gm,"$1–").replace(/(^|[^-\s])--(?=[^-\s]|$)/gm,"$1–")),"link_open"===n.type&&"auto"===n.info&&i--,"link_close"===n.type&&"auto"===n.info&&i++}e.exports=function(e){var t;if(e.md.options.typographer)for(t=e.tokens.length-1;t>=0;t--)"inline"===e.tokens[t].type&&(i.test(e.tokens[t].content)&&l(e.tokens[t].children),r.test(e.tokens[t].content)&&c(e.tokens[t].children))}},function(e,t,n){"use strict";var r=n(0).isWhiteSpace,i=n(0).isPunctChar,s=n(0).isMdAsciiPunct,o=/['"]/,a=/['"]/g;function l(e,t,n){return e.substr(0,t)+n+e.substr(t+1)}function c(e,t){var n,o,c,u,d,h,p,f,g,m,b,_,y,v,E,x,S,w,T,A,C;for(T=[],n=0;n=0&&!(T[S].level<=p);S--);if(T.length=S+1,"text"===o.type){d=0,h=(c=o.content).length;e:for(;d=0)g=c.charCodeAt(u.index-1);else for(S=n-1;S>=0&&"softbreak"!==e[S].type&&"hardbreak"!==e[S].type;S--)if(e[S].content){g=e[S].content.charCodeAt(e[S].content.length-1);break}if(m=32,d=48&&g<=57&&(x=E=!1),E&&x&&(E=b,x=_),E||x){if(x)for(S=T.length-1;S>=0&&(f=T[S],!(T[S].level=0;t--)"inline"===e.tokens[t].type&&o.test(e.tokens[t].content)&&c(e.tokens[t].children,e)}},function(e,t,n){"use strict";var r=n(5);function i(e,t,n){this.src=e,this.env=n,this.tokens=[],this.inlineMode=!1,this.md=t}i.prototype.Token=r,e.exports=i},function(e,t,n){"use strict";var r=n(4),i=[["table",n(42),["paragraph","reference"]],["code",n(43)],["fence",n(44),["paragraph","reference","blockquote","list"]],["blockquote",n(45),["paragraph","reference","blockquote","list"]],["hr",n(46),["paragraph","reference","blockquote","list"]],["list",n(47),["paragraph","reference","blockquote"]],["reference",n(48)],["html_block",n(49),["paragraph","reference","blockquote"]],["heading",n(51),["paragraph","reference","blockquote"]],["lheading",n(52)],["paragraph",n(53)]];function s(){this.ruler=new r;for(var e=0;e=n))&&!(e.sCount[o]=l){e.line=n;break}for(r=0;rn)return!1;if(h=t+1,e.sCount[h]=4)return!1;if((c=e.bMarks[h]+e.tShift[h])>=e.eMarks[h])return!1;if(124!==(S=e.src.charCodeAt(c++))&&45!==S&&58!==S)return!1;if(c>=e.eMarks[h])return!1;if(124!==(w=e.src.charCodeAt(c++))&&45!==w&&58!==w&&!r(w))return!1;if(45===S&&r(w))return!1;for(;c=4)return!1;if((p=s(l)).length&&""===p[0]&&p.shift(),p.length&&""===p[p.length-1]&&p.pop(),0===(f=p.length)||f!==m.length)return!1;if(o)return!0;for(v=e.parentType,e.parentType="table",x=e.md.block.ruler.getRules("blockquote"),(g=e.push("table_open","table",1)).map=_=[t,0],(g=e.push("thead_open","thead",1)).map=[t,t+1],(g=e.push("tr_open","tr",1)).map=[t,t+1],u=0;u=4)break;for((p=s(l)).length&&""===p[0]&&p.shift(),p.length&&""===p[p.length-1]&&p.pop(),h===t+2&&((g=e.push("tbody_open","tbody",1)).map=y=[t+2,0]),(g=e.push("tr_open","tr",1)).map=[h,h+1],u=0;u=4))break;i=++r}return e.line=i,(s=e.push("code_block","code",0)).content=e.getLines(t,i,4+e.blkIndent,!1)+"\n",s.map=[t,e.line],!0}},function(e,t,n){"use strict";e.exports=function(e,t,n,r){var i,s,o,a,l,c,u,d=!1,h=e.bMarks[t]+e.tShift[t],p=e.eMarks[t];if(e.sCount[t]-e.blkIndent>=4)return!1;if(h+3>p)return!1;if(126!==(i=e.src.charCodeAt(h))&&96!==i)return!1;if(l=h,(s=(h=e.skipChars(h,i))-l)<3)return!1;if(u=e.src.slice(l,h),o=e.src.slice(h,p),96===i&&o.indexOf(String.fromCharCode(i))>=0)return!1;if(r)return!0;for(a=t;!(++a>=n)&&!((h=l=e.bMarks[a]+e.tShift[a])<(p=e.eMarks[a])&&e.sCount[a]=4||(h=e.skipChars(h,i))-l=4)return!1;if(62!==e.src.charCodeAt(A++))return!1;if(i)return!0;for(l=p=e.sCount[t]+1,32===e.src.charCodeAt(A)?(A++,l++,p++,s=!1,v=!0):9===e.src.charCodeAt(A)?(v=!0,(e.bsCount[t]+p)%4==3?(A++,l++,p++,s=!1):s=!0):v=!1,f=[e.bMarks[t]],e.bMarks[t]=A;A=C,_=[e.sCount[t]],e.sCount[t]=p-l,y=[e.tShift[t]],e.tShift[t]=A-e.bMarks[t],x=e.md.block.ruler.getRules("blockquote"),b=e.parentType,e.parentType="blockquote",h=t+1;h=(C=e.eMarks[h])));h++)if(62!==e.src.charCodeAt(A++)||w){if(u)break;for(E=!1,a=0,c=x.length;a=C,g.push(e.bsCount[h]),e.bsCount[h]=e.sCount[h]+1+(v?1:0),_.push(e.sCount[h]),e.sCount[h]=p-l,y.push(e.tShift[h]),e.tShift[h]=A-e.bMarks[h]}for(m=e.blkIndent,e.blkIndent=0,(S=e.push("blockquote_open","blockquote",1)).markup=">",S.map=d=[t,0],e.md.block.tokenize(e,t,h),(S=e.push("blockquote_close","blockquote",-1)).markup=">",e.lineMax=T,e.parentType=b,d[1]=e.line,a=0;a=4)return!1;if(42!==(s=e.src.charCodeAt(c++))&&45!==s&&95!==s)return!1;for(o=1;c=o)return-1;if((n=e.src.charCodeAt(s++))<48||n>57)return-1;for(;;){if(s>=o)return-1;if(!((n=e.src.charCodeAt(s++))>=48&&n<=57)){if(41===n||46===n)break;return-1}if(s-i>=10)return-1}return s=4)return!1;if(e.listIndent>=0&&e.sCount[t]-e.listIndent>=4&&e.sCount[t]=e.blkIndent&&(M=!0),(I=s(e,t))>=0){if(h=!0,k=e.bMarks[t]+e.tShift[t],_=Number(e.src.slice(k,I-1)),M&&1!==_)return!1}else{if(!((I=i(e,t))>=0))return!1;h=!1}if(M&&e.skipSpaces(I)>=e.eMarks[t])return!1;if(b=e.src.charCodeAt(I-1),r)return!0;for(m=e.tokens.length,h?(N=e.push("ordered_list_open","ol",1),1!==_&&(N.attrs=[["start",_]])):N=e.push("bullet_list_open","ul",1),N.map=g=[t,0],N.markup=String.fromCharCode(b),v=t,R=!1,O=e.md.block.ruler.getRules("list"),S=e.parentType,e.parentType="list";v=y?1:E-d)>4&&(u=1),c=d+u,(N=e.push("list_item_open","li",1)).markup=String.fromCharCode(b),N.map=p=[t,0],h&&(N.info=e.src.slice(k,I-1)),A=e.tight,T=e.tShift[t],w=e.sCount[t],x=e.listIndent,e.listIndent=e.blkIndent,e.blkIndent=c,e.tight=!0,e.tShift[t]=a-e.bMarks[t],e.sCount[t]=E,a>=y&&e.isEmpty(t+1)?e.line=Math.min(e.line+2,n):e.md.block.tokenize(e,t,n,!0),e.tight&&!R||(D=!1),R=e.line-t>1&&e.isEmpty(e.line-1),e.blkIndent=e.listIndent,e.listIndent=x,e.tShift[t]=T,e.sCount[t]=w,e.tight=A,(N=e.push("list_item_close","li",-1)).markup=String.fromCharCode(b),v=t=e.line,p[1]=v,a=e.bMarks[t],v>=n)break;if(e.sCount[v]=4)break;for(P=!1,l=0,f=O.length;l=4)return!1;if(91!==e.src.charCodeAt(S))return!1;for(;++S3||e.sCount[T]<0)){for(y=!1,d=0,h=v.length;d|$))/i,/<\/(script|pre|style|textarea)>/i,!0],[/^/,!0],[/^<\?/,/\?>/,!0],[/^/,!0],[/^/,!0],[new RegExp("^|$))","i"),/^$/,!0],[new RegExp(i.source+"\\s*$"),/^$/,!1]];e.exports=function(e,t,n,r){var i,o,a,l,c=e.bMarks[t]+e.tShift[t],u=e.eMarks[t];if(e.sCount[t]-e.blkIndent>=4)return!1;if(!e.md.options.html)return!1;if(60!==e.src.charCodeAt(c))return!1;for(l=e.src.slice(c,u),i=0;i=4)return!1;if(35!==(s=e.src.charCodeAt(c))||c>=u)return!1;for(o=1,s=e.src.charCodeAt(++c);35===s&&c6||cc&&r(e.src.charCodeAt(a-1))&&(u=a),e.line=t+1,(l=e.push("heading_open","h"+String(o),1)).markup="########".slice(0,o),l.map=[t,e.line],(l=e.push("inline","",0)).content=e.src.slice(c,u).trim(),l.map=[t,e.line],l.children=[],(l=e.push("heading_close","h"+String(o),-1)).markup="########".slice(0,o)),!0)}},function(e,t,n){"use strict";e.exports=function(e,t,n){var r,i,s,o,a,l,c,u,d,h,p=t+1,f=e.md.block.ruler.getRules("paragraph");if(e.sCount[t]-e.blkIndent>=4)return!1;for(h=e.parentType,e.parentType="paragraph";p3)){if(e.sCount[p]>=e.blkIndent&&(l=e.bMarks[p]+e.tShift[p])<(c=e.eMarks[p])&&(45===(d=e.src.charCodeAt(l))||61===d)&&(l=e.skipChars(l,d),(l=e.skipSpaces(l))>=c)){u=61===d?1:2;break}if(!(e.sCount[p]<0)){for(i=!1,s=0,o=f.length;s3||e.sCount[l]<0)){for(r=!1,i=0,s=c.length;i0&&this.level++,this.tokens.push(i),i},s.prototype.isEmpty=function(e){return this.bMarks[e]+this.tShift[e]>=this.eMarks[e]},s.prototype.skipEmptyLines=function(e){for(var t=this.lineMax;et;)if(!i(this.src.charCodeAt(--e)))return e+1;return e},s.prototype.skipChars=function(e,t){for(var n=this.src.length;en;)if(t!==this.src.charCodeAt(--e))return e+1;return e},s.prototype.getLines=function(e,t,n,r){var s,o,a,l,c,u,d,h=e;if(e>=t)return"";for(u=new Array(t-e),s=0;hn?new Array(o-n+1).join(" ")+this.src.slice(l,c):this.src.slice(l,c)}return u.join("")},s.prototype.Token=r,e.exports=s},function(e,t,n){"use strict";var r=n(4),i=[["text",n(56)],["newline",n(57)],["escape",n(58)],["backticks",n(59)],["strikethrough",n(13).tokenize],["emphasis",n(14).tokenize],["link",n(60)],["image",n(61)],["autolink",n(62)],["html_inline",n(63)],["entity",n(64)]],s=[["balance_pairs",n(65)],["strikethrough",n(13).postProcess],["emphasis",n(14).postProcess],["text_collapse",n(66)]];function o(){var e;for(this.ruler=new r,e=0;e=s)break}else e.pending+=e.src[e.pos++]}e.pending&&e.pushPending()},o.prototype.parse=function(e,t,n,r){var i,s,o,a=new this.State(e,t,n,r);for(this.tokenize(a),o=(s=this.ruler2.getRules("")).length,i=0;i=0&&32===e.pending.charCodeAt(n))if(n>=1&&32===e.pending.charCodeAt(n-1)){for(s=n-1;s>=1&&32===e.pending.charCodeAt(s-1);)s--;e.pending=e.pending.slice(0,s),e.push("hardbreak","br",0)}else e.pending=e.pending.slice(0,-1),e.push("softbreak","br",0);else e.push("softbreak","br",0);for(o++;o?@[]^_`{|}~-".split("").forEach((function(e){i[e.charCodeAt(0)]=1})),e.exports=function(e,t){var n,s=e.pos,o=e.posMax;if(92!==e.src.charCodeAt(s))return!1;if(++s=g)return!1;if(m=c,(u=e.md.helpers.parseLinkDestination(e.src,c,e.posMax)).ok){for(h=e.md.normalizeLink(u.str),e.md.validateLink(h)?c=u.pos:h="",m=c;c=g||41!==e.src.charCodeAt(c))&&(b=!0),c++}if(b){if(void 0===e.env.references)return!1;if(c=0?o=e.src.slice(m,c++):c=a+1):c=a+1,o||(o=e.src.slice(l,a)),!(d=e.env.references[r(o)]))return e.pos=f,!1;h=d.href,p=d.title}return t||(e.pos=l,e.posMax=a,e.push("link_open","a",1).attrs=n=[["href",h]],p&&n.push(["title",p]),e.md.inline.tokenize(e),e.push("link_close","a",-1)),e.pos=c,e.posMax=g,!0}},function(e,t,n){"use strict";var r=n(0).normalizeReference,i=n(0).isSpace;e.exports=function(e,t){var n,s,o,a,l,c,u,d,h,p,f,g,m,b="",_=e.pos,y=e.posMax;if(33!==e.src.charCodeAt(e.pos))return!1;if(91!==e.src.charCodeAt(e.pos+1))return!1;if(c=e.pos+2,(l=e.md.helpers.parseLinkLabel(e,e.pos+1,!1))<0)return!1;if((u=l+1)=y)return!1;for(m=u,(h=e.md.helpers.parseLinkDestination(e.src,u,e.posMax)).ok&&(b=e.md.normalizeLink(h.str),e.md.validateLink(b)?u=h.pos:b=""),m=u;u=y||41!==e.src.charCodeAt(u))return e.pos=_,!1;u++}else{if(void 0===e.env.references)return!1;if(u=0?a=e.src.slice(m,u++):u=l+1):u=l+1,a||(a=e.src.slice(c,l)),!(d=e.env.references[r(a)]))return e.pos=_,!1;b=d.href,p=d.title}return t||(o=e.src.slice(c,l),e.md.inline.parse(o,e.md,e.env,g=[]),(f=e.push("image","img",0)).attrs=n=[["src",b],["alt",""]],f.children=g,f.content=o,p&&n.push(["title",p])),e.pos=u,e.posMax=y,!0}},function(e,t,n){"use strict";var r=/^([a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*)$/,i=/^([a-zA-Z][a-zA-Z0-9+.\-]{1,31}):([^<>\x00-\x20]*)$/;e.exports=function(e,t){var n,s,o,a,l,c,u=e.pos;if(60!==e.src.charCodeAt(u))return!1;for(l=e.pos,c=e.posMax;;){if(++u>=c)return!1;if(60===(a=e.src.charCodeAt(u)))return!1;if(62===a)break}return n=e.src.slice(l+1,u),i.test(n)?(s=e.md.normalizeLink(n),!!e.md.validateLink(s)&&(t||((o=e.push("link_open","a",1)).attrs=[["href",s]],o.markup="autolink",o.info="auto",(o=e.push("text","",0)).content=e.md.normalizeLinkText(n),(o=e.push("link_close","a",-1)).markup="autolink",o.info="auto"),e.pos+=n.length+2,!0)):!!r.test(n)&&(s=e.md.normalizeLink("mailto:"+n),!!e.md.validateLink(s)&&(t||((o=e.push("link_open","a",1)).attrs=[["href",s]],o.markup="autolink",o.info="auto",(o=e.push("text","",0)).content=e.md.normalizeLinkText(n),(o=e.push("link_close","a",-1)).markup="autolink",o.info="auto"),e.pos+=n.length+2,!0))}},function(e,t,n){"use strict";var r=n(12).HTML_TAG_RE;e.exports=function(e,t){var n,i,s,o=e.pos;return!!e.md.options.html&&(s=e.posMax,!(60!==e.src.charCodeAt(o)||o+2>=s)&&!(33!==(n=e.src.charCodeAt(o+1))&&63!==n&&47!==n&&!function(e){var t=32|e;return t>=97&&t<=122}(n))&&!!(i=e.src.slice(o).match(r))&&(t||(e.push("html_inline","",0).content=e.src.slice(o,o+i[0].length)),e.pos+=i[0].length,!0))}},function(e,t,n){"use strict";var r=n(7),i=n(0).has,s=n(0).isValidEntityCode,o=n(0).fromCodePoint,a=/^&#((?:x[a-f0-9]{1,6}|[0-9]{1,7}));/i,l=/^&([a-z][a-z0-9]{1,31});/i;e.exports=function(e,t){var n,c,u=e.pos,d=e.posMax;if(38!==e.src.charCodeAt(u))return!1;if(u+1o;r-=f[r]+1)if((s=t[r]).marker===i.marker&&s.open&&s.end<0&&(l=!1,(s.close||i.open)&&(s.length+i.length)%3==0&&(s.length%3==0&&i.length%3==0||(l=!0)),!l)){c=r>0&&!t[r-1].open?f[r-1]+1:0,f[n]=n-r+c,f[r]=c,i.open=!1,s.end=n,s.close=!1,a=-1,p=-2;break}-1!==a&&(u[i.marker][(i.open?3:0)+(i.length||0)%3]=a)}}}e.exports=function(e){var t,n=e.tokens_meta,i=e.tokens_meta.length;for(r(0,e.delimiters),t=0;t0&&r++,"text"===i[t].type&&t+10&&(this.level++,this._prev_delimiters.push(this.delimiters),this.delimiters=[],s={delimiters:this.delimiters}),this.pendingLevel=this.level,this.tokens.push(i),this.tokens_meta.push(s),i},a.prototype.scanDelims=function(e,t){var n,r,a,l,c,u,d,h,p,f=e,g=!0,m=!0,b=this.posMax,_=this.src.charCodeAt(e);for(n=e>0?this.src.charCodeAt(e-1):32;f=3&&":"===e[t-3]||t>=3&&"/"===e[t-3]?0:r.match(n.re.no_http)[0].length:0}},"mailto:":{validate:function(e,t,n){var r=e.slice(t);return n.re.mailto||(n.re.mailto=new RegExp("^"+n.re.src_email_name+"@"+n.re.src_host_strict,"i")),n.re.mailto.test(r)?r.match(n.re.mailto)[0].length:0}}},c="biz|com|edu|gov|net|org|pro|web|xxx|aero|asia|coop|info|museum|name|shop|рф".split("|");function u(e){var t=e.re=n(69)(e.__opts__),r=e.__tlds__.slice();function a(e){return e.replace("%TLDS%",t.src_tlds)}e.onCompile(),e.__tlds_replaced__||r.push("a[cdefgilmnoqrstuwxz]|b[abdefghijmnorstvwyz]|c[acdfghiklmnoruvwxyz]|d[ejkmoz]|e[cegrstu]|f[ijkmor]|g[abdefghilmnpqrstuwy]|h[kmnrtu]|i[delmnoqrst]|j[emop]|k[eghimnprwyz]|l[abcikrstuvy]|m[acdeghklmnopqrstuvwxyz]|n[acefgilopruz]|om|p[aefghklmnrstwy]|qa|r[eosuw]|s[abcdeghijklmnortuvxyz]|t[cdfghjklmnortvwz]|u[agksyz]|v[aceginu]|w[fs]|y[et]|z[amw]"),r.push(t.src_xn),t.src_tlds=r.join("|"),t.email_fuzzy=RegExp(a(t.tpl_email_fuzzy),"i"),t.link_fuzzy=RegExp(a(t.tpl_link_fuzzy),"i"),t.link_no_ip_fuzzy=RegExp(a(t.tpl_link_no_ip_fuzzy),"i"),t.host_fuzzy_test=RegExp(a(t.tpl_host_fuzzy_test),"i");var l=[];function c(e,t){throw new Error('(LinkifyIt) Invalid schema "'+e+'": '+t)}e.__compiled__={},Object.keys(e.__schemas__).forEach((function(t){var n=e.__schemas__[t];if(null!==n){var r={validate:null,link:null};if(e.__compiled__[t]=r,"[object Object]"===i(n))return function(e){return"[object RegExp]"===i(e)}(n.validate)?r.validate=function(e){return function(t,n){var r=t.slice(n);return e.test(r)?r.match(e)[0].length:0}}(n.validate):s(n.validate)?r.validate=n.validate:c(t,n),void(s(n.normalize)?r.normalize=n.normalize:n.normalize?c(t,n):r.normalize=function(e,t){t.normalize(e)});!function(e){return"[object String]"===i(e)}(n)?c(t,n):l.push(t)}})),l.forEach((function(t){e.__compiled__[e.__schemas__[t]]&&(e.__compiled__[t].validate=e.__compiled__[e.__schemas__[t]].validate,e.__compiled__[t].normalize=e.__compiled__[e.__schemas__[t]].normalize)})),e.__compiled__[""]={validate:null,normalize:function(e,t){t.normalize(e)}};var u=Object.keys(e.__compiled__).filter((function(t){return t.length>0&&e.__compiled__[t]})).map(o).join("|");e.re.schema_test=RegExp("(^|(?!_)(?:[><|]|"+t.src_ZPCc+"))("+u+")","i"),e.re.schema_search=RegExp("(^|(?!_)(?:[><|]|"+t.src_ZPCc+"))("+u+")","ig"),e.re.pretest=RegExp("("+e.re.schema_test.source+")|("+e.re.host_fuzzy_test.source+")|@","i"),function(e){e.__index__=-1,e.__text_cache__=""}(e)}function d(e,t){var n=e.__index__,r=e.__last_index__,i=e.__text_cache__.slice(n,r);this.schema=e.__schema__.toLowerCase(),this.index=n+t,this.lastIndex=r+t,this.raw=i,this.text=i,this.url=i}function h(e,t){var n=new d(e,t);return e.__compiled__[n.schema].normalize(n,e),n}function p(e,t){if(!(this instanceof p))return new p(e,t);var n;t||(n=e,Object.keys(n||{}).reduce((function(e,t){return e||a.hasOwnProperty(t)}),!1)&&(t=e,e={})),this.__opts__=r({},a,t),this.__index__=-1,this.__last_index__=-1,this.__schema__="",this.__text_cache__="",this.__schemas__=r({},l,e),this.__compiled__={},this.__tlds__=c,this.__tlds_replaced__=!1,this.re={},u(this)}p.prototype.add=function(e,t){return this.__schemas__[e]=t,u(this),this},p.prototype.set=function(e){return this.__opts__=r(this.__opts__,e),this},p.prototype.test=function(e){if(this.__text_cache__=e,this.__index__=-1,!e.length)return!1;var t,n,r,i,s,o,a,l;if(this.re.schema_test.test(e))for((a=this.re.schema_search).lastIndex=0;null!==(t=a.exec(e));)if(i=this.testSchemaAt(e,t[2],a.lastIndex)){this.__schema__=t[2],this.__index__=t.index+t[1].length,this.__last_index__=t.index+t[0].length+i;break}return this.__opts__.fuzzyLink&&this.__compiled__["http:"]&&(l=e.search(this.re.host_fuzzy_test))>=0&&(this.__index__<0||l=0&&null!==(r=e.match(this.re.email_fuzzy))&&(s=r.index+r[1].length,o=r.index+r[0].length,(this.__index__<0||sthis.__last_index__)&&(this.__schema__="mailto:",this.__index__=s,this.__last_index__=o)),this.__index__>=0},p.prototype.pretest=function(e){return this.re.pretest.test(e)},p.prototype.testSchemaAt=function(e,t,n){return this.__compiled__[t.toLowerCase()]?this.__compiled__[t.toLowerCase()].validate(e,n,this):0},p.prototype.match=function(e){var t=0,n=[];this.__index__>=0&&this.__text_cache__===e&&(n.push(h(this,t)),t=this.__last_index__);for(var r=t?e.slice(t):e;this.test(r);)n.push(h(this,t)),r=r.slice(this.__last_index__),t+=this.__last_index__;return n.length?n:null},p.prototype.tlds=function(e,t){return e=Array.isArray(e)?e:[e],t?(this.__tlds__=this.__tlds__.concat(e).sort().filter((function(e,t,n){return e!==n[t-1]})).reverse(),u(this),this):(this.__tlds__=e.slice(),this.__tlds_replaced__=!0,u(this),this)},p.prototype.normalize=function(e){e.schema||(e.url="http://"+e.url),"mailto:"!==e.schema||/^mailto:/i.test(e.url)||(e.url="mailto:"+e.url)},p.prototype.onCompile=function(){},e.exports=p},function(e,t,n){"use strict";e.exports=function(e){var t={};return t.src_Any=n(9).source,t.src_Cc=n(10).source,t.src_Z=n(11).source,t.src_P=n(3).source,t.src_ZPCc=[t.src_Z,t.src_P,t.src_Cc].join("|"),t.src_ZCc=[t.src_Z,t.src_Cc].join("|"),t.src_pseudo_letter="(?:(?![><|]|"+t.src_ZPCc+")"+t.src_Any+")",t.src_ip4="(?:(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)",t.src_auth="(?:(?:(?!"+t.src_ZCc+"|[@/\\[\\]()]).)+@)?",t.src_port="(?::(?:6(?:[0-4]\\d{3}|5(?:[0-4]\\d{2}|5(?:[0-2]\\d|3[0-5])))|[1-5]?\\d{1,4}))?",t.src_host_terminator="(?=$|[><|]|"+t.src_ZPCc+")(?!-|_|:\\d|\\.-|\\.(?!$|"+t.src_ZPCc+"))",t.src_path="(?:[/?#](?:(?!"+t.src_ZCc+"|[><|]|[()[\\]{}.,\"'?!\\-]).|\\[(?:(?!"+t.src_ZCc+"|\\]).)*\\]|\\((?:(?!"+t.src_ZCc+"|[)]).)*\\)|\\{(?:(?!"+t.src_ZCc+'|[}]).)*\\}|\\"(?:(?!'+t.src_ZCc+'|["]).)+\\"|\\\'(?:(?!'+t.src_ZCc+"|[']).)+\\'|\\'(?="+t.src_pseudo_letter+"|[-]).|\\.{2,}[a-zA-Z0-9%/&]|\\.(?!"+t.src_ZCc+"|[.]).|"+(e&&e["---"]?"\\-(?!--(?:[^-]|$))(?:-*)|":"\\-+|")+"\\,(?!"+t.src_ZCc+").|\\!+(?!"+t.src_ZCc+"|[!]).|\\?(?!"+t.src_ZCc+"|[?]).)+|\\/)?",t.src_email_name='[\\-;:&=\\+\\$,\\.a-zA-Z0-9_][\\-;:&=\\+\\$,\\"\\.a-zA-Z0-9_]*',t.src_xn="xn--[a-z0-9\\-]{1,59}",t.src_domain_root="(?:"+t.src_xn+"|"+t.src_pseudo_letter+"{1,63})",t.src_domain="(?:"+t.src_xn+"|(?:"+t.src_pseudo_letter+")|(?:"+t.src_pseudo_letter+"(?:-|"+t.src_pseudo_letter+"){0,61}"+t.src_pseudo_letter+"))",t.src_host="(?:(?:(?:(?:"+t.src_domain+")\\.)*"+t.src_domain+"))",t.tpl_host_fuzzy="(?:"+t.src_ip4+"|(?:(?:(?:"+t.src_domain+")\\.)+(?:%TLDS%)))",t.tpl_host_no_ip_fuzzy="(?:(?:(?:"+t.src_domain+")\\.)+(?:%TLDS%))",t.src_host_strict=t.src_host+t.src_host_terminator,t.tpl_host_fuzzy_strict=t.tpl_host_fuzzy+t.src_host_terminator,t.src_host_port_strict=t.src_host+t.src_port+t.src_host_terminator,t.tpl_host_port_fuzzy_strict=t.tpl_host_fuzzy+t.src_port+t.src_host_terminator,t.tpl_host_port_no_ip_fuzzy_strict=t.tpl_host_no_ip_fuzzy+t.src_port+t.src_host_terminator,t.tpl_host_fuzzy_test="localhost|www\\.|\\.\\d{1,3}\\.|(?:\\.(?:%TLDS%)(?:"+t.src_ZPCc+"|>|$))",t.tpl_email_fuzzy='(^|[><|]|"|\\(|'+t.src_ZCc+")("+t.src_email_name+"@"+t.tpl_host_fuzzy_strict+")",t.tpl_link_fuzzy="(^|(?![.:/\\-_@])(?:[$+<=>^`||]|"+t.src_ZPCc+"))((?![$+<=>^`||])"+t.tpl_host_port_fuzzy_strict+t.src_path+")",t.tpl_link_no_ip_fuzzy="(^|(?![.:/\\-_@])(?:[$+<=>^`||]|"+t.src_ZPCc+"))((?![$+<=>^`||])"+t.tpl_host_port_no_ip_fuzzy_strict+t.src_path+")",t}},function(e,t,n){(function(e,r){var i;/*! https://mths.be/punycode v1.4.1 by @mathias */!function(s){t&&t.nodeType,e&&e.nodeType;var o="object"==typeof r&&r;o.global!==o&&o.window!==o&&o.self;var a,l=2147483647,c=/^xn--/,u=/[^\x20-\x7E]/,d=/[\x2E\u3002\uFF0E\uFF61]/g,h={overflow:"Overflow: input needs wider integers to process","not-basic":"Illegal input >= 0x80 (not a basic code point)","invalid-input":"Invalid input"},p=Math.floor,f=String.fromCharCode;function g(e){throw new RangeError(h[e])}function m(e,t){for(var n=e.length,r=[];n--;)r[n]=t(e[n]);return r}function b(e,t){var n=e.split("@"),r="";return n.length>1&&(r=n[0]+"@",e=n[1]),r+m((e=e.replace(d,".")).split("."),t).join(".")}function _(e){for(var t,n,r=[],i=0,s=e.length;i=55296&&t<=56319&&i65535&&(t+=f((e-=65536)>>>10&1023|55296),e=56320|1023&e),t+f(e)})).join("")}function v(e,t){return e+22+75*(e<26)-((0!=t)<<5)}function E(e,t,n){var r=0;for(e=n?p(e/700):e>>1,e+=p(e/t);e>455;r+=36)e=p(e/35);return p(r+36*e/(e+38))}function x(e){var t,n,r,i,s,o,a,c,u,d,h,f=[],m=e.length,b=0,_=128,v=72;for((n=e.lastIndexOf("-"))<0&&(n=0),r=0;r=128&&g("not-basic"),f.push(e.charCodeAt(r));for(i=n>0?n+1:0;i=m&&g("invalid-input"),((c=(h=e.charCodeAt(i++))-48<10?h-22:h-65<26?h-65:h-97<26?h-97:36)>=36||c>p((l-b)/o))&&g("overflow"),b+=c*o,!(c<(u=a<=v?1:a>=v+26?26:a-v));a+=36)o>p(l/(d=36-u))&&g("overflow"),o*=d;v=E(b-s,t=f.length+1,0==s),p(b/t)>l-_&&g("overflow"),_+=p(b/t),b%=t,f.splice(b++,0,_)}return y(f)}function S(e){var t,n,r,i,s,o,a,c,u,d,h,m,b,y,x,S=[];for(m=(e=_(e)).length,t=128,n=0,s=72,o=0;o=t&&hp((l-n)/(b=r+1))&&g("overflow"),n+=(a-t)*b,t=a,o=0;ol&&g("overflow"),h==t){for(c=n,u=36;!(c<(d=u<=s?1:u>=s+26?26:u-s));u+=36)x=c-d,y=36-d,S.push(f(v(d+x%y,0))),c=p(x/y);S.push(f(v(c,0))),s=E(n,b,r==i),n=0,++r}++n,++t}return S.join("")}a={version:"1.4.1",ucs2:{decode:_,encode:y},decode:x,encode:S,toASCII:function(e){return b(e,(function(e){return u.test(e)?"xn--"+S(e):e}))},toUnicode:function(e){return b(e,(function(e){return c.test(e)?x(e.slice(4).toLowerCase()):e}))}},void 0===(i=function(){return a}.call(t,n,t,e))||(e.exports=i)}()}).call(this,n(71)(e),n(72))},function(e,t){e.exports=function(e){return e.webpackPolyfill||(e.deprecate=function(){},e.paths=[],e.children||(e.children=[]),Object.defineProperty(e,"loaded",{enumerable:!0,get:function(){return e.l}}),Object.defineProperty(e,"id",{enumerable:!0,get:function(){return e.i}}),e.webpackPolyfill=1),e}},function(e,t){var n;n=function(){return this}();try{n=n||new Function("return this")()}catch(e){"object"==typeof window&&(n=window)}e.exports=n},function(e,t,n){"use strict";e.exports={options:{html:!1,xhtmlOut:!1,breaks:!1,langPrefix:"language-",linkify:!1,typographer:!1,quotes:"“”‘’",highlight:null,maxNesting:100},components:{core:{},block:{},inline:{}}}},function(e,t,n){"use strict";e.exports={options:{html:!1,xhtmlOut:!1,breaks:!1,langPrefix:"language-",linkify:!1,typographer:!1,quotes:"“”‘’",highlight:null,maxNesting:20},components:{core:{rules:["normalize","block","inline"]},block:{rules:["paragraph"]},inline:{rules:["text"],rules2:["balance_pairs","text_collapse"]}}}},function(e,t,n){"use strict";e.exports={options:{html:!0,xhtmlOut:!0,breaks:!1,langPrefix:"language-",linkify:!1,typographer:!1,quotes:"“”‘’",highlight:null,maxNesting:20},components:{core:{rules:["normalize","block","inline"]},block:{rules:["blockquote","code","fence","heading","hr","html_block","lheading","list","reference","paragraph"]},inline:{rules:["autolink","backticks","emphasis","entity","escape","html_inline","image","link","newline","text"],rules2:["balance_pairs","emphasis","text_collapse"]}}}},,function(e,t,n){"use strict";n.r(t),n.d(t,"default",(function(){return s}));var r=n(17),i=n(2);function s(e){var t=void 0===e?{}:e,n=t.Prism,s=t.baseConfig,o=t.codeBlockClass,a=t.codeHighlightExtensionMap,l=void 0===a?{}:a,c=Object(r.default)(s);return c.extend((function(e){e.set({highlight:Object(i.a)({codeHighlightExtensionMap:l,hasLang:function(e){return n.languages[e]},codeBlockClass:o,highlight:function(e,t){return n.highlight(e,n.languages[t],t)}})})})),{previewClass:"markdown-body",extend:function(e){c.extend((function(){for(var t=arguments.length,r=new Array(t),i=0;i=a)&&!((_=e.bMarks[u]+e.tShift[u])<(y=e.eMarks[u])&&e.sCount[u]=4)){for(c=_+1;c<=y&&r[(c-_)%s]===e.src[c];c++);if(!(Math.floor((c-_)/s)'+(e?'

    '+e+"

    ":"")+"\n"},r=function(){return"\n"}),l=function(e,t){var i=e[t],s=i.info.trim().slice(c.length).trim();return!s&&p&&(s="function"==typeof p?p():p),1===i.nesting?n(s):r(s)}),e.use(s.a,c,{render:l,validate:i,marker:a}))},l=function(e){e.extendMarkdown((function(t){var n=function(){var t=e.lang.config;return t.langConfig[t.lang]};a(t,{type:"tip",defaultTitle:function(){return n().tip.tip.defaultTitle},blockClass:"v-md-plugin-tip"}),a(t,{type:"warning",defaultTitle:function(){return n().tip.warning.defaultTitle},blockClass:"v-md-plugin-tip"}),a(t,{type:"danger",defaultTitle:function(){return n().tip.danger.defaultTitle},blockClass:"v-md-plugin-tip"}),a(t,{type:"details",defaultTitle:function(){return n().tip.details.defaultTitle},before:function(e){return'
    '+(e?""+e+"":"")+"\n"},after:function(){return"
    \n"}})})),e.lang.add({"zh-CN":{tip:{tip:{defaultTitle:"提示"},warning:{defaultTitle:"注意"},danger:{defaultTitle:"警告"},details:{defaultTitle:"详细信息"}}},"en-US":{tip:{tip:{defaultTitle:"TIP"},warning:{defaultTitle:"WARNING"},danger:{defaultTitle:"DANGER"},details:{defaultTitle:"DETAILS"}}}})};n(80),n(84),n(85),t.default={install:function(e,t){var n,i,s,o,a,c,u,d,h,p=(s=(i=void 0===n?{}:n).name,o=void 0===s?"tip":s,a=i.icon,c=void 0===a?"v-md-icon-tip":a,u=i.text,d=function(e,t){void 0===t&&(t="tip"),e.insert((function(n){var r=n||e.langConfig.tip[t].placeholder;return{text:"::: "+t+"\n "+r+"\n:::",selected:r}}))},h={title:function(e){return e.langConfig.tip.toolbar},icon:c,text:u,menus:[{name:"tip",text:function(e){return e.langConfig.tip.tip.toolbar},action:function(e){e.execCommand(o)}},{name:"warning",text:function(e){return e.langConfig.tip.warning.toolbar},action:function(e){e.execCommand(o,"warning")}},{name:"danger",text:function(e){return e.langConfig.tip.danger.toolbar},action:function(e){e.execCommand(o,"danger")}},{name:"details",text:function(e){return e.langConfig.tip.details.toolbar},action:function(e){e.execCommand(o,"details")}}]},{install:function(e){"v-md-editor"===e.name&&(e.command(o,d),e.toolbar(o,h),e.lang.add({"zh-CN":{tip:{toolbar:"插入提示",tip:{toolbar:"提示",placeholder:"在此输入内容"},warning:{toolbar:"注意",placeholder:"在此输入内容"},danger:{toolbar:"警告",placeholder:"在此输入内容"},details:{toolbar:"详细信息",placeholder:"内容"}}},"en-US":{tip:{toolbar:"Insert tip",tip:{toolbar:"Tip",placeholder:"Insert content"},warning:{toolbar:"Warning",placeholder:"Insert content"},danger:{toolbar:"Danger",placeholder:"Insert content"},details:{toolbar:"Details",placeholder:"Content"}}}})),e.vMdParser.use(l)}});e.vMdParser.use(r.default,t),e.use(p)}}}]).default}))},1166:function(e,t,n){"use strict";t.__esModule=!0,t.deepAssign=o;var r=n(7060),i=Object.prototype.hasOwnProperty;function s(e,t,n){var s=t[n];void 0!==s&&null!==s&&(i.call(e,n)&&(0,r.isObject)(s)?e[n]=o(Object(e[n]),t[n]):e[n]=s)}function o(e,t){return Object.keys(t).forEach((function(n){s(e,t,n)})),e}},2960:function(e,t){"use strict";function n(e){var t=e.renderer.rules.fence;e.renderer.rules.fence=function(){var e=t.apply(void 0,arguments),n='\n ',r=e.replace("\x3c!--beforeend--\x3e",n+"\x3c!--beforeend--\x3e").replace("v-md-pre-wrapper","v-md-pre-wrapper copy-code-mode");return r}}t.__esModule=!0,t["default"]=n},6325:function(e,t){"use strict";function n(e,t){var n,r,i=e.posMax,s=!0,o=!0;return n=t>0?e.src.charCodeAt(t-1):-1,r=t+1<=i?e.src.charCodeAt(t+1):-1,(32===n||9===n||r>=48&&r<=57)&&(o=!1),32!==r&&9!==r||(s=!1),{can_open:s,can_close:o}}function r(e,t){var r,i,s,o,a;if("$"!==e.src[e.pos])return!1;if(o=n(e,e.pos),!o.can_open)return t||(e.pending+="$"),e.pos+=1,!0;r=e.pos+1,i=r;while(-1!==(i=e.src.indexOf("$",i))){a=i-1;while("\\"===e.src[a])a-=1;if((i-a)%2==1)break;i+=1}return-1===i?(t||(e.pending+="$"),e.pos=r,!0):i-r===0?(t||(e.pending+="$$"),e.pos=r+1,!0):(o=n(e,i),o.can_close?(t||(s=e.push("math_inline","math",0),s.markup="$",s.content=e.src.slice(r,i)),e.pos=i+1,!0):(t||(e.pending+="$"),e.pos=r,!0))}function i(e,t,n,r){var i,s,o,a,l,c=!1,u=e.bMarks[t]+e.tShift[t],d=e.eMarks[t];if(u+2>d)return!1;if("$$"!==e.src.slice(u,u+2))return!1;if(u+=2,i=e.src.slice(u,d),r)return!0;for("$$"===i.trim().slice(-2)&&(i=i.trim().slice(0,-2),c=!0),o=t;!c;){if(o++,o>=n)break;if(u=e.bMarks[o]+e.tShift[o],d=e.eMarks[o],u"+s.renderToString(e,t)+"

    "}catch(n){return t.throwOnError&&console.log(n),e}},c=function(e,t){return l(e[t].content)+"\n"};e.inline.ruler.after("escape","math_inline",r),e.block.ruler.after("blockquote","math_block",i,{alt:["paragraph","reference","blockquote","list"]}),e.renderer.rules.math_inline=a,e.renderer.rules.math_block=c}t.__esModule=!0,t["default"]=s},3596:function(e,t){"use strict";function n(e,t){var n=void 0===t?{}:t,r=n.className,i=void 0===r?"v-md-mermaid":r,s=function(e){return function(){for(var t=arguments.length,n=new Array(t),r=0;r'+a.content.replace(//g,">")+"":l}},o=e.renderer.rules,a=o.fence,l=o.code_block;e.renderer.rules.fence=s(a),e.renderer.rules.code_block=s(l)}t.__esModule=!0,t["default"]=n},7060:function(e,t){"use strict";t.__esModule=!0,t.arraytoObject=s,t.importAll=o,t.isKorean=l,t.generatorText=c,t.inBrowser=t.isObject=void 0;var n=Object.prototype.toString,r=function(e){return"[object Object]"===n.call(e)};function i(e,t){return Object.keys(t).forEach((function(n){e[n]=t[n]})),e}function s(e){for(var t={},n=0;n0&&c(o.width)/e.offsetWidth||1,l=e.offsetHeight>0&&c(o.height)/e.offsetHeight||1);var u=i(e)?r(e):window,h=u.visualViewport,p=!d()&&n,f=(o.left+(p&&h?h.offsetLeft:0))/a,g=(o.top+(p&&h?h.offsetTop:0))/l,m=o.width/a,b=o.height/l;return{width:m,height:b,top:g,right:f+m,bottom:g+b,left:f,x:f,y:g}}function p(e){var t=r(e),n=t.pageXOffset,i=t.pageYOffset;return{scrollLeft:n,scrollTop:i}}function f(e){return{scrollLeft:e.scrollLeft,scrollTop:e.scrollTop}}function g(e){return e!==r(e)&&s(e)?f(e):p(e)}function m(e){return e?(e.nodeName||"").toLowerCase():null}function b(e){return((i(e)?e.ownerDocument:e.document)||window.document).documentElement}function _(e){return h(b(e)).left+p(e).scrollLeft}function y(e){return r(e).getComputedStyle(e)}function v(e){var t=y(e),n=t.overflow,r=t.overflowX,i=t.overflowY;return/auto|scroll|overlay|hidden/.test(n+i+r)}function E(e){var t=e.getBoundingClientRect(),n=c(t.width)/e.offsetWidth||1,r=c(t.height)/e.offsetHeight||1;return 1!==n||1!==r}function x(e,t,n){void 0===n&&(n=!1);var r=s(t),i=s(t)&&E(t),o=b(t),a=h(e,i,n),l={scrollLeft:0,scrollTop:0},c={x:0,y:0};return(r||!r&&!n)&&(("body"!==m(t)||v(o))&&(l=g(t)),s(t)?(c=h(t,!0),c.x+=t.clientLeft,c.y+=t.clientTop):o&&(c.x=_(o))),{x:a.left+l.scrollLeft-c.x,y:a.top+l.scrollTop-c.y,width:a.width,height:a.height}}function S(e){var t=h(e),n=e.offsetWidth,r=e.offsetHeight;return Math.abs(t.width-n)<=1&&(n=t.width),Math.abs(t.height-r)<=1&&(r=t.height),{x:e.offsetLeft,y:e.offsetTop,width:n,height:r}}function w(e){return"html"===m(e)?e:e.assignedSlot||e.parentNode||(o(e)?e.host:null)||b(e)}function T(e){return["html","body","#document"].indexOf(m(e))>=0?e.ownerDocument.body:s(e)&&v(e)?e:T(w(e))}function A(e,t){var n;void 0===t&&(t=[]);var i=T(e),s=i===(null==(n=e.ownerDocument)?void 0:n.body),o=r(i),a=s?[o].concat(o.visualViewport||[],v(i)?i:[]):i,l=t.concat(a);return s?l:l.concat(A(w(a)))}function C(e){return["table","td","th"].indexOf(m(e))>=0}function I(e){return s(e)&&"fixed"!==y(e).position?e.offsetParent:null}function R(e){var t=/firefox/i.test(u()),n=/Trident/i.test(u());if(n&&s(e)){var r=y(e);if("fixed"===r.position)return null}var i=w(e);o(i)&&(i=i.host);while(s(i)&&["html","body"].indexOf(m(i))<0){var a=y(i);if("none"!==a.transform||"none"!==a.perspective||"paint"===a.contain||-1!==["transform","perspective"].indexOf(a.willChange)||t&&"filter"===a.willChange||t&&a.filter&&"none"!==a.filter)return i;i=i.parentNode}return null}function k(e){var t=r(e),n=I(e);while(n&&C(n)&&"static"===y(n).position)n=I(n);return n&&("html"===m(n)||"body"===m(n)&&"static"===y(n).position)?t:n||R(e)||t}var P="top",O="bottom",N="right",M="left",D="auto",L=[P,O,N,M],F="start",B="end",U="clippingParents",G="viewport",$="popper",z="reference",H=L.reduce((function(e,t){return e.concat([t+"-"+F,t+"-"+B])}),[]),V=[].concat(L,[D]).reduce((function(e,t){return e.concat([t,t+"-"+F,t+"-"+B])}),[]),j="beforeRead",W="read",q="afterRead",X="beforeMain",Y="main",K="afterMain",Z="beforeWrite",Q="write",J="afterWrite",ee=[j,W,q,X,Y,K,Z,Q,J];function te(e){var t=new Map,n=new Set,r=[];function i(e){n.add(e.name);var s=[].concat(e.requires||[],e.requiresIfExists||[]);s.forEach((function(e){if(!n.has(e)){var r=t.get(e);r&&i(r)}})),r.push(e)}return e.forEach((function(e){t.set(e.name,e)})),e.forEach((function(e){n.has(e.name)||i(e)})),r}function ne(e){var t=te(e);return ee.reduce((function(e,n){return e.concat(t.filter((function(e){return e.phase===n})))}),[])}function re(e){var t;return function(){return t||(t=new Promise((function(n){Promise.resolve().then((function(){t=void 0,n(e())}))}))),t}}function ie(e){var t=e.reduce((function(e,t){var n=e[t.name];return e[t.name]=n?Object.assign({},n,t,{options:Object.assign({},n.options,t.options),data:Object.assign({},n.data,t.data)}):t,e}),{});return Object.keys(t).map((function(e){return t[e]}))}var se={placement:"bottom",modifiers:[],strategy:"absolute"};function oe(){for(var e=arguments.length,t=new Array(e),n=0;n=0?"x":"y"}function fe(e){var t,n=e.reference,r=e.element,i=e.placement,s=i?de(i):null,o=i?he(i):null,a=n.x+n.width/2-r.width/2,l=n.y+n.height/2-r.height/2;switch(s){case P:t={x:a,y:n.y-r.height};break;case O:t={x:a,y:n.y+n.height};break;case N:t={x:n.x+n.width,y:l};break;case M:t={x:n.x-r.width,y:l};break;default:t={x:n.x,y:n.y}}var c=s?pe(s):null;if(null!=c){var u="y"===c?"height":"width";switch(o){case F:t[c]=t[c]-(n[u]/2-r[u]/2);break;case B:t[c]=t[c]+(n[u]/2-r[u]/2);break;default:}}return t}function ge(e){var t=e.state,n=e.name;t.modifiersData[n]=fe({reference:t.rects.reference,element:t.rects.popper,strategy:"absolute",placement:t.placement})}var me={name:"popperOffsets",enabled:!0,phase:"read",fn:ge,data:{}},be={top:"auto",right:"auto",bottom:"auto",left:"auto"};function _e(e,t){var n=e.x,r=e.y,i=t.devicePixelRatio||1;return{x:c(n*i)/i||0,y:c(r*i)/i||0}}function ye(e){var t,n=e.popper,i=e.popperRect,s=e.placement,o=e.variation,a=e.offsets,l=e.position,c=e.gpuAcceleration,u=e.adaptive,d=e.roundOffsets,h=e.isFixed,p=a.x,f=void 0===p?0:p,g=a.y,m=void 0===g?0:g,_="function"===typeof d?d({x:f,y:m}):{x:f,y:m};f=_.x,m=_.y;var v=a.hasOwnProperty("x"),E=a.hasOwnProperty("y"),x=M,S=P,w=window;if(u){var T=k(n),A="clientHeight",C="clientWidth";if(T===r(n)&&(T=b(n),"static"!==y(T).position&&"absolute"===l&&(A="scrollHeight",C="scrollWidth")),s===P||(s===M||s===N)&&o===B){S=O;var I=h&&T===w&&w.visualViewport?w.visualViewport.height:T[A];m-=I-i.height,m*=c?1:-1}if(s===M||(s===P||s===O)&&o===B){x=N;var R=h&&T===w&&w.visualViewport?w.visualViewport.width:T[C];f-=R-i.width,f*=c?1:-1}}var D,L=Object.assign({position:l},u&&be),F=!0===d?_e({x:f,y:m},r(n)):{x:f,y:m};return f=F.x,m=F.y,c?Object.assign({},L,(D={},D[S]=E?"0":"",D[x]=v?"0":"",D.transform=(w.devicePixelRatio||1)<=1?"translate("+f+"px, "+m+"px)":"translate3d("+f+"px, "+m+"px, 0)",D)):Object.assign({},L,(t={},t[S]=E?m+"px":"",t[x]=v?f+"px":"",t.transform="",t))}function ve(e){var t=e.state,n=e.options,r=n.gpuAcceleration,i=void 0===r||r,s=n.adaptive,o=void 0===s||s,a=n.roundOffsets,l=void 0===a||a,c={placement:de(t.placement),variation:he(t.placement),popper:t.elements.popper,popperRect:t.rects.popper,gpuAcceleration:i,isFixed:"fixed"===t.options.strategy};null!=t.modifiersData.popperOffsets&&(t.styles.popper=Object.assign({},t.styles.popper,ye(Object.assign({},c,{offsets:t.modifiersData.popperOffsets,position:t.options.strategy,adaptive:o,roundOffsets:l})))),null!=t.modifiersData.arrow&&(t.styles.arrow=Object.assign({},t.styles.arrow,ye(Object.assign({},c,{offsets:t.modifiersData.arrow,position:"absolute",adaptive:!1,roundOffsets:l})))),t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-placement":t.placement})}var Ee={name:"computeStyles",enabled:!0,phase:"beforeWrite",fn:ve,data:{}};function xe(e){var t=e.state;Object.keys(t.elements).forEach((function(e){var n=t.styles[e]||{},r=t.attributes[e]||{},i=t.elements[e];s(i)&&m(i)&&(Object.assign(i.style,n),Object.keys(r).forEach((function(e){var t=r[e];!1===t?i.removeAttribute(e):i.setAttribute(e,!0===t?"":t)})))}))}function Se(e){var t=e.state,n={popper:{position:t.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};return Object.assign(t.elements.popper.style,n.popper),t.styles=n,t.elements.arrow&&Object.assign(t.elements.arrow.style,n.arrow),function(){Object.keys(t.elements).forEach((function(e){var r=t.elements[e],i=t.attributes[e]||{},o=Object.keys(t.styles.hasOwnProperty(e)?t.styles[e]:n[e]),a=o.reduce((function(e,t){return e[t]="",e}),{});s(r)&&m(r)&&(Object.assign(r.style,a),Object.keys(i).forEach((function(e){r.removeAttribute(e)})))}))}}var we={name:"applyStyles",enabled:!0,phase:"write",fn:xe,effect:Se,requires:["computeStyles"]};function Te(e,t,n){var r=de(e),i=[M,P].indexOf(r)>=0?-1:1,s="function"===typeof n?n(Object.assign({},t,{placement:e})):n,o=s[0],a=s[1];return o=o||0,a=(a||0)*i,[M,N].indexOf(r)>=0?{x:a,y:o}:{x:o,y:a}}function Ae(e){var t=e.state,n=e.options,r=e.name,i=n.offset,s=void 0===i?[0,0]:i,o=V.reduce((function(e,n){return e[n]=Te(n,t.rects,s),e}),{}),a=o[t.placement],l=a.x,c=a.y;null!=t.modifiersData.popperOffsets&&(t.modifiersData.popperOffsets.x+=l,t.modifiersData.popperOffsets.y+=c),t.modifiersData[r]=o}var Ce={name:"offset",enabled:!0,phase:"main",requires:["popperOffsets"],fn:Ae},Ie={left:"right",right:"left",bottom:"top",top:"bottom"};function Re(e){return e.replace(/left|right|bottom|top/g,(function(e){return Ie[e]}))}var ke={start:"end",end:"start"};function Pe(e){return e.replace(/start|end/g,(function(e){return ke[e]}))}function Oe(e,t){var n=r(e),i=b(e),s=n.visualViewport,o=i.clientWidth,a=i.clientHeight,l=0,c=0;if(s){o=s.width,a=s.height;var u=d();(u||!u&&"fixed"===t)&&(l=s.offsetLeft,c=s.offsetTop)}return{width:o,height:a,x:l+_(e),y:c}}function Ne(e){var t,n=b(e),r=p(e),i=null==(t=e.ownerDocument)?void 0:t.body,s=a(n.scrollWidth,n.clientWidth,i?i.scrollWidth:0,i?i.clientWidth:0),o=a(n.scrollHeight,n.clientHeight,i?i.scrollHeight:0,i?i.clientHeight:0),l=-r.scrollLeft+_(e),c=-r.scrollTop;return"rtl"===y(i||n).direction&&(l+=a(n.clientWidth,i?i.clientWidth:0)-s),{width:s,height:o,x:l,y:c}}function Me(e,t){var n=t.getRootNode&&t.getRootNode();if(e.contains(t))return!0;if(n&&o(n)){var r=t;do{if(r&&e.isSameNode(r))return!0;r=r.parentNode||r.host}while(r)}return!1}function De(e){return Object.assign({},e,{left:e.x,top:e.y,right:e.x+e.width,bottom:e.y+e.height})}function Le(e,t){var n=h(e,!1,"fixed"===t);return n.top=n.top+e.clientTop,n.left=n.left+e.clientLeft,n.bottom=n.top+e.clientHeight,n.right=n.left+e.clientWidth,n.width=e.clientWidth,n.height=e.clientHeight,n.x=n.left,n.y=n.top,n}function Fe(e,t,n){return t===G?De(Oe(e,n)):i(t)?Le(t,n):De(Ne(b(e)))}function Be(e){var t=A(w(e)),n=["absolute","fixed"].indexOf(y(e).position)>=0,r=n&&s(e)?k(e):e;return i(r)?t.filter((function(e){return i(e)&&Me(e,r)&&"body"!==m(e)})):[]}function Ue(e,t,n,r){var i="clippingParents"===t?Be(e):[].concat(t),s=[].concat(i,[n]),o=s[0],c=s.reduce((function(t,n){var i=Fe(e,n,r);return t.top=a(i.top,t.top),t.right=l(i.right,t.right),t.bottom=l(i.bottom,t.bottom),t.left=a(i.left,t.left),t}),Fe(e,o,r));return c.width=c.right-c.left,c.height=c.bottom-c.top,c.x=c.left,c.y=c.top,c}function Ge(){return{top:0,right:0,bottom:0,left:0}}function $e(e){return Object.assign({},Ge(),e)}function ze(e,t){return t.reduce((function(t,n){return t[n]=e,t}),{})}function He(e,t){void 0===t&&(t={});var n=t,r=n.placement,s=void 0===r?e.placement:r,o=n.strategy,a=void 0===o?e.strategy:o,l=n.boundary,c=void 0===l?U:l,u=n.rootBoundary,d=void 0===u?G:u,p=n.elementContext,f=void 0===p?$:p,g=n.altBoundary,m=void 0!==g&&g,_=n.padding,y=void 0===_?0:_,v=$e("number"!==typeof y?y:ze(y,L)),E=f===$?z:$,x=e.rects.popper,S=e.elements[m?E:f],w=Ue(i(S)?S:S.contextElement||b(e.elements.popper),c,d,a),T=h(e.elements.reference),A=fe({reference:T,element:x,strategy:"absolute",placement:s}),C=De(Object.assign({},x,A)),I=f===$?C:T,R={top:w.top-I.top+v.top,bottom:I.bottom-w.bottom+v.bottom,left:w.left-I.left+v.left,right:I.right-w.right+v.right},k=e.modifiersData.offset;if(f===$&&k){var M=k[s];Object.keys(R).forEach((function(e){var t=[N,O].indexOf(e)>=0?1:-1,n=[P,O].indexOf(e)>=0?"y":"x";R[e]+=M[n]*t}))}return R}function Ve(e,t){void 0===t&&(t={});var n=t,r=n.placement,i=n.boundary,s=n.rootBoundary,o=n.padding,a=n.flipVariations,l=n.allowedAutoPlacements,c=void 0===l?V:l,u=he(r),d=u?a?H:H.filter((function(e){return he(e)===u})):L,h=d.filter((function(e){return c.indexOf(e)>=0}));0===h.length&&(h=d);var p=h.reduce((function(t,n){return t[n]=He(e,{placement:n,boundary:i,rootBoundary:s,padding:o})[de(n)],t}),{});return Object.keys(p).sort((function(e,t){return p[e]-p[t]}))}function je(e){if(de(e)===D)return[];var t=Re(e);return[Pe(e),t,Pe(t)]}function We(e){var t=e.state,n=e.options,r=e.name;if(!t.modifiersData[r]._skip){for(var i=n.mainAxis,s=void 0===i||i,o=n.altAxis,a=void 0===o||o,l=n.fallbackPlacements,c=n.padding,u=n.boundary,d=n.rootBoundary,h=n.altBoundary,p=n.flipVariations,f=void 0===p||p,g=n.allowedAutoPlacements,m=t.options.placement,b=de(m),_=b===m,y=l||(_||!f?[Re(m)]:je(m)),v=[m].concat(y).reduce((function(e,n){return e.concat(de(n)===D?Ve(t,{placement:n,boundary:u,rootBoundary:d,padding:c,flipVariations:f,allowedAutoPlacements:g}):n)}),[]),E=t.rects.reference,x=t.rects.popper,S=new Map,w=!0,T=v[0],A=0;A=0,L=k?"width":"height",B=He(t,{placement:C,boundary:u,rootBoundary:d,altBoundary:h,padding:c}),U=k?R?N:M:R?O:P;E[L]>x[L]&&(U=Re(U));var G=Re(U),$=[];if(s&&$.push(B[I]<=0),a&&$.push(B[U]<=0,B[G]<=0),$.every((function(e){return e}))){T=C,w=!1;break}S.set(C,$)}if(w)for(var z=f?3:1,H=function(e){var t=v.find((function(t){var n=S.get(t);if(n)return n.slice(0,e).every((function(e){return e}))}));if(t)return T=t,"break"},V=z;V>0;V--){var j=H(V);if("break"===j)break}t.placement!==T&&(t.modifiersData[r]._skip=!0,t.placement=T,t.reset=!0)}}var qe={name:"flip",enabled:!0,phase:"main",fn:We,requiresIfExists:["offset"],data:{_skip:!1}};function Xe(e){return"x"===e?"y":"x"}function Ye(e,t,n){return a(e,l(t,n))}function Ke(e,t,n){var r=Ye(e,t,n);return r>n?n:r}function Ze(e){var t=e.state,n=e.options,r=e.name,i=n.mainAxis,s=void 0===i||i,o=n.altAxis,c=void 0!==o&&o,u=n.boundary,d=n.rootBoundary,h=n.altBoundary,p=n.padding,f=n.tether,g=void 0===f||f,m=n.tetherOffset,b=void 0===m?0:m,_=He(t,{boundary:u,rootBoundary:d,padding:p,altBoundary:h}),y=de(t.placement),v=he(t.placement),E=!v,x=pe(y),w=Xe(x),T=t.modifiersData.popperOffsets,A=t.rects.reference,C=t.rects.popper,I="function"===typeof b?b(Object.assign({},t.rects,{placement:t.placement})):b,R="number"===typeof I?{mainAxis:I,altAxis:I}:Object.assign({mainAxis:0,altAxis:0},I),D=t.modifiersData.offset?t.modifiersData.offset[t.placement]:null,L={x:0,y:0};if(T){if(s){var B,U="y"===x?P:M,G="y"===x?O:N,$="y"===x?"height":"width",z=T[x],H=z+_[U],V=z-_[G],j=g?-C[$]/2:0,W=v===F?A[$]:C[$],q=v===F?-C[$]:-A[$],X=t.elements.arrow,Y=g&&X?S(X):{width:0,height:0},K=t.modifiersData["arrow#persistent"]?t.modifiersData["arrow#persistent"].padding:Ge(),Z=K[U],Q=K[G],J=Ye(0,A[$],Y[$]),ee=E?A[$]/2-j-J-Z-R.mainAxis:W-J-Z-R.mainAxis,te=E?-A[$]/2+j+J+Q+R.mainAxis:q+J+Q+R.mainAxis,ne=t.elements.arrow&&k(t.elements.arrow),re=ne?"y"===x?ne.clientTop||0:ne.clientLeft||0:0,ie=null!=(B=null==D?void 0:D[x])?B:0,se=z+ee-ie-re,oe=z+te-ie,ae=Ye(g?l(H,se):H,z,g?a(V,oe):V);T[x]=ae,L[x]=ae-z}if(c){var le,ce="x"===x?P:M,ue="x"===x?O:N,fe=T[w],ge="y"===w?"height":"width",me=fe+_[ce],be=fe-_[ue],_e=-1!==[P,M].indexOf(y),ye=null!=(le=null==D?void 0:D[w])?le:0,ve=_e?me:fe-A[ge]-C[ge]-ye+R.altAxis,Ee=_e?fe+A[ge]+C[ge]-ye-R.altAxis:be,xe=g&&_e?Ke(ve,fe,Ee):Ye(g?ve:me,fe,g?Ee:be);T[w]=xe,L[w]=xe-fe}t.modifiersData[r]=L}}var Qe={name:"preventOverflow",enabled:!0,phase:"main",fn:Ze,requiresIfExists:["offset"]},Je=function(e,t){return e="function"===typeof e?e(Object.assign({},t.rects,{placement:t.placement})):e,$e("number"!==typeof e?e:ze(e,L))};function et(e){var t,n=e.state,r=e.name,i=e.options,s=n.elements.arrow,o=n.modifiersData.popperOffsets,a=de(n.placement),l=pe(a),c=[M,N].indexOf(a)>=0,u=c?"height":"width";if(s&&o){var d=Je(i.padding,n),h=S(s),p="y"===l?P:M,f="y"===l?O:N,g=n.rects.reference[u]+n.rects.reference[l]-o[l]-n.rects.popper[u],m=o[l]-n.rects.reference[l],b=k(s),_=b?"y"===l?b.clientHeight||0:b.clientWidth||0:0,y=g/2-m/2,v=d[p],E=_-h[u]-d[f],x=_/2-h[u]/2+y,w=Ye(v,x,E),T=l;n.modifiersData[r]=(t={},t[T]=w,t.centerOffset=w-x,t)}}function tt(e){var t=e.state,n=e.options,r=n.element,i=void 0===r?"[data-popper-arrow]":r;null!=i&&("string"!==typeof i||(i=t.elements.popper.querySelector(i),i))&&Me(t.elements.popper,i)&&(t.elements.arrow=i)}var nt={name:"arrow",enabled:!0,phase:"main",fn:et,effect:tt,requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function rt(e,t,n){return void 0===n&&(n={x:0,y:0}),{top:e.top-t.height-n.y,right:e.right-t.width+n.x,bottom:e.bottom-t.height+n.y,left:e.left-t.width-n.x}}function it(e){return[P,N,O,M].some((function(t){return e[t]>=0}))}function st(e){var t=e.state,n=e.name,r=t.rects.reference,i=t.rects.popper,s=t.modifiersData.preventOverflow,o=He(t,{elementContext:"reference"}),a=He(t,{altBoundary:!0}),l=rt(o,r),c=rt(a,i,s),u=it(l),d=it(c);t.modifiersData[n]={referenceClippingOffsets:l,popperEscapeOffsets:c,isReferenceHidden:u,hasPopperEscaped:d},t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-reference-hidden":u,"data-popper-escaped":d})}var ot={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:st},at=[ue,me,Ee,we,Ce,qe,Qe,nt,ot],lt=ae({defaultModifiers:at})},640:function(e,t,n){"use strict";var r=n(1742),i={"text/plain":"Text","text/html":"Url",default:"Text"},s="Copy to clipboard: #{key}, Enter";function o(e){var t=(/mac os x/i.test(navigator.userAgent)?"⌘":"Ctrl")+"+C";return e.replace(/#{\s*key\s*}/g,t)}function a(e,t){var n,a,l,c,u,d,h=!1;t||(t={}),n=t.debug||!1;try{l=r(),c=document.createRange(),u=document.getSelection(),d=document.createElement("span"),d.textContent=e,d.ariaHidden="true",d.style.all="unset",d.style.position="fixed",d.style.top=0,d.style.clip="rect(0, 0, 0, 0)",d.style.whiteSpace="pre",d.style.webkitUserSelect="text",d.style.MozUserSelect="text",d.style.msUserSelect="text",d.style.userSelect="text",d.addEventListener("copy",(function(r){if(r.stopPropagation(),t.format)if(r.preventDefault(),"undefined"===typeof r.clipboardData){n&&console.warn("unable to use e.clipboardData"),n&&console.warn("trying IE specific stuff"),window.clipboardData.clearData();var s=i[t.format]||i["default"];window.clipboardData.setData(s,e)}else r.clipboardData.clearData(),r.clipboardData.setData(t.format,e);t.onCopy&&(r.preventDefault(),t.onCopy(r.clipboardData))})),document.body.appendChild(d),c.selectNodeContents(d),u.addRange(c);var p=document.execCommand("copy");if(!p)throw new Error("copy command was unsuccessful");h=!0}catch(f){n&&console.error("unable to copy using execCommand: ",f),n&&console.warn("trying IE specific stuff");try{window.clipboardData.setData(t.format||"text",e),t.onCopy&&t.onCopy(window.clipboardData),h=!0}catch(f){n&&console.error("unable to copy using clipboardData: ",f),n&&console.error("falling back to prompt"),a=o("message"in t?t.message:s),window.prompt(a,e)}}finally{u&&("function"==typeof u.removeRange?u.removeRange(c):u.removeAllRanges()),d&&document.body.removeChild(d),l()}return h}e.exports=a},9662:function(e,t,n){var r=n(614),i=n(6330),s=TypeError;e.exports=function(e){if(r(e))return e;throw s(i(e)+" is not a function")}},9670:function(e,t,n){var r=n(111),i=String,s=TypeError;e.exports=function(e){if(r(e))return e;throw s(i(e)+" is not an object")}},1318:function(e,t,n){var r=n(5656),i=n(1400),s=n(6244),o=function(e){return function(t,n,o){var a,l=r(t),c=s(l),u=i(o,c);if(e&&n!=n){while(c>u)if(a=l[u++],a!=a)return!0}else for(;c>u;u++)if((e||u in l)&&l[u]===n)return e||u||0;return!e&&-1}};e.exports={includes:o(!0),indexOf:o(!1)}},3658:function(e,t,n){"use strict";var r=n(9781),i=n(3157),s=TypeError,o=Object.getOwnPropertyDescriptor,a=r&&!function(){if(void 0!==this)return!0;try{Object.defineProperty([],"length",{writable:!1}).length=1}catch(e){return e instanceof TypeError}}();e.exports=a?function(e,t){if(i(e)&&!o(e,"length").writable)throw s("Cannot set read only .length");return e.length=t}:function(e,t){return e.length=t}},4326:function(e,t,n){var r=n(1702),i=r({}.toString),s=r("".slice);e.exports=function(e){return s(i(e),8,-1)}},9920:function(e,t,n){var r=n(2597),i=n(3887),s=n(1236),o=n(3070);e.exports=function(e,t,n){for(var a=i(t),l=o.f,c=s.f,u=0;un)throw t("Maximum allowed index exceeded");return e}},8113:function(e){e.exports="undefined"!=typeof navigator&&String(navigator.userAgent)||""},7392:function(e,t,n){var r,i,s=n(7854),o=n(8113),a=s.process,l=s.Deno,c=a&&a.versions||l&&l.version,u=c&&c.v8;u&&(r=u.split("."),i=r[0]>0&&r[0]<4?1:+(r[0]+r[1])),!i&&o&&(r=o.match(/Edge\/(\d+)/),(!r||r[1]>=74)&&(r=o.match(/Chrome\/(\d+)/),r&&(i=+r[1]))),e.exports=i},748:function(e){e.exports=["constructor","hasOwnProperty","isPrototypeOf","propertyIsEnumerable","toLocaleString","toString","valueOf"]},2109:function(e,t,n){var r=n(7854),i=n(1236).f,s=n(8880),o=n(8052),a=n(3072),l=n(9920),c=n(4705);e.exports=function(e,t){var n,u,d,h,p,f,g=e.target,m=e.global,b=e.stat;if(u=m?r:b?r[g]||a(g,{}):(r[g]||{}).prototype,u)for(d in t){if(p=t[d],e.dontCallGetSet?(f=i(u,d),h=f&&f.value):h=u[d],n=c(m?d:g+(b?".":"#")+d,e.forced),!n&&void 0!==h){if(typeof p==typeof h)continue;l(p,h)}(e.sham||h&&h.sham)&&s(p,"sham",!0),o(u,d,p,e)}}},7293:function(e){e.exports=function(e){try{return!!e()}catch(t){return!0}}},4374:function(e,t,n){var r=n(7293);e.exports=!r((function(){var e=function(){}.bind();return"function"!=typeof e||e.hasOwnProperty("prototype")}))},6916:function(e,t,n){var r=n(4374),i=Function.prototype.call;e.exports=r?i.bind(i):function(){return i.apply(i,arguments)}},6530:function(e,t,n){var r=n(9781),i=n(2597),s=Function.prototype,o=r&&Object.getOwnPropertyDescriptor,a=i(s,"name"),l=a&&"something"===function(){}.name,c=a&&(!r||r&&o(s,"name").configurable);e.exports={EXISTS:a,PROPER:l,CONFIGURABLE:c}},1702:function(e,t,n){var r=n(4374),i=Function.prototype,s=i.call,o=r&&i.bind.bind(s,s);e.exports=r?o:function(e){return function(){return s.apply(e,arguments)}}},5005:function(e,t,n){var r=n(7854),i=n(614),s=function(e){return i(e)?e:void 0};e.exports=function(e,t){return arguments.length<2?s(r[e]):r[e]&&r[e][t]}},8173:function(e,t,n){var r=n(9662),i=n(8554);e.exports=function(e,t){var n=e[t];return i(n)?void 0:r(n)}},7854:function(e,t,n){var r=function(e){return e&&e.Math==Math&&e};e.exports=r("object"==typeof globalThis&&globalThis)||r("object"==typeof window&&window)||r("object"==typeof self&&self)||r("object"==typeof n.g&&n.g)||function(){return this}()||this||Function("return this")()},2597:function(e,t,n){var r=n(1702),i=n(7908),s=r({}.hasOwnProperty);e.exports=Object.hasOwn||function(e,t){return s(i(e),t)}},3501:function(e){e.exports={}},4664:function(e,t,n){var r=n(9781),i=n(7293),s=n(317);e.exports=!r&&!i((function(){return 7!=Object.defineProperty(s("div"),"a",{get:function(){return 7}}).a}))},8361:function(e,t,n){var r=n(1702),i=n(7293),s=n(4326),o=Object,a=r("".split);e.exports=i((function(){return!o("z").propertyIsEnumerable(0)}))?function(e){return"String"==s(e)?a(e,""):o(e)}:o},2788:function(e,t,n){var r=n(1702),i=n(614),s=n(5465),o=r(Function.toString);i(s.inspectSource)||(s.inspectSource=function(e){return o(e)}),e.exports=s.inspectSource},9909:function(e,t,n){var r,i,s,o=n(4811),a=n(7854),l=n(111),c=n(8880),u=n(2597),d=n(5465),h=n(6200),p=n(3501),f="Object already initialized",g=a.TypeError,m=a.WeakMap,b=function(e){return s(e)?i(e):r(e,{})},_=function(e){return function(t){var n;if(!l(t)||(n=i(t)).type!==e)throw g("Incompatible receiver, "+e+" required");return n}};if(o||d.state){var y=d.state||(d.state=new m);y.get=y.get,y.has=y.has,y.set=y.set,r=function(e,t){if(y.has(e))throw g(f);return t.facade=e,y.set(e,t),t},i=function(e){return y.get(e)||{}},s=function(e){return y.has(e)}}else{var v=h("state");p[v]=!0,r=function(e,t){if(u(e,v))throw g(f);return t.facade=e,c(e,v,t),t},i=function(e){return u(e,v)?e[v]:{}},s=function(e){return u(e,v)}}e.exports={set:r,get:i,has:s,enforce:b,getterFor:_}},3157:function(e,t,n){var r=n(4326);e.exports=Array.isArray||function(e){return"Array"==r(e)}},614:function(e,t,n){var r=n(4154),i=r.all;e.exports=r.IS_HTMLDDA?function(e){return"function"==typeof e||e===i}:function(e){return"function"==typeof e}},4705:function(e,t,n){var r=n(7293),i=n(614),s=/#|\.prototype\./,o=function(e,t){var n=l[a(e)];return n==u||n!=c&&(i(t)?r(t):!!t)},a=o.normalize=function(e){return String(e).replace(s,".").toLowerCase()},l=o.data={},c=o.NATIVE="N",u=o.POLYFILL="P";e.exports=o},8554:function(e){e.exports=function(e){return null===e||void 0===e}},111:function(e,t,n){var r=n(614),i=n(4154),s=i.all;e.exports=i.IS_HTMLDDA?function(e){return"object"==typeof e?null!==e:r(e)||e===s}:function(e){return"object"==typeof e?null!==e:r(e)}},1913:function(e){e.exports=!1},2190:function(e,t,n){var r=n(5005),i=n(614),s=n(7976),o=n(3307),a=Object;e.exports=o?function(e){return"symbol"==typeof e}:function(e){var t=r("Symbol");return i(t)&&s(t.prototype,a(e))}},6244:function(e,t,n){var r=n(7466);e.exports=function(e){return r(e.length)}},6339:function(e,t,n){var r=n(1702),i=n(7293),s=n(614),o=n(2597),a=n(9781),l=n(6530).CONFIGURABLE,c=n(2788),u=n(9909),d=u.enforce,h=u.get,p=String,f=Object.defineProperty,g=r("".slice),m=r("".replace),b=r([].join),_=a&&!i((function(){return 8!==f((function(){}),"length",{value:8}).length})),y=String(String).split("String"),v=e.exports=function(e,t,n){"Symbol("===g(p(t),0,7)&&(t="["+m(p(t),/^Symbol\(([^)]*)\)/,"$1")+"]"),n&&n.getter&&(t="get "+t),n&&n.setter&&(t="set "+t),(!o(e,"name")||l&&e.name!==t)&&(a?f(e,"name",{value:t,configurable:!0}):e.name=t),_&&n&&o(n,"arity")&&e.length!==n.arity&&f(e,"length",{value:n.arity});try{n&&o(n,"constructor")&&n.constructor?a&&f(e,"prototype",{writable:!1}):e.prototype&&(e.prototype=void 0)}catch(i){}var r=d(e);return o(r,"source")||(r.source=b(y,"string"==typeof t?t:"")),e};Function.prototype.toString=v((function(){return s(this)&&h(this).source||c(this)}),"toString")},4758:function(e){var t=Math.ceil,n=Math.floor;e.exports=Math.trunc||function(e){var r=+e;return(r>0?n:t)(r)}},3070:function(e,t,n){var r=n(9781),i=n(4664),s=n(3353),o=n(9670),a=n(4948),l=TypeError,c=Object.defineProperty,u=Object.getOwnPropertyDescriptor,d="enumerable",h="configurable",p="writable";t.f=r?s?function(e,t,n){if(o(e),t=a(t),o(n),"function"===typeof e&&"prototype"===t&&"value"in n&&p in n&&!n[p]){var r=u(e,t);r&&r[p]&&(e[t]=n.value,n={configurable:h in n?n[h]:r[h],enumerable:d in n?n[d]:r[d],writable:!1})}return c(e,t,n)}:c:function(e,t,n){if(o(e),t=a(t),o(n),i)try{return c(e,t,n)}catch(r){}if("get"in n||"set"in n)throw l("Accessors not supported");return"value"in n&&(e[t]=n.value),e}},1236:function(e,t,n){var r=n(9781),i=n(6916),s=n(5296),o=n(9114),a=n(5656),l=n(4948),c=n(2597),u=n(4664),d=Object.getOwnPropertyDescriptor;t.f=r?d:function(e,t){if(e=a(e),t=l(t),u)try{return d(e,t)}catch(n){}if(c(e,t))return o(!i(s.f,e,t),e[t])}},8006:function(e,t,n){var r=n(6324),i=n(748),s=i.concat("length","prototype");t.f=Object.getOwnPropertyNames||function(e){return r(e,s)}},5181:function(e,t){t.f=Object.getOwnPropertySymbols},7976:function(e,t,n){var r=n(1702);e.exports=r({}.isPrototypeOf)},6324:function(e,t,n){var r=n(1702),i=n(2597),s=n(5656),o=n(1318).indexOf,a=n(3501),l=r([].push);e.exports=function(e,t){var n,r=s(e),c=0,u=[];for(n in r)!i(a,n)&&i(r,n)&&l(u,n);while(t.length>c)i(r,n=t[c++])&&(~o(u,n)||l(u,n));return u}},5296:function(e,t){"use strict";var n={}.propertyIsEnumerable,r=Object.getOwnPropertyDescriptor,i=r&&!n.call({1:2},1);t.f=i?function(e){var t=r(this,e);return!!t&&t.enumerable}:n},2140:function(e,t,n){var r=n(6916),i=n(614),s=n(111),o=TypeError;e.exports=function(e,t){var n,a;if("string"===t&&i(n=e.toString)&&!s(a=r(n,e)))return a;if(i(n=e.valueOf)&&!s(a=r(n,e)))return a;if("string"!==t&&i(n=e.toString)&&!s(a=r(n,e)))return a;throw o("Can't convert object to primitive value")}},3887:function(e,t,n){var r=n(5005),i=n(1702),s=n(8006),o=n(5181),a=n(9670),l=i([].concat);e.exports=r("Reflect","ownKeys")||function(e){var t=s.f(a(e)),n=o.f;return n?l(t,n(e)):t}},4488:function(e,t,n){var r=n(8554),i=TypeError;e.exports=function(e){if(r(e))throw i("Can't call method on "+e);return e}},6200:function(e,t,n){var r=n(2309),i=n(9711),s=r("keys");e.exports=function(e){return s[e]||(s[e]=i(e))}},5465:function(e,t,n){var r=n(7854),i=n(3072),s="__core-js_shared__",o=r[s]||i(s,{});e.exports=o},2309:function(e,t,n){var r=n(1913),i=n(5465);(e.exports=function(e,t){return i[e]||(i[e]=void 0!==t?t:{})})("versions",[]).push({version:"3.30.2",mode:r?"pure":"global",copyright:"© 2014-2023 Denis Pushkarev (zloirock.ru)",license:"https://github.com/zloirock/core-js/blob/v3.30.2/LICENSE",source:"https://github.com/zloirock/core-js"})},6293:function(e,t,n){var r=n(7392),i=n(7293),s=n(7854),o=s.String;e.exports=!!Object.getOwnPropertySymbols&&!i((function(){var e=Symbol();return!o(e)||!(Object(e)instanceof Symbol)||!Symbol.sham&&r&&r<41}))},1400:function(e,t,n){var r=n(9303),i=Math.max,s=Math.min;e.exports=function(e,t){var n=r(e);return n<0?i(n+t,0):s(n,t)}},5656:function(e,t,n){var r=n(8361),i=n(4488);e.exports=function(e){return r(i(e))}},9303:function(e,t,n){var r=n(4758);e.exports=function(e){var t=+e;return t!==t||0===t?0:r(t)}},7466:function(e,t,n){var r=n(9303),i=Math.min;e.exports=function(e){return e>0?i(r(e),9007199254740991):0}},7908:function(e,t,n){var r=n(4488),i=Object;e.exports=function(e){return i(r(e))}},7593:function(e,t,n){var r=n(6916),i=n(111),s=n(2190),o=n(8173),a=n(2140),l=n(5112),c=TypeError,u=l("toPrimitive");e.exports=function(e,t){if(!i(e)||s(e))return e;var n,l=o(e,u);if(l){if(void 0===t&&(t="default"),n=r(l,e,t),!i(n)||s(n))return n;throw c("Can't convert object to primitive value")}return void 0===t&&(t="number"),a(e,t)}},4948:function(e,t,n){var r=n(7593),i=n(2190);e.exports=function(e){var t=r(e,"string");return i(t)?t:t+""}},6330:function(e){var t=String;e.exports=function(e){try{return t(e)}catch(n){return"Object"}}},9711:function(e,t,n){var r=n(1702),i=0,s=Math.random(),o=r(1..toString);e.exports=function(e){return"Symbol("+(void 0===e?"":e)+")_"+o(++i+s,36)}},3307:function(e,t,n){var r=n(6293);e.exports=r&&!Symbol.sham&&"symbol"==typeof Symbol.iterator},3353:function(e,t,n){var r=n(9781),i=n(7293);e.exports=r&&i((function(){return 42!=Object.defineProperty((function(){}),"prototype",{value:42,writable:!1}).prototype}))},4811:function(e,t,n){var r=n(7854),i=n(614),s=r.WeakMap;e.exports=i(s)&&/native code/.test(String(s))},5112:function(e,t,n){var r=n(7854),i=n(2309),s=n(2597),o=n(9711),a=n(6293),l=n(3307),c=r.Symbol,u=i("wks"),d=l?c["for"]||c:c&&c.withoutSetter||o;e.exports=function(e){return s(u,e)||(u[e]=a&&s(c,e)?c[e]:d("Symbol."+e)),u[e]}},7658:function(e,t,n){"use strict";var r=n(2109),i=n(7908),s=n(6244),o=n(3658),a=n(7207),l=n(7293),c=l((function(){return 4294967297!==[].push.call({length:4294967296},1)})),u=function(){try{Object.defineProperty([],"length",{writable:!1}).push()}catch(e){return e instanceof TypeError}},d=c||!u();r({target:"Array",proto:!0,arity:1,forced:d},{push:function(e){var t=i(this),n=s(t),r=arguments.length;a(n+r);for(var l=0;l80*r){s=a=e[0],o=l=e[1];for(var b=r;ba&&(a=u),d>l&&(l=d);h=Math.max(a-s,l-o),h=0!==h?32767/h:0}return i(g,m,r,s,o,h,0),m}function n(e,t,n,r,i){var s,o;if(i===O(e,t,n,r)>0)for(s=t;s=t;s-=r)o=R(s,e[s],e[s+1],o);return o&&E(o,o.next)&&(k(o),o=o.next),o}function r(e,t){if(!e)return e;t||(t=e);var n,r=e;do{if(n=!1,r.steiner||!E(r,r.next)&&0!==v(r.prev,r,r.next))r=r.next;else{if(k(r),r=t=r.prev,r===r.next)break;n=!0}}while(n||r!==t);return t}function i(e,t,n,c,u,d,h){if(e){!h&&d&&f(e,c,u,d);var p,g,m=e;while(e.prev!==e.next)if(p=e.prev,g=e.next,d?o(e,c,u,d):s(e))t.push(p.i/n|0),t.push(e.i/n|0),t.push(g.i/n|0),k(e),e=g.next,m=g.next;else if(e=g,e===m){h?1===h?(e=a(r(e),t,n),i(e,t,n,c,u,d,2)):2===h&&l(e,t,n,c,u,d):i(r(e),t,n,c,u,d,1);break}}}function s(e){var t=e.prev,n=e,r=e.next;if(v(t,n,r)>=0)return!1;var i=t.x,s=n.x,o=r.x,a=t.y,l=n.y,c=r.y,u=is?i>o?i:o:s>o?s:o,p=a>l?a>c?a:c:l>c?l:c,f=r.next;while(f!==t){if(f.x>=u&&f.x<=h&&f.y>=d&&f.y<=p&&_(i,a,s,l,o,c,f.x,f.y)&&v(f.prev,f,f.next)>=0)return!1;f=f.next}return!0}function o(e,t,n,r){var i=e.prev,s=e,o=e.next;if(v(i,s,o)>=0)return!1;var a=i.x,l=s.x,c=o.x,u=i.y,d=s.y,h=o.y,p=al?a>c?a:c:l>c?l:c,b=u>d?u>h?u:h:d>h?d:h,y=m(p,f,t,n,r),E=m(g,b,t,n,r),x=e.prevZ,S=e.nextZ;while(x&&x.z>=y&&S&&S.z<=E){if(x.x>=p&&x.x<=g&&x.y>=f&&x.y<=b&&x!==i&&x!==o&&_(a,u,l,d,c,h,x.x,x.y)&&v(x.prev,x,x.next)>=0)return!1;if(x=x.prevZ,S.x>=p&&S.x<=g&&S.y>=f&&S.y<=b&&S!==i&&S!==o&&_(a,u,l,d,c,h,S.x,S.y)&&v(S.prev,S,S.next)>=0)return!1;S=S.nextZ}while(x&&x.z>=y){if(x.x>=p&&x.x<=g&&x.y>=f&&x.y<=b&&x!==i&&x!==o&&_(a,u,l,d,c,h,x.x,x.y)&&v(x.prev,x,x.next)>=0)return!1;x=x.prevZ}while(S&&S.z<=E){if(S.x>=p&&S.x<=g&&S.y>=f&&S.y<=b&&S!==i&&S!==o&&_(a,u,l,d,c,h,S.x,S.y)&&v(S.prev,S,S.next)>=0)return!1;S=S.nextZ}return!0}function a(e,t,n){var i=e;do{var s=i.prev,o=i.next.next;!E(s,o)&&x(s,i,i.next,o)&&A(s,o)&&A(o,s)&&(t.push(s.i/n|0),t.push(i.i/n|0),t.push(o.i/n|0),k(i),k(i.next),i=e=o),i=i.next}while(i!==e);return r(i)}function l(e,t,n,s,o,a){var l=e;do{var c=l.next.next;while(c!==l.prev){if(l.i!==c.i&&y(l,c)){var u=I(l,c);return l=r(l,l.next),u=r(u,u.next),i(l,t,n,s,o,a,0),void i(u,t,n,s,o,a,0)}c=c.next}l=l.next}while(l!==e)}function c(e,t,r,i){var s,o,a,l,c,h=[];for(s=0,o=t.length;s=r.next.y&&r.next.y!==r.y){var a=r.x+(s-r.y)*(r.next.x-r.x)/(r.next.y-r.y);if(a<=i&&a>o&&(o=a,n=r.x=r.x&&r.x>=u&&i!==r.x&&_(sn.x||r.x===n.x&&p(n,r)))&&(n=r,h=l)),r=r.next}while(r!==c);return n}function p(e,t){return v(e.prev,e,t.prev)<0&&v(t.next,e,e.next)<0}function f(e,t,n,r){var i=e;do{0===i.z&&(i.z=m(i.x,i.y,t,n,r)),i.prevZ=i.prev,i.nextZ=i.next,i=i.next}while(i!==e);i.prevZ.nextZ=null,i.prevZ=null,g(i)}function g(e){var t,n,r,i,s,o,a,l,c=1;do{n=e,e=null,s=null,o=0;while(n){for(o++,r=n,a=0,t=0;t0||l>0&&r)0!==a&&(0===l||!r||n.z<=r.z)?(i=n,n=n.nextZ,a--):(i=r,r=r.nextZ,l--),s?s.nextZ=i:e=i,i.prevZ=s,s=i;n=r}s.nextZ=null,c*=2}while(o>1);return e}function m(e,t,n,r,i){return e=(e-n)*i|0,t=(t-r)*i|0,e=16711935&(e|e<<8),e=252645135&(e|e<<4),e=858993459&(e|e<<2),e=1431655765&(e|e<<1),t=16711935&(t|t<<8),t=252645135&(t|t<<4),t=858993459&(t|t<<2),t=1431655765&(t|t<<1),e|t<<1}function b(e){var t=e,n=e;do{(t.x=(e-o)*(s-a)&&(e-o)*(r-a)>=(n-o)*(t-a)&&(n-o)*(s-a)>=(i-o)*(r-a)}function y(e,t){return e.next.i!==t.i&&e.prev.i!==t.i&&!T(e,t)&&(A(e,t)&&A(t,e)&&C(e,t)&&(v(e.prev,e,t.prev)||v(e,t.prev,t))||E(e,t)&&v(e.prev,e,e.next)>0&&v(t.prev,t,t.next)>0)}function v(e,t,n){return(t.y-e.y)*(n.x-t.x)-(t.x-e.x)*(n.y-t.y)}function E(e,t){return e.x===t.x&&e.y===t.y}function x(e,t,n,r){var i=w(v(e,t,n)),s=w(v(e,t,r)),o=w(v(n,r,e)),a=w(v(n,r,t));return i!==s&&o!==a||(!(0!==i||!S(e,n,t))||(!(0!==s||!S(e,r,t))||(!(0!==o||!S(n,e,r))||!(0!==a||!S(n,t,r)))))}function S(e,t,n){return t.x<=Math.max(e.x,n.x)&&t.x>=Math.min(e.x,n.x)&&t.y<=Math.max(e.y,n.y)&&t.y>=Math.min(e.y,n.y)}function w(e){return e>0?1:e<0?-1:0}function T(e,t){var n=e;do{if(n.i!==e.i&&n.next.i!==e.i&&n.i!==t.i&&n.next.i!==t.i&&x(n,n.next,e,t))return!0;n=n.next}while(n!==e);return!1}function A(e,t){return v(e.prev,e,e.next)<0?v(e,t,e.next)>=0&&v(e,e.prev,t)>=0:v(e,t,e.prev)<0||v(e,e.next,t)<0}function C(e,t){var n=e,r=!1,i=(e.x+t.x)/2,s=(e.y+t.y)/2;do{n.y>s!==n.next.y>s&&n.next.y!==n.y&&i<(n.next.x-n.x)*(s-n.y)/(n.next.y-n.y)+n.x&&(r=!r),n=n.next}while(n!==e);return r}function I(e,t){var n=new P(e.i,e.x,e.y),r=new P(t.i,t.x,t.y),i=e.next,s=t.prev;return e.next=t,t.prev=e,n.next=i,i.prev=n,r.next=n,n.prev=r,s.next=r,r.prev=s,r}function R(e,t,n,r){var i=new P(e,t,n);return r?(i.next=r.next,i.prev=r,r.next.prev=i,r.next=i):(i.prev=i,i.next=i),i}function k(e){e.next.prev=e.prev,e.prev.next=e.next,e.prevZ&&(e.prevZ.nextZ=e.nextZ),e.nextZ&&(e.nextZ.prevZ=e.prevZ)}function P(e,t,n){this.i=e,this.x=t,this.y=n,this.prev=null,this.next=null,this.z=0,this.prevZ=null,this.nextZ=null,this.steiner=!1}function O(e,t,n,r){for(var i=0,s=t,o=n-r;s0&&(r+=e[i-1].length,n.holes.push(r))}return n}},6729:function(e){"use strict";var t=Object.prototype.hasOwnProperty,n="~";function r(){}function i(e,t,n){this.fn=e,this.context=t,this.once=n||!1}function s(e,t,r,s,o){if("function"!==typeof r)throw new TypeError("The listener must be a function");var a=new i(r,s||e,o),l=n?n+t:t;return e._events[l]?e._events[l].fn?e._events[l]=[e._events[l],a]:e._events[l].push(a):(e._events[l]=a,e._eventsCount++),e}function o(e,t){0===--e._eventsCount?e._events=new r:delete e._events[t]}function a(){this._events=new r,this._eventsCount=0}Object.create&&(r.prototype=Object.create(null),(new r).__proto__||(n=!1)),a.prototype.eventNames=function(){var e,r,i=[];if(0===this._eventsCount)return i;for(r in e=this._events)t.call(e,r)&&i.push(n?r.slice(1):r);return Object.getOwnPropertySymbols?i.concat(Object.getOwnPropertySymbols(e)):i},a.prototype.listeners=function(e){var t=n?n+e:e,r=this._events[t];if(!r)return[];if(r.fn)return[r.fn];for(var i=0,s=r.length,o=new Array(s);i>2]|=e[s]<>6,l[i++]=128|63&r):r<55296||r>=57344?(l[i++]=224|r>>12,l[i++]=128|r>>6&63,l[i++]=128|63&r):(r=65536+((1023&r)<<10|1023&e.charCodeAt(++s)),l[i++]=240|r>>18,l[i++]=128|r>>12&63,l[i++]=128|r>>6&63,l[i++]=128|63&r);else for(i=this.start;s>2]|=r<>2]|=(192|r>>6)<>2]|=(128|63&r)<=57344?(a[i>>2]|=(224|r>>12)<>2]|=(128|r>>6&63)<>2]|=(128|63&r)<>2]|=(240|r>>18)<>2]|=(128|r>>12&63)<>2]|=(128|r>>6&63)<>2]|=(128|63&r)<=64?(this.start=i-64,this.hash(),this.hashed=!0):this.start=i}return this.bytes>4294967295&&(this.hBytes+=this.bytes/4294967296<<0,this.bytes=this.bytes%4294967296),this}},Md5.prototype.finalize=function(){if(!this.finalized){this.finalized=!0;var e=this.blocks,t=this.lastByteIndex;e[t>>2]|=EXTRA[3&t],t>=56&&(this.hashed||this.hash(),e[0]=e[16],e[16]=e[1]=e[2]=e[3]=e[4]=e[5]=e[6]=e[7]=e[8]=e[9]=e[10]=e[11]=e[12]=e[13]=e[14]=e[15]=0),e[14]=this.bytes<<3,e[15]=this.hBytes<<3|this.bytes>>>29,this.hash()}},Md5.prototype.hash=function(){var e,t,n,r,i,s,o=this.blocks;this.first?(e=o[0]-680876937,e=(e<<7|e>>>25)-271733879<<0,r=(-1732584194^2004318071&e)+o[1]-117830708,r=(r<<12|r>>>20)+e<<0,n=(-271733879^r&(-271733879^e))+o[2]-1126478375,n=(n<<17|n>>>15)+r<<0,t=(e^n&(r^e))+o[3]-1316259209,t=(t<<22|t>>>10)+n<<0):(e=this.h0,t=this.h1,n=this.h2,r=this.h3,e+=(r^t&(n^r))+o[0]-680876936,e=(e<<7|e>>>25)+t<<0,r+=(n^e&(t^n))+o[1]-389564586,r=(r<<12|r>>>20)+e<<0,n+=(t^r&(e^t))+o[2]+606105819,n=(n<<17|n>>>15)+r<<0,t+=(e^n&(r^e))+o[3]-1044525330,t=(t<<22|t>>>10)+n<<0),e+=(r^t&(n^r))+o[4]-176418897,e=(e<<7|e>>>25)+t<<0,r+=(n^e&(t^n))+o[5]+1200080426,r=(r<<12|r>>>20)+e<<0,n+=(t^r&(e^t))+o[6]-1473231341,n=(n<<17|n>>>15)+r<<0,t+=(e^n&(r^e))+o[7]-45705983,t=(t<<22|t>>>10)+n<<0,e+=(r^t&(n^r))+o[8]+1770035416,e=(e<<7|e>>>25)+t<<0,r+=(n^e&(t^n))+o[9]-1958414417,r=(r<<12|r>>>20)+e<<0,n+=(t^r&(e^t))+o[10]-42063,n=(n<<17|n>>>15)+r<<0,t+=(e^n&(r^e))+o[11]-1990404162,t=(t<<22|t>>>10)+n<<0,e+=(r^t&(n^r))+o[12]+1804603682,e=(e<<7|e>>>25)+t<<0,r+=(n^e&(t^n))+o[13]-40341101,r=(r<<12|r>>>20)+e<<0,n+=(t^r&(e^t))+o[14]-1502002290,n=(n<<17|n>>>15)+r<<0,t+=(e^n&(r^e))+o[15]+1236535329,t=(t<<22|t>>>10)+n<<0,e+=(n^r&(t^n))+o[1]-165796510,e=(e<<5|e>>>27)+t<<0,r+=(t^n&(e^t))+o[6]-1069501632,r=(r<<9|r>>>23)+e<<0,n+=(e^t&(r^e))+o[11]+643717713,n=(n<<14|n>>>18)+r<<0,t+=(r^e&(n^r))+o[0]-373897302,t=(t<<20|t>>>12)+n<<0,e+=(n^r&(t^n))+o[5]-701558691,e=(e<<5|e>>>27)+t<<0,r+=(t^n&(e^t))+o[10]+38016083,r=(r<<9|r>>>23)+e<<0,n+=(e^t&(r^e))+o[15]-660478335,n=(n<<14|n>>>18)+r<<0,t+=(r^e&(n^r))+o[4]-405537848,t=(t<<20|t>>>12)+n<<0,e+=(n^r&(t^n))+o[9]+568446438,e=(e<<5|e>>>27)+t<<0,r+=(t^n&(e^t))+o[14]-1019803690,r=(r<<9|r>>>23)+e<<0,n+=(e^t&(r^e))+o[3]-187363961,n=(n<<14|n>>>18)+r<<0,t+=(r^e&(n^r))+o[8]+1163531501,t=(t<<20|t>>>12)+n<<0,e+=(n^r&(t^n))+o[13]-1444681467,e=(e<<5|e>>>27)+t<<0,r+=(t^n&(e^t))+o[2]-51403784,r=(r<<9|r>>>23)+e<<0,n+=(e^t&(r^e))+o[7]+1735328473,n=(n<<14|n>>>18)+r<<0,t+=(r^e&(n^r))+o[12]-1926607734,t=(t<<20|t>>>12)+n<<0,i=t^n,e+=(i^r)+o[5]-378558,e=(e<<4|e>>>28)+t<<0,r+=(i^e)+o[8]-2022574463,r=(r<<11|r>>>21)+e<<0,s=r^e,n+=(s^t)+o[11]+1839030562,n=(n<<16|n>>>16)+r<<0,t+=(s^n)+o[14]-35309556,t=(t<<23|t>>>9)+n<<0,i=t^n,e+=(i^r)+o[1]-1530992060,e=(e<<4|e>>>28)+t<<0,r+=(i^e)+o[4]+1272893353,r=(r<<11|r>>>21)+e<<0,s=r^e,n+=(s^t)+o[7]-155497632,n=(n<<16|n>>>16)+r<<0,t+=(s^n)+o[10]-1094730640,t=(t<<23|t>>>9)+n<<0,i=t^n,e+=(i^r)+o[13]+681279174,e=(e<<4|e>>>28)+t<<0,r+=(i^e)+o[0]-358537222,r=(r<<11|r>>>21)+e<<0,s=r^e,n+=(s^t)+o[3]-722521979,n=(n<<16|n>>>16)+r<<0,t+=(s^n)+o[6]+76029189,t=(t<<23|t>>>9)+n<<0,i=t^n,e+=(i^r)+o[9]-640364487,e=(e<<4|e>>>28)+t<<0,r+=(i^e)+o[12]-421815835,r=(r<<11|r>>>21)+e<<0,s=r^e,n+=(s^t)+o[15]+530742520,n=(n<<16|n>>>16)+r<<0,t+=(s^n)+o[2]-995338651,t=(t<<23|t>>>9)+n<<0,e+=(n^(t|~r))+o[0]-198630844,e=(e<<6|e>>>26)+t<<0,r+=(t^(e|~n))+o[7]+1126891415,r=(r<<10|r>>>22)+e<<0,n+=(e^(r|~t))+o[14]-1416354905,n=(n<<15|n>>>17)+r<<0,t+=(r^(n|~e))+o[5]-57434055,t=(t<<21|t>>>11)+n<<0,e+=(n^(t|~r))+o[12]+1700485571,e=(e<<6|e>>>26)+t<<0,r+=(t^(e|~n))+o[3]-1894986606,r=(r<<10|r>>>22)+e<<0,n+=(e^(r|~t))+o[10]-1051523,n=(n<<15|n>>>17)+r<<0,t+=(r^(n|~e))+o[1]-2054922799,t=(t<<21|t>>>11)+n<<0,e+=(n^(t|~r))+o[8]+1873313359,e=(e<<6|e>>>26)+t<<0,r+=(t^(e|~n))+o[15]-30611744,r=(r<<10|r>>>22)+e<<0,n+=(e^(r|~t))+o[6]-1560198380,n=(n<<15|n>>>17)+r<<0,t+=(r^(n|~e))+o[13]+1309151649,t=(t<<21|t>>>11)+n<<0,e+=(n^(t|~r))+o[4]-145523070,e=(e<<6|e>>>26)+t<<0,r+=(t^(e|~n))+o[11]-1120210379,r=(r<<10|r>>>22)+e<<0,n+=(e^(r|~t))+o[2]+718787259,n=(n<<15|n>>>17)+r<<0,t+=(r^(n|~e))+o[9]-343485551,t=(t<<21|t>>>11)+n<<0,this.first?(this.h0=e+1732584193<<0,this.h1=t-271733879<<0,this.h2=n-1732584194<<0,this.h3=r+271733878<<0,this.first=!1):(this.h0=this.h0+e<<0,this.h1=this.h1+t<<0,this.h2=this.h2+n<<0,this.h3=this.h3+r<<0)},Md5.prototype.hex=function(){this.finalize();var e=this.h0,t=this.h1,n=this.h2,r=this.h3;return HEX_CHARS[e>>4&15]+HEX_CHARS[15&e]+HEX_CHARS[e>>12&15]+HEX_CHARS[e>>8&15]+HEX_CHARS[e>>20&15]+HEX_CHARS[e>>16&15]+HEX_CHARS[e>>28&15]+HEX_CHARS[e>>24&15]+HEX_CHARS[t>>4&15]+HEX_CHARS[15&t]+HEX_CHARS[t>>12&15]+HEX_CHARS[t>>8&15]+HEX_CHARS[t>>20&15]+HEX_CHARS[t>>16&15]+HEX_CHARS[t>>28&15]+HEX_CHARS[t>>24&15]+HEX_CHARS[n>>4&15]+HEX_CHARS[15&n]+HEX_CHARS[n>>12&15]+HEX_CHARS[n>>8&15]+HEX_CHARS[n>>20&15]+HEX_CHARS[n>>16&15]+HEX_CHARS[n>>28&15]+HEX_CHARS[n>>24&15]+HEX_CHARS[r>>4&15]+HEX_CHARS[15&r]+HEX_CHARS[r>>12&15]+HEX_CHARS[r>>8&15]+HEX_CHARS[r>>20&15]+HEX_CHARS[r>>16&15]+HEX_CHARS[r>>28&15]+HEX_CHARS[r>>24&15]},Md5.prototype.toString=Md5.prototype.hex,Md5.prototype.digest=function(){this.finalize();var e=this.h0,t=this.h1,n=this.h2,r=this.h3;return[255&e,e>>8&255,e>>16&255,e>>24&255,255&t,t>>8&255,t>>16&255,t>>24&255,255&n,n>>8&255,n>>16&255,n>>24&255,255&r,r>>8&255,r>>16&255,r>>24&255]},Md5.prototype.array=Md5.prototype.digest,Md5.prototype.arrayBuffer=function(){this.finalize();var e=new ArrayBuffer(16),t=new Uint32Array(e);return t[0]=this.h0,t[1]=this.h1,t[2]=this.h2,t[3]=this.h3,e},Md5.prototype.buffer=Md5.prototype.arrayBuffer,Md5.prototype.base64=function(){for(var e,t,n,r="",i=this.array(),s=0;s<15;)e=i[s++],t=i[s++],n=i[s++],r+=BASE64_ENCODE_CHAR[e>>>2]+BASE64_ENCODE_CHAR[63&(e<<4|t>>>4)]+BASE64_ENCODE_CHAR[63&(t<<2|n>>>6)]+BASE64_ENCODE_CHAR[63&n];return e=i[s],r+=BASE64_ENCODE_CHAR[e>>>2]+BASE64_ENCODE_CHAR[e<<4&63]+"==",r};var exports=createMethod();COMMON_JS?module.exports=exports:(root.md5=exports,AMD&&(__WEBPACK_AMD_DEFINE_RESULT__=function(){return exports}.call(exports,__webpack_require__,exports,module),void 0===__WEBPACK_AMD_DEFINE_RESULT__||(module.exports=__WEBPACK_AMD_DEFINE_RESULT__)))})()},2288:function(e){"use strict";e.exports={angry:[">:(",">:-("],blush:[':")',':-")'],broken_heart:["=0&&(t[n]=r[n]),t}),{})),n=Object.keys(e.shortcuts).reduce((function(t,n){return r[n]?Array.isArray(e.shortcuts[n])?(e.shortcuts[n].forEach((function(e){t[e]=n})),t):(t[e.shortcuts[n]]=n,t):t}),{});var i=Object.keys(r).map((function(e){return":"+e+":"})).concat(Object.keys(n)).sort().reverse().map((function(e){return t(e)})).join("|"),s=RegExp(i),o=RegExp(i,"g");return{defs:r,shortcuts:n,scanRE:s,replaceRE:o}}},8950:function(e){"use strict";e.exports=function(e,t){return e[t].content}},287:function(e){"use strict";e.exports=function(e,t,n,r,i){var s=e.utils.arrayReplaceAt,o=e.utils.lib.ucmicro,a=new RegExp([o.Z.source,o.P.source,o.Cc.source].join("|"));function l(e,r,s){var o,l=0,c=[];return e.replace(i,(function(r,i,u){var d;if(n.hasOwnProperty(r)){if(d=n[r],i>0&&!a.test(u[i-1]))return;if(i+r.lengthl&&(o=new s("text","",0),o.content=e.slice(l,i),c.push(o)),o=new s("emoji","",0),o.markup=d,o.content=t[d],c.push(o),l=i+r.length})),l=0;t--)a=o[t],"link_open"!==a.type&&"link_close"!==a.type||"auto"===a.info&&(u-=a.nesting),"text"===a.type&&0===u&&r.test(a.content)&&(c[n].children=o=s(o,t,l(a.content,a.level,e.Token)))}}},6308:function(e,t,n){"use strict";var r=n(2676),i=n(2288),s=n(8950),o=n(287),a=n(7701);e.exports=function(e,t){var n={defs:r,shortcuts:i,enabled:[]},l=a(e.utils.assign({},n,t||{}));e.renderer.rules.emoji=s,e.core.ruler.push("emoji",o(e,l.defs,l.shortcuts,l.scanRE,l.replaceRE))}},6495:function(e,t,n){"use strict";n.d(t,{_Y:function(){return dn}});var r=n(4038),i=n(4260),s=n(2848),o=n(8276),a=Math.pow,l=(e,t,n)=>new Promise(((r,i)=>{var s=e=>{try{a(n.next(e))}catch(t){i(t)}},o=e=>{try{a(n.throw(e))}catch(t){i(t)}},a=e=>e.done?r(e.value):Promise.resolve(e.value).then(s,o);a((n=n.apply(e,t)).next())}));class c{constructor(){this._breathParameters=[],this._currentTime=0}static create(){return new c}setParameters(e){this._breathParameters=e}getParameters(){return this._breathParameters}updateParameters(e,t){this._currentTime+=t;const n=2*this._currentTime*3.14159;for(let r=0;r=1&&(r=1,this._blinkingState=p.EyeState_Closed,this._stateStartTimeSeconds=this._userTimeSeconds),n=1-r;break;case p.EyeState_Closed:r=(this._userTimeSeconds-this._stateStartTimeSeconds)/this._closedSeconds,r>=1&&(this._blinkingState=p.EyeState_Opening,this._stateStartTimeSeconds=this._userTimeSeconds),n=0;break;case p.EyeState_Opening:r=(this._userTimeSeconds-this._stateStartTimeSeconds)/this._openingSeconds,r>=1&&(r=1,this._blinkingState=p.EyeState_Interval,this._nextBlinkingTime=this.determinNextBlinkingTiming()),n=r;break;case p.EyeState_Interval:this._nextBlinkingTime(e[e["EyeState_First"]=0]="EyeState_First",e[e["EyeState_Interval"]=1]="EyeState_Interval",e[e["EyeState_Closing"]=2]="EyeState_Closing",e[e["EyeState_Closed"]=3]="EyeState_Closed",e[e["EyeState_Opening"]=4]="EyeState_Opening",e))(p||{});const f=.001,g=.5;class m{static create(e){const t=new m;"number"===typeof e.FadeInTime&&(t._fadeTimeSeconds=e.FadeInTime,t._fadeTimeSeconds<=0&&(t._fadeTimeSeconds=g));const n=e.Groups,r=n.length;for(let i=0;if){if(i>=0)break;i=l,s=e.getPartOpacityByIndex(n),s+=t/this._fadeTimeSeconds,s>1&&(s=1)}}i<0&&(i=0,s=1);for(let l=n;la&&(n=1-a/(1-s)),r>n&&(r=n),e.setPartOpacityByIndex(t,r)}}}constructor(){this._fadeTimeSeconds=g,this._lastModel=void 0,this._partGroups=[],this._partGroupCounts=[]}}class b{constructor(e){this.parameterIndex=0,this.partIndex=0,this.partId="",this.link=[],void 0!=e&&this.assignment(e)}assignment(e){return this.partId=e.partId,this.link=e.link.map((e=>e.clone())),this}initialize(e){this.parameterIndex=e.getParameterIndex(this.partId),this.partIndex=e.getPartIndex(this.partId),e.setParameterValueByIndex(this.parameterIndex,1)}clone(){const e=new b;return e.partId=this.partId,e.parameterIndex=this.parameterIndex,e.partIndex=this.partIndex,e.link=this.link.map((e=>e.clone())),e}}class _{constructor(e,t){this.x=e||0,this.y=t||0}add(e){const t=new _(0,0);return t.x=this.x+e.x,t.y=this.y+e.y,t}substract(e){const t=new _(0,0);return t.x=this.x-e.x,t.y=this.y-e.y,t}multiply(e){const t=new _(0,0);return t.x=this.x*e.x,t.y=this.y*e.y,t}multiplyByScaler(e){return this.multiply(new _(e,e))}division(e){const t=new _(0,0);return t.x=this.x/e.x,t.y=this.y/e.y,t}divisionByScalar(e){return this.division(new _(e,e))}getLength(){return Math.sqrt(this.x*this.x+this.y*this.y)}getDistanceWith(e){return Math.sqrt((this.x-e.x)*(this.x-e.x)+(this.y-e.y)*(this.y-e.y))}dot(e){return this.x*e.x+this.y*e.y}normalize(){const e=Math.pow(this.x*this.x+this.y*this.y,.5);this.x=this.x/e,this.y=this.y/e}isEqual(e){return this.x==e.x&&this.y==e.y}isNotEqual(e){return!this.isEqual(e)}}const y=class{static range(e,t,n){return en&&(e=n),e}static sin(e){return Math.sin(e)}static cos(e){return Math.cos(e)}static abs(e){return Math.abs(e)}static sqrt(e){return Math.sqrt(e)}static cbrt(e){if(0===e)return e;let t=e;const n=t<0;let r;return n&&(t=-t),t===1/0?r=1/0:(r=Math.exp(Math.log(t)/3),r=(t/(r*r)+2*r)/3),n?-r:r}static getEasingSine(e){return e<0?0:e>1?1:.5-.5*this.cos(e*Math.PI)}static max(e,t){return e>t?e:t}static min(e,t){return e>t?t:e}static degreesToRadian(e){return e/180*Math.PI}static radianToDegrees(e){return 180*e/Math.PI}static directionToRadian(e,t){const n=Math.atan2(t.y,t.x),r=Math.atan2(e.y,e.x);let i=n-r;while(i<-Math.PI)i+=2*Math.PI;while(i>Math.PI)i-=2*Math.PI;return i}static directionToDegrees(e,t){const n=this.directionToRadian(e,t);let r=this.radianToDegrees(n);return t.x-e.x>0&&(r=-r),r}static radianToDirection(e){const t=new _;return t.x=this.sin(e),t.y=this.cos(e),t}static quadraticEquation(e,t,n){return this.abs(e)1&&(e=1),t<0?t=0:t>1&&(t=1),n<0?n=0:n>1&&(n=1),r<0?r=0:r>1&&(r=1),this._modelColor.R=e,this._modelColor.G=t,this._modelColor.B=n,this._modelColor.A=r}getModelColor(){return Object.assign({},this._modelColor)}setIsPremultipliedAlpha(e){this._isPremultipliedAlpha=e}isPremultipliedAlpha(){return this._isPremultipliedAlpha}setIsCulling(e){this._isCulling=e}isCulling(){return this._isCulling}setAnisotropy(e){this._anisortopy=e}getAnisotropy(){return this._anisortopy}getModel(){return this._model}constructor(){this._isCulling=!1,this._isPremultipliedAlpha=!1,this._anisortopy=0,this._modelColor=new w,this._mvpMatrix4x4=new E,this._mvpMatrix4x4.loadIdentity()}}var S=(e=>(e[e["CubismBlendMode_Normal"]=0]="CubismBlendMode_Normal",e[e["CubismBlendMode_Additive"]=1]="CubismBlendMode_Additive",e[e["CubismBlendMode_Multiplicative"]=2]="CubismBlendMode_Multiplicative",e))(S||{});class w{constructor(){this.R=1,this.G=1,this.B=1,this.A=1}}let T,A=!1,C=!1;const I={vertexOffset:0,vertexStep:2};class R{static startUp(e){if(A)return N("CubismFramework.startUp() is already done."),A;if(Live2DCubismCore._isStarted)return A=!0,!0;if(Live2DCubismCore._isStarted=!0,T=e,T&&Live2DCubismCore.Logging.csmSetLogFunction(T.logFunction),A=!0,A){const e=Live2DCubismCore.Version.csmGetVersion(),t=(4278190080&e)>>24,n=(16711680&e)>>16,r=65535&e,i=e;N("Live2D Cubism Core version: {0}.{1}.{2} ({3})",("00"+t).slice(-2),("00"+n).slice(-2),("0000"+r).slice(-4),i)}return N("CubismFramework.startUp() is complete."),A}static cleanUp(){A=!1,C=!1,T=void 0}static initialize(){A?C?M("CubismFramework.initialize() skipped, already initialized."):(C=!0,N("CubismFramework.initialize() is complete.")):M("CubismFramework is not started.")}static dispose(){A?C?(x.staticRelease(),C=!1,N("CubismFramework.dispose() is complete.")):M("CubismFramework.dispose() skipped, not initialized."):M("CubismFramework is not started.")}static isStarted(){return A}static isInitialized(){return C}static coreLogFunction(e){Live2DCubismCore.Logging.csmGetLogFunction()&&Live2DCubismCore.Logging.csmGetLogFunction()(e)}static getLoggingLevel(){return null!=T?T.loggingLevel:k.LogLevel_Off}constructor(){}}var k=(e=>(e[e["LogLevel_Verbose"]=0]="LogLevel_Verbose",e[e["LogLevel_Debug"]=1]="LogLevel_Debug",e[e["LogLevel_Info"]=2]="LogLevel_Info",e[e["LogLevel_Warning"]=3]="LogLevel_Warning",e[e["LogLevel_Error"]=4]="LogLevel_Error",e[e["LogLevel_Off"]=5]="LogLevel_Off",e))(k||{});const P=()=>{};function O(e,...t){L.print(k.LogLevel_Debug,"[CSM][D]"+e+"\n",t)}function N(e,...t){L.print(k.LogLevel_Info,"[CSM][I]"+e+"\n",t)}function M(e,...t){L.print(k.LogLevel_Warning,"[CSM][W]"+e+"\n",t)}function D(e,...t){L.print(k.LogLevel_Error,"[CSM][E]"+e+"\n",t)}class L{static print(e,t,n){if(en[t]));r(i)}static dumpBytes(e,t,n){for(let r=0;r0?this.print(e,"\n"):r%8==0&&r>0&&this.print(e," "),this.print(e,"{0} ",[255&t[r]]);this.print(e,"\n")}constructor(){}}class F{update(){this._model.update(),this._model.drawables.resetDynamicFlags()}getCanvasWidth(){return null==this._model?0:this._model.canvasinfo.CanvasWidth/this._model.canvasinfo.PixelsPerUnit}getCanvasHeight(){return null==this._model?0:this._model.canvasinfo.CanvasHeight/this._model.canvasinfo.PixelsPerUnit}saveParameters(){const e=this._model.parameters.count,t=this._savedParameters.length;for(let n=0;nt&&(t=this._model.parameters.minimumValues[e]),this._parameterValues[e]=1==n?t:this._parameterValues[e]=this._parameterValues[e]*(1-n)+t*n)}setParameterValueById(e,t,n=1){const r=this.getParameterIndex(e);this.setParameterValueByIndex(r,t,n)}addParameterValueByIndex(e,t,n=1){this.setParameterValueByIndex(e,this.getParameterValueByIndex(e)+t*n)}addParameterValueById(e,t,n=1){const r=this.getParameterIndex(e);this.addParameterValueByIndex(r,t,n)}multiplyParameterValueById(e,t,n=1){const r=this.getParameterIndex(e);this.multiplyParameterValueByIndex(r,t,n)}multiplyParameterValueByIndex(e,t,n=1){this.setParameterValueByIndex(e,this.getParameterValueByIndex(e)*(1+(t-1)*n))}getDrawableIds(){return this._drawableIds.slice()}getDrawableIndex(e){const t=this._model.drawables.count;for(let n=0;nt&&(e=t);for(let n=0;n0&&t.getEndTime()(e[e["ExpressionBlendType_Add"]=0]="ExpressionBlendType_Add",e[e["ExpressionBlendType_Multiply"]=1]="ExpressionBlendType_Multiply",e[e["ExpressionBlendType_Overwrite"]=2]="ExpressionBlendType_Overwrite",e))(H||{});(e=>{e.supportMoreMaskDivisions=!0,e.setOpacityFromMotion=!1})(z||(z={}));var V=(e=>(e[e["CubismMotionCurveTarget_Model"]=0]="CubismMotionCurveTarget_Model",e[e["CubismMotionCurveTarget_Parameter"]=1]="CubismMotionCurveTarget_Parameter",e[e["CubismMotionCurveTarget_PartOpacity"]=2]="CubismMotionCurveTarget_PartOpacity",e))(V||{}),j=(e=>(e[e["CubismMotionSegmentType_Linear"]=0]="CubismMotionSegmentType_Linear",e[e["CubismMotionSegmentType_Bezier"]=1]="CubismMotionSegmentType_Bezier",e[e["CubismMotionSegmentType_Stepped"]=2]="CubismMotionSegmentType_Stepped",e[e["CubismMotionSegmentType_InverseStepped"]=3]="CubismMotionSegmentType_InverseStepped",e))(j||{});class W{constructor(e=0,t=0){this.time=e,this.value=t}}class q{constructor(){this.basePointIndex=0,this.segmentType=0}}class X{constructor(){this.id="",this.type=0,this.segmentCount=0,this.baseSegmentIndex=0,this.fadeInTime=0,this.fadeOutTime=0}}class Y{constructor(){this.fireTime=0,this.value=""}}class K{constructor(){this.duration=0,this.loop=!1,this.curveCount=0,this.eventCount=0,this.fps=0,this.curves=[],this.segments=[],this.points=[],this.events=[]}}class Z{constructor(e){this._json=e}release(){this._json=void 0}getMotionDuration(){return this._json.Meta.Duration}isMotionLoop(){return this._json.Meta.Loop||!1}getEvaluationOptionFlag(e){return Q.EvaluationOptionFlag_AreBeziersRistricted==e&&!!this._json.Meta.AreBeziersRestricted}getMotionCurveCount(){return this._json.Meta.CurveCount}getMotionFps(){return this._json.Meta.Fps}getMotionTotalSegmentCount(){return this._json.Meta.TotalSegmentCount}getMotionTotalPointCount(){return this._json.Meta.TotalPointCount}getMotionFadeInTime(){return this._json.Meta.FadeInTime}getMotionFadeOutTime(){return this._json.Meta.FadeOutTime}getMotionCurveTarget(e){return this._json.Curves[e].Target}getMotionCurveId(e){return this._json.Curves[e].Id}getMotionCurveFadeInTime(e){return this._json.Curves[e].FadeInTime}getMotionCurveFadeOutTime(e){return this._json.Curves[e].FadeOutTime}getMotionCurveSegmentCount(e){return this._json.Curves[e].Segments.length}getMotionCurveSegment(e,t){return this._json.Curves[e].Segments[t]}getEventCount(){return this._json.Meta.UserDataCount||0}getTotalEventValueSize(){return this._json.Meta.TotalUserDataSize}getEventTime(e){return this._json.UserData[e].Time}getEventValue(e){return this._json.UserData[e].Value}}var Q=(e=>(e[e["EvaluationOptionFlag_AreBeziersRistricted"]=0]="EvaluationOptionFlag_AreBeziersRistricted",e))(Q||{});const J="EyeBlink",ee="LipSync",te="Model",ne="Parameter",re="PartOpacity",ie=!1;function se(e,t,n){const r=new W;return r.time=e.time+(t.time-e.time)*n,r.value=e.value+(t.value-e.value)*n,r}function oe(e,t){let n=(t-e[0].time)/(e[1].time-e[0].time);return n<0&&(n=0),e[0].value+(e[1].value-e[0].value)*n}function ae(e,t){let n=(t-e[0].time)/(e[3].time-e[0].time);n<0&&(n=0);const r=se(e[0],e[1],n),i=se(e[1],e[2],n),s=se(e[2],e[3],n),o=se(r,i,n),a=se(i,s,n);return se(o,a,n).value}function le(e,t){const n=t,r=e[0].time,i=e[3].time,s=e[1].time,o=e[2].time,a=i-3*o+3*s-r,l=3*o-6*s+3*r,c=3*s-3*r,u=r-n,d=v.cardanoAlgorithmForBezier(a,l,c,u),h=se(e[0],e[1],d),p=se(e[1],e[2],d),f=se(e[2],e[3],d),g=se(h,p,d),m=se(p,f,d);return se(g,m,d).value}function ce(e,t){return e[0].value}function ue(e,t){return e[1].value}function de(e,t,n){const r=e.curves[t];let i=-1;const s=r.baseSegmentIndex+r.segmentCount;let o=0;for(let l=r.baseSegmentIndex;ln){i=l;break}if(-1==i)return e.points[o].value;const a=e.segments[i];return a.evaluate(e.points.slice(a.basePointIndex),n)}class he extends U{constructor(){super(),this._eyeBlinkParameterIds=[],this._lipSyncParameterIds=[],this._sourceFrameRate=30,this._loopDurationSeconds=-1,this._isLoop=!1,this._isLoopFadeIn=!0,this._lastWeight=0}static create(e,t){const n=new he;return n.parse(e),n._sourceFrameRate=n._motionData.fps,n._loopDurationSeconds=n._motionData.duration,n._onFinishedMotion=t,n}doUpdateParameters(e,t,n,r){null==this._modelCurveIdEyeBlink&&(this._modelCurveIdEyeBlink=J),null==this._modelCurveIdLipSync&&(this._modelCurveIdLipSync=ee);let i=t-r.getStartTime();i<0&&(i=0);let s=Number.MAX_VALUE,o=Number.MAX_VALUE;const a=64;let l=0,c=0;this._eyeBlinkParameterIds.length>a&&O("too many eye blink targets : {0}",this._eyeBlinkParameterIds.length),this._lipSyncParameterIds.length>a&&O("too many lip sync targets : {0}",this._lipSyncParameterIds.length);const u=this._fadeInSeconds<=0?1:v.getEasingSine((t-r.getFadeInStartTime())/this._fadeInSeconds),d=this._fadeOutSeconds<=0||r.getEndTime()<0?1:v.getEasingSine((r.getEndTime()-t)/this._fadeOutSeconds);let h,p,f,g=i;if(this._isLoop)while(g>this._motionData.duration)g-=this._motionData.duration;const m=this._motionData.curves;for(p=0;p>b&1)continue;const r=t+(o-t)*n;e.setParameterValueById(this._eyeBlinkParameterIds[b],r)}if(s!=Number.MAX_VALUE)for(let b=0;b>b&1)continue;const r=t+(s-t)*n;e.setParameterValueById(this._lipSyncParameterIds[b],r)}for(;p=this._motionData.duration&&(this._isLoop?(r.setStartTime(t),this._isLoopFadeIn&&r.setFadeInStartTime(t)):(this._onFinishedMotion&&this._onFinishedMotion(this),r.setIsFinished(!0))),this._lastWeight=n}setIsLoop(e){this._isLoop=e}isLoop(){return this._isLoop}setIsLoopFadeIn(e){this._isLoopFadeIn=e}isLoopFadeIn(){return this._isLoopFadeIn}getDuration(){return this._isLoop?-1:this._loopDurationSeconds}getLoopDuration(){return this._loopDurationSeconds}setParameterFadeInTime(e,t){const n=this._motionData.curves;for(let r=0;rnew X)),this._motionData.segments=Array.from({length:t.getMotionTotalSegmentCount()}).map((()=>new q)),this._motionData.events=Array.from({length:this._motionData.eventCount}).map((()=>new Y)),this._motionData.points=[];let s=0,o=0;for(let a=0;ae&&this._motionData.events[n].fireTime<=t&&this._firedEventValues.push(this._motionData.events[n].value);return this._firedEventValues}}class pe{constructor(){this._autoDelete=!1,this._available=!0,this._finished=!1,this._started=!1,this._startTimeSeconds=-1,this._fadeInStartTimeSeconds=0,this._endTimeSeconds=-1,this._stateTimeSeconds=0,this._stateWeight=0,this._lastEventCheckSeconds=0,this._motionQueueEntryHandle=this,this._fadeOutSeconds=0,this._isTriggeredFadeOut=!1}release(){this._autoDelete&&this._motion&&this._motion.release()}setFadeOut(e){this._fadeOutSeconds=e,this._isTriggeredFadeOut=!0}startFadeOut(e,t){const n=t+e;this._isTriggeredFadeOut=!0,(this._endTimeSeconds<0||nnull!=t&&t._motionQueueEntryHandle==e))}setEventCallback(e,t=null){this._eventCallBack=e,this._eventCustomData=t}doUpdateMotion(e,t){let n=!1,r=0;while(r(e[e["CubismPhysicsTargetType_Parameter"]=0]="CubismPhysicsTargetType_Parameter",e))(me||{}),be=(e=>(e[e["CubismPhysicsSource_X"]=0]="CubismPhysicsSource_X",e[e["CubismPhysicsSource_Y"]=1]="CubismPhysicsSource_Y",e[e["CubismPhysicsSource_Angle"]=2]="CubismPhysicsSource_Angle",e))(be||{});class _e{constructor(){this.initialPosition=new _(0,0),this.position=new _(0,0),this.lastPosition=new _(0,0),this.lastGravity=new _(0,0),this.force=new _(0,0),this.velocity=new _(0,0)}}class ye{constructor(){this.normalizationPosition={},this.normalizationAngle={}}}class ve{constructor(){this.source={}}}class Ee{constructor(){this.destination={},this.translationScale=new _(0,0)}}class xe{constructor(){this.settings=[],this.inputs=[],this.outputs=[],this.particles=[],this.gravity=new _(0,0),this.wind=new _(0,0)}}class Se{constructor(e){this._json=e}release(){this._json=void 0}getGravity(){const e=new _(0,0);return e.x=this._json.Meta.EffectiveForces.Gravity.X,e.y=this._json.Meta.EffectiveForces.Gravity.Y,e}getWind(){const e=new _(0,0);return e.x=this._json.Meta.EffectiveForces.Wind.X,e.y=this._json.Meta.EffectiveForces.Wind.Y,e}getSubRigCount(){return this._json.Meta.PhysicsSettingCount}getTotalInputCount(){return this._json.Meta.TotalInputCount}getTotalOutputCount(){return this._json.Meta.TotalOutputCount}getVertexCount(){return this._json.Meta.VertexCount}getNormalizationPositionMinimumValue(e){return this._json.PhysicsSettings[e].Normalization.Position.Minimum}getNormalizationPositionMaximumValue(e){return this._json.PhysicsSettings[e].Normalization.Position.Maximum}getNormalizationPositionDefaultValue(e){return this._json.PhysicsSettings[e].Normalization.Position.Default}getNormalizationAngleMinimumValue(e){return this._json.PhysicsSettings[e].Normalization.Angle.Minimum}getNormalizationAngleMaximumValue(e){return this._json.PhysicsSettings[e].Normalization.Angle.Maximum}getNormalizationAngleDefaultValue(e){return this._json.PhysicsSettings[e].Normalization.Angle.Default}getInputCount(e){return this._json.PhysicsSettings[e].Input.length}getInputWeight(e,t){return this._json.PhysicsSettings[e].Input[t].Weight}getInputReflect(e,t){return this._json.PhysicsSettings[e].Input[t].Reflect}getInputType(e,t){return this._json.PhysicsSettings[e].Input[t].Type}getInputSourceId(e,t){return this._json.PhysicsSettings[e].Input[t].Source.Id}getOutputCount(e){return this._json.PhysicsSettings[e].Output.length}getOutputVertexIndex(e,t){return this._json.PhysicsSettings[e].Output[t].VertexIndex}getOutputAngleScale(e,t){return this._json.PhysicsSettings[e].Output[t].Scale}getOutputWeight(e,t){return this._json.PhysicsSettings[e].Output[t].Weight}getOutputDestinationId(e,t){return this._json.PhysicsSettings[e].Output[t].Destination.Id}getOutputType(e,t){return this._json.PhysicsSettings[e].Output[t].Type}getOutputReflect(e,t){return this._json.PhysicsSettings[e].Output[t].Reflect}getParticleCount(e){return this._json.PhysicsSettings[e].Vertices.length}getParticleMobility(e,t){return this._json.PhysicsSettings[e].Vertices[t].Mobility}getParticleDelay(e,t){return this._json.PhysicsSettings[e].Vertices[t].Delay}getParticleAcceleration(e,t){return this._json.PhysicsSettings[e].Vertices[t].Acceleration}getParticleRadius(e,t){return this._json.PhysicsSettings[e].Vertices[t].Radius}getParticlePosition(e,t){const n=new _(0,0);return n.x=this._json.PhysicsSettings[e].Vertices[t].Position.X,n.y=this._json.PhysicsSettings[e].Vertices[t].Position.Y,n}}const we="X",Te="Y",Ae="Angle",Ce=5,Ie=100,Re=.001;class ke{static create(e){const t=new ke;return t.parse(e),t._physicsRig.gravity.y=0,t}evaluate(e,t){let n,r,i,s;const o=new _;let a,l,c,u,d,h,p,f;d=e.getModel().parameters.values,h=e.getModel().parameters.maximumValues,p=e.getModel().parameters.minimumValues,f=e.getModel().parameters.defaultValues;for(let g=0;g=a.particleCount)break;-1==c[t].destinationParameterIndex&&(c[t].destinationParameterIndex=e.getParameterIndex(c[t].destination.id));const r=new _;r.x=u[n].position.x-u[n-1].position.x,r.y=u[n].position.y-u[n-1].position.y,s=c[t].getValue(r,u,n,c[t].reflect,this._options.gravity);const i=c[t].destinationParameterIndex,o=!Float32Array.prototype.slice&&"subarray"in Float32Array.prototype?JSON.parse(JSON.stringify(d.subarray(i))):d.slice(i);Ve(o,p[i],h[i],s,c[t]);for(let e=i,t=0;e=2?t[n-1].position.substract(t[n-2].position):i.multiplyByScaler(-1),s=v.directionToRadian(i,e),r&&(s*=-1),s}function Be(e,t){return Math.abs(Math.max(e,t)-Math.min(e,t))}function Ue(e,t){const n=Math.min(e,t);return n+Be(e,t)/2}function Ge(e,t){return e.x}function $e(e,t){return e.y}function ze(e,t){return t}function He(e,t,n,r,i,s,o,a){let l,c,u,d,h=new _(0,0),p=new _(0,0),f=new _(0,0),g=new _(0,0);e[0].position=new _(n.x,n.y),l=v.degreesToRadian(r),d=v.radianToDirection(l),d.normalize();for(let m=1;mn&&(o>i.valueExceededMaximum&&(i.valueExceededMaximum=o),o=n),a=i.weight/Ie,a>=1||(o=e[0]*(1-a)+o*a),e[0]=o}function je(e,t,n,r,i,s,o,a){let l=0;const c=v.max(n,t);ce&&(e=u);const d=v.min(i,s),h=v.max(i,s),p=o,f=Ue(u,c),g=e-f;switch(Math.sign(g)){case 1:{const e=h-p,t=c-f;0!=t&&(l=g*(e/t),l+=p);break}case-1:{const e=d-p,t=u-f;0!=t&&(l=g*(e/t),l+=p);break}case 0:l=p;break}return a?l:-1*l}class We{constructor(e=0,t=0,n=0,r=0){this.x=e,this.y=t,this.width=n,this.height=r}getCenterX(){return this.x+.5*this.width}getCenterY(){return this.y+.5*this.height}getRight(){return this.x+this.width}getBottom(){return this.y+this.height}setRect(e){this.x=e.x,this.y=e.y,this.width=e.width,this.height=e.height}expand(e,t){this.x-=e,this.y-=t,this.width+=2*e,this.height+=2*t}}const qe=4,Xe=10;let Ye,Ke,Ze;class Qe{getChannelFlagAsColor(e){return this._channelColors[e]}getMaskRenderTexture(){let e=0;if(this._maskTexture&&0!=this._maskTexture.texture&&(this._maskTexture.frameNo=this._currentFrameNo,e=this._maskTexture.texture),0==e){const t=this._clippingMaskBufferSize;this._colorBuffer=this.gl.createTexture(),this.gl.bindTexture(this.gl.TEXTURE_2D,this._colorBuffer),this.gl.texImage2D(this.gl.TEXTURE_2D,0,this.gl.RGBA,t,t,0,this.gl.RGBA,this.gl.UNSIGNED_BYTE,null),this.gl.texParameteri(this.gl.TEXTURE_2D,this.gl.TEXTURE_WRAP_S,this.gl.CLAMP_TO_EDGE),this.gl.texParameteri(this.gl.TEXTURE_2D,this.gl.TEXTURE_WRAP_T,this.gl.CLAMP_TO_EDGE),this.gl.texParameteri(this.gl.TEXTURE_2D,this.gl.TEXTURE_MIN_FILTER,this.gl.LINEAR),this.gl.texParameteri(this.gl.TEXTURE_2D,this.gl.TEXTURE_MAG_FILTER,this.gl.LINEAR),this.gl.bindTexture(this.gl.TEXTURE_2D,null),e=this.gl.createFramebuffer(),this.gl.bindFramebuffer(this.gl.FRAMEBUFFER,e),this.gl.framebufferTexture2D(this.gl.FRAMEBUFFER,this.gl.COLOR_ATTACHMENT0,this.gl.TEXTURE_2D,this._colorBuffer,0),this.gl.bindFramebuffer(this.gl.FRAMEBUFFER,Ze),this._maskTexture=new Je(this._currentFrameNo,e)}return e}setGL(e){this.gl=e}calcClippedDrawTotalBounds(e,t){let n=Number.MAX_VALUE,r=Number.MAX_VALUE,i=Number.MIN_VALUE,s=Number.MIN_VALUE;const o=t._clippedDrawableIndexList.length;for(let a=0;ah&&(h=t),np&&(p=n)}if(u!=Number.MAX_VALUE)if(ui&&(i=h),p>s&&(s=p),n==Number.MAX_VALUE)t._allClippedDrawRect.x=0,t._allClippedDrawRect.y=0,t._allClippedDrawRect.width=0,t._allClippedDrawRect.height=0,t._isUsing=!1;else{t._isUsing=!0;const e=i-n,o=s-r;t._allClippedDrawRect.x=n,t._allClippedDrawRect.y=r,t._allClippedDrawRect.width=e,t._allClippedDrawRect.height=o}}}constructor(){this._maskRenderTexture=null,this._colorBuffer=null,this._currentFrameNo=0,this._clippingMaskBufferSize=256,this._clippingContextListForMask=[],this._clippingContextListForDraw=[],this._channelColors=[],this._tmpBoundsOnModel=new We,this._tmpMatrix=new E,this._tmpMatrixForMask=new E,this._tmpMatrixForDraw=new E;let e=new w;e.R=1,e.G=0,e.B=0,e.A=0,this._channelColors.push(e),e=new w,e.R=0,e.G=1,e.B=0,e.A=0,this._channelColors.push(e),e=new w,e.R=0,e.G=0,e.B=1,e.A=0,this._channelColors.push(e),e=new w,e.R=0,e.G=0,e.B=0,e.A=1,this._channelColors.push(e)}release(){var e,t,n;const r=this;for(let i=0;i0){this.gl.viewport(0,0,this._clippingMaskBufferSize,this._clippingMaskBufferSize),this._maskRenderTexture=this.getMaskRenderTexture(),t.getMvpMatrix(),t.preDraw(),this.setupLayoutBounds(n),this.gl.bindFramebuffer(this.gl.FRAMEBUFFER,this._maskRenderTexture),this.gl.clearColor(1,1,1,1),this.gl.clear(this.gl.COLOR_BUFFER_BIT);for(let n=0;n(e[e["ShaderNames_SetupMask"]=0]="ShaderNames_SetupMask",e[e["ShaderNames_NormalPremultipliedAlpha"]=1]="ShaderNames_NormalPremultipliedAlpha",e[e["ShaderNames_NormalMaskedPremultipliedAlpha"]=2]="ShaderNames_NormalMaskedPremultipliedAlpha",e[e["ShaderNames_NomralMaskedInvertedPremultipliedAlpha"]=3]="ShaderNames_NomralMaskedInvertedPremultipliedAlpha",e[e["ShaderNames_AddPremultipliedAlpha"]=4]="ShaderNames_AddPremultipliedAlpha",e[e["ShaderNames_AddMaskedPremultipliedAlpha"]=5]="ShaderNames_AddMaskedPremultipliedAlpha",e[e["ShaderNames_AddMaskedPremultipliedAlphaInverted"]=6]="ShaderNames_AddMaskedPremultipliedAlphaInverted",e[e["ShaderNames_MultPremultipliedAlpha"]=7]="ShaderNames_MultPremultipliedAlpha",e[e["ShaderNames_MultMaskedPremultipliedAlpha"]=8]="ShaderNames_MultMaskedPremultipliedAlpha",e[e["ShaderNames_MultMaskedPremultipliedAlphaInverted"]=9]="ShaderNames_MultMaskedPremultipliedAlphaInverted",e))(nt||{});const rt="attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;varying vec4 v_myPos;uniform mat4 u_clipMatrix;void main(){ gl_Position = u_clipMatrix * a_position; v_myPos = u_clipMatrix * a_position; v_texCoord = a_texCoord; v_texCoord.y = 1.0 - v_texCoord.y;}",it="precision mediump float;varying vec2 v_texCoord;varying vec4 v_myPos;uniform vec4 u_baseColor;uniform vec4 u_channelFlag;uniform sampler2D s_texture0;void main(){ float isInside = step(u_baseColor.x, v_myPos.x/v_myPos.w) * step(u_baseColor.y, v_myPos.y/v_myPos.w) * step(v_myPos.x/v_myPos.w, u_baseColor.z) * step(v_myPos.y/v_myPos.w, u_baseColor.w); gl_FragColor = u_channelFlag * texture2D(s_texture0, v_texCoord).a * isInside;}",st="attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;uniform mat4 u_matrix;void main(){ gl_Position = u_matrix * a_position; v_texCoord = a_texCoord; v_texCoord.y = 1.0 - v_texCoord.y;}",ot="attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;varying vec4 v_clipPos;uniform mat4 u_matrix;uniform mat4 u_clipMatrix;void main(){ gl_Position = u_matrix * a_position; v_clipPos = u_clipMatrix * a_position; v_texCoord = a_texCoord; v_texCoord.y = 1.0 - v_texCoord.y;}",at="precision mediump float;varying vec2 v_texCoord;uniform vec4 u_baseColor;uniform sampler2D s_texture0;void main(){ gl_FragColor = texture2D(s_texture0 , v_texCoord) * u_baseColor;}",lt="precision mediump float;varying vec2 v_texCoord;varying vec4 v_clipPos;uniform vec4 u_baseColor;uniform vec4 u_channelFlag;uniform sampler2D s_texture0;uniform sampler2D s_texture1;void main(){ vec4 col_formask = texture2D(s_texture0 , v_texCoord) * u_baseColor; vec4 clipMask = (1.0 - texture2D(s_texture1, v_clipPos.xy / v_clipPos.w)) * u_channelFlag; float maskVal = clipMask.r + clipMask.g + clipMask.b + clipMask.a; col_formask = col_formask * maskVal; gl_FragColor = col_formask;}",ct="precision mediump float;varying vec2 v_texCoord;varying vec4 v_clipPos;uniform sampler2D s_texture0;uniform sampler2D s_texture1;uniform vec4 u_channelFlag;uniform vec4 u_baseColor;void main(){vec4 col_formask = texture2D(s_texture0, v_texCoord) * u_baseColor;vec4 clipMask = (1.0 - texture2D(s_texture1, v_clipPos.xy / v_clipPos.w)) * u_channelFlag;float maskVal = clipMask.r + clipMask.g + clipMask.b + clipMask.a;col_formask = col_formask * (1.0 - maskVal);gl_FragColor = col_formask;}";class ut extends x{constructor(){super(),this._clippingContextBufferForMask=null,this._clippingContextBufferForDraw=null,this._clippingManager=new Qe,this.firstDraw=!0,this._textures={},this._sortedDrawableIndexList=[],this._bufferData={vertex:null,uv:null,index:null}}initialize(e){e.isUsingMasking()&&(this._clippingManager=new Qe,this._clippingManager.initialize(e,e.getDrawableCount(),e.getDrawableMasks(),e.getDrawableMaskCounts()));for(let t=e.getDrawableCount()-1;t>=0;t--)this._sortedDrawableIndexList[t]=0;super.initialize(e)}bindTexture(e,t){this._textures[e]=t}getBindedTextures(){return this._textures}setClippingMaskBufferSize(e){this._clippingManager.release(),this._clippingManager=new Qe,this._clippingManager.setClippingMaskBufferSize(e),this._clippingManager.initialize(this.getModel(),this.getModel().getDrawableCount(),this.getModel().getDrawableMasks(),this.getModel().getDrawableMaskCounts())}getClippingMaskBufferSize(){return this._clippingManager.getClippingMaskBufferSize()}release(){var e,t,n;const r=this;this._clippingManager.release(),r._clippingManager=void 0,null==(e=this.gl)||e.deleteBuffer(this._bufferData.vertex),this._bufferData.vertex=null,null==(t=this.gl)||t.deleteBuffer(this._bufferData.uv),this._bufferData.uv=null,null==(n=this.gl)||n.deleteBuffer(this._bufferData.index),this._bufferData.index=null,r._bufferData=void 0,r._textures=void 0}doDrawModel(){this.preDraw(),null!=this._clippingManager&&this._clippingManager.setupClippingContext(this.getModel(),this);const e=this.getModel().getDrawableCount(),t=this.getModel().getDrawableRenderOrders();for(let n=0;n{ut.doStaticRelease()};class dt{constructor(e){this.groups=e.Groups,this.hitAreas=e.HitAreas,this.layout=e.Layout,this.moc=e.FileReferences.Moc,this.expressions=e.FileReferences.Expressions,this.motions=e.FileReferences.Motions,this.textures=e.FileReferences.Textures,this.physics=e.FileReferences.Physics,this.pose=e.FileReferences.Pose}getEyeBlinkParameters(){var e,t;return null==(t=null==(e=this.groups)?void 0:e.find((e=>"EyeBlink"===e.Name)))?void 0:t.Ids}getLipSyncParameters(){var e,t;return null==(t=null==(e=this.groups)?void 0:e.find((e=>"LipSync"===e.Name)))?void 0:t.Ids}}const ht="ParamAngleX",pt="ParamAngleY",ft="ParamAngleZ",gt="ParamEyeBallX",mt="ParamEyeBallY",bt="ParamBodyAngleX",_t="ParamBreath",yt=2,vt=2;var Et;(e=>{e.LOG_LEVEL_VERBOSE=0,e.LOG_LEVEL_WARNING=1,e.LOG_LEVEL_ERROR=2,e.LOG_LEVEL_NONE=999,e.logLevel=e.LOG_LEVEL_WARNING,e.sound=!0,e.motionSync=!0,e.motionFadingDuration=500,e.idleMotionFadingDuration=2e3,e.expressionFadingDuration=500,e.preserveExpressionOnMotion=!0,e.cubism4=z})(Et||(Et={}));const xt={log(e,...t){Et.logLevel<=Et.LOG_LEVEL_VERBOSE&&console.log(`[${e}]`,...t)},warn(e,...t){Et.logLevel<=Et.LOG_LEVEL_WARNING&&console.warn(`[${e}]`,...t)},error(e,...t){Et.logLevel<=Et.LOG_LEVEL_ERROR&&console.error(`[${e}]`,...t)}};function St(e,t,n){return en?n:e}function wt(e,t){t.forEach((t=>{Object.getOwnPropertyNames(t.prototype).forEach((n=>{"constructor"!==n&&Object.defineProperty(e.prototype,n,Object.getOwnPropertyDescriptor(t.prototype,n))}))}))}function Tt(e){let t=e.lastIndexOf("/");return-1!=t&&(e=e.slice(0,t)),t=e.lastIndexOf("/"),-1!==t&&(e=e.slice(t+1)),e}function At(e,t){const n=e.indexOf(t);-1!==n&&e.splice(n,1)}class Ct extends r.EventEmitter{constructor(e,t){super(),this.expressions=[],this.reserveExpressionIndex=-1,this.destroyed=!1,this.settings=e,this.tag=`ExpressionManager(${e.name})`}init(){this.defaultExpression=this.createExpression({},void 0),this.currentExpression=this.defaultExpression,this.stopAllExpressions()}loadExpression(e){return l(this,null,(function*(){if(!this.definitions[e])return void xt.warn(this.tag,`Undefined expression at [${e}]`);if(null===this.expressions[e])return void xt.warn(this.tag,`Cannot set expression at [${e}] because it's already failed in loading.`);if(this.expressions[e])return this.expressions[e];const t=yield this._loadExpression(e);return this.expressions[e]=t,t}))}_loadExpression(e){throw new Error("Not implemented.")}setRandomExpression(){return l(this,null,(function*(){if(this.definitions.length){const e=[];for(let t=0;t-1&&ec&&(s*=c/l,o*=c/l),this.vx+=s,this.vy+=o;const u=Math.sqrt(a(this.vx,2)+a(this.vy,2)),d=.5*(Math.sqrt(a(c,2)+8*c*r)-c);u>d&&(this.vx*=d/u,this.vy*=d/u),this.x+=this.vx,this.y+=this.vy}}class Ot{constructor(e){this.json=e;let t=e.url;if("string"!==typeof t)throw new TypeError("The `url` field in settings JSON must be defined as a string.");this.url=t,this.name=Tt(this.url)}resolveURL(e){return r.url.resolve(this.url,e)}replaceFiles(e){this.moc=e(this.moc,"moc"),void 0!==this.pose&&(this.pose=e(this.pose,"pose")),void 0!==this.physics&&(this.physics=e(this.physics,"physics"));for(let t=0;t(e.push(t),t))),e}validateFiles(e){const t=(t,n)=>{const r=this.resolveURL(t);if(!e.includes(r)){if(n)throw new Error(`File "${t}" is defined in settings, but doesn't exist in given files`);return!1}return!0},n=[this.moc,...this.textures];n.forEach((e=>t(e,!0)));const r=this.getDefinedFiles();return r.filter((e=>t(e,!1)))}}var Nt=(e=>(e[e["NONE"]=0]="NONE",e[e["IDLE"]=1]="IDLE",e[e["NORMAL"]=2]="NORMAL",e[e["FORCE"]=3]="FORCE",e))(Nt||{});class Mt{constructor(){this.debug=!1,this.currentPriority=0,this.reservePriority=0}reserve(e,t,n){if(n<=0)return xt.log(this.tag,"Cannot start a motion with MotionPriority.NONE."),!1;if(e===this.currentGroup&&t===this.currentIndex)return xt.log(this.tag,"Motion is already playing.",this.dump(e,t)),!1;if(e===this.reservedGroup&&t===this.reservedIndex||e===this.reservedIdleGroup&&t===this.reservedIdleIndex)return xt.log(this.tag,"Motion is already reserved.",this.dump(e,t)),!1;if(1===n){if(0!==this.currentPriority)return xt.log(this.tag,"Cannot start idle motion because another motion is playing.",this.dump(e,t)),!1;if(void 0!==this.reservedIdleGroup)return xt.log(this.tag,"Cannot start idle motion because another idle motion has reserved.",this.dump(e,t)),!1;this.setReservedIdle(e,t)}else{if(n<3){if(n<=this.currentPriority)return xt.log(this.tag,"Cannot start motion because another motion is playing as an equivalent or higher priority.",this.dump(e,t)),!1;if(n<=this.reservePriority)return xt.log(this.tag,"Cannot start motion because another motion has reserved as an equivalent or higher priority.",this.dump(e,t)),!1}this.setReserved(e,t,n)}return!0}start(e,t,n,r){if(1===r){if(this.setReservedIdle(void 0,void 0),0!==this.currentPriority)return xt.log(this.tag,"Cannot start idle motion because another motion is playing.",this.dump(t,n)),!1}else{if(t!==this.reservedGroup||n!==this.reservedIndex)return xt.log(this.tag,"Cannot start motion because another motion has taken the place.",this.dump(t,n)),!1;this.setReserved(void 0,void 0,0)}return!!e&&(this.setCurrent(t,n,r),!0)}complete(){this.setCurrent(void 0,void 0,0)}setCurrent(e,t,n){this.currentPriority=n,this.currentGroup=e,this.currentIndex=t}setReserved(e,t,n){this.reservePriority=n,this.reservedGroup=e,this.reservedIndex=t}setReservedIdle(e,t){this.reservedIdleGroup=e,this.reservedIdleIndex=t}isActive(e,t){return e===this.currentGroup&&t===this.currentIndex||e===this.reservedGroup&&t===this.reservedIndex||e===this.reservedIdleGroup&&t===this.reservedIdleIndex}reset(){this.setCurrent(void 0,void 0,0),this.setReserved(void 0,void 0,0),this.setReservedIdle(void 0,void 0)}shouldRequestIdleMotion(){return void 0===this.currentGroup&&void 0===this.reservedIdleGroup}shouldOverrideExpression(){return!Et.preserveExpressionOnMotion&&this.currentPriority>1}dump(e,t){if(this.debug){const n=["currentPriority","reservePriority","currentGroup","currentIndex","reservedGroup","reservedIndex","reservedIdleGroup","reservedIdleIndex"];return`\n group = "${e}", index = ${t}\n`+n.map((e=>"["+e+"] "+this[e])).join("\n")}return""}}const Dt="SoundManager",Lt=.5;class Ft{static get volume(){return this._volume}static set volume(e){this._volume=(e>1?1:e<0?0:e)||0,this.audios.forEach((e=>e.volume=this._volume))}static add(e,t,n){const r=new Audio(e);return r.volume=this._volume,r.preload="auto",r.addEventListener("ended",(()=>{this.dispose(r),null==t||t()})),r.addEventListener("error",(t=>{this.dispose(r),xt.warn(Dt,`Error occurred on "${e}"`,t.error),null==n||n(t.error)})),this.audios.push(r),r}static play(e){return new Promise(((t,n)=>{var r;null==(r=e.play())||r.catch((t=>{e.dispatchEvent(new ErrorEvent("error",{error:t})),n(t)})),e.readyState===e.HAVE_ENOUGH_DATA?t():e.addEventListener("canplaythrough",t)}))}static dispose(e){e.pause(),e.removeAttribute("src"),At(this.audios,e)}static destroy(){for(let e=this.audios.length-1;e>=0;e--)this.dispose(this.audios[e])}}Ft.audios=[],Ft._volume=Lt;class Bt extends r.EventEmitter{constructor(e,t){super(),this.motionGroups={},this.state=new Mt,this.playing=!1,this.destroyed=!1,this.settings=e,this.tag=`MotionManager(${e.name})`,this.state.tag=this.tag}init(e){(null==e?void 0:e.idleMotionGroup)&&(this.groups.idle=e.idleMotionGroup),this.setupMotions(e),this.stopAllMotions()}setupMotions(e){for(const n of Object.keys(this.definitions))this.motionGroups[n]=[];let t;switch(null==e?void 0:e.motionPreload){case"NONE":return;case"ALL":t=Object.keys(this.definitions);break;case"IDLE":default:t=[this.groups.idle];break}for(const n of t)if(this.definitions[n])for(let e=0;ethis.currentAudio=void 0),(()=>this.currentAudio=void 0)),this.currentAudio=s}catch(a){xt.warn(this.tag,"Failed to create audio",e,a)}}const o=yield this.loadMotion(e,t);if(s){const e=Ft.play(s).catch((e=>xt.warn(this.tag,"Failed to play audio",s.src,e)));Et.motionSync&&(yield e)}return this.state.start(o,e,t,n)?(xt.log(this.tag,"Start motion:",this.getMotionName(i)),this.emit("motionStart",e,t,s),this.state.shouldOverrideExpression()&&this.expressionManager&&this.expressionManager.resetExpression(),this.playing=!0,this._startMotion(o),!0):(s&&(Ft.dispose(s),this.currentAudio=void 0),!1)}))}startRandomMotion(e,t){return l(this,null,(function*(){const n=this.definitions[e];if(null==n?void 0:n.length){const r=[];for(let t=0;te.index>=0));for(const t of e)this.hitAreas[t.name]=t}hitTest(e,t){return Object.keys(this.hitAreas).filter((n=>this.isHit(n,e,t)))}isHit(e,t,n){if(!this.hitAreas[e])return!1;const r=this.hitAreas[e].index,i=this.getDrawableBounds(r,Ut);return i.x<=t&&t<=i.x+i.width&&i.y<=n&&n<=i.y+i.height}getDrawableBounds(e,t){const n=this.getDrawableVertices(e);let r=n[0],i=n[0],s=n[1],o=n[1];for(let a=0;a{200!==s.status&&0!==s.status||!s.response?s.onerror():r(s.response)},s.onerror=()=>{xt.warn($t,`Failed to load resource as ${s.responseType} (Status ${s.status}): ${t}`),i(new zt("Network error.",t,s.status))},s.onabort=()=>i(new zt("Aborted.",t,s.status,!0)),s.onloadend=()=>{var t;Ht.allXhrSet.delete(s),e&&(null==(t=Ht.xhrMap.get(e))||t.delete(s))},s}static cancelXHRs(){var e;null==(e=Ht.xhrMap.get(this))||e.forEach((e=>{e.abort(),Ht.allXhrSet.delete(e)})),Ht.xhrMap.delete(this)}static release(){Ht.allXhrSet.forEach((e=>e.abort())),Ht.allXhrSet.clear(),Ht.xhrMap=new WeakMap}};let Vt=Ht;function jt(e,t){let n=-1;return r(0);function r(i,s){if(s)return Promise.reject(s);if(i<=n)return Promise.reject(new Error("next() called multiple times"));n=i;const o=e[i];if(!o)return Promise.resolve();try{return Promise.resolve(o(t,r.bind(null,i+1)))}catch(a){return Promise.reject(a)}}}Vt.xhrMap=new WeakMap,Vt.allXhrSet=new Set,Vt.loader=(e,t)=>new Promise(((t,n)=>{const r=Ht.createXHR(e.target,e.settings?e.settings.resolveURL(e.url):e.url,e.type,(n=>{e.result=n,t()}),n);r.send()}));class Wt{static load(e){return jt(this.middlewares,e).then((()=>e.result))}}function qt(e,t={}){const n={resourceOptions:{crossorigin:t.crossOrigin}};if(s.xE.fromURL)return s.xE.fromURL(e,n).catch((e=>{if(e instanceof Error)throw e;const t=new Error("Texture loading error");throw t.event=e,t}));n.resourceOptions.autoLoad=!1;const r=s.xE.from(e,n);if(r.baseTexture.valid)return Promise.resolve(r);const i=r.baseTexture.resource;return null!=i._live2d_load||(i._live2d_load=new Promise(((e,t)=>{const n=e=>{i.source.removeEventListener("error",n);const r=new Error("Texture loading error");r.event=e,t(r)};i.source.addEventListener("error",n),i.load().then((()=>e(r))).catch(n)}))),i._live2d_load}Wt.middlewares=[Vt.loader];const Xt="Live2DFactory",Yt=(e,t)=>l(void 0,null,(function*(){if("string"===typeof e.source){const t=yield Wt.load({url:e.source,type:"json",target:e.live2dModel});t.url=e.source,e.source=t,e.live2dModel.emit("settingsJSONLoaded",t)}return t()})),Kt=(e,t)=>l(void 0,null,(function*(){if(e.source instanceof Ot)return e.settings=e.source,t();if("object"===typeof e.source){const n=nn.findRuntime(e.source);if(n){const r=n.createModelSettings(e.source);return e.settings=r,e.live2dModel.emit("settingsLoaded",r),t()}}throw new TypeError("Unknown settings format.")})),Zt=(e,t)=>{if(e.settings){const n=nn.findRuntime(e.settings);if(n)return n.ready().then(t)}return t()},Qt=(e,t)=>l(void 0,null,(function*(){yield t();const n=e.internalModel;if(n){const t=e.settings,r=nn.findRuntime(t);if(r){const i=[];t.pose&&i.push(Wt.load({settings:t,url:t.pose,type:"json",target:n}).then((t=>{n.pose=r.createPose(n.coreModel,t),e.live2dModel.emit("poseLoaded",n.pose)})).catch((t=>{e.live2dModel.emit("poseLoadError",t),xt.warn(Xt,"Failed to load pose.",t)}))),t.physics&&i.push(Wt.load({settings:t,url:t.physics,type:"json",target:n}).then((t=>{n.physics=r.createPhysics(n.coreModel,t),e.live2dModel.emit("physicsLoaded",n.physics)})).catch((t=>{e.live2dModel.emit("physicsLoadError",t),xt.warn(Xt,"Failed to load physics.",t)}))),i.length&&(yield Promise.all(i))}}})),Jt=(e,t)=>l(void 0,null,(function*(){if(!e.settings)throw new TypeError("Missing settings.");{const n=e.live2dModel,r=e.settings.textures.map((t=>{const n=e.settings.resolveURL(t);return qt(n,{crossOrigin:e.options.crossOrigin})}));if(yield t(),!e.internalModel)throw new TypeError("Missing internal model.");n.internalModel=e.internalModel,n.emit("modelLoaded",e.internalModel),n.textures=yield Promise.all(r),n.emit("textureLoaded",n.textures)}})),en=(e,t)=>l(void 0,null,(function*(){const n=e.settings;if(n instanceof Ot){const r=nn.findRuntime(n);if(!r)throw new TypeError("Unknown model settings.");const i=yield Wt.load({settings:n,url:n.moc,type:"arraybuffer",target:e.live2dModel});if(!r.isValidMoc(i))throw new Error("Invalid moc data");const s=r.createCoreModel(i);return e.internalModel=r.createInternalModel(s,n,e.options),t()}throw new TypeError("Missing settings.")})),tn=class{static registerRuntime(e){tn.runtimes.push(e),tn.runtimes.sort(((e,t)=>t.version-e.version))}static findRuntime(e){for(const t of tn.runtimes)if(t.test(e))return t}static setupLive2DModel(e,t,n){return l(this,null,(function*(){const r=new Promise((t=>e.once("textureLoaded",t))),i=new Promise((t=>e.once("modelLoaded",t))),s=Promise.all([r,i]).then((()=>e.emit("ready")));yield jt(tn.live2DModelMiddlewares,{live2dModel:e,source:t,options:n||{}}),yield s,e.emit("load")}))}static loadMotion(e,t,n){var r;const i=r=>e.emit("motionLoadError",t,n,r);try{const s=null==(r=e.definitions[t])?void 0:r[n];if(!s)return Promise.resolve(void 0);e.listeners("destroy").includes(tn.releaseTasks)||e.once("destroy",tn.releaseTasks);let o=tn.motionTasksMap.get(e);o||(o={},tn.motionTasksMap.set(e,o));let a=o[t];a||(a=[],o[t]=a);const l=e.getMotionFile(s);return null!=a[n]||(a[n]=Wt.load({url:l,settings:e.settings,type:e.motionDataType,target:e}).then((r=>{var i;const o=null==(i=tn.motionTasksMap.get(e))?void 0:i[t];o&&delete o[n];const a=e.createMotion(r,t,s);return e.emit("motionLoaded",t,n,a),a})).catch((t=>{xt.warn(e.tag,`Failed to load motion: ${l}\n`,t),i(t)}))),a[n]}catch(s){xt.warn(e.tag,`Failed to load motion at "${t}"[${n}]\n`,s),i(s)}return Promise.resolve(void 0)}static loadExpression(e,t){const n=n=>e.emit("expressionLoadError",t,n);try{const r=e.definitions[t];if(!r)return Promise.resolve(void 0);e.listeners("destroy").includes(tn.releaseTasks)||e.once("destroy",tn.releaseTasks);let i=tn.expressionTasksMap.get(e);i||(i=[],tn.expressionTasksMap.set(e,i));const s=e.getExpressionFile(r);return null!=i[t]||(i[t]=Wt.load({url:s,settings:e.settings,type:"json",target:e}).then((n=>{const i=tn.expressionTasksMap.get(e);i&&delete i[t];const s=e.createExpression(n,r);return e.emit("expressionLoaded",t,s),s})).catch((t=>{xt.warn(e.tag,`Failed to load expression: ${s}\n`,t),n(t)}))),i[t]}catch(r){xt.warn(e.tag,`Failed to load expression at [${t}]\n`,r),n(r)}return Promise.resolve(void 0)}static releaseTasks(){this instanceof Bt?tn.motionTasksMap.delete(this):tn.expressionTasksMap.delete(this)}};let nn=tn;nn.runtimes=[],nn.urlToJSON=Yt,nn.jsonToSettings=Kt,nn.waitUntilReady=Zt,nn.setupOptionals=Qt,nn.setupEssentials=Jt,nn.createInternalModel=en,nn.live2DModelMiddlewares=[Yt,Kt,Zt,Qt,Jt,en],nn.motionTasksMap=new WeakMap,nn.expressionTasksMap=new WeakMap,Bt.prototype["_loadMotion"]=function(e,t){return nn.loadMotion(this,e,t)},Ct.prototype["_loadExpression"]=function(e){return nn.loadExpression(this,e)};class rn{constructor(){this._autoInteract=!1}get autoInteract(){return this._autoInteract}set autoInteract(e){e!==this._autoInteract&&(e?this.on("pointertap",sn,this):this.off("pointertap",sn,this),this._autoInteract=e)}registerInteraction(e){e!==this.interactionManager&&(this.unregisterInteraction(),this._autoInteract&&e&&(this.interactionManager=e,e.on("pointermove",on,this)))}unregisterInteraction(){var e;this.interactionManager&&(null==(e=this.interactionManager)||e.off("pointermove",on,this),this.interactionManager=void 0)}}function sn(e){this.tap(e.data.global.x,e.data.global.y)}function on(e){this.focus(e.data.global.x,e.data.global.y)}class an extends i.wx{}const ln=new i.E9,cn=new i.y3;let un;class dn extends o.W2{constructor(e){super(),this.tag="Live2DModel(uninitialized)",this.textures=[],this.transform=new an,this.anchor=new i.AB(this.onAnchorChange,this,0,0),this.glContextID=-1,this.elapsedTime=performance.now(),this.deltaTime=0,this._autoUpdate=!1,this.once("modelLoaded",(()=>this.init(e)))}static from(e,t){const n=new this(t);return nn.setupLive2DModel(n,e,t).then((()=>n))}static fromSync(e,t){const n=new this(t);return nn.setupLive2DModel(n,e,t).then(null==t?void 0:t.onLoad).catch(null==t?void 0:t.onError),n}static registerTicker(e){un=e}get autoUpdate(){return this._autoUpdate}set autoUpdate(e){var t;un||(un=null==(t=window.PIXI)?void 0:t.Ticker),e?this._destroyed||(un?(un.shared.add(this.onTickerUpdate,this),this._autoUpdate=!0):xt.warn(this.tag,"No Ticker registered, please call Live2DModel.registerTicker(Ticker).")):(null==un||un.shared.remove(this.onTickerUpdate,this),this._autoUpdate=!1)}init(e){this.tag=`Live2DModel(${this.internalModel.settings.name})`;const t=Object.assign({autoUpdate:!0,autoInteract:!0},e);t.autoInteract&&(this.interactive=!0),this.autoInteract=t.autoInteract,this.autoUpdate=t.autoUpdate}onAnchorChange(){this.pivot.set(this.anchor.x*this.internalModel.width,this.anchor.y*this.internalModel.height)}motion(e,t,n){return void 0===t?this.internalModel.motionManager.startRandomMotion(e,n):this.internalModel.motionManager.startMotion(e,t,n)}expression(e){return this.internalModel.motionManager.expressionManager?void 0===e?this.internalModel.motionManager.expressionManager.setRandomExpression():this.internalModel.motionManager.expressionManager.setExpression(e):Promise.resolve(!1)}focus(e,t,n=!1){ln.x=e,ln.y=t,this.toModelPosition(ln,ln,!0);let r=ln.x/this.internalModel.originalWidth*2-1,i=ln.y/this.internalModel.originalHeight*2-1,s=Math.atan2(i,r);this.internalModel.focusController.focus(Math.cos(s),-Math.sin(s),n)}tap(e,t){const n=this.hitTest(e,t);n.length&&(xt.log(this.tag,"Hit",n),this.emit("hit",n))}hitTest(e,t){return ln.x=e,ln.y=t,this.toModelPosition(ln,ln),this.internalModel.hitTest(ln.x,ln.y)}toModelPosition(e,t=e.clone(),n){return n||(this._recursivePostUpdateTransform(),this.parent?this.displayObjectUpdateTransform():(this.parent=this._tempDisplayObjectParent,this.displayObjectUpdateTransform(),this.parent=null)),this.transform.worldTransform.applyInverse(e,t),this.internalModel.localTransform.applyInverse(t,t),t}containsPoint(e){return this.getBounds(!0).contains(e.x,e.y)}_calculateBounds(){this._bounds.addFrame(this.transform,0,0,this.internalModel.width,this.internalModel.height)}onTickerUpdate(){this.update(un.shared.deltaMS)}update(e){this.deltaTime+=e,this.elapsedTime+=e}_render(e){this.registerInteraction(e.plugins.interaction),e.batch.reset(),e.geometry.reset(),e.shader.reset(),e.state.reset();let t=!1;this.glContextID!==e.CONTEXT_UID&&(this.glContextID=e.CONTEXT_UID,this.internalModel.updateWebGLContext(e.gl,this.glContextID),t=!0);for(let i=0;it.destroy(e.baseTexture))),this.internalModel.destroy(),super.destroy(e)}}wt(dn,[rn]);const hn=class{static resolveURL(e,t){var n;const r=null==(n=hn.filesMap[e])?void 0:n[t];if(void 0===r)throw new Error("Cannot find this file from uploaded files: "+t);return r}static upload(e,t){return l(this,null,(function*(){const n={};for(const i of t.getDefinedFiles()){const s=decodeURI(r.url.resolve(t.url,i)),o=e.find((e=>e.webkitRelativePath===s));o&&(n[i]=URL.createObjectURL(o))}hn.filesMap[t._objectURL]=n}))}static createSettings(e){return l(this,null,(function*(){const t=e.find((e=>e.name.endsWith("model.json")||e.name.endsWith("model3.json")));if(!t)throw new TypeError("Settings file not found");const n=yield hn.readText(t),r=JSON.parse(n);r.url=t.webkitRelativePath;const i=nn.findRuntime(r);if(!i)throw new Error("Unknown settings JSON");const s=i.createModelSettings(r);return s._objectURL=URL.createObjectURL(t),s}))}static readText(e){return l(this,null,(function*(){return new Promise(((t,n)=>{const r=new FileReader;r.onload=()=>t(r.result),r.onerror=n,r.readAsText(e,"utf8")}))}))}};let pn=hn;pn.filesMap={},pn.factory=(e,t)=>l(void 0,null,(function*(){if(Array.isArray(e.source)&&e.source[0]instanceof File){const t=e.source;let n=t.settings;if(n){if(!n._objectURL)throw new Error('"_objectURL" must be specified in ModelSettings')}else n=yield hn.createSettings(t);n.validateFiles(t.map((e=>encodeURI(e.webkitRelativePath)))),yield hn.upload(t,n),n.resolveURL=function(e){return hn.resolveURL(this._objectURL,e)},e.source=n,e.live2dModel.once("modelLoaded",(e=>{e.once("destroy",(function(){const e=this.settings._objectURL;if(URL.revokeObjectURL(e),hn.filesMap[e])for(const t of Object.values(hn.filesMap[e]))URL.revokeObjectURL(t);delete hn.filesMap[e]}))}))}return t()})),nn.live2DModelMiddlewares.unshift(pn.factory);const fn=class{static unzip(e,t){return l(this,null,(function*(){const n=yield fn.getFilePaths(e),i=[];for(const e of t.getDefinedFiles()){const s=decodeURI(r.url.resolve(t.url,e));n.includes(s)&&i.push(s)}const s=yield fn.getFiles(e,i);for(let e=0;ee.endsWith("model.json")||e.endsWith("model3.json")));if(!n)throw new Error("Settings file not found");const r=yield fn.readText(e,n);if(!r)throw new Error("Empty settings file: "+n);const i=JSON.parse(r);i.url=n;const s=nn.findRuntime(i);if(!s)throw new Error("Unknown settings JSON");return s.createModelSettings(i)}))}static zipReader(e,t){return l(this,null,(function*(){throw new Error("Not implemented")}))}static getFilePaths(e){return l(this,null,(function*(){throw new Error("Not implemented")}))}static getFiles(e,t){return l(this,null,(function*(){throw new Error("Not implemented")}))}static readText(e,t){return l(this,null,(function*(){throw new Error("Not implemented")}))}static releaseReader(e){}};let gn=fn;if(gn.ZIP_PROTOCOL="zip://",gn.uid=0,gn.factory=(e,t)=>l(void 0,null,(function*(){const n=e.source;let r,i,s;if("string"===typeof n&&(n.endsWith(".zip")||n.startsWith(fn.ZIP_PROTOCOL))?(r=n.startsWith(fn.ZIP_PROTOCOL)?n.slice(fn.ZIP_PROTOCOL.length):n,i=yield Wt.load({url:r,type:"blob",target:e.live2dModel})):Array.isArray(n)&&1===n.length&&n[0]instanceof File&&n[0].name.endsWith(".zip")&&(i=n[0],r=URL.createObjectURL(i),s=n.settings),i){if(!i.size)throw new Error("Empty zip file");const t=yield fn.zipReader(i,r);s||(s=yield fn.createSettings(t)),s._objectURL=fn.ZIP_PROTOCOL+fn.uid+"/"+s.url;const n=yield fn.unzip(t,s);n.settings=s,e.source=n,r.startsWith("blob:")&&e.live2dModel.once("modelLoaded",(e=>{e.once("destroy",(function(){URL.revokeObjectURL(r)}))})),fn.releaseReader(t)}return t()})),nn.live2DModelMiddlewares.unshift(gn.factory),!window.Live2DCubismCore)throw new Error("Could not find Cubism 4 runtime. This plugin requires live2dcubismcore.js to be loaded.");class mn extends Ct{constructor(e,t){var n;super(e,t),this.queueManager=new fe,this.definitions=null!=(n=e.expressions)?n:[],this.init()}isFinished(){return this.queueManager.isFinished()}getExpressionIndex(e){return this.definitions.findIndex((t=>t.Name===e))}getExpressionFile(e){return e.File}createExpression(e,t){return $.create(e)}_setExpression(e){return this.queueManager.startMotion(e,!1,performance.now())}stopAllExpressions(){this.queueManager.stopAllMotions()}updateParameters(e,t){return this.queueManager.doUpdateMotion(e,t)}}class bn extends Ot{constructor(e){if(super(e),!bn.isValidJSON(e))throw new TypeError("Invalid JSON.");Object.assign(this,new dt(e))}static isValidJSON(e){var t;return!!(null==e?void 0:e.FileReferences)&&"string"===typeof e.FileReferences.Moc&&(null==(t=e.FileReferences.Textures)?void 0:t.length)>0&&e.FileReferences.Textures.every((e=>"string"===typeof e))}replaceFiles(e){if(super.replaceFiles(e),this.motions)for(const[t,n]of Object.entries(this.motions))for(let r=0;r{this.emit("motion:"+t)}))}isFinished(){return this.queueManager.isFinished()}_startMotion(e,t){return e.setFinishedMotionHandler(t),this.queueManager.stopAllMotions(),this.queueManager.startMotion(e,!1,performance.now())}_stopAllMotions(){this.queueManager.stopAllMotions()}createMotion(e,t,n){const r=he.create(e),i=new Z(e),s=(t===this.groups.idle?Et.idleMotionFadingDuration:Et.motionFadingDuration)/1e3;return void 0===i.getMotionFadeInTime()&&r.setFadeInTime(n.FadeInTime>0?n.FadeInTime:s),void 0===i.getMotionFadeOutTime()&&r.setFadeOutTime(n.FadeOutTime>0?n.FadeOutTime:s),r.setEffectIds(this.eyeBlinkIds,this.lipSyncIds),r}getMotionFile(e){return e.File}getMotionName(e){return e.File}getSoundFile(e){return e.Sound}updateParameters(e,t){return this.queueManager.doUpdateMotion(e,t)}destroy(){super.destroy(),this.queueManager.release(),this.queueManager=void 0}}const yn=new E;class vn extends Gt{constructor(e,t,n){super(),this.lipSync=!0,this.breath=c.create(),this.renderer=new ut,this.idParamAngleX=ht,this.idParamAngleY=pt,this.idParamAngleZ=ft,this.idParamEyeBallX=gt,this.idParamEyeBallY=mt,this.idParamBodyAngleX=bt,this.idParamBreath=_t,this.pixelsPerUnit=1,this.centeringTransform=new i.y3,this.coreModel=e,this.settings=t,this.motionManager=new _n(t,n),this.init()}init(){var e;super.init(),(null==(e=this.settings.getEyeBlinkParameters())?void 0:e.length)>0&&(this.eyeBlink=h.create(this.settings)),this.breath.setParameters([new u(this.idParamAngleX,0,15,6.5345,.5),new u(this.idParamAngleY,0,8,3.5345,.5),new u(this.idParamAngleZ,0,10,5.5345,.5),new u(this.idParamBodyAngleX,0,4,15.5345,.5),new u(this.idParamBreath,0,.5,3.2345,.5)]),this.renderer.initialize(this.coreModel),this.renderer.setIsPremultipliedAlpha(!0)}getSize(){return[this.coreModel.getModel().canvasinfo.CanvasWidth,this.coreModel.getModel().canvasinfo.CanvasHeight]}getLayout(){const e={};if(this.settings.layout)for(const t of Object.keys(this.settings.layout)){const n=t.charAt(0).toLowerCase()+t.slice(1);e[n]=this.settings.layout[t]}return e}setupLayout(){super.setupLayout(),this.pixelsPerUnit=this.coreModel.getModel().canvasinfo.PixelsPerUnit,this.centeringTransform.scale(this.pixelsPerUnit,this.pixelsPerUnit).translate(this.originalWidth/2,this.originalHeight/2)}updateWebGLContext(e,t){this.renderer.firstDraw=!0,this.renderer._bufferData={vertex:null,uv:null,index:null},this.renderer.startUp(e),this.renderer._clippingManager._currentFrameNo=t,this.renderer._clippingManager._maskTexture=void 0,tt.getInstance()._shaderSets=[]}bindTexture(e,t){this.renderer.bindTexture(e,t)}getHitAreaDefs(){var e,t;return null!=(t=null==(e=this.settings.hitAreas)?void 0:e.map((e=>({id:e.Id,name:e.Name,index:this.coreModel.getDrawableIndex(e.Id)}))))?t:[]}getDrawableIDs(){return this.coreModel.getDrawableIds()}getDrawableIndex(e){return this.coreModel.getDrawableIndex(e)}getDrawableVertices(e){if("string"===typeof e&&(e=this.coreModel.getDrawableIndex(e),-1===e))throw new TypeError("Unable to find drawable ID: "+e);const t=this.coreModel.getDrawableVertices(e).slice();for(let n=0;n{function n(){try{wn(),e()}catch(r){if(xn--,xn<0){const e=new Error("Failed to start up Cubism 4 framework.");return e.cause=r,void t(e)}xt.log("Cubism4","Startup failed, retrying 10ms later..."),setTimeout(n,10)}}n()}))),En)}function wn(e){e=Object.assign({logFunction:console.log,loggingLevel:k.LogLevel_Verbose},e),R.startUp(e),R.initialize()}function Tn(){var e;null==(e=this.__moc)||e.release()}nn.registerRuntime({version:4,ready:Sn,test(e){return e instanceof bn||bn.isValidJSON(e)},isValidMoc(e){if(e.byteLength<4)return!1;const t=new Int8Array(e,0,4);return"MOC3"===String.fromCharCode(...t)},createModelSettings(e){return new bn(e)},createCoreModel(e){const t=B.create(e);try{const e=t.createModel();return e.__moc=t,e}catch(n){try{t.release()}catch(r){}throw n}},createInternalModel(e,t,n){const r=new vn(e,t,n),i=e;return i.__moc&&(r.__moc=i.__moc,delete i.__moc,r.once("destroy",Tn)),r},createPhysics(e,t){return ke.create(t)},createPose(e,t){return m.create(t)}})},6405:function(){Prism.languages.abap={comment:/^\*.*/m,string:/(`|')(?:\\.|(?!\1)[^\\\r\n])*\1/,"string-template":{pattern:/([|}])(?:\\.|[^\\|{\r\n])*(?=[|{])/,lookbehind:!0,alias:"string"},"eol-comment":{pattern:/(^|\s)".*/m,lookbehind:!0,alias:"comment"},keyword:{pattern:/(\s|\.|^)(?:\*-INPUT|\?TO|ABAP-SOURCE|ABBREVIATED|ABS|ABSTRACT|ACCEPT|ACCEPTING|ACCESSPOLICY|ACCORDING|ACOS|ACTIVATION|ACTUAL|ADD|ADD-CORRESPONDING|ADJACENT|AFTER|ALIAS|ALIASES|ALIGN|ALL|ALLOCATE|ALPHA|ANALYSIS|ANALYZER|AND|ANY|APPEND|APPENDAGE|APPENDING|APPLICATION|ARCHIVE|AREA|ARITHMETIC|AS|ASCENDING|ASIN|ASPECT|ASSERT|ASSIGN|ASSIGNED|ASSIGNING|ASSOCIATION|ASYNCHRONOUS|AT|ATAN|ATTRIBUTES|AUTHORITY|AUTHORITY-CHECK|AVG|BACK|BACKGROUND|BACKUP|BACKWARD|BADI|BASE|BEFORE|BEGIN|BETWEEN|BIG|BINARY|BINDING|BIT|BIT-AND|BIT-NOT|BIT-OR|BIT-XOR|BLACK|BLANK|BLANKS|BLOB|BLOCK|BLOCKS|BLUE|BOUND|BOUNDARIES|BOUNDS|BOXED|BREAK-POINT|BT|BUFFER|BY|BYPASSING|BYTE|BYTE-CA|BYTE-CN|BYTE-CO|BYTE-CS|BYTE-NA|BYTE-NS|BYTE-ORDER|C|CA|CALL|CALLING|CASE|CAST|CASTING|CATCH|CEIL|CENTER|CENTERED|CHAIN|CHAIN-INPUT|CHAIN-REQUEST|CHANGE|CHANGING|CHANNELS|CHAR-TO-HEX|CHARACTER|CHARLEN|CHECK|CHECKBOX|CIRCULAR|CI_|CLASS|CLASS-CODING|CLASS-DATA|CLASS-EVENTS|CLASS-METHODS|CLASS-POOL|CLEANUP|CLEAR|CLIENT|CLOB|CLOCK|CLOSE|CN|CNT|CO|COALESCE|CODE|CODING|COLLECT|COLOR|COLUMN|COLUMNS|COL_BACKGROUND|COL_GROUP|COL_HEADING|COL_KEY|COL_NEGATIVE|COL_NORMAL|COL_POSITIVE|COL_TOTAL|COMMENT|COMMENTS|COMMIT|COMMON|COMMUNICATION|COMPARING|COMPONENT|COMPONENTS|COMPRESSION|COMPUTE|CONCAT|CONCATENATE|COND|CONDENSE|CONDITION|CONNECT|CONNECTION|CONSTANTS|CONTEXT|CONTEXTS|CONTINUE|CONTROL|CONTROLS|CONV|CONVERSION|CONVERT|COPIES|COPY|CORRESPONDING|COS|COSH|COUNT|COUNTRY|COVER|CP|CPI|CREATE|CREATING|CRITICAL|CS|CURRENCY|CURRENCY_CONVERSION|CURRENT|CURSOR|CURSOR-SELECTION|CUSTOMER|CUSTOMER-FUNCTION|DANGEROUS|DATA|DATABASE|DATAINFO|DATASET|DATE|DAYLIGHT|DBMAXLEN|DD\/MM\/YY|DD\/MM\/YYYY|DDMMYY|DEALLOCATE|DECIMALS|DECIMAL_SHIFT|DECLARATIONS|DEEP|DEFAULT|DEFERRED|DEFINE|DEFINING|DEFINITION|DELETE|DELETING|DEMAND|DEPARTMENT|DESCENDING|DESCRIBE|DESTINATION|DETAIL|DIALOG|DIRECTORY|DISCONNECT|DISPLAY|DISPLAY-MODE|DISTANCE|DISTINCT|DIV|DIVIDE|DIVIDE-CORRESPONDING|DIVISION|DO|DUMMY|DUPLICATE|DUPLICATES|DURATION|DURING|DYNAMIC|DYNPRO|E|EACH|EDIT|EDITOR-CALL|ELSE|ELSEIF|EMPTY|ENABLED|ENABLING|ENCODING|END|END-ENHANCEMENT-SECTION|END-LINES|END-OF-DEFINITION|END-OF-FILE|END-OF-PAGE|END-OF-SELECTION|ENDAT|ENDCASE|ENDCATCH|ENDCHAIN|ENDCLASS|ENDDO|ENDENHANCEMENT|ENDEXEC|ENDFOR|ENDFORM|ENDFUNCTION|ENDIAN|ENDIF|ENDING|ENDINTERFACE|ENDLOOP|ENDMETHOD|ENDMODULE|ENDON|ENDPROVIDE|ENDSELECT|ENDTRY|ENDWHILE|ENGINEERING|ENHANCEMENT|ENHANCEMENT-POINT|ENHANCEMENT-SECTION|ENHANCEMENTS|ENTRIES|ENTRY|ENVIRONMENT|EQ|EQUAL|EQUIV|ERRORMESSAGE|ERRORS|ESCAPE|ESCAPING|EVENT|EVENTS|EXACT|EXCEPT|EXCEPTION|EXCEPTION-TABLE|EXCEPTIONS|EXCLUDE|EXCLUDING|EXEC|EXECUTE|EXISTS|EXIT|EXIT-COMMAND|EXP|EXPAND|EXPANDING|EXPIRATION|EXPLICIT|EXPONENT|EXPORT|EXPORTING|EXTEND|EXTENDED|EXTENSION|EXTRACT|FAIL|FETCH|FIELD|FIELD-GROUPS|FIELD-SYMBOL|FIELD-SYMBOLS|FIELDS|FILE|FILTER|FILTER-TABLE|FILTERS|FINAL|FIND|FIRST|FIRST-LINE|FIXED-POINT|FKEQ|FKGE|FLOOR|FLUSH|FONT|FOR|FORM|FORMAT|FORWARD|FOUND|FRAC|FRAME|FRAMES|FREE|FRIENDS|FROM|FUNCTION|FUNCTION-POOL|FUNCTIONALITY|FURTHER|GAPS|GE|GENERATE|GET|GIVING|GKEQ|GKGE|GLOBAL|GRANT|GREATER|GREEN|GROUP|GROUPS|GT|HANDLE|HANDLER|HARMLESS|HASHED|HAVING|HDB|HEAD-LINES|HEADER|HEADERS|HEADING|HELP-ID|HELP-REQUEST|HIDE|HIGH|HINT|HOLD|HOTSPOT|I|ICON|ID|IDENTIFICATION|IDENTIFIER|IDS|IF|IGNORE|IGNORING|IMMEDIATELY|IMPLEMENTATION|IMPLEMENTATIONS|IMPLEMENTED|IMPLICIT|IMPORT|IMPORTING|IN|INACTIVE|INCL|INCLUDE|INCLUDES|INCLUDING|INCREMENT|INDEX|INDEX-LINE|INFOTYPES|INHERITING|INIT|INITIAL|INITIALIZATION|INNER|INOUT|INPUT|INSERT|INSTANCES|INTENSIFIED|INTERFACE|INTERFACE-POOL|INTERFACES|INTERNAL|INTERVALS|INTO|INVERSE|INVERTED-DATE|IS|ISO|ITERATOR|ITNO|JOB|JOIN|KEEP|KEEPING|KERNEL|KEY|KEYS|KEYWORDS|KIND|LANGUAGE|LAST|LATE|LAYOUT|LE|LEADING|LEAVE|LEFT|LEFT-JUSTIFIED|LEFTPLUS|LEFTSPACE|LEGACY|LENGTH|LESS|LET|LEVEL|LEVELS|LIKE|LINE|LINE-COUNT|LINE-SELECTION|LINE-SIZE|LINEFEED|LINES|LIST|LIST-PROCESSING|LISTBOX|LITTLE|LLANG|LOAD|LOAD-OF-PROGRAM|LOB|LOCAL|LOCALE|LOCATOR|LOG|LOG-POINT|LOG10|LOGFILE|LOGICAL|LONG|LOOP|LOW|LOWER|LPAD|LPI|LT|M|MAIL|MAIN|MAJOR-ID|MAPPING|MARGIN|MARK|MASK|MATCH|MATCHCODE|MAX|MAXIMUM|MEDIUM|MEMBERS|MEMORY|MESH|MESSAGE|MESSAGE-ID|MESSAGES|MESSAGING|METHOD|METHODS|MIN|MINIMUM|MINOR-ID|MM\/DD\/YY|MM\/DD\/YYYY|MMDDYY|MOD|MODE|MODIF|MODIFIER|MODIFY|MODULE|MOVE|MOVE-CORRESPONDING|MULTIPLY|MULTIPLY-CORRESPONDING|NA|NAME|NAMETAB|NATIVE|NB|NE|NESTED|NESTING|NEW|NEW-LINE|NEW-PAGE|NEW-SECTION|NEXT|NO|NO-DISPLAY|NO-EXTENSION|NO-GAP|NO-GAPS|NO-GROUPING|NO-HEADING|NO-SCROLLING|NO-SIGN|NO-TITLE|NO-TOPOFPAGE|NO-ZERO|NODE|NODES|NON-UNICODE|NON-UNIQUE|NOT|NP|NS|NULL|NUMBER|NUMOFCHAR|O|OBJECT|OBJECTS|OBLIGATORY|OCCURRENCE|OCCURRENCES|OCCURS|OF|OFF|OFFSET|OLE|ON|ONLY|OPEN|OPTION|OPTIONAL|OPTIONS|OR|ORDER|OTHER|OTHERS|OUT|OUTER|OUTPUT|OUTPUT-LENGTH|OVERFLOW|OVERLAY|PACK|PACKAGE|PAD|PADDING|PAGE|PAGES|PARAMETER|PARAMETER-TABLE|PARAMETERS|PART|PARTIALLY|PATTERN|PERCENTAGE|PERFORM|PERFORMING|PERSON|PF|PF-STATUS|PINK|PLACES|POOL|POSITION|POS_HIGH|POS_LOW|PRAGMAS|PRECOMPILED|PREFERRED|PRESERVING|PRIMARY|PRINT|PRINT-CONTROL|PRIORITY|PRIVATE|PROCEDURE|PROCESS|PROGRAM|PROPERTY|PROTECTED|PROVIDE|PUBLIC|PUSHBUTTON|PUT|QUEUE-ONLY|QUICKINFO|RADIOBUTTON|RAISE|RAISING|RANGE|RANGES|RAW|READ|READ-ONLY|READER|RECEIVE|RECEIVED|RECEIVER|RECEIVING|RED|REDEFINITION|REDUCE|REDUCED|REF|REFERENCE|REFRESH|REGEX|REJECT|REMOTE|RENAMING|REPLACE|REPLACEMENT|REPLACING|REPORT|REQUEST|REQUESTED|RESERVE|RESET|RESOLUTION|RESPECTING|RESPONSIBLE|RESULT|RESULTS|RESUMABLE|RESUME|RETRY|RETURN|RETURNCODE|RETURNING|RIGHT|RIGHT-JUSTIFIED|RIGHTPLUS|RIGHTSPACE|RISK|RMC_COMMUNICATION_FAILURE|RMC_INVALID_STATUS|RMC_SYSTEM_FAILURE|ROLE|ROLLBACK|ROUND|ROWS|RTTI|RUN|SAP|SAP-SPOOL|SAVING|SCALE_PRESERVING|SCALE_PRESERVING_SCIENTIFIC|SCAN|SCIENTIFIC|SCIENTIFIC_WITH_LEADING_ZERO|SCREEN|SCROLL|SCROLL-BOUNDARY|SCROLLING|SEARCH|SECONDARY|SECONDS|SECTION|SELECT|SELECT-OPTIONS|SELECTION|SELECTION-SCREEN|SELECTION-SET|SELECTION-SETS|SELECTION-TABLE|SELECTIONS|SELECTOR|SEND|SEPARATE|SEPARATED|SET|SHARED|SHIFT|SHORT|SHORTDUMP-ID|SIGN|SIGN_AS_POSTFIX|SIMPLE|SIN|SINGLE|SINH|SIZE|SKIP|SKIPPING|SMART|SOME|SORT|SORTABLE|SORTED|SOURCE|SPACE|SPECIFIED|SPLIT|SPOOL|SPOTS|SQL|SQLSCRIPT|SQRT|STABLE|STAMP|STANDARD|START-OF-SELECTION|STARTING|STATE|STATEMENT|STATEMENTS|STATIC|STATICS|STATUSINFO|STEP-LOOP|STOP|STRLEN|STRUCTURE|STRUCTURES|STYLE|SUBKEY|SUBMATCHES|SUBMIT|SUBROUTINE|SUBSCREEN|SUBSTRING|SUBTRACT|SUBTRACT-CORRESPONDING|SUFFIX|SUM|SUMMARY|SUMMING|SUPPLIED|SUPPLY|SUPPRESS|SWITCH|SWITCHSTATES|SYMBOL|SYNCPOINTS|SYNTAX|SYNTAX-CHECK|SYNTAX-TRACE|SYSTEM-CALL|SYSTEM-EXCEPTIONS|SYSTEM-EXIT|TAB|TABBED|TABLE|TABLES|TABLEVIEW|TABSTRIP|TAN|TANH|TARGET|TASK|TASKS|TEST|TESTING|TEXT|TEXTPOOL|THEN|THROW|TIME|TIMES|TIMESTAMP|TIMEZONE|TITLE|TITLE-LINES|TITLEBAR|TO|TOKENIZATION|TOKENS|TOP-LINES|TOP-OF-PAGE|TRACE-FILE|TRACE-TABLE|TRAILING|TRANSACTION|TRANSFER|TRANSFORMATION|TRANSLATE|TRANSPORTING|TRMAC|TRUNC|TRUNCATE|TRUNCATION|TRY|TYPE|TYPE-POOL|TYPE-POOLS|TYPES|ULINE|UNASSIGN|UNDER|UNICODE|UNION|UNIQUE|UNIT|UNIT_CONVERSION|UNIX|UNPACK|UNTIL|UNWIND|UP|UPDATE|UPPER|USER|USER-COMMAND|USING|UTF-8|VALID|VALUE|VALUE-REQUEST|VALUES|VARY|VARYING|VERIFICATION-MESSAGE|VERSION|VIA|VIEW|VISIBLE|WAIT|WARNING|WHEN|WHENEVER|WHERE|WHILE|WIDTH|WINDOW|WINDOWS|WITH|WITH-HEADING|WITH-TITLE|WITHOUT|WORD|WORK|WRITE|WRITER|X|XML|XOR|XSD|XSTRLEN|YELLOW|YES|YYMMDD|Z|ZERO|ZONE)(?![\w-])/i,lookbehind:!0},number:/\b\d+\b/,operator:{pattern:/(\s)(?:\*\*?|<[=>]?|>=?|\?=|[-+\/=])(?=\s)/,lookbehind:!0},"string-operator":{pattern:/(\s)&&?(?=\s)/,lookbehind:!0,alias:"keyword"},"token-operator":[{pattern:/(\w)(?:->?|=>|[~|{}])(?=\w)/,lookbehind:!0,alias:"punctuation"},{pattern:/[|{}]/,alias:"punctuation"}],punctuation:/[,.:()]/}},8758:function(){(function(e){var t="(?:ALPHA|BIT|CHAR|CR|CRLF|CTL|DIGIT|DQUOTE|HEXDIG|HTAB|LF|LWSP|OCTET|SP|VCHAR|WSP)";e.languages.abnf={comment:/;.*/,string:{pattern:/(?:%[is])?"[^"\n\r]*"/,greedy:!0,inside:{punctuation:/^%[is]/}},range:{pattern:/%(?:b[01]+-[01]+|d\d+-\d+|x[A-F\d]+-[A-F\d]+)/i,alias:"number"},terminal:{pattern:/%(?:b[01]+(?:\.[01]+)*|d\d+(?:\.\d+)*|x[A-F\d]+(?:\.[A-F\d]+)*)/i,alias:"number"},repetition:{pattern:/(^|[^\w-])(?:\d*\*\d*|\d+)/,lookbehind:!0,alias:"operator"},definition:{pattern:/(^[ \t]*)(?:[a-z][\w-]*|<[^<>\r\n]*>)(?=\s*=)/m,lookbehind:!0,alias:"keyword",inside:{punctuation:/<|>/}},"core-rule":{pattern:RegExp("(?:(^|[^<\\w-])"+t+"|<"+t+">)(?![\\w-])","i"),lookbehind:!0,alias:["rule","constant"],inside:{punctuation:/<|>/}},rule:{pattern:/(^|[^<\w-])[a-z][\w-]*|<[^<>\r\n]*>/i,lookbehind:!0,inside:{punctuation:/<|>/}},operator:/=\/?|\//,punctuation:/[()\[\]]/}})(Prism)},5249:function(){Prism.languages.actionscript=Prism.languages.extend("javascript",{keyword:/\b(?:as|break|case|catch|class|const|default|delete|do|dynamic|each|else|extends|final|finally|for|function|get|if|implements|import|in|include|instanceof|interface|internal|is|namespace|native|new|null|override|package|private|protected|public|return|set|static|super|switch|this|throw|try|typeof|use|var|void|while|with)\b/,operator:/\+\+|--|(?:[+\-*\/%^]|&&?|\|\|?|<>?>?|[!=]=?)=?|[~?@]/}),Prism.languages.actionscript["class-name"].alias="function",delete Prism.languages.actionscript["parameter"],delete Prism.languages.actionscript["literal-property"],Prism.languages.markup&&Prism.languages.insertBefore("actionscript","string",{xml:{pattern:/(^|[^.])<\/?\w+(?:\s+[^\s>\/=]+=("|')(?:\\[\s\S]|(?!\2)[^\\])*\2)*\s*\/?>/,lookbehind:!0,inside:Prism.languages.markup}})},5795:function(){Prism.languages.ada={comment:/--.*/,string:/"(?:""|[^"\r\f\n])*"/,number:[{pattern:/\b\d(?:_?\d)*#[\dA-F](?:_?[\dA-F])*(?:\.[\dA-F](?:_?[\dA-F])*)?#(?:E[+-]?\d(?:_?\d)*)?/i},{pattern:/\b\d(?:_?\d)*(?:\.\d(?:_?\d)*)?(?:E[+-]?\d(?:_?\d)*)?\b/i}],attribute:{pattern:/\b'\w+/,alias:"attr-name"},keyword:/\b(?:abort|abs|abstract|accept|access|aliased|all|and|array|at|begin|body|case|constant|declare|delay|delta|digits|do|else|elsif|end|entry|exception|exit|for|function|generic|goto|if|in|interface|is|limited|loop|mod|new|not|null|of|or|others|out|overriding|package|pragma|private|procedure|protected|raise|range|record|rem|renames|requeue|return|reverse|select|separate|some|subtype|synchronized|tagged|task|terminate|then|type|until|use|when|while|with|xor)\b/i,boolean:/\b(?:false|true)\b/i,operator:/<[=>]?|>=?|=>?|:=|\/=?|\*\*?|[&+-]/,punctuation:/\.\.?|[,;():]/,char:/'.'/,variable:/\b[a-z](?:\w)*\b/i}},7231:function(){(function(e){e.languages.agda={comment:/\{-[\s\S]*?(?:-\}|$)|--.*/,string:{pattern:/"(?:\\(?:\r\n|[\s\S])|[^\\\r\n"])*"/,greedy:!0},punctuation:/[(){}⦃⦄.;@]/,"class-name":{pattern:/((?:data|record) +)\S+/,lookbehind:!0},function:{pattern:/(^[ \t]*)(?!\s)[^:\r\n]+(?=:)/m,lookbehind:!0},operator:{pattern:/(^\s*|\s)(?:[=|:∀→λ\\?_]|->)(?=\s)/,lookbehind:!0},keyword:/\b(?:Set|abstract|constructor|data|eta-equality|field|forall|hiding|import|in|inductive|infix|infixl|infixr|instance|let|macro|module|mutual|no-eta-equality|open|overlap|pattern|postulate|primitive|private|public|quote|quoteContext|quoteGoal|quoteTerm|record|renaming|rewrite|syntax|tactic|unquote|unquoteDecl|unquoteDef|using|variable|where|with)\b/}})(Prism)},2273:function(){Prism.languages.al={comment:/\/\/.*|\/\*[\s\S]*?\*\//,string:{pattern:/'(?:''|[^'\r\n])*'(?!')|"(?:""|[^"\r\n])*"(?!")/,greedy:!0},function:{pattern:/(\b(?:event|procedure|trigger)\s+|(?:^|[^.])\.\s*)[a-z_]\w*(?=\s*\()/i,lookbehind:!0},keyword:[/\b(?:array|asserterror|begin|break|case|do|downto|else|end|event|exit|for|foreach|function|if|implements|in|indataset|interface|internal|local|of|procedure|program|protected|repeat|runonclient|securityfiltering|suppressdispose|temporary|then|to|trigger|until|var|while|with|withevents)\b/i,/\b(?:action|actions|addafter|addbefore|addfirst|addlast|area|assembly|chartpart|codeunit|column|controladdin|cuegroup|customizes|dataitem|dataset|dotnet|elements|enum|enumextension|extends|field|fieldattribute|fieldelement|fieldgroup|fieldgroups|fields|filter|fixed|grid|group|key|keys|label|labels|layout|modify|moveafter|movebefore|movefirst|movelast|page|pagecustomization|pageextension|part|profile|query|repeater|report|requestpage|schema|separator|systempart|table|tableelement|tableextension|textattribute|textelement|type|usercontrol|value|xmlport)\b/i],number:/\b(?:0x[\da-f]+|(?:\d+(?:\.\d*)?|\.\d+)(?:e[+-]?\d+)?)(?:F|LL?|U(?:LL?)?)?\b/i,boolean:/\b(?:false|true)\b/i,variable:/\b(?:Curr(?:FieldNo|Page|Report)|x?Rec|RequestOptionsPage)\b/,"class-name":/\b(?:automation|biginteger|bigtext|blob|boolean|byte|char|clienttype|code|completiontriggererrorlevel|connectiontype|database|dataclassification|datascope|date|dateformula|datetime|decimal|defaultlayout|dialog|dictionary|dotnetassembly|dotnettypedeclaration|duration|errorinfo|errortype|executioncontext|executionmode|fieldclass|fieldref|fieldtype|file|filterpagebuilder|guid|httpclient|httpcontent|httpheaders|httprequestmessage|httpresponsemessage|instream|integer|joker|jsonarray|jsonobject|jsontoken|jsonvalue|keyref|list|moduledependencyinfo|moduleinfo|none|notification|notificationscope|objecttype|option|outstream|pageresult|record|recordid|recordref|reportformat|securityfilter|sessionsettings|tableconnectiontype|tablefilter|testaction|testfield|testfilterfield|testpage|testpermissions|testrequestpage|text|textbuilder|textconst|textencoding|time|transactionmodel|transactiontype|variant|verbosity|version|view|views|webserviceactioncontext|webserviceactionresultcode|xmlattribute|xmlattributecollection|xmlcdata|xmlcomment|xmldeclaration|xmldocument|xmldocumenttype|xmlelement|xmlnamespacemanager|xmlnametable|xmlnode|xmlnodelist|xmlprocessinginstruction|xmlreadoptions|xmltext|xmlwriteoptions)\b/i,operator:/\.\.|:[=:]|[-+*/]=?|<>|[<>]=?|=|\b(?:and|div|mod|not|or|xor)\b/i,punctuation:/[()\[\]{}:.;,]/}},4852:function(){Prism.languages.antlr4={comment:/\/\/.*|\/\*[\s\S]*?(?:\*\/|$)/,string:{pattern:/'(?:\\.|[^\\'\r\n])*'/,greedy:!0},"character-class":{pattern:/\[(?:\\.|[^\\\]\r\n])*\]/,greedy:!0,alias:"regex",inside:{range:{pattern:/([^[]|(?:^|[^\\])(?:\\\\)*\\\[)-(?!\])/,lookbehind:!0,alias:"punctuation"},escape:/\\(?:u(?:[a-fA-F\d]{4}|\{[a-fA-F\d]+\})|[pP]\{[=\w-]+\}|[^\r\nupP])/,punctuation:/[\[\]]/}},action:{pattern:/\{(?:[^{}]|\{(?:[^{}]|\{(?:[^{}]|\{[^{}]*\})*\})*\})*\}/,greedy:!0,inside:{content:{pattern:/(\{)[\s\S]+(?=\})/,lookbehind:!0},punctuation:/[{}]/}},command:{pattern:/(->\s*(?!\s))(?:\s*(?:,\s*)?\b[a-z]\w*(?:\s*\([^()\r\n]*\))?)+(?=\s*;)/i,lookbehind:!0,inside:{function:/\b\w+(?=\s*(?:[,(]|$))/,punctuation:/[,()]/}},annotation:{pattern:/@\w+(?:::\w+)*/,alias:"keyword"},label:{pattern:/#[ \t]*\w+/,alias:"punctuation"},keyword:/\b(?:catch|channels|finally|fragment|grammar|import|lexer|locals|mode|options|parser|returns|throws|tokens)\b/,definition:[{pattern:/\b[a-z]\w*(?=\s*:)/,alias:["rule","class-name"]},{pattern:/\b[A-Z]\w*(?=\s*:)/,alias:["token","constant"]}],constant:/\b[A-Z][A-Z_]*\b/,operator:/\.\.|->|[|~]|[*+?]\??/,punctuation:/[;:()=]/},Prism.languages.g4=Prism.languages.antlr4},7533:function(){Prism.languages.apacheconf={comment:/#.*/,"directive-inline":{pattern:/(^[\t ]*)\b(?:AcceptFilter|AcceptPathInfo|AccessFileName|Action|Add(?:Alt|AltByEncoding|AltByType|Charset|DefaultCharset|Description|Encoding|Handler|Icon|IconByEncoding|IconByType|InputFilter|Language|ModuleInfo|OutputFilter|OutputFilterByType|Type)|Alias|AliasMatch|Allow(?:CONNECT|EncodedSlashes|Methods|Override|OverrideList)?|Anonymous(?:_LogEmail|_MustGiveEmail|_NoUserID|_VerifyEmail)?|AsyncRequestWorkerFactor|Auth(?:BasicAuthoritative|BasicFake|BasicProvider|BasicUseDigestAlgorithm|DBDUserPWQuery|DBDUserRealmQuery|DBMGroupFile|DBMType|DBMUserFile|Digest(?:Algorithm|Domain|NonceLifetime|Provider|Qop|ShmemSize)|Form(?:Authoritative|Body|DisableNoStore|FakeBasicAuth|Location|LoginRequiredLocation|LoginSuccessLocation|LogoutLocation|Method|Mimetype|Password|Provider|SitePassphrase|Size|Username)|GroupFile|LDAP(?:AuthorizePrefix|BindAuthoritative|BindDN|BindPassword|CharsetConfig|CompareAsUser|CompareDNOnServer|DereferenceAliases|GroupAttribute|GroupAttributeIsDN|InitialBindAsUser|InitialBindPattern|MaxSubGroupDepth|RemoteUserAttribute|RemoteUserIsDN|SearchAsUser|SubGroupAttribute|SubGroupClass|Url)|Merging|Name|nCache(?:Context|Enable|ProvideFor|SOCache|Timeout)|nzFcgiCheckAuthnProvider|nzFcgiDefineProvider|Type|UserFile|zDBDLoginToReferer|zDBDQuery|zDBDRedirectQuery|zDBMType|zSendForbiddenOnFailure)|BalancerGrowth|BalancerInherit|BalancerMember|BalancerPersist|BrowserMatch|BrowserMatchNoCase|BufferedLogs|BufferSize|Cache(?:DefaultExpire|DetailHeader|DirLength|DirLevels|Disable|Enable|File|Header|IgnoreCacheControl|IgnoreHeaders|IgnoreNoLastMod|IgnoreQueryString|IgnoreURLSessionIdentifiers|KeyBaseURL|LastModifiedFactor|Lock|LockMaxAge|LockPath|MaxExpire|MaxFileSize|MinExpire|MinFileSize|NegotiatedDocs|QuickHandler|ReadSize|ReadTime|Root|Socache(?:MaxSize|MaxTime|MinTime|ReadSize|ReadTime)?|StaleOnError|StoreExpired|StoreNoStore|StorePrivate)|CGIDScriptTimeout|CGIMapExtension|CharsetDefault|CharsetOptions|CharsetSourceEnc|CheckCaseOnly|CheckSpelling|ChrootDir|ContentDigest|CookieDomain|CookieExpires|CookieName|CookieStyle|CookieTracking|CoreDumpDirectory|CustomLog|Dav|DavDepthInfinity|DavGenericLockDB|DavLockDB|DavMinTimeout|DBDExptime|DBDInitSQL|DBDKeep|DBDMax|DBDMin|DBDParams|DBDPersist|DBDPrepareSQL|DBDriver|DefaultIcon|DefaultLanguage|DefaultRuntimeDir|DefaultType|Define|Deflate(?:BufferSize|CompressionLevel|FilterNote|InflateLimitRequestBody|InflateRatio(?:Burst|Limit)|MemLevel|WindowSize)|Deny|DirectoryCheckHandler|DirectoryIndex|DirectoryIndexRedirect|DirectorySlash|DocumentRoot|DTracePrivileges|DumpIOInput|DumpIOOutput|EnableExceptionHook|EnableMMAP|EnableSendfile|Error|ErrorDocument|ErrorLog|ErrorLogFormat|Example|ExpiresActive|ExpiresByType|ExpiresDefault|ExtendedStatus|ExtFilterDefine|ExtFilterOptions|FallbackResource|FileETag|FilterChain|FilterDeclare|FilterProtocol|FilterProvider|FilterTrace|ForceLanguagePriority|ForceType|ForensicLog|GprofDir|GracefulShutdownTimeout|Group|Header|HeaderName|Heartbeat(?:Address|Listen|MaxServers|Storage)|HostnameLookups|IdentityCheck|IdentityCheckTimeout|ImapBase|ImapDefault|ImapMenu|Include|IncludeOptional|Index(?:HeadInsert|Ignore|IgnoreReset|Options|OrderDefault|StyleSheet)|InputSed|ISAPI(?:AppendLogToErrors|AppendLogToQuery|CacheFile|FakeAsync|LogNotSupported|ReadAheadBuffer)|KeepAlive|KeepAliveTimeout|KeptBodySize|LanguagePriority|LDAP(?:CacheEntries|CacheTTL|ConnectionPoolTTL|ConnectionTimeout|LibraryDebug|OpCacheEntries|OpCacheTTL|ReferralHopLimit|Referrals|Retries|RetryDelay|SharedCacheFile|SharedCacheSize|Timeout|TrustedClientCert|TrustedGlobalCert|TrustedMode|VerifyServerCert)|Limit(?:InternalRecursion|Request(?:Body|Fields|FieldSize|Line)|XMLRequestBody)|Listen|ListenBackLog|LoadFile|LoadModule|LogFormat|LogLevel|LogMessage|LuaAuthzProvider|LuaCodeCache|Lua(?:Hook(?:AccessChecker|AuthChecker|CheckUserID|Fixups|InsertFilter|Log|MapToStorage|TranslateName|TypeChecker)|Inherit|InputFilter|MapHandler|OutputFilter|PackageCPath|PackagePath|QuickHandler|Root|Scope)|Max(?:ConnectionsPerChild|KeepAliveRequests|MemFree|RangeOverlaps|RangeReversals|Ranges|RequestWorkers|SpareServers|SpareThreads|Threads)|MergeTrailers|MetaDir|MetaFiles|MetaSuffix|MimeMagicFile|MinSpareServers|MinSpareThreads|MMapFile|ModemStandard|ModMimeUsePathInfo|MultiviewsMatch|Mutex|NameVirtualHost|NoProxy|NWSSLTrustedCerts|NWSSLUpgradeable|Options|Order|OutputSed|PassEnv|PidFile|PrivilegesMode|Protocol|ProtocolEcho|Proxy(?:AddHeaders|BadHeader|Block|Domain|ErrorOverride|ExpressDBMFile|ExpressDBMType|ExpressEnable|FtpDirCharset|FtpEscapeWildcards|FtpListOnWildcard|HTML(?:BufSize|CharsetOut|DocType|Enable|Events|Extended|Fixups|Interp|Links|Meta|StripComments|URLMap)|IOBufferSize|MaxForwards|Pass(?:Inherit|InterpolateEnv|Match|Reverse|ReverseCookieDomain|ReverseCookiePath)?|PreserveHost|ReceiveBufferSize|Remote|RemoteMatch|Requests|SCGIInternalRedirect|SCGISendfile|Set|SourceAddress|Status|Timeout|Via)|ReadmeName|ReceiveBufferSize|Redirect|RedirectMatch|RedirectPermanent|RedirectTemp|ReflectorHeader|RemoteIP(?:Header|InternalProxy|InternalProxyList|ProxiesHeader|TrustedProxy|TrustedProxyList)|RemoveCharset|RemoveEncoding|RemoveHandler|RemoveInputFilter|RemoveLanguage|RemoveOutputFilter|RemoveType|RequestHeader|RequestReadTimeout|Require|Rewrite(?:Base|Cond|Engine|Map|Options|Rule)|RLimitCPU|RLimitMEM|RLimitNPROC|Satisfy|ScoreBoardFile|Script(?:Alias|AliasMatch|InterpreterSource|Log|LogBuffer|LogLength|Sock)?|SecureListen|SeeRequestTail|SendBufferSize|Server(?:Admin|Alias|Limit|Name|Path|Root|Signature|Tokens)|Session(?:Cookie(?:Name|Name2|Remove)|Crypto(?:Cipher|Driver|Passphrase|PassphraseFile)|DBD(?:CookieName|CookieName2|CookieRemove|DeleteLabel|InsertLabel|PerUser|SelectLabel|UpdateLabel)|Env|Exclude|Header|Include|MaxAge)?|SetEnv|SetEnvIf|SetEnvIfExpr|SetEnvIfNoCase|SetHandler|SetInputFilter|SetOutputFilter|SSIEndTag|SSIErrorMsg|SSIETag|SSILastModified|SSILegacyExprParser|SSIStartTag|SSITimeFormat|SSIUndefinedEcho|SSL(?:CACertificateFile|CACertificatePath|CADNRequestFile|CADNRequestPath|CARevocationCheck|CARevocationFile|CARevocationPath|CertificateChainFile|CertificateFile|CertificateKeyFile|CipherSuite|Compression|CryptoDevice|Engine|FIPS|HonorCipherOrder|InsecureRenegotiation|OCSP(?:DefaultResponder|Enable|OverrideResponder|ResponderTimeout|ResponseMaxAge|ResponseTimeSkew|UseRequestNonce)|OpenSSLConfCmd|Options|PassPhraseDialog|Protocol|Proxy(?:CACertificateFile|CACertificatePath|CARevocation(?:Check|File|Path)|CheckPeer(?:CN|Expire|Name)|CipherSuite|Engine|MachineCertificate(?:ChainFile|File|Path)|Protocol|Verify|VerifyDepth)|RandomSeed|RenegBufferSize|Require|RequireSSL|Session(?:Cache|CacheTimeout|TicketKeyFile|Tickets)|SRPUnknownUserSeed|SRPVerifierFile|Stapling(?:Cache|ErrorCacheTimeout|FakeTryLater|ForceURL|ResponderTimeout|ResponseMaxAge|ResponseTimeSkew|ReturnResponderErrors|StandardCacheTimeout)|StrictSNIVHostCheck|UserName|UseStapling|VerifyClient|VerifyDepth)|StartServers|StartThreads|Substitute|Suexec|SuexecUserGroup|ThreadLimit|ThreadsPerChild|ThreadStackSize|TimeOut|TraceEnable|TransferLog|TypesConfig|UnDefine|UndefMacro|UnsetEnv|Use|UseCanonicalName|UseCanonicalPhysicalPort|User|UserDir|VHostCGIMode|VHostCGIPrivs|VHostGroup|VHostPrivs|VHostSecure|VHostUser|Virtual(?:DocumentRoot|ScriptAlias)(?:IP)?|WatchdogInterval|XBitHack|xml2EncAlias|xml2EncDefault|xml2StartParse)\b/im,lookbehind:!0,alias:"property"},"directive-block":{pattern:/<\/?\b(?:Auth[nz]ProviderAlias|Directory|DirectoryMatch|Else|ElseIf|Files|FilesMatch|If|IfDefine|IfModule|IfVersion|Limit|LimitExcept|Location|LocationMatch|Macro|Proxy|Require(?:All|Any|None)|VirtualHost)\b.*>/i,inside:{"directive-block":{pattern:/^<\/?\w+/,inside:{punctuation:/^<\/?/},alias:"tag"},"directive-block-parameter":{pattern:/.*[^>]/,inside:{punctuation:/:/,string:{pattern:/("|').*\1/,inside:{variable:/[$%]\{?(?:\w\.?[-+:]?)+\}?/}}},alias:"attr-value"},punctuation:/>/},alias:"tag"},"directive-flags":{pattern:/\[(?:[\w=],?)+\]/,alias:"keyword"},string:{pattern:/("|').*\1/,inside:{variable:/[$%]\{?(?:\w\.?[-+:]?)+\}?/}},variable:/[$%]\{?(?:\w\.?[-+:]?)+\}?/,regex:/\^?.*\$|\^.*\$?/}},2594:function(){(function(e){var t=/\b(?:(?:after|before)(?=\s+[a-z])|abstract|activate|and|any|array|as|asc|autonomous|begin|bigdecimal|blob|boolean|break|bulk|by|byte|case|cast|catch|char|class|collect|commit|const|continue|currency|date|datetime|decimal|default|delete|desc|do|double|else|end|enum|exception|exit|export|extends|final|finally|float|for|from|get(?=\s*[{};])|global|goto|group|having|hint|if|implements|import|in|inner|insert|instanceof|int|integer|interface|into|join|like|limit|list|long|loop|map|merge|new|not|null|nulls|number|object|of|on|or|outer|override|package|parallel|pragma|private|protected|public|retrieve|return|rollback|select|set|short|sObject|sort|static|string|super|switch|synchronized|system|testmethod|then|this|throw|time|transaction|transient|trigger|try|undelete|update|upsert|using|virtual|void|webservice|when|where|while|(?:inherited|with|without)\s+sharing)\b/i,n=/\b(?:(?=[a-z_]\w*\s*[<\[])|(?!))[A-Z_]\w*(?:\s*\.\s*[A-Z_]\w*)*\b(?:\s*(?:\[\s*\]|<(?:[^<>]|<(?:[^<>]|<[^<>]*>)*>)*>))*/.source.replace(//g,(function(){return t.source}));function r(e){return RegExp(e.replace(//g,(function(){return n})),"i")}var i={keyword:t,punctuation:/[()\[\]{};,:.<>]/};e.languages.apex={comment:e.languages.clike.comment,string:e.languages.clike.string,sql:{pattern:/((?:[=,({:]|\breturn)\s*)\[[^\[\]]*\]/i,lookbehind:!0,greedy:!0,alias:"language-sql",inside:e.languages.sql},annotation:{pattern:/@\w+\b/,alias:"punctuation"},"class-name":[{pattern:r(/(\b(?:class|enum|extends|implements|instanceof|interface|new|trigger\s+\w+\s+on)\s+)/.source),lookbehind:!0,inside:i},{pattern:r(/(\(\s*)(?=\s*\)\s*[\w(])/.source),lookbehind:!0,inside:i},{pattern:r(/(?=\s*\w+\s*[;=,(){:])/.source),inside:i}],trigger:{pattern:/(\btrigger\s+)\w+\b/i,lookbehind:!0,alias:"class-name"},keyword:t,function:/\b[a-z_]\w*(?=\s*\()/i,boolean:/\b(?:false|true)\b/i,number:/(?:\B\.\d+|\b\d+(?:\.\d+|L)?)\b/i,operator:/[!=](?:==?)?|\?\.?|&&|\|\||--|\+\+|[-+*/^&|]=?|:|<{1,3}=?/,punctuation:/[()\[\]{};,.]/}})(Prism)},8508:function(){Prism.languages.apl={comment:/(?:⍝|#[! ]).*$/m,string:{pattern:/'(?:[^'\r\n]|'')*'/,greedy:!0},number:/¯?(?:\d*\.?\b\d+(?:e[+¯]?\d+)?|¯|∞)(?:j¯?(?:(?:\d+(?:\.\d+)?|\.\d+)(?:e[+¯]?\d+)?|¯|∞))?/i,statement:/:[A-Z][a-z][A-Za-z]*\b/,"system-function":{pattern:/⎕[A-Z]+/i,alias:"function"},constant:/[⍬⌾#⎕⍞]/,function:/[-+×÷⌈⌊∣|⍳⍸?*⍟○!⌹<≤=>≥≠≡≢∊⍷∪∩~∨∧⍱⍲⍴,⍪⌽⊖⍉↑↓⊂⊃⊆⊇⌷⍋⍒⊤⊥⍕⍎⊣⊢⍁⍂≈⍯↗¤→]/,"monadic-operator":{pattern:/[\\\/⌿⍀¨⍨⌶&∥]/,alias:"operator"},"dyadic-operator":{pattern:/[.⍣⍠⍤∘⌸@⌺⍥]/,alias:"operator"},assignment:{pattern:/←/,alias:"keyword"},punctuation:/[\[;\]()◇⋄]/,dfn:{pattern:/[{}⍺⍵⍶⍹∇⍫:]/,alias:"builtin"}}},1093:function(){Prism.languages.applescript={comment:[/\(\*(?:\(\*(?:[^*]|\*(?!\)))*\*\)|(?!\(\*)[\s\S])*?\*\)/,/--.+/,/#.+/],string:/"(?:\\.|[^"\\\r\n])*"/,number:/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e-?\d+)?\b/i,operator:[/[&=≠≤≥*+\-\/÷^]|[<>]=?/,/\b(?:(?:begin|end|start)s? with|(?:contains?|(?:does not|doesn't) contain)|(?:is|isn't|is not) (?:contained by|in)|(?:(?:is|isn't|is not) )?(?:greater|less) than(?: or equal)?(?: to)?|(?:comes|(?:does not|doesn't) come) (?:after|before)|(?:is|isn't|is not) equal(?: to)?|(?:(?:does not|doesn't) equal|equal to|equals|is not|isn't)|(?:a )?(?:ref(?: to)?|reference to)|(?:and|as|div|mod|not|or))\b/],keyword:/\b(?:about|above|after|against|apart from|around|aside from|at|back|before|beginning|behind|below|beneath|beside|between|but|by|considering|continue|copy|does|eighth|else|end|equal|error|every|exit|false|fifth|first|for|fourth|from|front|get|given|global|if|ignoring|in|instead of|into|is|it|its|last|local|me|middle|my|ninth|of|on|onto|out of|over|prop|property|put|repeat|return|returning|second|set|seventh|since|sixth|some|tell|tenth|that|the|then|third|through|thru|timeout|times|to|transaction|true|try|until|where|while|whose|with|without)\b/,"class-name":/\b(?:POSIX file|RGB color|alias|application|boolean|centimeters|centimetres|class|constant|cubic centimeters|cubic centimetres|cubic feet|cubic inches|cubic meters|cubic metres|cubic yards|date|degrees Celsius|degrees Fahrenheit|degrees Kelvin|feet|file|gallons|grams|inches|integer|kilograms|kilometers|kilometres|list|liters|litres|meters|metres|miles|number|ounces|pounds|quarts|real|record|reference|script|square feet|square kilometers|square kilometres|square meters|square metres|square miles|square yards|text|yards)\b/,punctuation:/[{}():,¬«»《》]/}},5691:function(){Prism.languages.aql={comment:/\/\/.*|\/\*[\s\S]*?\*\//,property:{pattern:/([{,]\s*)(?:(?!\d)\w+|(["'´`])(?:(?!\2)[^\\\r\n]|\\.)*\2)(?=\s*:)/,lookbehind:!0,greedy:!0},string:{pattern:/(["'])(?:(?!\1)[^\\\r\n]|\\.)*\1/,greedy:!0},identifier:{pattern:/([´`])(?:(?!\1)[^\\\r\n]|\\.)*\1/,greedy:!0},variable:/@@?\w+/,keyword:[{pattern:/(\bWITH\s+)COUNT(?=\s+INTO\b)/i,lookbehind:!0},/\b(?:AGGREGATE|ALL|AND|ANY|ASC|COLLECT|DESC|DISTINCT|FILTER|FOR|GRAPH|IN|INBOUND|INSERT|INTO|K_PATHS|K_SHORTEST_PATHS|LET|LIKE|LIMIT|NONE|NOT|NULL|OR|OUTBOUND|REMOVE|REPLACE|RETURN|SHORTEST_PATH|SORT|UPDATE|UPSERT|WINDOW|WITH)\b/i,{pattern:/(^|[^\w.[])(?:KEEP|PRUNE|SEARCH|TO)\b/i,lookbehind:!0},{pattern:/(^|[^\w.[])(?:CURRENT|NEW|OLD)\b/,lookbehind:!0},{pattern:/\bOPTIONS(?=\s*\{)/i}],function:/\b(?!\d)\w+(?=\s*\()/,boolean:/\b(?:false|true)\b/i,range:{pattern:/\.\./,alias:"operator"},number:[/\b0b[01]+/i,/\b0x[0-9a-f]+/i,/(?:\B\.\d+|\b(?:0|[1-9]\d*)(?:\.\d+)?)(?:e[+-]?\d+)?/i],operator:/\*{2,}|[=!]~|[!=<>]=?|&&|\|\||[-+*/%]/,punctuation:/::|[?.:,;()[\]{}]/}},1849:function(){Prism.languages.arduino=Prism.languages.extend("cpp",{keyword:/\b(?:String|array|bool|boolean|break|byte|case|catch|continue|default|do|double|else|finally|for|function|goto|if|in|instanceof|int|integer|long|loop|new|null|return|setup|string|switch|throw|try|void|while|word)\b/,constant:/\b(?:ANALOG_MESSAGE|DEFAULT|DIGITAL_MESSAGE|EXTERNAL|FIRMATA_STRING|HIGH|INPUT|INPUT_PULLUP|INTERNAL|INTERNAL1V1|INTERNAL2V56|LED_BUILTIN|LOW|OUTPUT|REPORT_ANALOG|REPORT_DIGITAL|SET_PIN_MODE|SYSEX_START|SYSTEM_RESET)\b/,builtin:/\b(?:Audio|BSSID|Bridge|Client|Console|EEPROM|Esplora|EsploraTFT|Ethernet|EthernetClient|EthernetServer|EthernetUDP|File|FileIO|FileSystem|Firmata|GPRS|GSM|GSMBand|GSMClient|GSMModem|GSMPIN|GSMScanner|GSMServer|GSMVoiceCall|GSM_SMS|HttpClient|IPAddress|IRread|Keyboard|KeyboardController|LiquidCrystal|LiquidCrystal_I2C|Mailbox|Mouse|MouseController|PImage|Process|RSSI|RobotControl|RobotMotor|SD|SPI|SSID|Scheduler|Serial|Server|Servo|SoftwareSerial|Stepper|Stream|TFT|Task|USBHost|WiFi|WiFiClient|WiFiServer|WiFiUDP|Wire|YunClient|YunServer|abs|addParameter|analogRead|analogReadResolution|analogReference|analogWrite|analogWriteResolution|answerCall|attach|attachGPRS|attachInterrupt|attached|autoscroll|available|background|beep|begin|beginPacket|beginSD|beginSMS|beginSpeaker|beginTFT|beginTransmission|beginWrite|bit|bitClear|bitRead|bitSet|bitWrite|blink|blinkVersion|buffer|changePIN|checkPIN|checkPUK|checkReg|circle|cityNameRead|cityNameWrite|clear|clearScreen|click|close|compassRead|config|connect|connected|constrain|cos|countryNameRead|countryNameWrite|createChar|cursor|debugPrint|delay|delayMicroseconds|detach|detachInterrupt|digitalRead|digitalWrite|disconnect|display|displayLogos|drawBMP|drawCompass|encryptionType|end|endPacket|endSMS|endTransmission|endWrite|exists|exitValue|fill|find|findUntil|flush|gatewayIP|get|getAsynchronously|getBand|getButton|getCurrentCarrier|getIMEI|getKey|getModifiers|getOemKey|getPINUsed|getResult|getSignalStrength|getSocket|getVoiceCallStatus|getXChange|getYChange|hangCall|height|highByte|home|image|interrupts|isActionDone|isDirectory|isListening|isPIN|isPressed|isValid|keyPressed|keyReleased|keyboardRead|knobRead|leftToRight|line|lineFollowConfig|listen|listenOnLocalhost|loadImage|localIP|lowByte|macAddress|maintain|map|max|messageAvailable|micros|millis|min|mkdir|motorsStop|motorsWrite|mouseDragged|mouseMoved|mousePressed|mouseReleased|move|noAutoscroll|noBlink|noBuffer|noCursor|noDisplay|noFill|noInterrupts|noListenOnLocalhost|noStroke|noTone|onReceive|onRequest|open|openNextFile|overflow|parseCommand|parseFloat|parseInt|parsePacket|pauseMode|peek|pinMode|playFile|playMelody|point|pointTo|position|pow|prepare|press|print|printFirmwareVersion|printVersion|println|process|processInput|pulseIn|put|random|randomSeed|read|readAccelerometer|readBlue|readButton|readBytes|readBytesUntil|readGreen|readJoystickButton|readJoystickSwitch|readJoystickX|readJoystickY|readLightSensor|readMessage|readMicrophone|readNetworks|readRed|readSlider|readString|readStringUntil|readTemperature|ready|rect|release|releaseAll|remoteIP|remoteNumber|remotePort|remove|requestFrom|retrieveCallingNumber|rewindDirectory|rightToLeft|rmdir|robotNameRead|robotNameWrite|run|runAsynchronously|runShellCommand|runShellCommandAsynchronously|running|scanNetworks|scrollDisplayLeft|scrollDisplayRight|seek|sendAnalog|sendDigitalPortPair|sendDigitalPorts|sendString|sendSysex|serialEvent|setBand|setBitOrder|setClockDivider|setCursor|setDNS|setDataMode|setFirmwareVersion|setMode|setPINUsed|setSpeed|setTextSize|setTimeout|shiftIn|shiftOut|shutdown|sin|size|sqrt|startLoop|step|stop|stroke|subnetMask|switchPIN|tan|tempoWrite|text|tone|transfer|tuneWrite|turn|updateIR|userNameRead|userNameWrite|voiceCall|waitContinue|width|write|writeBlue|writeGreen|writeJSON|writeMessage|writeMicroseconds|writeRGB|writeRed|yield)\b/}),Prism.languages.ino=Prism.languages.arduino},3253:function(){Prism.languages.arff={comment:/%.*/,string:{pattern:/(["'])(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0},keyword:/@(?:attribute|data|end|relation)\b/i,number:/\b\d+(?:\.\d+)?\b/,punctuation:/[{},]/}},4029:function(){Prism.languages.armasm={comment:{pattern:/;.*/,greedy:!0},string:{pattern:/"(?:[^"\r\n]|"")*"/,greedy:!0,inside:{variable:{pattern:/((?:^|[^$])(?:\${2})*)\$\w+/,lookbehind:!0}}},char:{pattern:/'(?:[^'\r\n]{0,4}|'')'/,greedy:!0},"version-symbol":{pattern:/\|[\w@]+\|/,greedy:!0,alias:"property"},boolean:/\b(?:FALSE|TRUE)\b/,directive:{pattern:/\b(?:ALIAS|ALIGN|AREA|ARM|ASSERT|ATTR|CN|CODE|CODE16|CODE32|COMMON|CP|DATA|DCB|DCD|DCDO|DCDU|DCFD|DCFDU|DCI|DCQ|DCQU|DCW|DCWU|DN|ELIF|ELSE|END|ENDFUNC|ENDIF|ENDP|ENTRY|EQU|EXPORT|EXPORTAS|EXTERN|FIELD|FILL|FN|FUNCTION|GBLA|GBLL|GBLS|GET|GLOBAL|IF|IMPORT|INCBIN|INCLUDE|INFO|KEEP|LCLA|LCLL|LCLS|LTORG|MACRO|MAP|MEND|MEXIT|NOFP|OPT|PRESERVE8|PROC|QN|READONLY|RELOC|REQUIRE|REQUIRE8|RLIST|ROUT|SETA|SETL|SETS|SN|SPACE|SUBT|THUMB|THUMBX|TTL|WEND|WHILE)\b/,alias:"property"},instruction:{pattern:/((?:^|(?:^|[^\\])(?:\r\n?|\n))[ \t]*(?:(?:[A-Z][A-Z0-9_]*[a-z]\w*|[a-z]\w*|\d+)[ \t]+)?)\b[A-Z.]+\b/,lookbehind:!0,alias:"keyword"},variable:/\$\w+/,number:/(?:\b[2-9]_\d+|(?:\b\d+(?:\.\d+)?|\B\.\d+)(?:e-?\d+)?|\b0(?:[fd]_|x)[0-9a-f]+|&[0-9a-f]+)\b/i,register:{pattern:/\b(?:r\d|lr)\b/,alias:"symbol"},operator:/<>|<<|>>|&&|\|\||[=!<>/]=?|[+\-*%#?&|^]|:[A-Z]+:/,punctuation:/[()[\],]/},Prism.languages["arm-asm"]=Prism.languages.armasm},2481:function(){(function(e){var t=function(t,n){return{pattern:RegExp(/\{!/.source+"(?:"+(n||t)+")"+/$[\s\S]*\}/.source,"m"),greedy:!0,inside:{embedded:{pattern:/(^\{!\w+\b)[\s\S]+(?=\}$)/,lookbehind:!0,alias:"language-"+t,inside:e.languages[t]},string:/[\s\S]+/}}};e.languages.arturo={comment:{pattern:/;.*/,greedy:!0},character:{pattern:/`.`/,alias:"char",greedy:!0},number:{pattern:/\b\d+(?:\.\d+(?:\.\d+(?:-[\w+-]+)?)?)?\b/},string:{pattern:/"(?:[^"\\\r\n]|\\.)*"/,greedy:!0},regex:{pattern:/\{\/.*?\/\}/,greedy:!0},"html-string":t("html"),"css-string":t("css"),"js-string":t("js"),"md-string":t("md"),"sql-string":t("sql"),"sh-string":t("shell","sh"),multistring:{pattern:/».*|\{:[\s\S]*?:\}|\{[\s\S]*?\}|^-{6}$[\s\S]*/m,alias:"string",greedy:!0},label:{pattern:/\w+\b\??:/,alias:"property"},literal:{pattern:/'(?:\w+\b\??:?)/,alias:"constant"},type:{pattern:/:(?:\w+\b\??:?)/,alias:"class-name"},color:/#\w+/,predicate:{pattern:/\b(?:all|and|any|ascii|attr|attribute|attributeLabel|binary|block|char|contains|database|date|dictionary|empty|equal|even|every|exists|false|floating|function|greater|greaterOrEqual|if|in|inline|integer|is|key|label|leap|less|lessOrEqual|literal|logical|lower|nand|negative|nor|not|notEqual|null|numeric|odd|or|path|pathLabel|positive|prefix|prime|regex|same|set|some|sorted|standalone|string|subset|suffix|superset|symbol|symbolLiteral|true|try|type|unless|upper|when|whitespace|word|xnor|xor|zero)\?/,alias:"keyword"},"builtin-function":{pattern:/\b(?:abs|acos|acosh|acsec|acsech|actan|actanh|add|after|alert|alias|and|angle|append|arg|args|arity|array|as|asec|asech|asin|asinh|atan|atan2|atanh|attr|attrs|average|before|benchmark|blend|break|call|capitalize|case|ceil|chop|clear|clip|close|color|combine|conj|continue|copy|cos|cosh|crc|csec|csech|ctan|ctanh|cursor|darken|dec|decode|define|delete|desaturate|deviation|dialog|dictionary|difference|digest|digits|div|do|download|drop|dup|e|else|empty|encode|ensure|env|escape|execute|exit|exp|extend|extract|factors|fdiv|filter|first|flatten|floor|fold|from|function|gamma|gcd|get|goto|hash|hypot|if|inc|indent|index|infinity|info|input|insert|inspect|intersection|invert|jaro|join|keys|kurtosis|last|let|levenshtein|lighten|list|ln|log|loop|lower|mail|map|match|max|median|min|mod|module|mul|nand|neg|new|nor|normalize|not|now|null|open|or|outdent|pad|palette|panic|path|pause|permissions|permutate|pi|pop|popup|pow|powerset|powmod|prefix|print|prints|process|product|query|random|range|read|relative|remove|rename|render|repeat|replace|request|return|reverse|round|sample|saturate|script|sec|sech|select|serve|set|shl|shr|shuffle|sin|sinh|size|skewness|slice|sort|spin|split|sqrt|squeeze|stack|strip|sub|suffix|sum|switch|symbols|symlink|sys|take|tan|tanh|terminal|terminate|to|truncate|try|type|unclip|union|unique|unless|until|unzip|upper|values|var|variance|volume|webview|while|with|wordwrap|write|xnor|xor|zip)\b/,alias:"keyword"},sugar:{pattern:/->|=>|\||::/,alias:"operator"},punctuation:/[()[\],]/,symbol:{pattern:/<:|-:|ø|@|#|\+|\||\*|\$|---|-|%|\/|\.\.|\^|~|=|<|>|\\/},boolean:{pattern:/\b(?:false|maybe|true)\b/}},e.languages.art=e.languages["arturo"]})(Prism)},856:function(){(function(e){var t={pattern:/(^[ \t]*)\[(?!\[)(?:(["'$`])(?:(?!\2)[^\\]|\\.)*\2|\[(?:[^\[\]\\]|\\.)*\]|[^\[\]\\"'$`]|\\.)*\]/m,lookbehind:!0,inside:{quoted:{pattern:/([$`])(?:(?!\1)[^\\]|\\.)*\1/,inside:{punctuation:/^[$`]|[$`]$/}},interpreted:{pattern:/'(?:[^'\\]|\\.)*'/,inside:{punctuation:/^'|'$/}},string:/"(?:[^"\\]|\\.)*"/,variable:/\w+(?==)/,punctuation:/^\[|\]$|,/,operator:/=/,"attr-value":/(?!^\s+$).+/}},n=e.languages.asciidoc={"comment-block":{pattern:/^(\/{4,})$[\s\S]*?^\1/m,alias:"comment"},table:{pattern:/^\|={3,}(?:(?:\r?\n|\r(?!\n)).*)*?(?:\r?\n|\r)\|={3,}$/m,inside:{specifiers:{pattern:/(?:(?:(?:\d+(?:\.\d+)?|\.\d+)[+*](?:[<^>](?:\.[<^>])?|\.[<^>])?|[<^>](?:\.[<^>])?|\.[<^>])[a-z]*|[a-z]+)(?=\|)/,alias:"attr-value"},punctuation:{pattern:/(^|[^\\])[|!]=*/,lookbehind:!0}}},"passthrough-block":{pattern:/^(\+{4,})$[\s\S]*?^\1$/m,inside:{punctuation:/^\++|\++$/}},"literal-block":{pattern:/^(-{4,}|\.{4,})$[\s\S]*?^\1$/m,inside:{punctuation:/^(?:-+|\.+)|(?:-+|\.+)$/}},"other-block":{pattern:/^(--|\*{4,}|_{4,}|={4,})$[\s\S]*?^\1$/m,inside:{punctuation:/^(?:-+|\*+|_+|=+)|(?:-+|\*+|_+|=+)$/}},"list-punctuation":{pattern:/(^[ \t]*)(?:-|\*{1,5}|\.{1,5}|(?:[a-z]|\d+)\.|[xvi]+\))(?= )/im,lookbehind:!0,alias:"punctuation"},"list-label":{pattern:/(^[ \t]*)[a-z\d].+(?::{2,4}|;;)(?=\s)/im,lookbehind:!0,alias:"symbol"},"indented-block":{pattern:/((\r?\n|\r)\2)([ \t]+)\S.*(?:(?:\r?\n|\r)\3.+)*(?=\2{2}|$)/,lookbehind:!0},comment:/^\/\/.*/m,title:{pattern:/^.+(?:\r?\n|\r)(?:={3,}|-{3,}|~{3,}|\^{3,}|\+{3,})$|^={1,5} .+|^\.(?![\s.]).*/m,alias:"important",inside:{punctuation:/^(?:\.|=+)|(?:=+|-+|~+|\^+|\++)$/}},"attribute-entry":{pattern:/^:[^:\r\n]+:(?: .*?(?: \+(?:\r?\n|\r).*?)*)?$/m,alias:"tag"},attributes:t,hr:{pattern:/^'{3,}$/m,alias:"punctuation"},"page-break":{pattern:/^<{3,}$/m,alias:"punctuation"},admonition:{pattern:/^(?:CAUTION|IMPORTANT|NOTE|TIP|WARNING):/m,alias:"keyword"},callout:[{pattern:/(^[ \t]*)/m,lookbehind:!0,alias:"symbol"},{pattern:/<\d+>/,alias:"symbol"}],macro:{pattern:/\b[a-z\d][a-z\d-]*::?(?:[^\s\[\]]*\[(?:[^\]\\"']|(["'])(?:(?!\1)[^\\]|\\.)*\1|\\.)*\])/,inside:{function:/^[a-z\d-]+(?=:)/,punctuation:/^::?/,attributes:{pattern:/(?:\[(?:[^\]\\"']|(["'])(?:(?!\1)[^\\]|\\.)*\1|\\.)*\])/,inside:t.inside}}},inline:{pattern:/(^|[^\\])(?:(?:\B\[(?:[^\]\\"']|(["'])(?:(?!\2)[^\\]|\\.)*\2|\\.)*\])?(?:\b_(?!\s)(?: _|[^_\\\r\n]|\\.)+(?:(?:\r?\n|\r)(?: _|[^_\\\r\n]|\\.)+)*_\b|\B``(?!\s).+?(?:(?:\r?\n|\r).+?)*''\B|\B`(?!\s)(?:[^`'\s]|\s+\S)+['`]\B|\B(['*+#])(?!\s)(?: \3|(?!\3)[^\\\r\n]|\\.)+(?:(?:\r?\n|\r)(?: \3|(?!\3)[^\\\r\n]|\\.)+)*\3\B)|(?:\[(?:[^\]\\"']|(["'])(?:(?!\4)[^\\]|\\.)*\4|\\.)*\])?(?:(__|\*\*|\+\+\+?|##|\$\$|[~^]).+?(?:(?:\r?\n|\r).+?)*\5|\{[^}\r\n]+\}|\[\[\[?.+?(?:(?:\r?\n|\r).+?)*\]?\]\]|<<.+?(?:(?:\r?\n|\r).+?)*>>|\(\(\(?.+?(?:(?:\r?\n|\r).+?)*\)?\)\)))/m,lookbehind:!0,inside:{attributes:t,url:{pattern:/^(?:\[\[\[?.+?\]?\]\]|<<.+?>>)$/,inside:{punctuation:/^(?:\[\[\[?|<<)|(?:\]\]\]?|>>)$/}},"attribute-ref":{pattern:/^\{.+\}$/,inside:{variable:{pattern:/(^\{)[a-z\d,+_-]+/,lookbehind:!0},operator:/^[=?!#%@$]|!(?=[:}])/,punctuation:/^\{|\}$|::?/}},italic:{pattern:/^(['_])[\s\S]+\1$/,inside:{punctuation:/^(?:''?|__?)|(?:''?|__?)$/}},bold:{pattern:/^\*[\s\S]+\*$/,inside:{punctuation:/^\*\*?|\*\*?$/}},punctuation:/^(?:``?|\+{1,3}|##?|\$\$|[~^]|\(\(\(?)|(?:''?|\+{1,3}|##?|\$\$|[~^`]|\)?\)\))$/}},replacement:{pattern:/\((?:C|R|TM)\)/,alias:"builtin"},entity:/&#?[\da-z]{1,8};/i,"line-continuation":{pattern:/(^| )\+$/m,lookbehind:!0,alias:"punctuation"}};function r(e){e=e.split(" ");for(var t={},r=0,i=e.length;r>=?|<<=?|&[&=]?|\|[\|=]?|[-+*/%^!=<>?]=?/,punctuation:/[(),:]/}},4019:function(){Prism.languages.aspnet=Prism.languages.extend("markup",{"page-directive":{pattern:/<%\s*@.*%>/,alias:"tag",inside:{"page-directive":{pattern:/<%\s*@\s*(?:Assembly|Control|Implements|Import|Master(?:Type)?|OutputCache|Page|PreviousPageType|Reference|Register)?|%>/i,alias:"tag"},rest:Prism.languages.markup.tag.inside}},directive:{pattern:/<%.*%>/,alias:"tag",inside:{directive:{pattern:/<%\s*?[$=%#:]{0,2}|%>/,alias:"tag"},rest:Prism.languages.csharp}}}),Prism.languages.aspnet.tag.pattern=/<(?!%)\/?[^\s>\/]+(?:\s+[^\s>\/=]+(?:=(?:("|')(?:\\[\s\S]|(?!\1)[^\\])*\1|[^\s'">=]+))?)*\s*\/?>/,Prism.languages.insertBefore("inside","punctuation",{directive:Prism.languages.aspnet["directive"]},Prism.languages.aspnet.tag.inside["attr-value"]),Prism.languages.insertBefore("aspnet","comment",{"asp-comment":{pattern:/<%--[\s\S]*?--%>/,alias:["asp","comment"]}}),Prism.languages.insertBefore("aspnet",Prism.languages.javascript?"script":"tag",{"asp-script":{pattern:/(]*>)[\s\S]*?(?=<\/script>)/i,lookbehind:!0,alias:["asp","script"],inside:Prism.languages.csharp||{}}})},2776:function(){Prism.languages.autohotkey={comment:[{pattern:/(^|\s);.*/,lookbehind:!0},{pattern:/(^[\t ]*)\/\*(?:[\r\n](?![ \t]*\*\/)|[^\r\n])*(?:[\r\n][ \t]*\*\/)?/m,lookbehind:!0,greedy:!0}],tag:{pattern:/^([ \t]*)[^\s,`":]+(?=:[ \t]*$)/m,lookbehind:!0},string:/"(?:[^"\n\r]|"")*"/,variable:/%\w+%/,number:/\b0x[\dA-Fa-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[Ee]-?\d+)?/,operator:/\?|\/\/?=?|:=|\|[=|]?|&[=&]?|\+[=+]?|-[=-]?|\*[=*]?|<(?:<=?|>|=)?|>>?=?|[.^!=~]=?|\b(?:AND|NOT|OR)\b/,boolean:/\b(?:false|true)\b/,command:{pattern:/\b(?:AutoTrim|BlockInput|Break|Click|ClipWait|Continue|Control|ControlClick|ControlFocus|ControlGet|ControlGetFocus|ControlGetPos|ControlGetText|ControlMove|ControlSend|ControlSendRaw|ControlSetText|CoordMode|Critical|DetectHiddenText|DetectHiddenWindows|Drive|DriveGet|DriveSpaceFree|EnvAdd|EnvDiv|EnvGet|EnvMult|EnvSet|EnvSub|EnvUpdate|Exit|ExitApp|FileAppend|FileCopy|FileCopyDir|FileCreateDir|FileCreateShortcut|FileDelete|FileEncoding|FileGetAttrib|FileGetShortcut|FileGetSize|FileGetTime|FileGetVersion|FileInstall|FileMove|FileMoveDir|FileRead|FileReadLine|FileRecycle|FileRecycleEmpty|FileRemoveDir|FileSelectFile|FileSelectFolder|FileSetAttrib|FileSetTime|FormatTime|GetKeyState|Gosub|Goto|GroupActivate|GroupAdd|GroupClose|GroupDeactivate|Gui|GuiControl|GuiControlGet|Hotkey|ImageSearch|IniDelete|IniRead|IniWrite|Input|InputBox|KeyWait|ListHotkeys|ListLines|ListVars|Loop|Menu|MouseClick|MouseClickDrag|MouseGetPos|MouseMove|MsgBox|OnExit|OutputDebug|Pause|PixelGetColor|PixelSearch|PostMessage|Process|Progress|Random|RegDelete|RegRead|RegWrite|Reload|Repeat|Return|Run|RunAs|RunWait|Send|SendEvent|SendInput|SendMessage|SendMode|SendPlay|SendRaw|SetBatchLines|SetCapslockState|SetControlDelay|SetDefaultMouseSpeed|SetEnv|SetFormat|SetKeyDelay|SetMouseDelay|SetNumlockState|SetRegView|SetScrollLockState|SetStoreCapslockMode|SetTimer|SetTitleMatchMode|SetWinDelay|SetWorkingDir|Shutdown|Sleep|Sort|SoundBeep|SoundGet|SoundGetWaveVolume|SoundPlay|SoundSet|SoundSetWaveVolume|SplashImage|SplashTextOff|SplashTextOn|SplitPath|StatusBarGetText|StatusBarWait|StringCaseSense|StringGetPos|StringLeft|StringLen|StringLower|StringMid|StringReplace|StringRight|StringSplit|StringTrimLeft|StringTrimRight|StringUpper|Suspend|SysGet|Thread|ToolTip|Transform|TrayTip|URLDownloadToFile|WinActivate|WinActivateBottom|WinClose|WinGet|WinGetActiveStats|WinGetActiveTitle|WinGetClass|WinGetPos|WinGetText|WinGetTitle|WinHide|WinKill|WinMaximize|WinMenuSelectItem|WinMinimize|WinMinimizeAll|WinMinimizeAllUndo|WinMove|WinRestore|WinSet|WinSetTitle|WinShow|WinWait|WinWaitActive|WinWaitClose|WinWaitNotActive)\b/i,alias:"selector"},constant:/\b(?:a_ahkpath|a_ahkversion|a_appdata|a_appdatacommon|a_autotrim|a_batchlines|a_caretx|a_carety|a_computername|a_controldelay|a_cursor|a_dd|a_ddd|a_dddd|a_defaultmousespeed|a_desktop|a_desktopcommon|a_detecthiddentext|a_detecthiddenwindows|a_endchar|a_eventinfo|a_exitreason|a_fileencoding|a_formatfloat|a_formatinteger|a_gui|a_guicontrol|a_guicontrolevent|a_guievent|a_guiheight|a_guiwidth|a_guix|a_guiy|a_hour|a_iconfile|a_iconhidden|a_iconnumber|a_icontip|a_index|a_ipaddress1|a_ipaddress2|a_ipaddress3|a_ipaddress4|a_is64bitos|a_isadmin|a_iscompiled|a_iscritical|a_ispaused|a_issuspended|a_isunicode|a_keydelay|a_language|a_lasterror|a_linefile|a_linenumber|a_loopfield|a_loopfileattrib|a_loopfiledir|a_loopfileext|a_loopfilefullpath|a_loopfilelongpath|a_loopfilename|a_loopfileshortname|a_loopfileshortpath|a_loopfilesize|a_loopfilesizekb|a_loopfilesizemb|a_loopfiletimeaccessed|a_loopfiletimecreated|a_loopfiletimemodified|a_loopreadline|a_loopregkey|a_loopregname|a_loopregsubkey|a_loopregtimemodified|a_loopregtype|a_mday|a_min|a_mm|a_mmm|a_mmmm|a_mon|a_mousedelay|a_msec|a_mydocuments|a_now|a_nowutc|a_numbatchlines|a_ostype|a_osversion|a_priorhotkey|a_priorkey|a_programfiles|a_programs|a_programscommon|a_ptrsize|a_regview|a_screendpi|a_screenheight|a_screenwidth|a_scriptdir|a_scriptfullpath|a_scripthwnd|a_scriptname|a_sec|a_space|a_startmenu|a_startmenucommon|a_startup|a_startupcommon|a_stringcasesense|a_tab|a_temp|a_thisfunc|a_thishotkey|a_thislabel|a_thismenu|a_thismenuitem|a_thismenuitempos|a_tickcount|a_timeidle|a_timeidlephysical|a_timesincepriorhotkey|a_timesincethishotkey|a_titlematchmode|a_titlematchmodespeed|a_username|a_wday|a_windelay|a_windir|a_workingdir|a_yday|a_year|a_yweek|a_yyyy|clipboard|clipboardall|comspec|errorlevel|programfiles)\b/i,builtin:/\b(?:abs|acos|asc|asin|atan|ceil|chr|class|comobjactive|comobjarray|comobjconnect|comobjcreate|comobjerror|comobjflags|comobjget|comobjquery|comobjtype|comobjvalue|cos|dllcall|exp|fileexist|Fileopen|floor|format|il_add|il_create|il_destroy|instr|isfunc|islabel|IsObject|ln|log|ltrim|lv_add|lv_delete|lv_deletecol|lv_getcount|lv_getnext|lv_gettext|lv_insert|lv_insertcol|lv_modify|lv_modifycol|lv_setimagelist|mod|numget|numput|onmessage|regexmatch|regexreplace|registercallback|round|rtrim|sb_seticon|sb_setparts|sb_settext|sin|sqrt|strlen|strreplace|strsplit|substr|tan|tv_add|tv_delete|tv_get|tv_getchild|tv_getcount|tv_getnext|tv_getparent|tv_getprev|tv_getselection|tv_gettext|tv_modify|varsetcapacity|winactive|winexist|__Call|__Get|__New|__Set)\b/i,symbol:/\b(?:alt|altdown|altup|appskey|backspace|browser_back|browser_favorites|browser_forward|browser_home|browser_refresh|browser_search|browser_stop|bs|capslock|ctrl|ctrlbreak|ctrldown|ctrlup|del|delete|down|end|enter|esc|escape|f1|f10|f11|f12|f13|f14|f15|f16|f17|f18|f19|f2|f20|f21|f22|f23|f24|f3|f4|f5|f6|f7|f8|f9|home|ins|insert|joy1|joy10|joy11|joy12|joy13|joy14|joy15|joy16|joy17|joy18|joy19|joy2|joy20|joy21|joy22|joy23|joy24|joy25|joy26|joy27|joy28|joy29|joy3|joy30|joy31|joy32|joy4|joy5|joy6|joy7|joy8|joy9|joyaxes|joybuttons|joyinfo|joyname|joypov|joyr|joyu|joyv|joyx|joyy|joyz|lalt|launch_app1|launch_app2|launch_mail|launch_media|lbutton|lcontrol|lctrl|left|lshift|lwin|lwindown|lwinup|mbutton|media_next|media_play_pause|media_prev|media_stop|numlock|numpad0|numpad1|numpad2|numpad3|numpad4|numpad5|numpad6|numpad7|numpad8|numpad9|numpadadd|numpadclear|numpaddel|numpaddiv|numpaddot|numpaddown|numpadend|numpadenter|numpadhome|numpadins|numpadleft|numpadmult|numpadpgdn|numpadpgup|numpadright|numpadsub|numpadup|pgdn|pgup|printscreen|ralt|rbutton|rcontrol|rctrl|right|rshift|rwin|rwindown|rwinup|scrolllock|shift|shiftdown|shiftup|space|tab|up|volume_down|volume_mute|volume_up|wheeldown|wheelleft|wheelright|wheelup|xbutton1|xbutton2)\b/i,directive:{pattern:/#[a-z]+\b/i,alias:"important"},keyword:/\b(?:Abort|AboveNormal|Add|ahk_class|ahk_exe|ahk_group|ahk_id|ahk_pid|All|Alnum|Alpha|AltSubmit|AltTab|AltTabAndMenu|AltTabMenu|AltTabMenuDismiss|AlwaysOnTop|AutoSize|Background|BackgroundTrans|BelowNormal|between|BitAnd|BitNot|BitOr|BitShiftLeft|BitShiftRight|BitXOr|Bold|Border|Button|ByRef|Catch|Checkbox|Checked|CheckedGray|Choose|ChooseString|Close|Color|ComboBox|Contains|ControlList|Count|Date|DateTime|Days|DDL|Default|DeleteAll|Delimiter|Deref|Destroy|Digit|Disable|Disabled|DropDownList|Edit|Eject|Else|Enable|Enabled|Error|Exist|Expand|ExStyle|FileSystem|Finally|First|Flash|Float|FloatFast|Focus|Font|for|global|Grid|Group|GroupBox|GuiClose|GuiContextMenu|GuiDropFiles|GuiEscape|GuiSize|Hdr|Hidden|Hide|High|HKCC|HKCR|HKCU|HKEY_CLASSES_ROOT|HKEY_CURRENT_CONFIG|HKEY_CURRENT_USER|HKEY_LOCAL_MACHINE|HKEY_USERS|HKLM|HKU|Hours|HScroll|Icon|IconSmall|ID|IDLast|If|IfEqual|IfExist|IfGreater|IfGreaterOrEqual|IfInString|IfLess|IfLessOrEqual|IfMsgBox|IfNotEqual|IfNotExist|IfNotInString|IfWinActive|IfWinExist|IfWinNotActive|IfWinNotExist|Ignore|ImageList|in|Integer|IntegerFast|Interrupt|is|italic|Join|Label|LastFound|LastFoundExist|Limit|Lines|List|ListBox|ListView|local|Lock|Logoff|Low|Lower|Lowercase|MainWindow|Margin|Maximize|MaximizeBox|MaxSize|Minimize|MinimizeBox|MinMax|MinSize|Minutes|MonthCal|Mouse|Move|Multi|NA|No|NoActivate|NoDefault|NoHide|NoIcon|NoMainWindow|norm|Normal|NoSort|NoSortHdr|NoStandard|Not|NoTab|NoTimers|Number|Off|Ok|On|OwnDialogs|Owner|Parse|Password|Picture|Pixel|Pos|Pow|Priority|ProcessName|Radio|Range|Read|ReadOnly|Realtime|Redraw|Region|REG_BINARY|REG_DWORD|REG_EXPAND_SZ|REG_MULTI_SZ|REG_SZ|Relative|Rename|Report|Resize|Restore|Retry|RGB|Screen|Seconds|Section|Serial|SetLabel|ShiftAltTab|Show|Single|Slider|SortDesc|Standard|static|Status|StatusBar|StatusCD|strike|Style|Submit|SysMenu|Tab2|TabStop|Text|Theme|Throw|Tile|ToggleCheck|ToggleEnable|ToolWindow|Top|Topmost|TransColor|Transparent|Tray|TreeView|Try|TryAgain|Type|UnCheck|underline|Unicode|Unlock|Until|UpDown|Upper|Uppercase|UseErrorLevel|Vis|VisFirst|Visible|VScroll|Wait|WaitClose|WantCtrlA|WantF2|WantReturn|While|Wrap|Xdigit|xm|xp|xs|Yes|ym|yp|ys)\b/i,function:/[^(); \t,\n+*\-=?>:\\\/<&%\[\]]+(?=\()/,punctuation:/[{}[\]():,]/}},4940:function(){Prism.languages.autoit={comment:[/;.*/,{pattern:/(^[\t ]*)#(?:comments-start|cs)[\s\S]*?^[ \t]*#(?:ce|comments-end)/m,lookbehind:!0}],url:{pattern:/(^[\t ]*#include\s+)(?:<[^\r\n>]+>|"[^\r\n"]+")/m,lookbehind:!0},string:{pattern:/(["'])(?:\1\1|(?!\1)[^\r\n])*\1/,greedy:!0,inside:{variable:/([%$@])\w+\1/}},directive:{pattern:/(^[\t ]*)#[\w-]+/m,lookbehind:!0,alias:"keyword"},function:/\b\w+(?=\()/,variable:/[$@]\w+/,keyword:/\b(?:Case|Const|Continue(?:Case|Loop)|Default|Dim|Do|Else(?:If)?|End(?:Func|If|Select|Switch|With)|Enum|Exit(?:Loop)?|For|Func|Global|If|In|Local|Next|Null|ReDim|Select|Static|Step|Switch|Then|To|Until|Volatile|WEnd|While|With)\b/i,number:/\b(?:0x[\da-f]+|\d+(?:\.\d+)?(?:e[+-]?\d+)?)\b/i,boolean:/\b(?:False|True)\b/i,operator:/<[=>]?|[-+*\/=&>]=?|[?^]|\b(?:And|Not|Or)\b/i,punctuation:/[\[\]().,:]/}},8060:function(){(function(e){function t(e,t){return e.replace(/<<(\d+)>>/g,(function(e,n){return t[+n]}))}function n(e,n,r){return RegExp(t(e,n),r||"")}var r=/bool|clip|float|int|string|val/.source,i=[/is(?:bool|clip|float|int|string)|defined|(?:(?:internal)?function|var)?exists?/.source,/apply|assert|default|eval|import|nop|select|undefined/.source,/opt_(?:allowfloataudio|avipadscanlines|dwchannelmask|enable_(?:b64a|planartopackedrgb|v210|y3_10_10|y3_10_16)|usewaveextensible|vdubplanarhack)|set(?:cachemode|maxcpu|memorymax|planarlegacyalignment|workingdir)/.source,/hex(?:value)?|value/.source,/abs|ceil|continued(?:denominator|numerator)?|exp|floor|fmod|frac|log(?:10)?|max|min|muldiv|pi|pow|rand|round|sign|spline|sqrt/.source,/a?sinh?|a?cosh?|a?tan[2h]?/.source,/(?:bit(?:and|not|x?or|[lr]?shift[aslu]?|sh[lr]|sa[lr]|[lr]rotatel?|ro[rl]|te?st|set(?:count)?|cl(?:ea)?r|ch(?:an)?ge?))/.source,/average(?:[bgr]|chroma[uv]|luma)|(?:[rgb]|chroma[uv]|luma|rgb|[yuv](?=difference(?:fromprevious|tonext)))difference(?:fromprevious|tonext)?|[yuvrgb]plane(?:median|min|max|minmaxdifference)/.source,/getprocessinfo|logmsg|script(?:dir(?:utf8)?|file(?:utf8)?|name(?:utf8)?)|setlogparams/.source,/chr|(?:fill|find|left|mid|replace|rev|right)str|format|[lu]case|ord|str(?:cmpi?|fromutf8|len|toutf8)|time|trim(?:all|left|right)/.source,/isversionorgreater|version(?:number|string)/.source,/buildpixeltype|colorspacenametopixeltype/.source,/addautoloaddir|on(?:cpu|cuda)|prefetch|setfiltermtmode/.source].join("|"),s=[/has(?:audio|video)/.source,/height|width/.source,/frame(?:count|rate)|framerate(?:denominator|numerator)/.source,/getparity|is(?:field|frame)based/.source,/bitspercomponent|componentsize|hasalpha|is(?:planar(?:rgba?)?|interleaved|rgb(?:24|32|48|64)?|y(?:8|u(?:va?|y2))?|yv(?:12|16|24|411)|420|422|444|packedrgb)|numcomponents|pixeltype/.source,/audio(?:bits|channels|duration|length(?:[fs]|hi|lo)?|rate)|isaudio(?:float|int)/.source].join("|"),o=[/avi(?:file)?source|directshowsource|image(?:reader|source|sourceanim)|opendmlsource|segmented(?:avisource|directshowsource)|wavsource/.source,/coloryuv|convertbacktoyuy2|convertto(?:RGB(?:24|32|48|64)|(?:planar)?RGBA?|Y8?|YV(?:12|16|24|411)|YUVA?(?:411|420|422|444)|YUY2)|fixluminance|gr[ae]yscale|invert|levels|limiter|mergea?rgb|merge(?:chroma|luma)|rgbadjust|show(?:alpha|blue|green|red)|swapuv|tweak|[uv]toy8?|ytouv/.source,/(?:colorkey|reset)mask|layer|mask(?:hs)?|merge|overlay|subtract/.source,/addborders|(?:bicubic|bilinear|blackman|gauss|lanczos4|lanczos|point|sinc|spline(?:16|36|64))resize|crop(?:bottom)?|flip(?:horizontal|vertical)|(?:horizontal|vertical)?reduceby2|letterbox|skewrows|turn(?:180|left|right)/.source,/blur|fixbrokenchromaupsampling|generalconvolution|(?:spatial|temporal)soften|sharpen/.source,/trim|(?:un)?alignedsplice|(?:assume|assumescaled|change|convert)FPS|(?:delete|duplicate)frame|dissolve|fade(?:in|io|out)[02]?|freezeframe|interleave|loop|reverse|select(?:even|odd|(?:range)?every)/.source,/assume[bt]ff|assume(?:field|frame)based|bob|complementparity|doubleweave|peculiarblend|pulldown|separate(?:columns|fields|rows)|swapfields|weave(?:columns|rows)?/.source,/amplify(?:db)?|assumesamplerate|audiodub(?:ex)?|audiotrim|convertaudioto(?:(?:8|16|24|32)bit|float)|converttomono|delayaudio|ensurevbrmp3sync|get(?:left|right)?channel|kill(?:audio|video)|mergechannels|mixaudio|monotostereo|normalize|resampleaudio|ssrc|supereq|timestretch/.source,/animate|applyrange|conditional(?:filter|reader|select)|frameevaluate|scriptclip|tcp(?:server|source)|writefile(?:end|if|start)?/.source,/imagewriter/.source,/blackness|blankclip|colorbars(?:hd)?|compare|dumpfiltergraph|echo|histogram|info|messageclip|preroll|setgraphanalysis|show(?:framenumber|smpte|time)|showfiveversions|stack(?:horizontal|vertical)|subtitle|tone|version/.source].join("|"),a=[i,s,o].join("|");e.languages.avisynth={comment:[{pattern:/(^|[^\\])\[\*(?:[^\[*]|\[(?!\*)|\*(?!\])|\[\*(?:[^\[*]|\[(?!\*)|\*(?!\]))*\*\])*\*\]/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\])\/\*[\s\S]*?(?:\*\/|$)/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\$])#.*/,lookbehind:!0,greedy:!0}],argument:{pattern:n(/\b(?:<<0>>)\s+("?)\w+\1/.source,[r],"i"),inside:{keyword:/^\w+/}},"argument-label":{pattern:/([,(][\s\\]*)\w+\s*=(?!=)/,lookbehind:!0,inside:{"argument-name":{pattern:/^\w+/,alias:"punctuation"},punctuation:/=$/}},string:[{pattern:/"""[\s\S]*?"""/,greedy:!0},{pattern:/"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"/,greedy:!0,inside:{constant:{pattern:/\b(?:DEFAULT_MT_MODE|(?:MAINSCRIPT|PROGRAM|SCRIPT)DIR|(?:MACHINE|USER)_(?:CLASSIC|PLUS)_PLUGINS)\b/}}}],variable:/\b(?:last)\b/i,boolean:/\b(?:false|no|true|yes)\b/i,keyword:/\b(?:catch|else|for|function|global|if|return|try|while|__END__)\b/i,constant:/\bMT_(?:MULTI_INSTANCE|NICE_FILTER|SERIALIZED|SPECIAL_MT)\b/,"builtin-function":{pattern:n(/\b(?:<<0>>)\b/.source,[a],"i"),alias:"function"},"type-cast":{pattern:n(/\b(?:<<0>>)(?=\s*\()/.source,[r],"i"),alias:"keyword"},function:{pattern:/\b[a-z_]\w*(?=\s*\()|(\.)[a-z_]\w*\b/i,lookbehind:!0},"line-continuation":{pattern:/(^[ \t]*)\\|\\(?=[ \t]*$)/m,lookbehind:!0,alias:"punctuation"},number:/\B\$(?:[\da-f]{6}|[\da-f]{8})\b|(?:(?:\b|\B-)\d+(?:\.\d*)?\b|\B\.\d+\b)/i,operator:/\+\+?|[!=<>]=?|&&|\|\||[?:*/%-]/,punctuation:/[{}\[\]();,.]/},e.languages.avs=e.languages.avisynth})(Prism)},639:function(){Prism.languages["avro-idl"]={comment:{pattern:/\/\/.*|\/\*[\s\S]*?\*\//,greedy:!0},string:{pattern:/(^|[^\\])"(?:[^\r\n"\\]|\\.)*"/,lookbehind:!0,greedy:!0},annotation:{pattern:/@(?:[$\w.-]|`[^\r\n`]+`)+/,greedy:!0,alias:"function"},"function-identifier":{pattern:/`[^\r\n`]+`(?=\s*\()/,greedy:!0,alias:"function"},identifier:{pattern:/`[^\r\n`]+`/,greedy:!0},"class-name":{pattern:/(\b(?:enum|error|protocol|record|throws)\b\s+)[$\w]+/,lookbehind:!0,greedy:!0},keyword:/\b(?:array|boolean|bytes|date|decimal|double|enum|error|false|fixed|float|idl|import|int|local_timestamp_ms|long|map|null|oneway|protocol|record|schema|string|throws|time_ms|timestamp_ms|true|union|uuid|void)\b/,function:/\b[a-z_]\w*(?=\s*\()/i,number:[{pattern:/(^|[^\w.])-?(?:(?:\d+(?:\.\d*)?|\.\d+)(?:e[+-]?\d+)?|0x(?:[a-f0-9]+(?:\.[a-f0-9]*)?|\.[a-f0-9]+)(?:p[+-]?\d+)?)[dfl]?(?![\w.])/i,lookbehind:!0},/-?\b(?:Infinity|NaN)\b/],operator:/=/,punctuation:/[()\[\]{}<>.:,;-]/},Prism.languages.avdl=Prism.languages["avro-idl"]},4126:function(){Prism.languages.awk={hashbang:{pattern:/^#!.*/,greedy:!0,alias:"comment"},comment:{pattern:/#.*/,greedy:!0},string:{pattern:/(^|[^\\])"(?:[^\\"\r\n]|\\.)*"/,lookbehind:!0,greedy:!0},regex:{pattern:/((?:^|[^\w\s)])\s*)\/(?:[^\/\\\r\n]|\\.)*\//,lookbehind:!0,greedy:!0},variable:/\$\w+/,keyword:/\b(?:BEGIN|BEGINFILE|END|ENDFILE|break|case|continue|default|delete|do|else|exit|for|function|getline|if|in|next|nextfile|printf?|return|switch|while)\b|@(?:include|load)\b/,function:/\b[a-z_]\w*(?=\s*\()/i,number:/\b(?:\d+(?:\.\d+)?(?:e[+-]?\d+)?|0x[a-fA-F0-9]+)\b/,operator:/--|\+\+|!?~|>&|>>|<<|(?:\*\*|[<>!=+\-*/%^])=?|&&|\|[|&]|[?:]/,punctuation:/[()[\]{},;]/},Prism.languages.gawk=Prism.languages.awk},7874:function(){(function(e){var t="\\b(?:BASH|BASHOPTS|BASH_ALIASES|BASH_ARGC|BASH_ARGV|BASH_CMDS|BASH_COMPLETION_COMPAT_DIR|BASH_LINENO|BASH_REMATCH|BASH_SOURCE|BASH_VERSINFO|BASH_VERSION|COLORTERM|COLUMNS|COMP_WORDBREAKS|DBUS_SESSION_BUS_ADDRESS|DEFAULTS_PATH|DESKTOP_SESSION|DIRSTACK|DISPLAY|EUID|GDMSESSION|GDM_LANG|GNOME_KEYRING_CONTROL|GNOME_KEYRING_PID|GPG_AGENT_INFO|GROUPS|HISTCONTROL|HISTFILE|HISTFILESIZE|HISTSIZE|HOME|HOSTNAME|HOSTTYPE|IFS|INSTANCE|JOB|LANG|LANGUAGE|LC_ADDRESS|LC_ALL|LC_IDENTIFICATION|LC_MEASUREMENT|LC_MONETARY|LC_NAME|LC_NUMERIC|LC_PAPER|LC_TELEPHONE|LC_TIME|LESSCLOSE|LESSOPEN|LINES|LOGNAME|LS_COLORS|MACHTYPE|MAILCHECK|MANDATORY_PATH|NO_AT_BRIDGE|OLDPWD|OPTERR|OPTIND|ORBIT_SOCKETDIR|OSTYPE|PAPERSIZE|PATH|PIPESTATUS|PPID|PS1|PS2|PS3|PS4|PWD|RANDOM|REPLY|SECONDS|SELINUX_INIT|SESSION|SESSIONTYPE|SESSION_MANAGER|SHELL|SHELLOPTS|SHLVL|SSH_AUTH_SOCK|TERM|UID|UPSTART_EVENTS|UPSTART_INSTANCE|UPSTART_JOB|UPSTART_SESSION|USER|WINDOWID|XAUTHORITY|XDG_CONFIG_DIRS|XDG_CURRENT_DESKTOP|XDG_DATA_DIRS|XDG_GREETER_DATA_DIR|XDG_MENU_PREFIX|XDG_RUNTIME_DIR|XDG_SEAT|XDG_SEAT_PATH|XDG_SESSION_DESKTOP|XDG_SESSION_ID|XDG_SESSION_PATH|XDG_SESSION_TYPE|XDG_VTNR|XMODIFIERS)\\b",n={pattern:/(^(["']?)\w+\2)[ \t]+\S.*/,lookbehind:!0,alias:"punctuation",inside:null},r={bash:n,environment:{pattern:RegExp("\\$"+t),alias:"constant"},variable:[{pattern:/\$?\(\([\s\S]+?\)\)/,greedy:!0,inside:{variable:[{pattern:/(^\$\(\([\s\S]+)\)\)/,lookbehind:!0},/^\$\(\(/],number:/\b0x[\dA-Fa-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[Ee]-?\d+)?/,operator:/--|\+\+|\*\*=?|<<=?|>>=?|&&|\|\||[=!+\-*/%<>^&|]=?|[?~:]/,punctuation:/\(\(?|\)\)?|,|;/}},{pattern:/\$\((?:\([^)]+\)|[^()])+\)|`[^`]+`/,greedy:!0,inside:{variable:/^\$\(|^`|\)$|`$/}},{pattern:/\$\{[^}]+\}/,greedy:!0,inside:{operator:/:[-=?+]?|[!\/]|##?|%%?|\^\^?|,,?/,punctuation:/[\[\]]/,environment:{pattern:RegExp("(\\{)"+t),lookbehind:!0,alias:"constant"}}},/\$(?:\w+|[#?*!@$])/],entity:/\\(?:[abceEfnrtv\\"]|O?[0-7]{1,3}|U[0-9a-fA-F]{8}|u[0-9a-fA-F]{4}|x[0-9a-fA-F]{1,2})/};e.languages.bash={shebang:{pattern:/^#!\s*\/.*/,alias:"important"},comment:{pattern:/(^|[^"{\\$])#.*/,lookbehind:!0},"function-name":[{pattern:/(\bfunction\s+)[\w-]+(?=(?:\s*\(?:\s*\))?\s*\{)/,lookbehind:!0,alias:"function"},{pattern:/\b[\w-]+(?=\s*\(\s*\)\s*\{)/,alias:"function"}],"for-or-select":{pattern:/(\b(?:for|select)\s+)\w+(?=\s+in\s)/,alias:"variable",lookbehind:!0},"assign-left":{pattern:/(^|[\s;|&]|[<>]\()\w+(?:\.\w+)*(?=\+?=)/,inside:{environment:{pattern:RegExp("(^|[\\s;|&]|[<>]\\()"+t),lookbehind:!0,alias:"constant"}},alias:"variable",lookbehind:!0},parameter:{pattern:/(^|\s)-{1,2}(?:\w+:[+-]?)?\w+(?:\.\w+)*(?=[=\s]|$)/,alias:"variable",lookbehind:!0},string:[{pattern:/((?:^|[^<])<<-?\s*)(\w+)\s[\s\S]*?(?:\r?\n|\r)\2/,lookbehind:!0,greedy:!0,inside:r},{pattern:/((?:^|[^<])<<-?\s*)(["'])(\w+)\2\s[\s\S]*?(?:\r?\n|\r)\3/,lookbehind:!0,greedy:!0,inside:{bash:n}},{pattern:/(^|[^\\](?:\\\\)*)"(?:\\[\s\S]|\$\([^)]+\)|\$(?!\()|`[^`]+`|[^"\\`$])*"/,lookbehind:!0,greedy:!0,inside:r},{pattern:/(^|[^$\\])'[^']*'/,lookbehind:!0,greedy:!0},{pattern:/\$'(?:[^'\\]|\\[\s\S])*'/,greedy:!0,inside:{entity:r.entity}}],environment:{pattern:RegExp("\\$?"+t),alias:"constant"},variable:r.variable,function:{pattern:/(^|[\s;|&]|[<>]\()(?:add|apropos|apt|apt-cache|apt-get|aptitude|aspell|automysqlbackup|awk|basename|bash|bc|bconsole|bg|bzip2|cal|cargo|cat|cfdisk|chgrp|chkconfig|chmod|chown|chroot|cksum|clear|cmp|column|comm|composer|cp|cron|crontab|csplit|curl|cut|date|dc|dd|ddrescue|debootstrap|df|diff|diff3|dig|dir|dircolors|dirname|dirs|dmesg|docker|docker-compose|du|egrep|eject|env|ethtool|expand|expect|expr|fdformat|fdisk|fg|fgrep|file|find|fmt|fold|format|free|fsck|ftp|fuser|gawk|git|gparted|grep|groupadd|groupdel|groupmod|groups|grub-mkconfig|gzip|halt|head|hg|history|host|hostname|htop|iconv|id|ifconfig|ifdown|ifup|import|install|ip|java|jobs|join|kill|killall|less|link|ln|locate|logname|logrotate|look|lpc|lpr|lprint|lprintd|lprintq|lprm|ls|lsof|lynx|make|man|mc|mdadm|mkconfig|mkdir|mke2fs|mkfifo|mkfs|mkisofs|mknod|mkswap|mmv|more|most|mount|mtools|mtr|mutt|mv|nano|nc|netstat|nice|nl|node|nohup|notify-send|npm|nslookup|op|open|parted|passwd|paste|pathchk|ping|pkill|pnpm|podman|podman-compose|popd|pr|printcap|printenv|ps|pushd|pv|quota|quotacheck|quotactl|ram|rar|rcp|reboot|remsync|rename|renice|rev|rm|rmdir|rpm|rsync|scp|screen|sdiff|sed|sendmail|seq|service|sftp|sh|shellcheck|shuf|shutdown|sleep|slocate|sort|split|ssh|stat|strace|su|sudo|sum|suspend|swapon|sync|sysctl|tac|tail|tar|tee|time|timeout|top|touch|tr|traceroute|tsort|tty|umount|uname|unexpand|uniq|units|unrar|unshar|unzip|update-grub|uptime|useradd|userdel|usermod|users|uudecode|uuencode|v|vcpkg|vdir|vi|vim|virsh|vmstat|wait|watch|wc|wget|whereis|which|who|whoami|write|xargs|xdg-open|yarn|yes|zenity|zip|zsh|zypper)(?=$|[)\s;|&])/,lookbehind:!0},keyword:{pattern:/(^|[\s;|&]|[<>]\()(?:case|do|done|elif|else|esac|fi|for|function|if|in|select|then|until|while)(?=$|[)\s;|&])/,lookbehind:!0},builtin:{pattern:/(^|[\s;|&]|[<>]\()(?:\.|:|alias|bind|break|builtin|caller|cd|command|continue|declare|echo|enable|eval|exec|exit|export|getopts|hash|help|let|local|logout|mapfile|printf|pwd|read|readarray|readonly|return|set|shift|shopt|source|test|times|trap|type|typeset|ulimit|umask|unalias|unset)(?=$|[)\s;|&])/,lookbehind:!0,alias:"class-name"},boolean:{pattern:/(^|[\s;|&]|[<>]\()(?:false|true)(?=$|[)\s;|&])/,lookbehind:!0},"file-descriptor":{pattern:/\B&\d\b/,alias:"important"},operator:{pattern:/\d?<>|>\||\+=|=[=~]?|!=?|<<[<-]?|[&\d]?>>|\d[<>]&?|[<>][&=]?|&[>&]?|\|[&|]?/,inside:{"file-descriptor":{pattern:/^\d/,alias:"important"}}},punctuation:/\$?\(\(?|\)\)?|\.\.|[{}[\];\\]/,number:{pattern:/(^|\s)(?:[1-9]\d*|0)(?:[.,]\d+)?\b/,lookbehind:!0}},n.inside=e.languages.bash;for(var i=["comment","function-name","for-or-select","assign-left","parameter","string","environment","function","keyword","builtin","boolean","file-descriptor","operator","punctuation","number"],s=r.variable[1].inside,o=0;o?^\w +\-.])*"/,greedy:!0},number:/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:E[+-]?\d+)?/i,keyword:/\b(?:AS|BEEP|BLOAD|BSAVE|CALL(?: ABSOLUTE)?|CASE|CHAIN|CHDIR|CLEAR|CLOSE|CLS|COM|COMMON|CONST|DATA|DECLARE|DEF(?: FN| SEG|DBL|INT|LNG|SNG|STR)|DIM|DO|DOUBLE|ELSE|ELSEIF|END|ENVIRON|ERASE|ERROR|EXIT|FIELD|FILES|FOR|FUNCTION|GET|GOSUB|GOTO|IF|INPUT|INTEGER|IOCTL|KEY|KILL|LINE INPUT|LOCATE|LOCK|LONG|LOOP|LSET|MKDIR|NAME|NEXT|OFF|ON(?: COM| ERROR| KEY| TIMER)?|OPEN|OPTION BASE|OUT|POKE|PUT|READ|REDIM|REM|RESTORE|RESUME|RETURN|RMDIR|RSET|RUN|SELECT CASE|SHARED|SHELL|SINGLE|SLEEP|STATIC|STEP|STOP|STRING|SUB|SWAP|SYSTEM|THEN|TIMER|TO|TROFF|TRON|TYPE|UNLOCK|UNTIL|USING|VIEW PRINT|WAIT|WEND|WHILE|WRITE)(?:\$|\b)/i,function:/\b(?:ABS|ACCESS|ACOS|ANGLE|AREA|ARITHMETIC|ARRAY|ASIN|ASK|AT|ATN|BASE|BEGIN|BREAK|CAUSE|CEIL|CHR|CLIP|COLLATE|COLOR|CON|COS|COSH|COT|CSC|DATE|DATUM|DEBUG|DECIMAL|DEF|DEG|DEGREES|DELETE|DET|DEVICE|DISPLAY|DOT|ELAPSED|EPS|ERASABLE|EXLINE|EXP|EXTERNAL|EXTYPE|FILETYPE|FIXED|FP|GO|GRAPH|HANDLER|IDN|IMAGE|IN|INT|INTERNAL|IP|IS|KEYED|LBOUND|LCASE|LEFT|LEN|LENGTH|LET|LINE|LINES|LOG|LOG10|LOG2|LTRIM|MARGIN|MAT|MAX|MAXNUM|MID|MIN|MISSING|MOD|NATIVE|NUL|NUMERIC|OF|OPTION|ORD|ORGANIZATION|OUTIN|OUTPUT|PI|POINT|POINTER|POINTS|POS|PRINT|PROGRAM|PROMPT|RAD|RADIANS|RANDOMIZE|RECORD|RECSIZE|RECTYPE|RELATIVE|REMAINDER|REPEAT|REST|RETRY|REWRITE|RIGHT|RND|ROUND|RTRIM|SAME|SEC|SELECT|SEQUENTIAL|SET|SETTER|SGN|SIN|SINH|SIZE|SKIP|SQR|STANDARD|STATUS|STR|STREAM|STYLE|TAB|TAN|TANH|TEMPLATE|TEXT|THERE|TIME|TIMEOUT|TRACE|TRANSFORM|TRUNCATE|UBOUND|UCASE|USE|VAL|VARIABLE|VIEWPORT|WHEN|WINDOW|WITH|ZER|ZONEWIDTH)(?:\$|\b)/i,operator:/<[=>]?|>=?|[+\-*\/^=&]|\b(?:AND|EQV|IMP|NOT|OR|XOR)\b/i,punctuation:/[,;:()]/}},3292:function(){(function(e){var t=/%%?[~:\w]+%?|!\S+!/,n={pattern:/\/[a-z?]+(?=[ :]|$):?|-[a-z]\b|--[a-z-]+\b/im,alias:"attr-name",inside:{punctuation:/:/}},r=/"(?:[\\"]"|[^"])*"(?!")/,i=/(?:\b|-)\d+\b/;e.languages.batch={comment:[/^::.*/m,{pattern:/((?:^|[&(])[ \t]*)rem\b(?:[^^&)\r\n]|\^(?:\r\n|[\s\S]))*/im,lookbehind:!0}],label:{pattern:/^:.*/m,alias:"property"},command:[{pattern:/((?:^|[&(])[ \t]*)for(?: \/[a-z?](?:[ :](?:"[^"]*"|[^\s"/]\S*))?)* \S+ in \([^)]+\) do/im,lookbehind:!0,inside:{keyword:/\b(?:do|in)\b|^for\b/i,string:r,parameter:n,variable:t,number:i,punctuation:/[()',]/}},{pattern:/((?:^|[&(])[ \t]*)if(?: \/[a-z?](?:[ :](?:"[^"]*"|[^\s"/]\S*))?)* (?:not )?(?:cmdextversion \d+|defined \w+|errorlevel \d+|exist \S+|(?:"[^"]*"|(?!")(?:(?!==)\S)+)?(?:==| (?:equ|geq|gtr|leq|lss|neq) )(?:"[^"]*"|[^\s"]\S*))/im,lookbehind:!0,inside:{keyword:/\b(?:cmdextversion|defined|errorlevel|exist|not)\b|^if\b/i,string:r,parameter:n,variable:t,number:i,operator:/\^|==|\b(?:equ|geq|gtr|leq|lss|neq)\b/i}},{pattern:/((?:^|[&()])[ \t]*)else\b/im,lookbehind:!0,inside:{keyword:/^else\b/i}},{pattern:/((?:^|[&(])[ \t]*)set(?: \/[a-z](?:[ :](?:"[^"]*"|[^\s"/]\S*))?)* (?:[^^&)\r\n]|\^(?:\r\n|[\s\S]))*/im,lookbehind:!0,inside:{keyword:/^set\b/i,string:r,parameter:n,variable:[t,/\w+(?=(?:[*\/%+\-&^|]|<<|>>)?=)/],number:i,operator:/[*\/%+\-&^|]=?|<<=?|>>=?|[!~_=]/,punctuation:/[()',]/}},{pattern:/((?:^|[&(])[ \t]*@?)\w+\b(?:"(?:[\\"]"|[^"])*"(?!")|[^"^&)\r\n]|\^(?:\r\n|[\s\S]))*/m,lookbehind:!0,inside:{keyword:/^\w+\b/,string:r,parameter:n,label:{pattern:/(^\s*):\S+/m,lookbehind:!0,alias:"property"},variable:t,number:i,operator:/\^/}}],operator:/[&@]/,punctuation:/[()']/}})(Prism)},6428:function(){Prism.languages.bbcode={tag:{pattern:/\[\/?[^\s=\]]+(?:\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'"\]=]+))?(?:\s+[^\s=\]]+\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'"\]=]+))*\s*\]/,inside:{tag:{pattern:/^\[\/?[^\s=\]]+/,inside:{punctuation:/^\[\/?/}},"attr-value":{pattern:/=\s*(?:"[^"]*"|'[^']*'|[^\s'"\]=]+)/,inside:{punctuation:[/^=/,{pattern:/^(\s*)["']|["']$/,lookbehind:!0}]}},punctuation:/\]/,"attr-name":/[^\s=\]]+/}}},Prism.languages.shortcode=Prism.languages.bbcode},7308:function(){(function(e){e.languages.bbj={comment:{pattern:/(^|[^\\:])rem\s+.*/i,lookbehind:!0,greedy:!0},string:{pattern:/(['"])(?:(?!\1|\\).|\\.)*\1/,greedy:!0},number:/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:E[+-]?\d+)?/i,keyword:/\b(?:abstract|all|argc|begin|bye|callback|case|chn|class|classend|ctl|day|declare|delete|dim|dom|dread|dsz|else|end|endif|err|exitto|extends|fi|field|for|from|gosub|goto|if|implements|interface|interfaceend|iol|iolist|let|list|load|method|methodend|methodret|on|opts|pfx|print|private|process_events|protected|psz|public|read|read_resource|release|remove_callback|repeat|restore|return|rev|seterr|setesc|sqlchn|sqlunt|ssn|start|static|swend|switch|sys|then|tim|unt|until|use|void|wend|where|while)\b/i,function:/\b\w+(?=\()/,boolean:/\b(?:BBjAPI\.TRUE|BBjAPI\.FALSE)\b/i,operator:/<[=>]?|>=?|[+\-*\/^=&]|\b(?:and|not|or|xor)\b/i,punctuation:/[.,;:()]/}})(Prism)},6043:function(){Prism.languages.bicep={comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?(?:\*\/|$)/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],property:[{pattern:/([\r\n][ \t]*)[a-z_]\w*(?=[ \t]*:)/i,lookbehind:!0},{pattern:/([\r\n][ \t]*)'(?:\\.|\$(?!\{)|[^'\\\r\n$])*'(?=[ \t]*:)/,lookbehind:!0,greedy:!0}],string:[{pattern:/'''[^'][\s\S]*?'''/,greedy:!0},{pattern:/(^|[^\\'])'(?:\\.|\$(?!\{)|[^'\\\r\n$])*'/,lookbehind:!0,greedy:!0}],"interpolated-string":{pattern:/(^|[^\\'])'(?:\\.|\$(?:(?!\{)|\{[^{}\r\n]*\})|[^'\\\r\n$])*'/,lookbehind:!0,greedy:!0,inside:{interpolation:{pattern:/\$\{[^{}\r\n]*\}/,inside:{expression:{pattern:/(^\$\{)[\s\S]+(?=\}$)/,lookbehind:!0},punctuation:/^\$\{|\}$/}},string:/[\s\S]+/}},datatype:{pattern:/(\b(?:output|param)\b[ \t]+\w+[ \t]+)\w+\b/,lookbehind:!0,alias:"class-name"},boolean:/\b(?:false|true)\b/,keyword:/\b(?:existing|for|if|in|module|null|output|param|resource|targetScope|var)\b/,decorator:/@\w+\b/,function:/\b[a-z_]\w*(?=[ \t]*\()/i,number:/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:E[+-]?\d+)?/i,operator:/--|\+\+|\*\*=?|=>|&&=?|\|\|=?|[!=]==|<<=?|>>>?=?|[-+*/%&|^!=<>]=?|\.{3}|\?\?=?|\?\.?|[~:]/,punctuation:/[{}[\];(),.:]/},Prism.languages.bicep["interpolated-string"].inside["interpolation"].inside["expression"].inside=Prism.languages.bicep},9104:function(){Prism.languages.birb=Prism.languages.extend("clike",{string:{pattern:/r?("|')(?:\\.|(?!\1)[^\\])*\1/,greedy:!0},"class-name":[/\b[A-Z](?:[\d_]*[a-zA-Z]\w*)?\b/,/\b(?:[A-Z]\w*|(?!(?:var|void)\b)[a-z]\w*)(?=\s+\w+\s*[;,=()])/],keyword:/\b(?:assert|break|case|class|const|default|else|enum|final|follows|for|grab|if|nest|new|next|noSeeb|return|static|switch|throw|var|void|while)\b/,operator:/\+\+|--|&&|\|\||<<=?|>>=?|~(?:\/=?)?|[+\-*\/%&^|=!<>]=?|\?|:/,variable:/\b[a-z_]\w*\b/}),Prism.languages.insertBefore("birb","function",{metadata:{pattern:/<\w+>/,greedy:!0,alias:"symbol"}})},7861:function(){Prism.languages.bison=Prism.languages.extend("c",{}),Prism.languages.insertBefore("bison","comment",{bison:{pattern:/^(?:[^%]|%(?!%))*%%[\s\S]*?%%/,inside:{c:{pattern:/%\{[\s\S]*?%\}|\{(?:\{[^}]*\}|[^{}])*\}/,inside:{delimiter:{pattern:/^%?\{|%?\}$/,alias:"punctuation"},"bison-variable":{pattern:/[$@](?:<[^\s>]+>)?[\w$]+/,alias:"variable",inside:{punctuation:/<|>/}},rest:Prism.languages.c}},comment:Prism.languages.c.comment,string:Prism.languages.c.string,property:/\S+(?=:)/,keyword:/%\w+/,number:{pattern:/(^|[^@])\b(?:0x[\da-f]+|\d+)/i,lookbehind:!0},punctuation:/%[%?]|[|:;\[\]<>]/}}})},4115:function(){Prism.languages.bnf={string:{pattern:/"[^\r\n"]*"|'[^\r\n']*'/},definition:{pattern:/<[^<>\r\n\t]+>(?=\s*::=)/,alias:["rule","keyword"],inside:{punctuation:/^<|>$/}},rule:{pattern:/<[^<>\r\n\t]+>/,inside:{punctuation:/^<|>$/}},operator:/::=|[|()[\]{}*+?]|\.{3}/},Prism.languages.rbnf=Prism.languages.bnf},331:function(){Prism.languages.bqn={shebang:{pattern:/^#![ \t]*\/.*/,alias:"important",greedy:!0},comment:{pattern:/#.*/,greedy:!0},"string-literal":{pattern:/"(?:[^"]|"")*"/,greedy:!0,alias:"string"},"character-literal":{pattern:/'(?:[\s\S]|[\uD800-\uDBFF][\uDC00-\uDFFF])'/,greedy:!0,alias:"char"},function:/•[\w¯.∞π]+[\w¯.∞π]*/,"dot-notation-on-brackets":{pattern:/\{(?=.*\}\.)|\}\./,alias:"namespace"},"special-name":{pattern:/(?:𝕨|𝕩|𝕗|𝕘|𝕤|𝕣|𝕎|𝕏|𝔽|𝔾|𝕊|_𝕣_|_𝕣)/,alias:"keyword"},"dot-notation-on-name":{pattern:/[A-Za-z_][\w¯∞π]*\./,alias:"namespace"},"word-number-scientific":{pattern:/\d+(?:\.\d+)?[eE]¯?\d+/,alias:"number"},"word-name":{pattern:/[A-Za-z_][\w¯∞π]*/,alias:"symbol"},"word-number":{pattern:/[¯∞π]?(?:\d*\.?\b\d+(?:e[+¯]?\d+|E[+¯]?\d+)?|¯|∞|π)(?:j¯?(?:(?:\d+(?:\.\d+)?|\.\d+)(?:e[+¯]?\d+|E[+¯]?\d+)?|¯|∞|π))?/,alias:"number"},"null-literal":{pattern:/@/,alias:"char"},"primitive-functions":{pattern:/[-+×÷⋆√⌊⌈|¬∧∨<>≠=≤≥≡≢⊣⊢⥊∾≍⋈↑↓↕«»⌽⍉/⍋⍒⊏⊑⊐⊒∊⍷⊔!]/,alias:"operator"},"primitive-1-operators":{pattern:/[`˜˘¨⁼⌜´˝˙]/,alias:"operator"},"primitive-2-operators":{pattern:/[∘⊸⟜○⌾⎉⚇⍟⊘◶⎊]/,alias:"operator"},punctuation:/[←⇐↩(){}⟨⟩[\]‿·⋄,.;:?]/}},5827:function(){Prism.languages.brainfuck={pointer:{pattern:/<|>/,alias:"keyword"},increment:{pattern:/\+/,alias:"inserted"},decrement:{pattern:/-/,alias:"deleted"},branching:{pattern:/\[|\]/,alias:"important"},operator:/[.,]/,comment:/\S+/}},1275:function(){Prism.languages.brightscript={comment:/(?:\brem|').*/i,"directive-statement":{pattern:/(^[\t ]*)#(?:const|else(?:[\t ]+if)?|end[\t ]+if|error|if).*/im,lookbehind:!0,alias:"property",inside:{"error-message":{pattern:/(^#error).+/,lookbehind:!0},directive:{pattern:/^#(?:const|else(?:[\t ]+if)?|end[\t ]+if|error|if)/,alias:"keyword"},expression:{pattern:/[\s\S]+/,inside:null}}},property:{pattern:/([\r\n{,][\t ]*)(?:(?!\d)\w+|"(?:[^"\r\n]|"")*"(?!"))(?=[ \t]*:)/,lookbehind:!0,greedy:!0},string:{pattern:/"(?:[^"\r\n]|"")*"(?!")/,greedy:!0},"class-name":{pattern:/(\bAs[\t ]+)\w+/i,lookbehind:!0},keyword:/\b(?:As|Dim|Each|Else|Elseif|End|Exit|For|Function|Goto|If|In|Print|Return|Step|Stop|Sub|Then|To|While)\b/i,boolean:/\b(?:false|true)\b/i,function:/\b(?!\d)\w+(?=[\t ]*\()/,number:/(?:\b\d+(?:\.\d+)?(?:[ed][+-]\d+)?|&h[a-f\d]+)\b[%&!#]?/i,operator:/--|\+\+|>>=?|<<=?|<>|[-+*/\\<>]=?|[:^=?]|\b(?:and|mod|not|or)\b/i,punctuation:/[.,;()[\]{}]/,constant:/\b(?:LINE_NUM)\b/i},Prism.languages.brightscript["directive-statement"].inside.expression.inside=Prism.languages.brightscript},6609:function(){Prism.languages.bro={comment:{pattern:/(^|[^\\$])#.*/,lookbehind:!0,inside:{italic:/\b(?:FIXME|TODO|XXX)\b/}},string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},boolean:/\b[TF]\b/,function:{pattern:/(\b(?:event|function|hook)[ \t]+)\w+(?:::\w+)?/,lookbehind:!0},builtin:/(?:@(?:load(?:-(?:plugin|sigs))?|unload|prefixes|ifn?def|else|(?:end)?if|DIR|FILENAME))|(?:&?(?:add_func|create_expire|default|delete_func|encrypt|error_handler|expire_func|group|log|mergeable|optional|persistent|priority|raw_output|read_expire|redef|rotate_interval|rotate_size|synchronized|type_column|write_expire))/,constant:{pattern:/(\bconst[ \t]+)\w+/i,lookbehind:!0},keyword:/\b(?:add|addr|alarm|any|bool|break|const|continue|count|delete|double|else|enum|event|export|file|for|function|global|hook|if|in|int|interval|local|module|next|of|opaque|pattern|port|print|record|return|schedule|set|string|subnet|table|time|timeout|using|vector|when)\b/,operator:/--?|\+\+?|!=?=?|<=?|>=?|==?=?|&&|\|\|?|\?|\*|\/|~|\^|%/,number:/\b0x[\da-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?/i,punctuation:/[{}[\];(),.:]/}},1354:function(){Prism.languages.bsl={comment:/\/\/.*/,string:[{pattern:/"(?:[^"]|"")*"(?!")/,greedy:!0},{pattern:/'(?:[^'\r\n\\]|\\.)*'/}],keyword:[{pattern:/(^|[^\w\u0400-\u0484\u0487-\u052f\u1d2b\u1d78\u2de0-\u2dff\ua640-\ua69f\ufe2e\ufe2f])(?:пока|для|новый|прервать|попытка|исключение|вызватьисключение|иначе|конецпопытки|неопределено|функция|перем|возврат|конецфункции|если|иначеесли|процедура|конецпроцедуры|тогда|знач|экспорт|конецесли|из|каждого|истина|ложь|по|цикл|конеццикла|выполнить)(?![\w\u0400-\u0484\u0487-\u052f\u1d2b\u1d78\u2de0-\u2dff\ua640-\ua69f\ufe2e\ufe2f])/i,lookbehind:!0},{pattern:/\b(?:break|do|each|else|elseif|enddo|endfunction|endif|endprocedure|endtry|except|execute|export|false|for|function|if|in|new|null|procedure|raise|return|then|to|true|try|undefined|val|var|while)\b/i}],number:{pattern:/(^(?=\d)|[^\w\u0400-\u0484\u0487-\u052f\u1d2b\u1d78\u2de0-\u2dff\ua640-\ua69f\ufe2e\ufe2f])(?:\d+(?:\.\d*)?|\.\d+)(?:E[+-]?\d+)?/i,lookbehind:!0},operator:[/[<>+\-*/]=?|[%=]/,{pattern:/(^|[^\w\u0400-\u0484\u0487-\u052f\u1d2b\u1d78\u2de0-\u2dff\ua640-\ua69f\ufe2e\ufe2f])(?:и|или|не)(?![\w\u0400-\u0484\u0487-\u052f\u1d2b\u1d78\u2de0-\u2dff\ua640-\ua69f\ufe2e\ufe2f])/i,lookbehind:!0},{pattern:/\b(?:and|not|or)\b/i}],punctuation:/\(\.|\.\)|[()\[\]:;,.]/,directive:[{pattern:/^([ \t]*)&.*/m,lookbehind:!0,greedy:!0,alias:"important"},{pattern:/^([ \t]*)#.*/gm,lookbehind:!0,greedy:!0,alias:"important"}]},Prism.languages.oscript=Prism.languages["bsl"]},4279:function(){Prism.languages.c=Prism.languages.extend("clike",{comment:{pattern:/\/\/(?:[^\r\n\\]|\\(?:\r\n?|\n|(?![\r\n])))*|\/\*[\s\S]*?(?:\*\/|$)/,greedy:!0},string:{pattern:/"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"/,greedy:!0},"class-name":{pattern:/(\b(?:enum|struct)\s+(?:__attribute__\s*\(\([\s\S]*?\)\)\s*)?)\w+|\b[a-z]\w*_t\b/,lookbehind:!0},keyword:/\b(?:_Alignas|_Alignof|_Atomic|_Bool|_Complex|_Generic|_Imaginary|_Noreturn|_Static_assert|_Thread_local|__attribute__|asm|auto|break|case|char|const|continue|default|do|double|else|enum|extern|float|for|goto|if|inline|int|long|register|return|short|signed|sizeof|static|struct|switch|typedef|typeof|union|unsigned|void|volatile|while)\b/,function:/\b[a-z_]\w*(?=\s*\()/i,number:/(?:\b0x(?:[\da-f]+(?:\.[\da-f]*)?|\.[\da-f]+)(?:p[+-]?\d+)?|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?)[ful]{0,4}/i,operator:/>>=?|<<=?|->|([-+&|:])\1|[?:~]|[-+*/%&|^!=<>]=?/}),Prism.languages.insertBefore("c","string",{char:{pattern:/'(?:\\(?:\r\n|[\s\S])|[^'\\\r\n]){0,32}'/,greedy:!0}}),Prism.languages.insertBefore("c","string",{macro:{pattern:/(^[\t ]*)#\s*[a-z](?:[^\r\n\\/]|\/(?!\*)|\/\*(?:[^*]|\*(?!\/))*\*\/|\\(?:\r\n|[\s\S]))*/im,lookbehind:!0,greedy:!0,alias:"property",inside:{string:[{pattern:/^(#\s*include\s*)<[^>]+>/,lookbehind:!0},Prism.languages.c["string"]],char:Prism.languages.c["char"],comment:Prism.languages.c["comment"],"macro-name":[{pattern:/(^#\s*define\s+)\w+\b(?!\()/i,lookbehind:!0},{pattern:/(^#\s*define\s+)\w+\b(?=\()/i,lookbehind:!0,alias:"function"}],directive:{pattern:/^(#\s*)[a-z]+/,lookbehind:!0,alias:"keyword"},"directive-hash":/^#/,punctuation:/##|\\(?=[\r\n])/,expression:{pattern:/\S[\s\S]*/,inside:Prism.languages.c}}}}),Prism.languages.insertBefore("c","function",{constant:/\b(?:EOF|NULL|SEEK_CUR|SEEK_END|SEEK_SET|__DATE__|__FILE__|__LINE__|__TIMESTAMP__|__TIME__|__func__|stderr|stdin|stdout)\b/}),delete Prism.languages.c["boolean"]},6902:function(){Prism.languages.cfscript=Prism.languages.extend("clike",{comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?(?:\*\/|$)/,lookbehind:!0,inside:{annotation:{pattern:/(?:^|[^.])@[\w\.]+/,alias:"punctuation"}}},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],keyword:/\b(?:abstract|break|catch|component|continue|default|do|else|extends|final|finally|for|function|if|in|include|package|private|property|public|remote|required|rethrow|return|static|switch|throw|try|var|while|xml)\b(?!\s*=)/,operator:[/\+\+|--|&&|\|\||::|=>|[!=]==|[-+*/%&|^!=<>]=?|\?(?:\.|:)?|:/,/\b(?:and|contains|eq|equal|eqv|gt|gte|imp|is|lt|lte|mod|not|or|xor)\b/],scope:{pattern:/\b(?:application|arguments|cgi|client|cookie|local|session|super|this|variables)\b/,alias:"global"},type:{pattern:/\b(?:any|array|binary|boolean|date|guid|numeric|query|string|struct|uuid|void|xml)\b/,alias:"builtin"}}),Prism.languages.insertBefore("cfscript","keyword",{"function-variable":{pattern:/[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*[=:]\s*(?:\bfunction\b|(?:\((?:[^()]|\([^()]*\))*\)|(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)\s*=>))/,alias:"function"}}),delete Prism.languages.cfscript["class-name"],Prism.languages.cfc=Prism.languages["cfscript"]},4681:function(){Prism.languages.chaiscript=Prism.languages.extend("clike",{string:{pattern:/(^|[^\\])'(?:[^'\\]|\\[\s\S])*'/,lookbehind:!0,greedy:!0},"class-name":[{pattern:/(\bclass\s+)\w+/,lookbehind:!0},{pattern:/(\b(?:attr|def)\s+)\w+(?=\s*::)/,lookbehind:!0}],keyword:/\b(?:attr|auto|break|case|catch|class|continue|def|default|else|finally|for|fun|global|if|return|switch|this|try|var|while)\b/,number:[Prism.languages.cpp.number,/\b(?:Infinity|NaN)\b/],operator:/>>=?|<<=?|\|\||&&|:[:=]?|--|\+\+|[=!<>+\-*/%|&^]=?|[?~]|`[^`\r\n]{1,4}`/}),Prism.languages.insertBefore("chaiscript","operator",{"parameter-type":{pattern:/([,(]\s*)\w+(?=\s+\w)/,lookbehind:!0,alias:"class-name"}}),Prism.languages.insertBefore("chaiscript","string",{"string-interpolation":{pattern:/(^|[^\\])"(?:[^"$\\]|\\[\s\S]|\$(?!\{)|\$\{(?:[^{}]|\{(?:[^{}]|\{[^{}]*\})*\})*\})*"/,lookbehind:!0,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$\{(?:[^{}]|\{(?:[^{}]|\{[^{}]*\})*\})*\}/,lookbehind:!0,inside:{"interpolation-expression":{pattern:/(^\$\{)[\s\S]+(?=\}$)/,lookbehind:!0,inside:Prism.languages.chaiscript},"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"punctuation"}}},string:/[\s\S]+/}}})},4677:function(){Prism.languages.cil={comment:/\/\/.*/,string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},directive:{pattern:/(^|\W)\.[a-z]+(?=\s)/,lookbehind:!0,alias:"class-name"},variable:/\[[\w\.]+\]/,keyword:/\b(?:abstract|ansi|assembly|auto|autochar|beforefieldinit|bool|bstr|byvalstr|catch|char|cil|class|currency|date|decimal|default|enum|error|explicit|extends|extern|famandassem|family|famorassem|final(?:ly)?|float32|float64|hidebysig|u?int(?:8|16|32|64)?|iant|idispatch|implements|import|initonly|instance|interface|iunknown|literal|lpstr|lpstruct|lptstr|lpwstr|managed|method|native(?:Type)?|nested|newslot|object(?:ref)?|pinvokeimpl|private|privatescope|public|reqsecobj|rtspecialname|runtime|sealed|sequential|serializable|specialname|static|string|struct|syschar|tbstr|unicode|unmanagedexp|unsigned|value(?:type)?|variant|virtual|void)\b/,function:/\b(?:(?:constrained|no|readonly|tail|unaligned|volatile)\.)?(?:conv\.(?:[iu][1248]?|ovf\.[iu][1248]?(?:\.un)?|r\.un|r4|r8)|ldc\.(?:i4(?:\.\d+|\.[mM]1|\.s)?|i8|r4|r8)|ldelem(?:\.[iu][1248]?|\.r[48]|\.ref|a)?|ldind\.(?:[iu][1248]?|r[48]|ref)|stelem\.?(?:i[1248]?|r[48]|ref)?|stind\.(?:i[1248]?|r[48]|ref)?|end(?:fault|filter|finally)|ldarg(?:\.[0-3s]|a(?:\.s)?)?|ldloc(?:\.\d+|\.s)?|sub(?:\.ovf(?:\.un)?)?|mul(?:\.ovf(?:\.un)?)?|add(?:\.ovf(?:\.un)?)?|stloc(?:\.[0-3s])?|refany(?:type|val)|blt(?:\.un)?(?:\.s)?|ble(?:\.un)?(?:\.s)?|bgt(?:\.un)?(?:\.s)?|bge(?:\.un)?(?:\.s)?|unbox(?:\.any)?|init(?:blk|obj)|call(?:i|virt)?|brfalse(?:\.s)?|bne\.un(?:\.s)?|ldloca(?:\.s)?|brzero(?:\.s)?|brtrue(?:\.s)?|brnull(?:\.s)?|brinst(?:\.s)?|starg(?:\.s)?|leave(?:\.s)?|shr(?:\.un)?|rem(?:\.un)?|div(?:\.un)?|clt(?:\.un)?|alignment|castclass|ldvirtftn|beq(?:\.s)?|ckfinite|ldsflda|ldtoken|localloc|mkrefany|rethrow|cgt\.un|arglist|switch|stsfld|sizeof|newobj|newarr|ldsfld|ldnull|ldflda|isinst|throw|stobj|stfld|ldstr|ldobj|ldlen|ldftn|ldfld|cpobj|cpblk|break|br\.s|xor|shl|ret|pop|not|nop|neg|jmp|dup|cgt|ceq|box|and|or|br)\b/,boolean:/\b(?:false|true)\b/,number:/\b-?(?:0x[0-9a-f]+|\d+)(?:\.[0-9a-f]+)?\b/i,punctuation:/[{}[\];(),:=]|IL_[0-9A-Za-z]+/}},1474:function(){Prism.languages.cilkc=Prism.languages.insertBefore("c","function",{"parallel-keyword":{pattern:/\bcilk_(?:for|reducer|s(?:cope|pawn|ync))\b/,alias:"keyword"}}),Prism.languages["cilk-c"]=Prism.languages["cilkc"]},5798:function(){Prism.languages.cilkcpp=Prism.languages.insertBefore("cpp","function",{"parallel-keyword":{pattern:/\bcilk_(?:for|reducer|s(?:cope|pawn|ync))\b/,alias:"keyword"}}),Prism.languages["cilk-cpp"]=Prism.languages["cilkcpp"],Prism.languages["cilk"]=Prism.languages["cilkcpp"]},5433:function(){Prism.languages.clike={comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?(?:\*\/|$)/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},"class-name":{pattern:/(\b(?:class|extends|implements|instanceof|interface|new|trait)\s+|\bcatch\s+\()[\w.\\]+/i,lookbehind:!0,inside:{punctuation:/[.\\]/}},keyword:/\b(?:break|catch|continue|do|else|finally|for|function|if|in|instanceof|new|null|return|throw|try|while)\b/,boolean:/\b(?:false|true)\b/,function:/\b\w+(?=\()/,number:/\b0x[\da-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?/i,operator:/[<>]=?|[!=]=?=?|--?|\+\+?|&&?|\|\|?|[?*/~^%]/,punctuation:/[{}[\];(),.:]/}},2812:function(){Prism.languages.clojure={comment:{pattern:/;.*/,greedy:!0},string:{pattern:/"(?:[^"\\]|\\.)*"/,greedy:!0},char:/\\\w+/,symbol:{pattern:/(^|[\s()\[\]{},])::?[\w*+!?'<>=/.-]+/,lookbehind:!0},keyword:{pattern:/(\()(?:-|->|->>|\.|\.\.|\*|\/|\+|<|<=|=|==|>|>=|accessor|agent|agent-errors|aget|alength|all-ns|alter|and|append-child|apply|array-map|aset|aset-boolean|aset-byte|aset-char|aset-double|aset-float|aset-int|aset-long|aset-short|assert|assoc|await|await-for|bean|binding|bit-and|bit-not|bit-or|bit-shift-left|bit-shift-right|bit-xor|boolean|branch\?|butlast|byte|cast|char|children|class|clear-agent-errors|comment|commute|comp|comparator|complement|concat|cond|conj|cons|constantly|construct-proxy|contains\?|count|create-ns|create-struct|cycle|dec|declare|def|def-|definline|definterface|defmacro|defmethod|defmulti|defn|defn-|defonce|defproject|defprotocol|defrecord|defstruct|deftype|deref|difference|disj|dissoc|distinct|do|doall|doc|dorun|doseq|dosync|dotimes|doto|double|down|drop|drop-while|edit|end\?|ensure|eval|every\?|false\?|ffirst|file-seq|filter|find|find-doc|find-ns|find-var|first|float|flush|fn|fnseq|for|frest|gensym|get|get-proxy-class|hash-map|hash-set|identical\?|identity|if|if-let|if-not|import|in-ns|inc|index|insert-child|insert-left|insert-right|inspect-table|inspect-tree|instance\?|int|interleave|intersection|into|into-array|iterate|join|key|keys|keyword|keyword\?|last|lazy-cat|lazy-cons|left|lefts|let|line-seq|list|list\*|load|load-file|locking|long|loop|macroexpand|macroexpand-1|make-array|make-node|map|map-invert|map\?|mapcat|max|max-key|memfn|merge|merge-with|meta|min|min-key|monitor-enter|name|namespace|neg\?|new|newline|next|nil\?|node|not|not-any\?|not-every\?|not=|ns|ns-imports|ns-interns|ns-map|ns-name|ns-publics|ns-refers|ns-resolve|ns-unmap|nth|nthrest|or|parse|partial|path|peek|pop|pos\?|pr|pr-str|print|print-str|println|println-str|prn|prn-str|project|proxy|proxy-mappings|quot|quote|rand|rand-int|range|re-find|re-groups|re-matcher|re-matches|re-pattern|re-seq|read|read-line|recur|reduce|ref|ref-set|refer|rem|remove|remove-method|remove-ns|rename|rename-keys|repeat|replace|replicate|resolve|rest|resultset-seq|reverse|rfirst|right|rights|root|rrest|rseq|second|select|select-keys|send|send-off|seq|seq-zip|seq\?|set|set!|short|slurp|some|sort|sort-by|sorted-map|sorted-map-by|sorted-set|special-symbol\?|split-at|split-with|str|string\?|struct|struct-map|subs|subvec|symbol|symbol\?|sync|take|take-nth|take-while|test|throw|time|to-array|to-array-2d|tree-seq|true\?|try|union|up|update-proxy|val|vals|var|var-get|var-set|var\?|vector|vector-zip|vector\?|when|when-first|when-let|when-not|with-local-vars|with-meta|with-open|with-out-str|xml-seq|xml-zip|zero\?|zipmap|zipper)(?=[\s)]|$)/,lookbehind:!0},boolean:/\b(?:false|nil|true)\b/,number:{pattern:/(^|[^\w$@])(?:\d+(?:[/.]\d+)?(?:e[+-]?\d+)?|0x[a-f0-9]+|[1-9]\d?r[a-z0-9]+)[lmn]?(?![\w$@])/i,lookbehind:!0},function:{pattern:/((?:^|[^'])\()[\w*+!?'<>=/.-]+(?=[\s)]|$)/,lookbehind:!0},operator:/[#@^`~]/,punctuation:/[{}\[\](),]/}},4225:function(){Prism.languages.cmake={comment:/#.*/,string:{pattern:/"(?:[^\\"]|\\.)*"/,greedy:!0,inside:{interpolation:{pattern:/\$\{(?:[^{}$]|\$\{[^{}$]*\})*\}/,inside:{punctuation:/\$\{|\}/,variable:/\w+/}}}},variable:/\b(?:CMAKE_\w+|\w+_(?:(?:BINARY|SOURCE)_DIR|DESCRIPTION|HOMEPAGE_URL|ROOT|VERSION(?:_MAJOR|_MINOR|_PATCH|_TWEAK)?)|(?:ANDROID|APPLE|BORLAND|BUILD_SHARED_LIBS|CACHE|CPACK_(?:ABSOLUTE_DESTINATION_FILES|COMPONENT_INCLUDE_TOPLEVEL_DIRECTORY|ERROR_ON_ABSOLUTE_INSTALL_DESTINATION|INCLUDE_TOPLEVEL_DIRECTORY|INSTALL_DEFAULT_DIRECTORY_PERMISSIONS|INSTALL_SCRIPT|PACKAGING_INSTALL_PREFIX|SET_DESTDIR|WARN_ON_ABSOLUTE_INSTALL_DESTINATION)|CTEST_(?:BINARY_DIRECTORY|BUILD_COMMAND|BUILD_NAME|BZR_COMMAND|BZR_UPDATE_OPTIONS|CHANGE_ID|CHECKOUT_COMMAND|CONFIGURATION_TYPE|CONFIGURE_COMMAND|COVERAGE_COMMAND|COVERAGE_EXTRA_FLAGS|CURL_OPTIONS|CUSTOM_(?:COVERAGE_EXCLUDE|ERROR_EXCEPTION|ERROR_MATCH|ERROR_POST_CONTEXT|ERROR_PRE_CONTEXT|MAXIMUM_FAILED_TEST_OUTPUT_SIZE|MAXIMUM_NUMBER_OF_(?:ERRORS|WARNINGS)|MAXIMUM_PASSED_TEST_OUTPUT_SIZE|MEMCHECK_IGNORE|POST_MEMCHECK|POST_TEST|PRE_MEMCHECK|PRE_TEST|TESTS_IGNORE|WARNING_EXCEPTION|WARNING_MATCH)|CVS_CHECKOUT|CVS_COMMAND|CVS_UPDATE_OPTIONS|DROP_LOCATION|DROP_METHOD|DROP_SITE|DROP_SITE_CDASH|DROP_SITE_PASSWORD|DROP_SITE_USER|EXTRA_COVERAGE_GLOB|GIT_COMMAND|GIT_INIT_SUBMODULES|GIT_UPDATE_CUSTOM|GIT_UPDATE_OPTIONS|HG_COMMAND|HG_UPDATE_OPTIONS|LABELS_FOR_SUBPROJECTS|MEMORYCHECK_(?:COMMAND|COMMAND_OPTIONS|SANITIZER_OPTIONS|SUPPRESSIONS_FILE|TYPE)|NIGHTLY_START_TIME|P4_CLIENT|P4_COMMAND|P4_OPTIONS|P4_UPDATE_OPTIONS|RUN_CURRENT_SCRIPT|SCP_COMMAND|SITE|SOURCE_DIRECTORY|SUBMIT_URL|SVN_COMMAND|SVN_OPTIONS|SVN_UPDATE_OPTIONS|TEST_LOAD|TEST_TIMEOUT|TRIGGER_SITE|UPDATE_COMMAND|UPDATE_OPTIONS|UPDATE_VERSION_ONLY|USE_LAUNCHERS)|CYGWIN|ENV|EXECUTABLE_OUTPUT_PATH|GHS-MULTI|IOS|LIBRARY_OUTPUT_PATH|MINGW|MSVC(?:10|11|12|14|60|70|71|80|90|_IDE|_TOOLSET_VERSION|_VERSION)?|MSYS|PROJECT_NAME|UNIX|WIN32|WINCE|WINDOWS_PHONE|WINDOWS_STORE|XCODE))\b/,property:/\b(?:cxx_\w+|(?:ARCHIVE_OUTPUT_(?:DIRECTORY|NAME)|COMPILE_DEFINITIONS|COMPILE_PDB_NAME|COMPILE_PDB_OUTPUT_DIRECTORY|EXCLUDE_FROM_DEFAULT_BUILD|IMPORTED_(?:IMPLIB|LIBNAME|LINK_DEPENDENT_LIBRARIES|LINK_INTERFACE_LANGUAGES|LINK_INTERFACE_LIBRARIES|LINK_INTERFACE_MULTIPLICITY|LOCATION|NO_SONAME|OBJECTS|SONAME)|INTERPROCEDURAL_OPTIMIZATION|LIBRARY_OUTPUT_DIRECTORY|LIBRARY_OUTPUT_NAME|LINK_FLAGS|LINK_INTERFACE_LIBRARIES|LINK_INTERFACE_MULTIPLICITY|LOCATION|MAP_IMPORTED_CONFIG|OSX_ARCHITECTURES|OUTPUT_NAME|PDB_NAME|PDB_OUTPUT_DIRECTORY|RUNTIME_OUTPUT_DIRECTORY|RUNTIME_OUTPUT_NAME|STATIC_LIBRARY_FLAGS|VS_CSHARP|VS_DOTNET_REFERENCEPROP|VS_DOTNET_REFERENCE|VS_GLOBAL_SECTION_POST|VS_GLOBAL_SECTION_PRE|VS_GLOBAL|XCODE_ATTRIBUTE)_\w+|\w+_(?:CLANG_TIDY|COMPILER_LAUNCHER|CPPCHECK|CPPLINT|INCLUDE_WHAT_YOU_USE|OUTPUT_NAME|POSTFIX|VISIBILITY_PRESET)|ABSTRACT|ADDITIONAL_MAKE_CLEAN_FILES|ADVANCED|ALIASED_TARGET|ALLOW_DUPLICATE_CUSTOM_TARGETS|ANDROID_(?:ANT_ADDITIONAL_OPTIONS|API|API_MIN|ARCH|ASSETS_DIRECTORIES|GUI|JAR_DEPENDENCIES|NATIVE_LIB_DEPENDENCIES|NATIVE_LIB_DIRECTORIES|PROCESS_MAX|PROGUARD|PROGUARD_CONFIG_PATH|SECURE_PROPS_PATH|SKIP_ANT_STEP|STL_TYPE)|ARCHIVE_OUTPUT_DIRECTORY|ATTACHED_FILES|ATTACHED_FILES_ON_FAIL|AUTOGEN_(?:BUILD_DIR|ORIGIN_DEPENDS|PARALLEL|SOURCE_GROUP|TARGETS_FOLDER|TARGET_DEPENDS)|AUTOMOC|AUTOMOC_(?:COMPILER_PREDEFINES|DEPEND_FILTERS|EXECUTABLE|MACRO_NAMES|MOC_OPTIONS|SOURCE_GROUP|TARGETS_FOLDER)|AUTORCC|AUTORCC_EXECUTABLE|AUTORCC_OPTIONS|AUTORCC_SOURCE_GROUP|AUTOUIC|AUTOUIC_EXECUTABLE|AUTOUIC_OPTIONS|AUTOUIC_SEARCH_PATHS|BINARY_DIR|BUILDSYSTEM_TARGETS|BUILD_RPATH|BUILD_RPATH_USE_ORIGIN|BUILD_WITH_INSTALL_NAME_DIR|BUILD_WITH_INSTALL_RPATH|BUNDLE|BUNDLE_EXTENSION|CACHE_VARIABLES|CLEAN_NO_CUSTOM|COMMON_LANGUAGE_RUNTIME|COMPATIBLE_INTERFACE_(?:BOOL|NUMBER_MAX|NUMBER_MIN|STRING)|COMPILE_(?:DEFINITIONS|FEATURES|FLAGS|OPTIONS|PDB_NAME|PDB_OUTPUT_DIRECTORY)|COST|CPACK_DESKTOP_SHORTCUTS|CPACK_NEVER_OVERWRITE|CPACK_PERMANENT|CPACK_STARTUP_SHORTCUTS|CPACK_START_MENU_SHORTCUTS|CPACK_WIX_ACL|CROSSCOMPILING_EMULATOR|CUDA_EXTENSIONS|CUDA_PTX_COMPILATION|CUDA_RESOLVE_DEVICE_SYMBOLS|CUDA_SEPARABLE_COMPILATION|CUDA_STANDARD|CUDA_STANDARD_REQUIRED|CXX_EXTENSIONS|CXX_STANDARD|CXX_STANDARD_REQUIRED|C_EXTENSIONS|C_STANDARD|C_STANDARD_REQUIRED|DEBUG_CONFIGURATIONS|DEFINE_SYMBOL|DEFINITIONS|DEPENDS|DEPLOYMENT_ADDITIONAL_FILES|DEPLOYMENT_REMOTE_DIRECTORY|DISABLED|DISABLED_FEATURES|ECLIPSE_EXTRA_CPROJECT_CONTENTS|ECLIPSE_EXTRA_NATURES|ENABLED_FEATURES|ENABLED_LANGUAGES|ENABLE_EXPORTS|ENVIRONMENT|EXCLUDE_FROM_ALL|EXCLUDE_FROM_DEFAULT_BUILD|EXPORT_NAME|EXPORT_PROPERTIES|EXTERNAL_OBJECT|EchoString|FAIL_REGULAR_EXPRESSION|FIND_LIBRARY_USE_LIB32_PATHS|FIND_LIBRARY_USE_LIB64_PATHS|FIND_LIBRARY_USE_LIBX32_PATHS|FIND_LIBRARY_USE_OPENBSD_VERSIONING|FIXTURES_CLEANUP|FIXTURES_REQUIRED|FIXTURES_SETUP|FOLDER|FRAMEWORK|Fortran_FORMAT|Fortran_MODULE_DIRECTORY|GENERATED|GENERATOR_FILE_NAME|GENERATOR_IS_MULTI_CONFIG|GHS_INTEGRITY_APP|GHS_NO_SOURCE_GROUP_FILE|GLOBAL_DEPENDS_DEBUG_MODE|GLOBAL_DEPENDS_NO_CYCLES|GNUtoMS|HAS_CXX|HEADER_FILE_ONLY|HELPSTRING|IMPLICIT_DEPENDS_INCLUDE_TRANSFORM|IMPORTED|IMPORTED_(?:COMMON_LANGUAGE_RUNTIME|CONFIGURATIONS|GLOBAL|IMPLIB|LIBNAME|LINK_DEPENDENT_LIBRARIES|LINK_INTERFACE_(?:LANGUAGES|LIBRARIES|MULTIPLICITY)|LOCATION|NO_SONAME|OBJECTS|SONAME)|IMPORT_PREFIX|IMPORT_SUFFIX|INCLUDE_DIRECTORIES|INCLUDE_REGULAR_EXPRESSION|INSTALL_NAME_DIR|INSTALL_RPATH|INSTALL_RPATH_USE_LINK_PATH|INTERFACE_(?:AUTOUIC_OPTIONS|COMPILE_DEFINITIONS|COMPILE_FEATURES|COMPILE_OPTIONS|INCLUDE_DIRECTORIES|LINK_DEPENDS|LINK_DIRECTORIES|LINK_LIBRARIES|LINK_OPTIONS|POSITION_INDEPENDENT_CODE|SOURCES|SYSTEM_INCLUDE_DIRECTORIES)|INTERPROCEDURAL_OPTIMIZATION|IN_TRY_COMPILE|IOS_INSTALL_COMBINED|JOB_POOLS|JOB_POOL_COMPILE|JOB_POOL_LINK|KEEP_EXTENSION|LABELS|LANGUAGE|LIBRARY_OUTPUT_DIRECTORY|LINKER_LANGUAGE|LINK_(?:DEPENDS|DEPENDS_NO_SHARED|DIRECTORIES|FLAGS|INTERFACE_LIBRARIES|INTERFACE_MULTIPLICITY|LIBRARIES|OPTIONS|SEARCH_END_STATIC|SEARCH_START_STATIC|WHAT_YOU_USE)|LISTFILE_STACK|LOCATION|MACOSX_BUNDLE|MACOSX_BUNDLE_INFO_PLIST|MACOSX_FRAMEWORK_INFO_PLIST|MACOSX_PACKAGE_LOCATION|MACOSX_RPATH|MACROS|MANUALLY_ADDED_DEPENDENCIES|MEASUREMENT|MODIFIED|NAME|NO_SONAME|NO_SYSTEM_FROM_IMPORTED|OBJECT_DEPENDS|OBJECT_OUTPUTS|OSX_ARCHITECTURES|OUTPUT_NAME|PACKAGES_FOUND|PACKAGES_NOT_FOUND|PARENT_DIRECTORY|PASS_REGULAR_EXPRESSION|PDB_NAME|PDB_OUTPUT_DIRECTORY|POSITION_INDEPENDENT_CODE|POST_INSTALL_SCRIPT|PREDEFINED_TARGETS_FOLDER|PREFIX|PRE_INSTALL_SCRIPT|PRIVATE_HEADER|PROCESSORS|PROCESSOR_AFFINITY|PROJECT_LABEL|PUBLIC_HEADER|REPORT_UNDEFINED_PROPERTIES|REQUIRED_FILES|RESOURCE|RESOURCE_LOCK|RULE_LAUNCH_COMPILE|RULE_LAUNCH_CUSTOM|RULE_LAUNCH_LINK|RULE_MESSAGES|RUNTIME_OUTPUT_DIRECTORY|RUN_SERIAL|SKIP_AUTOGEN|SKIP_AUTOMOC|SKIP_AUTORCC|SKIP_AUTOUIC|SKIP_BUILD_RPATH|SKIP_RETURN_CODE|SOURCES|SOURCE_DIR|SOVERSION|STATIC_LIBRARY_FLAGS|STATIC_LIBRARY_OPTIONS|STRINGS|SUBDIRECTORIES|SUFFIX|SYMBOLIC|TARGET_ARCHIVES_MAY_BE_SHARED_LIBS|TARGET_MESSAGES|TARGET_SUPPORTS_SHARED_LIBS|TESTS|TEST_INCLUDE_FILE|TEST_INCLUDE_FILES|TIMEOUT|TIMEOUT_AFTER_MATCH|TYPE|USE_FOLDERS|VALUE|VARIABLES|VERSION|VISIBILITY_INLINES_HIDDEN|VS_(?:CONFIGURATION_TYPE|COPY_TO_OUT_DIR|DEBUGGER_(?:COMMAND|COMMAND_ARGUMENTS|ENVIRONMENT|WORKING_DIRECTORY)|DEPLOYMENT_CONTENT|DEPLOYMENT_LOCATION|DOTNET_REFERENCES|DOTNET_REFERENCES_COPY_LOCAL|INCLUDE_IN_VSIX|IOT_STARTUP_TASK|KEYWORD|RESOURCE_GENERATOR|SCC_AUXPATH|SCC_LOCALPATH|SCC_PROJECTNAME|SCC_PROVIDER|SDK_REFERENCES|SHADER_(?:DISABLE_OPTIMIZATIONS|ENABLE_DEBUG|ENTRYPOINT|FLAGS|MODEL|OBJECT_FILE_NAME|OUTPUT_HEADER_FILE|TYPE|VARIABLE_NAME)|STARTUP_PROJECT|TOOL_OVERRIDE|USER_PROPS|WINRT_COMPONENT|WINRT_EXTENSIONS|WINRT_REFERENCES|XAML_TYPE)|WILL_FAIL|WIN32_EXECUTABLE|WINDOWS_EXPORT_ALL_SYMBOLS|WORKING_DIRECTORY|WRAP_EXCLUDE|XCODE_(?:EMIT_EFFECTIVE_PLATFORM_NAME|EXPLICIT_FILE_TYPE|FILE_ATTRIBUTES|LAST_KNOWN_FILE_TYPE|PRODUCT_TYPE|SCHEME_(?:ADDRESS_SANITIZER|ADDRESS_SANITIZER_USE_AFTER_RETURN|ARGUMENTS|DISABLE_MAIN_THREAD_CHECKER|DYNAMIC_LIBRARY_LOADS|DYNAMIC_LINKER_API_USAGE|ENVIRONMENT|EXECUTABLE|GUARD_MALLOC|MAIN_THREAD_CHECKER_STOP|MALLOC_GUARD_EDGES|MALLOC_SCRIBBLE|MALLOC_STACK|THREAD_SANITIZER(?:_STOP)?|UNDEFINED_BEHAVIOUR_SANITIZER(?:_STOP)?|ZOMBIE_OBJECTS))|XCTEST)\b/,keyword:/\b(?:add_compile_definitions|add_compile_options|add_custom_command|add_custom_target|add_definitions|add_dependencies|add_executable|add_library|add_link_options|add_subdirectory|add_test|aux_source_directory|break|build_command|build_name|cmake_host_system_information|cmake_minimum_required|cmake_parse_arguments|cmake_policy|configure_file|continue|create_test_sourcelist|ctest_build|ctest_configure|ctest_coverage|ctest_empty_binary_directory|ctest_memcheck|ctest_read_custom_files|ctest_run_script|ctest_sleep|ctest_start|ctest_submit|ctest_test|ctest_update|ctest_upload|define_property|else|elseif|enable_language|enable_testing|endforeach|endfunction|endif|endmacro|endwhile|exec_program|execute_process|export|export_library_dependencies|file|find_file|find_library|find_package|find_path|find_program|fltk_wrap_ui|foreach|function|get_cmake_property|get_directory_property|get_filename_component|get_property|get_source_file_property|get_target_property|get_test_property|if|include|include_directories|include_external_msproject|include_guard|include_regular_expression|install|install_files|install_programs|install_targets|link_directories|link_libraries|list|load_cache|load_command|macro|make_directory|mark_as_advanced|math|message|option|output_required_files|project|qt_wrap_cpp|qt_wrap_ui|remove|remove_definitions|return|separate_arguments|set|set_directory_properties|set_property|set_source_files_properties|set_target_properties|set_tests_properties|site_name|source_group|string|subdir_depends|subdirs|target_compile_definitions|target_compile_features|target_compile_options|target_include_directories|target_link_directories|target_link_libraries|target_link_options|target_sources|try_compile|try_run|unset|use_mangled_mesa|utility_source|variable_requires|variable_watch|while|write_file)(?=\s*\()\b/,boolean:/\b(?:FALSE|OFF|ON|TRUE)\b/,namespace:/\b(?:INTERFACE|PRIVATE|PROPERTIES|PUBLIC|SHARED|STATIC|TARGET_OBJECTS)\b/,operator:/\b(?:AND|DEFINED|EQUAL|GREATER|LESS|MATCHES|NOT|OR|STREQUAL|STRGREATER|STRLESS|VERSION_EQUAL|VERSION_GREATER|VERSION_LESS)\b/,inserted:{pattern:/\b\w+::\w+\b/,alias:"class-name"},number:/\b\d+(?:\.\d+)*\b/,function:/\b[a-z_]\w*(?=\s*\()\b/i,punctuation:/[()>}]|\$[<{]/}},7649:function(){Prism.languages.cobol={comment:{pattern:/\*>.*|(^[ \t]*)\*.*/m,lookbehind:!0,greedy:!0},string:{pattern:/[xzgn]?(?:"(?:[^\r\n"]|"")*"(?!")|'(?:[^\r\n']|'')*'(?!'))/i,greedy:!0},level:{pattern:/(^[ \t]*)\d+\b/m,lookbehind:!0,greedy:!0,alias:"number"},"class-name":{pattern:/(\bpic(?:ture)?\s+)(?:(?:[-\w$/,:*+<>]|\.(?!\s|$))(?:\(\d+\))?)+/i,lookbehind:!0,inside:{number:{pattern:/(\()\d+/,lookbehind:!0},punctuation:/[()]/}},keyword:{pattern:/(^|[^\w-])(?:ABORT|ACCEPT|ACCESS|ADD|ADDRESS|ADVANCING|AFTER|ALIGNED|ALL|ALPHABET|ALPHABETIC|ALPHABETIC-LOWER|ALPHABETIC-UPPER|ALPHANUMERIC|ALPHANUMERIC-EDITED|ALSO|ALTER|ALTERNATE|ANY|ARE|AREA|AREAS|AS|ASCENDING|ASCII|ASSIGN|ASSOCIATED-DATA|ASSOCIATED-DATA-LENGTH|AT|ATTRIBUTE|AUTHOR|AUTO|AUTO-SKIP|BACKGROUND-COLOR|BACKGROUND-COLOUR|BASIS|BEEP|BEFORE|BEGINNING|BELL|BINARY|BIT|BLANK|BLINK|BLOCK|BOTTOM|BOUNDS|BY|BYFUNCTION|BYTITLE|CALL|CANCEL|CAPABLE|CCSVERSION|CD|CF|CH|CHAINING|CHANGED|CHANNEL|CHARACTER|CHARACTERS|CLASS|CLASS-ID|CLOCK-UNITS|CLOSE|CLOSE-DISPOSITION|COBOL|CODE|CODE-SET|COL|COLLATING|COLUMN|COM-REG|COMMA|COMMITMENT|COMMON|COMMUNICATION|COMP|COMP-1|COMP-2|COMP-3|COMP-4|COMP-5|COMPUTATIONAL|COMPUTATIONAL-1|COMPUTATIONAL-2|COMPUTATIONAL-3|COMPUTATIONAL-4|COMPUTATIONAL-5|COMPUTE|CONFIGURATION|CONTAINS|CONTENT|CONTINUE|CONTROL|CONTROL-POINT|CONTROLS|CONVENTION|CONVERTING|COPY|CORR|CORRESPONDING|COUNT|CRUNCH|CURRENCY|CURSOR|DATA|DATA-BASE|DATE|DATE-COMPILED|DATE-WRITTEN|DAY|DAY-OF-WEEK|DBCS|DE|DEBUG-CONTENTS|DEBUG-ITEM|DEBUG-LINE|DEBUG-NAME|DEBUG-SUB-1|DEBUG-SUB-2|DEBUG-SUB-3|DEBUGGING|DECIMAL-POINT|DECLARATIVES|DEFAULT|DEFAULT-DISPLAY|DEFINITION|DELETE|DELIMITED|DELIMITER|DEPENDING|DESCENDING|DESTINATION|DETAIL|DFHRESP|DFHVALUE|DISABLE|DISK|DISPLAY|DISPLAY-1|DIVIDE|DIVISION|DONTCARE|DOUBLE|DOWN|DUPLICATES|DYNAMIC|EBCDIC|EGCS|EGI|ELSE|EMI|EMPTY-CHECK|ENABLE|END|END-ACCEPT|END-ADD|END-CALL|END-COMPUTE|END-DELETE|END-DIVIDE|END-EVALUATE|END-IF|END-MULTIPLY|END-OF-PAGE|END-PERFORM|END-READ|END-RECEIVE|END-RETURN|END-REWRITE|END-SEARCH|END-START|END-STRING|END-SUBTRACT|END-UNSTRING|END-WRITE|ENDING|ENTER|ENTRY|ENTRY-PROCEDURE|ENVIRONMENT|EOL|EOP|EOS|ERASE|ERROR|ESCAPE|ESI|EVALUATE|EVENT|EVERY|EXCEPTION|EXCLUSIVE|EXHIBIT|EXIT|EXPORT|EXTEND|EXTENDED|EXTERNAL|FD|FILE|FILE-CONTROL|FILLER|FINAL|FIRST|FOOTING|FOR|FOREGROUND-COLOR|FOREGROUND-COLOUR|FROM|FULL|FUNCTION|FUNCTION-POINTER|FUNCTIONNAME|GENERATE|GIVING|GLOBAL|GO|GOBACK|GRID|GROUP|HEADING|HIGH-VALUE|HIGH-VALUES|HIGHLIGHT|I-O|I-O-CONTROL|ID|IDENTIFICATION|IF|IMPLICIT|IMPORT|IN|INDEX|INDEXED|INDICATE|INITIAL|INITIALIZE|INITIATE|INPUT|INPUT-OUTPUT|INSPECT|INSTALLATION|INTEGER|INTO|INVALID|INVOKE|IS|JUST|JUSTIFIED|KANJI|KEPT|KEY|KEYBOARD|LABEL|LANGUAGE|LAST|LB|LD|LEADING|LEFT|LEFTLINE|LENGTH|LENGTH-CHECK|LIBACCESS|LIBPARAMETER|LIBRARY|LIMIT|LIMITS|LINAGE|LINAGE-COUNTER|LINE|LINE-COUNTER|LINES|LINKAGE|LIST|LOCAL|LOCAL-STORAGE|LOCK|LONG-DATE|LONG-TIME|LOW-VALUE|LOW-VALUES|LOWER|LOWLIGHT|MEMORY|MERGE|MESSAGE|MMDDYYYY|MODE|MODULES|MORE-LABELS|MOVE|MULTIPLE|MULTIPLY|NAMED|NATIONAL|NATIONAL-EDITED|NATIVE|NEGATIVE|NETWORK|NEXT|NO|NO-ECHO|NULL|NULLS|NUMBER|NUMERIC|NUMERIC-DATE|NUMERIC-EDITED|NUMERIC-TIME|OBJECT-COMPUTER|OCCURS|ODT|OF|OFF|OMITTED|ON|OPEN|OPTIONAL|ORDER|ORDERLY|ORGANIZATION|OTHER|OUTPUT|OVERFLOW|OVERLINE|OWN|PACKED-DECIMAL|PADDING|PAGE|PAGE-COUNTER|PASSWORD|PERFORM|PF|PH|PIC|PICTURE|PLUS|POINTER|PORT|POSITION|POSITIVE|PRINTER|PRINTING|PRIVATE|PROCEDURE|PROCEDURE-POINTER|PROCEDURES|PROCEED|PROCESS|PROGRAM|PROGRAM-ID|PROGRAM-LIBRARY|PROMPT|PURGE|QUEUE|QUOTE|QUOTES|RANDOM|RD|READ|READER|REAL|RECEIVE|RECEIVED|RECORD|RECORDING|RECORDS|RECURSIVE|REDEFINES|REEL|REF|REFERENCE|REFERENCES|RELATIVE|RELEASE|REMAINDER|REMARKS|REMOTE|REMOVAL|REMOVE|RENAMES|REPLACE|REPLACING|REPORT|REPORTING|REPORTS|REQUIRED|RERUN|RESERVE|RESET|RETURN|RETURN-CODE|RETURNING|REVERSE-VIDEO|REVERSED|REWIND|REWRITE|RF|RH|RIGHT|ROUNDED|RUN|SAME|SAVE|SCREEN|SD|SEARCH|SECTION|SECURE|SECURITY|SEGMENT|SEGMENT-LIMIT|SELECT|SEND|SENTENCE|SEPARATE|SEQUENCE|SEQUENTIAL|SET|SHARED|SHAREDBYALL|SHAREDBYRUNUNIT|SHARING|SHIFT-IN|SHIFT-OUT|SHORT-DATE|SIGN|SIZE|SORT|SORT-CONTROL|SORT-CORE-SIZE|SORT-FILE-SIZE|SORT-MERGE|SORT-MESSAGE|SORT-MODE-SIZE|SORT-RETURN|SOURCE|SOURCE-COMPUTER|SPACE|SPACES|SPECIAL-NAMES|STANDARD|STANDARD-1|STANDARD-2|START|STATUS|STOP|STRING|SUB-QUEUE-1|SUB-QUEUE-2|SUB-QUEUE-3|SUBTRACT|SUM|SUPPRESS|SYMBOL|SYMBOLIC|SYNC|SYNCHRONIZED|TABLE|TALLY|TALLYING|TAPE|TASK|TERMINAL|TERMINATE|TEST|TEXT|THEN|THREAD|THREAD-LOCAL|THROUGH|THRU|TIME|TIMER|TIMES|TITLE|TO|TODAYS-DATE|TODAYS-NAME|TOP|TRAILING|TRUNCATED|TYPE|TYPEDEF|UNDERLINE|UNIT|UNSTRING|UNTIL|UP|UPON|USAGE|USE|USING|VALUE|VALUES|VARYING|VIRTUAL|WAIT|WHEN|WHEN-COMPILED|WITH|WORDS|WORKING-STORAGE|WRITE|YEAR|YYYYDDD|YYYYMMDD|ZERO-FILL|ZEROES|ZEROS)(?![\w-])/i,lookbehind:!0},boolean:{pattern:/(^|[^\w-])(?:false|true)(?![\w-])/i,lookbehind:!0},number:{pattern:/(^|[^\w-])(?:[+-]?(?:(?:\d+(?:[.,]\d+)?|[.,]\d+)(?:e[+-]?\d+)?|zero))(?![\w-])/i,lookbehind:!0},operator:[/<>|[<>]=?|[=+*/&]/,{pattern:/(^|[^\w-])(?:-|and|equal|greater|less|not|or|than)(?![\w-])/i,lookbehind:!0}],punctuation:/[.:,()]/}},6213:function(){(function(e){var t=/#(?!\{).+/,n={pattern:/#\{[^}]+\}/,alias:"variable"};e.languages.coffeescript=e.languages.extend("javascript",{comment:t,string:[{pattern:/'(?:\\[\s\S]|[^\\'])*'/,greedy:!0},{pattern:/"(?:\\[\s\S]|[^\\"])*"/,greedy:!0,inside:{interpolation:n}}],keyword:/\b(?:and|break|by|catch|class|continue|debugger|delete|do|each|else|extend|extends|false|finally|for|if|in|instanceof|is|isnt|let|loop|namespace|new|no|not|null|of|off|on|or|own|return|super|switch|then|this|throw|true|try|typeof|undefined|unless|until|when|while|window|with|yes|yield)\b/,"class-member":{pattern:/@(?!\d)\w+/,alias:"variable"}}),e.languages.insertBefore("coffeescript","comment",{"multiline-comment":{pattern:/###[\s\S]+?###/,alias:"comment"},"block-regex":{pattern:/\/{3}[\s\S]*?\/{3}/,alias:"regex",inside:{comment:t,interpolation:n}}}),e.languages.insertBefore("coffeescript","string",{"inline-javascript":{pattern:/`(?:\\[\s\S]|[^\\`])*`/,inside:{delimiter:{pattern:/^`|`$/,alias:"punctuation"},script:{pattern:/[\s\S]+/,alias:"language-javascript",inside:e.languages.javascript}}},"multiline-string":[{pattern:/'''[\s\S]*?'''/,greedy:!0,alias:"string"},{pattern:/"""[\s\S]*?"""/,greedy:!0,alias:"string",inside:{interpolation:n}}]}),e.languages.insertBefore("coffeescript","keyword",{property:/(?!\d)\w+(?=\s*:(?!:))/}),delete e.languages.coffeescript["template-string"],e.languages.coffee=e.languages.coffeescript})(Prism)},9467:function(){Prism.languages.concurnas={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?(?:\*\/|$)|\/\/.*)/,lookbehind:!0,greedy:!0},langext:{pattern:/\b\w+\s*\|\|[\s\S]+?\|\|/,greedy:!0,inside:{"class-name":/^\w+/,string:{pattern:/(^\s*\|\|)[\s\S]+(?=\|\|$)/,lookbehind:!0},punctuation:/\|\|/}},function:{pattern:/((?:^|\s)def[ \t]+)[a-zA-Z_]\w*(?=\s*\()/,lookbehind:!0},keyword:/\b(?:abstract|actor|also|annotation|assert|async|await|bool|boolean|break|byte|case|catch|changed|char|class|closed|constant|continue|def|default|del|double|elif|else|enum|every|extends|false|finally|float|for|from|global|gpudef|gpukernel|if|import|in|init|inject|int|lambda|local|long|loop|match|new|nodefault|null|of|onchange|open|out|override|package|parfor|parforsync|post|pre|private|protected|provide|provider|public|return|shared|short|single|size_t|sizeof|super|sync|this|throw|trait|trans|transient|true|try|typedef|unchecked|using|val|var|void|while|with)\b/,boolean:/\b(?:false|true)\b/,number:/\b0b[01][01_]*L?\b|\b0x(?:[\da-f_]*\.)?[\da-f_p+-]+\b|(?:\b\d[\d_]*(?:\.[\d_]*)?|\B\.\d[\d_]*)(?:e[+-]?\d[\d_]*)?[dfls]?/i,punctuation:/[{}[\];(),.:]/,operator:/<==|>==|=>|->|<-|<>|&==|&<>|\?:?|\.\?|\+\+|--|[-+*/=<>]=?|[!^~]|\b(?:and|as|band|bor|bxor|comp|is|isnot|mod|or)\b=?/,annotation:{pattern:/@(?:\w+:)?(?:\w+|\[[^\]]+\])?/,alias:"builtin"}},Prism.languages.insertBefore("concurnas","langext",{"regex-literal":{pattern:/\br("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}/,lookbehind:!0,inside:Prism.languages.concurnas},regex:/[\s\S]+/}},"string-literal":{pattern:/(?:\B|\bs)("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}/,lookbehind:!0,inside:Prism.languages.concurnas},string:/[\s\S]+/}}}),Prism.languages.conc=Prism.languages.concurnas},5867:function(){(function(e){var t=/(?:(?!\s)[\d$+<=a-zA-Z\x80-\uFFFF])+/.source,n=/[^{}@#]+/.source,r=/\{[^}#@]*\}/.source,i=n+r,s=/(?:h|hours|hrs|m|min|minutes)/.source,o={pattern:/\{[^{}]*\}/,inside:{amount:{pattern:/([\{|])[^{}|*%]+/,lookbehind:!0,alias:"number"},unit:{pattern:/(%)[^}]+/,lookbehind:!0,alias:"symbol"},"servings-scaler":{pattern:/\*/,alias:"operator"},"servings-alternative-separator":{pattern:/\|/,alias:"operator"},"unit-separator":{pattern:/(?:%|(\*)%)/,lookbehind:!0,alias:"operator"},punctuation:/[{}]/}};e.languages.cooklang={comment:{pattern:/\[-[\s\S]*?-\]|--.*/,greedy:!0},meta:{pattern:/>>.*:.*/,inside:{property:{pattern:/(>>\s*)[^\s:](?:[^:]*[^\s:])?/,lookbehind:!0}}},"cookware-group":{pattern:new RegExp("#(?:"+i+"|"+t+")"),inside:{cookware:{pattern:new RegExp("(^#)(?:"+n+")"),lookbehind:!0,alias:"variable"},"cookware-keyword":{pattern:/^#/,alias:"keyword"},"quantity-group":{pattern:new RegExp(/\{[^{}@#]*\}/),inside:{quantity:{pattern:new RegExp(/(^\{)/.source+n),lookbehind:!0,alias:"number"},punctuation:/[{}]/}}}},"ingredient-group":{pattern:new RegExp("@(?:"+i+"|"+t+")"),inside:{ingredient:{pattern:new RegExp("(^@)(?:"+n+")"),lookbehind:!0,alias:"variable"},"ingredient-keyword":{pattern:/^@/,alias:"keyword"},"amount-group":o}},"timer-group":{pattern:/~(?!\s)[^@#~{}]*\{[^{}]*\}/,inside:{timer:{pattern:/(^~)[^{]+/,lookbehind:!0,alias:"variable"},"duration-group":{pattern:/\{[^{}]*\}/,inside:{punctuation:/[{}]/,unit:{pattern:new RegExp(/(%\s*)/.source+s+/\b/.source),lookbehind:!0,alias:"symbol"},operator:/%/,duration:{pattern:/\d+/,alias:"number"}}},"timer-keyword":{pattern:/^~/,alias:"keyword"}}}}})(Prism)},4307:function(){(function(e){for(var t=/\(\*(?:[^(*]|\((?!\*)|\*(?!\))|)*\*\)/.source,n=0;n<2;n++)t=t.replace(//g,(function(){return t}));t=t.replace(//g,"[]"),e.languages.coq={comment:RegExp(t),string:{pattern:/"(?:[^"]|"")*"(?!")/,greedy:!0},attribute:[{pattern:RegExp(/#\[(?:[^\[\]("]|"(?:[^"]|"")*"(?!")|\((?!\*)|)*\]/.source.replace(//g,(function(){return t}))),greedy:!0,alias:"attr-name",inside:{comment:RegExp(t),string:{pattern:/"(?:[^"]|"")*"(?!")/,greedy:!0},operator:/=/,punctuation:/^#\[|\]$|[,()]/}},{pattern:/\b(?:Cumulative|Global|Local|Monomorphic|NonCumulative|Polymorphic|Private|Program)\b/,alias:"attr-name"}],keyword:/\b(?:Abort|About|Add|Admit|Admitted|All|Arguments|As|Assumptions|Axiom|Axioms|Back|BackTo|Backtrace|BinOp|BinOpSpec|BinRel|Bind|Blacklist|Canonical|Case|Cd|Check|Class|Classes|Close|CoFixpoint|CoInductive|Coercion|Coercions|Collection|Combined|Compute|Conjecture|Conjectures|Constant|Constants|Constraint|Constructors|Context|Corollary|Create|CstOp|Custom|Cut|Debug|Declare|Defined|Definition|Delimit|Dependencies|Dependent|Derive|Diffs|Drop|Elimination|End|Entry|Equality|Eval|Example|Existential|Existentials|Existing|Export|Extern|Extraction|Fact|Fail|Field|File|Firstorder|Fixpoint|Flags|Focus|From|Funclass|Function|Functional|GC|Generalizable|Goal|Grab|Grammar|Graph|Guarded|Haskell|Heap|Hide|Hint|HintDb|Hints|Hypotheses|Hypothesis|IF|Identity|Immediate|Implicit|Implicits|Import|Include|Induction|Inductive|Infix|Info|Initial|InjTyp|Inline|Inspect|Instance|Instances|Intro|Intros|Inversion|Inversion_clear|JSON|Language|Left|Lemma|Let|Lia|Libraries|Library|Load|LoadPath|Locate|Ltac|Ltac2|ML|Match|Method|Minimality|Module|Modules|Morphism|Next|NoInline|Notation|Number|OCaml|Obligation|Obligations|Opaque|Open|Optimize|Parameter|Parameters|Parametric|Path|Paths|Prenex|Preterm|Primitive|Print|Profile|Projections|Proof|Prop|PropBinOp|PropOp|PropUOp|Property|Proposition|Pwd|Qed|Quit|Rec|Record|Recursive|Redirect|Reduction|Register|Relation|Remark|Remove|Require|Reserved|Reset|Resolve|Restart|Rewrite|Right|Ring|Rings|SProp|Saturate|Save|Scheme|Scope|Scopes|Search|SearchHead|SearchPattern|SearchRewrite|Section|Separate|Set|Setoid|Show|Signatures|Solve|Solver|Sort|Sortclass|Sorted|Spec|Step|Strategies|Strategy|String|Structure|SubClass|Subgraph|SuchThat|Tactic|Term|TestCompile|Theorem|Time|Timeout|To|Transparent|Type|Typeclasses|Types|Typing|UnOp|UnOpSpec|Undelimit|Undo|Unfocus|Unfocused|Unfold|Universe|Universes|Unshelve|Variable|Variables|Variant|Verbose|View|Visibility|Zify|_|apply|as|at|by|cofix|else|end|exists|exists2|fix|for|forall|fun|if|in|let|match|measure|move|removed|return|struct|then|using|wf|where|with)\b/,number:/\b(?:0x[a-f0-9][a-f0-9_]*(?:\.[a-f0-9_]+)?(?:p[+-]?\d[\d_]*)?|\d[\d_]*(?:\.[\d_]+)?(?:e[+-]?\d[\d_]*)?)\b/i,punct:{pattern:/@\{|\{\||\[=|:>/,alias:"punctuation"},operator:/\/\\|\\\/|\.{2,3}|:{1,2}=|\*\*|[-=]>|<(?:->?|[+:=>]|<:)|>(?:=|->)|\|[-|]?|[-!%&*+/<=>?@^~']/,punctuation:/\.\(|`\(|@\{|`\{|\{\||\[=|:>|[:.,;(){}\[\]]/}})(Prism)},8325:function(e,t,n){var r="undefined"!==typeof window?window:"undefined"!==typeof WorkerGlobalScope&&self instanceof WorkerGlobalScope?self:{},i=function(e){var t=/(?:^|\s)lang(?:uage)?-([\w-]+)(?=\s|$)/i,n=0,r={},i={manual:e.Prism&&e.Prism.manual,disableWorkerMessageHandler:e.Prism&&e.Prism.disableWorkerMessageHandler,util:{encode:function e(t){return t instanceof s?new s(t.type,e(t.content),t.alias):Array.isArray(t)?t.map(e):t.replace(/&/g,"&").replace(/=d.reach)break;var w=x.value;if(t.length>e.length)return;if(!(w instanceof s)){var T,A=1;if(_){if(T=o(E,S,e,b),!T||T.index>=e.length)break;var C=T.index,I=T.index+T[0].length,R=S;R+=x.value.length;while(C>=R)x=x.next,R+=x.value.length;if(R-=x.value.length,S=R,x.value instanceof s)continue;for(var k=x;k!==t.tail&&(Rd.reach&&(d.reach=M);var D=x.prev;O&&(D=c(t,D,O),S+=O.length),u(t,D,A);var L=new s(h,m?i.tokenize(P,m):P,y,P);if(x=c(t,D,L),N&&c(t,x,N),A>1){var F={cause:h+","+f,reach:M};a(e,t,n,x.prev,S,F),d&&F.reach>d.reach&&(d.reach=F.reach)}}}}}}function l(){var e={value:null,prev:null,next:null},t={value:null,prev:e,next:null};e.next=t,this.head=e,this.tail=t,this.length=0}function c(e,t,n){var r=t.next,i={value:n,prev:t,next:r};return t.next=i,r.prev=i,e.length++,i}function u(e,t,n){for(var r=t.next,i=0;i"+s.content+""},!e.document)return e.addEventListener?(i.disableWorkerMessageHandler||e.addEventListener("message",(function(t){var n=JSON.parse(t.data),r=n.language,s=n.code,o=n.immediateClose;e.postMessage(i.highlight(s,i.languages[r],r)),o&&e.close()}),!1),i):i;var h=i.util.currentScript();function p(){i.manual||i.highlightAll()}if(h&&(i.filename=h.src,h.hasAttribute("data-manual")&&(i.manual=!0)),!i.manual){var f=document.readyState;"loading"===f||"interactive"===f&&h&&h.defer?document.addEventListener("DOMContentLoaded",p):window.requestAnimationFrame?window.requestAnimationFrame(p):window.setTimeout(p,16)}return i}(r); -/** - * Prism: Lightweight, robust, elegant syntax highlighting - * - * @license MIT - * @author Lea Verou - * @namespace - * @public - */e.exports&&(e.exports=i),"undefined"!==typeof n.g&&(n.g.Prism=i)},2731:function(){(function(e){var t=/\b(?:alignas|alignof|asm|auto|bool|break|case|catch|char|char16_t|char32_t|char8_t|class|co_await|co_return|co_yield|compl|concept|const|const_cast|consteval|constexpr|constinit|continue|decltype|default|delete|do|double|dynamic_cast|else|enum|explicit|export|extern|final|float|for|friend|goto|if|import|inline|int|int16_t|int32_t|int64_t|int8_t|long|module|mutable|namespace|new|noexcept|nullptr|operator|override|private|protected|public|register|reinterpret_cast|requires|return|short|signed|sizeof|static|static_assert|static_cast|struct|switch|template|this|thread_local|throw|try|typedef|typeid|typename|uint16_t|uint32_t|uint64_t|uint8_t|union|unsigned|using|virtual|void|volatile|wchar_t|while)\b/,n=/\b(?!)\w+(?:\s*\.\s*\w+)*\b/.source.replace(//g,(function(){return t.source}));e.languages.cpp=e.languages.extend("c",{"class-name":[{pattern:RegExp(/(\b(?:class|concept|enum|struct|typename)\s+)(?!)\w+/.source.replace(//g,(function(){return t.source}))),lookbehind:!0},/\b[A-Z]\w*(?=\s*::\s*\w+\s*\()/,/\b[A-Z_]\w*(?=\s*::\s*~\w+\s*\()/i,/\b\w+(?=\s*<(?:[^<>]|<(?:[^<>]|<[^<>]*>)*>)*>\s*::\s*\w+\s*\()/],keyword:t,number:{pattern:/(?:\b0b[01']+|\b0x(?:[\da-f']+(?:\.[\da-f']*)?|\.[\da-f']+)(?:p[+-]?[\d']+)?|(?:\b[\d']+(?:\.[\d']*)?|\B\.[\d']+)(?:e[+-]?[\d']+)?)[ful]{0,4}/i,greedy:!0},operator:/>>=?|<<=?|->|--|\+\+|&&|\|\||[?:~]|<=>|[-+*/%&|^!=<>]=?|\b(?:and|and_eq|bitand|bitor|not|not_eq|or|or_eq|xor|xor_eq)\b/,boolean:/\b(?:false|true)\b/}),e.languages.insertBefore("cpp","string",{module:{pattern:RegExp(/(\b(?:import|module)\s+)/.source+"(?:"+/"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"|<[^<>\r\n]*>/.source+"|"+/(?:\s*:\s*)?|:\s*/.source.replace(//g,(function(){return n}))+")"),lookbehind:!0,greedy:!0,inside:{string:/^[<"][\s\S]+/,operator:/:/,punctuation:/\./}},"raw-string":{pattern:/R"([^()\\ ]{0,16})\([\s\S]*?\)\1"/,alias:"string",greedy:!0}}),e.languages.insertBefore("cpp","keyword",{"generic-function":{pattern:/\b(?!operator\b)[a-z_]\w*\s*<(?:[^<>]|<[^<>]*>)*>(?=\s*\()/i,inside:{function:/^\w+/,generic:{pattern:/<[\s\S]+/,alias:"class-name",inside:e.languages.cpp}}}}),e.languages.insertBefore("cpp","operator",{"double-colon":{pattern:/::/,alias:"punctuation"}}),e.languages.insertBefore("cpp","class-name",{"base-clause":{pattern:/(\b(?:class|struct)\s+\w+\s*:\s*)[^;{}"'\s]+(?:\s+[^;{}"'\s]+)*(?=\s*[;{])/,lookbehind:!0,greedy:!0,inside:e.languages.extend("cpp",{})}}),e.languages.insertBefore("inside","double-colon",{"class-name":/\b[a-z_]\w*\b(?!\s*::)/i},e.languages.cpp["base-clause"])})(Prism)},8980:function(){(function(e){e.languages.crystal=e.languages.extend("ruby",{keyword:[/\b(?:__DIR__|__END_LINE__|__FILE__|__LINE__|abstract|alias|annotation|as|asm|begin|break|case|class|def|do|else|elsif|end|ensure|enum|extend|for|fun|if|ifdef|include|instance_sizeof|lib|macro|module|next|of|out|pointerof|private|protected|ptr|require|rescue|return|select|self|sizeof|struct|super|then|type|typeof|undef|uninitialized|union|unless|until|when|while|with|yield)\b/,{pattern:/(\.\s*)(?:is_a|responds_to)\?/,lookbehind:!0}],number:/\b(?:0b[01_]*[01]|0o[0-7_]*[0-7]|0x[\da-fA-F_]*[\da-fA-F]|(?:\d(?:[\d_]*\d)?)(?:\.[\d_]*\d)?(?:[eE][+-]?[\d_]*\d)?)(?:_(?:[uif](?:8|16|32|64))?)?\b/,operator:[/->/,e.languages.ruby.operator],punctuation:/[(){}[\].,;\\]/}),e.languages.insertBefore("crystal","string-literal",{attribute:{pattern:/@\[.*?\]/,inside:{delimiter:{pattern:/^@\[|\]$/,alias:"punctuation"},attribute:{pattern:/^(\s*)\w+/,lookbehind:!0,alias:"class-name"},args:{pattern:/\S(?:[\s\S]*\S)?/,inside:e.languages.crystal}}},expansion:{pattern:/\{(?:\{.*?\}|%.*?%)\}/,inside:{content:{pattern:/^(\{.)[\s\S]+(?=.\}$)/,lookbehind:!0,inside:e.languages.crystal},delimiter:{pattern:/^\{[\{%]|[\}%]\}$/,alias:"operator"}}},char:{pattern:/'(?:[^\\\r\n]{1,2}|\\(?:.|u(?:[A-Fa-f0-9]{1,4}|\{[A-Fa-f0-9]{1,6}\})))'/,greedy:!0}})})(Prism)},9016:function(){(function(e){function t(e,t){return e.replace(/<<(\d+)>>/g,(function(e,n){return"(?:"+t[+n]+")"}))}function n(e,n,r){return RegExp(t(e,n),r||"")}function r(e,t){for(var n=0;n>/g,(function(){return"(?:"+e+")"}));return e.replace(/<>/g,"[^\\s\\S]")}var i={type:"bool byte char decimal double dynamic float int long object sbyte short string uint ulong ushort var void",typeDeclaration:"class enum interface record struct",contextual:"add alias and ascending async await by descending from(?=\\s*(?:\\w|$)) get global group into init(?=\\s*;) join let nameof not notnull on or orderby partial remove select set unmanaged value when where with(?=\\s*{)",other:"abstract as base break case catch checked const continue default delegate do else event explicit extern finally fixed for foreach goto if implicit in internal is lock namespace new null operator out override params private protected public readonly ref return sealed sizeof stackalloc static switch this throw try typeof unchecked unsafe using virtual volatile while yield"};function s(e){return"\\b(?:"+e.trim().replace(/ /g,"|")+")\\b"}var o=s(i.typeDeclaration),a=RegExp(s(i.type+" "+i.typeDeclaration+" "+i.contextual+" "+i.other)),l=s(i.typeDeclaration+" "+i.contextual+" "+i.other),c=s(i.type+" "+i.typeDeclaration+" "+i.other),u=r(/<(?:[^<>;=+\-*/%&|^]|<>)*>/.source,2),d=r(/\((?:[^()]|<>)*\)/.source,2),h=/@?\b[A-Za-z_]\w*\b/.source,p=t(/<<0>>(?:\s*<<1>>)?/.source,[h,u]),f=t(/(?!<<0>>)<<1>>(?:\s*\.\s*<<1>>)*/.source,[l,p]),g=/\[\s*(?:,\s*)*\]/.source,m=t(/<<0>>(?:\s*(?:\?\s*)?<<1>>)*(?:\s*\?)?/.source,[f,g]),b=t(/[^,()<>[\];=+\-*/%&|^]|<<0>>|<<1>>|<<2>>/.source,[u,d,g]),_=t(/\(<<0>>+(?:,<<0>>+)+\)/.source,[b]),y=t(/(?:<<0>>|<<1>>)(?:\s*(?:\?\s*)?<<2>>)*(?:\s*\?)?/.source,[_,f,g]),v={keyword:a,punctuation:/[<>()?,.:[\]]/},E=/'(?:[^\r\n'\\]|\\.|\\[Uux][\da-fA-F]{1,8})'/.source,x=/"(?:\\.|[^\\"\r\n])*"/.source,S=/@"(?:""|\\[\s\S]|[^\\"])*"(?!")/.source;e.languages.csharp=e.languages.extend("clike",{string:[{pattern:n(/(^|[^$\\])<<0>>/.source,[S]),lookbehind:!0,greedy:!0},{pattern:n(/(^|[^@$\\])<<0>>/.source,[x]),lookbehind:!0,greedy:!0}],"class-name":[{pattern:n(/(\busing\s+static\s+)<<0>>(?=\s*;)/.source,[f]),lookbehind:!0,inside:v},{pattern:n(/(\busing\s+<<0>>\s*=\s*)<<1>>(?=\s*;)/.source,[h,y]),lookbehind:!0,inside:v},{pattern:n(/(\busing\s+)<<0>>(?=\s*=)/.source,[h]),lookbehind:!0},{pattern:n(/(\b<<0>>\s+)<<1>>/.source,[o,p]),lookbehind:!0,inside:v},{pattern:n(/(\bcatch\s*\(\s*)<<0>>/.source,[f]),lookbehind:!0,inside:v},{pattern:n(/(\bwhere\s+)<<0>>/.source,[h]),lookbehind:!0},{pattern:n(/(\b(?:is(?:\s+not)?|as)\s+)<<0>>/.source,[m]),lookbehind:!0,inside:v},{pattern:n(/\b<<0>>(?=\s+(?!<<1>>|with\s*\{)<<2>>(?:\s*[=,;:{)\]]|\s+(?:in|when)\b))/.source,[y,c,h]),inside:v}],keyword:a,number:/(?:\b0(?:x[\da-f_]*[\da-f]|b[01_]*[01])|(?:\B\.\d+(?:_+\d+)*|\b\d+(?:_+\d+)*(?:\.\d+(?:_+\d+)*)?)(?:e[-+]?\d+(?:_+\d+)*)?)(?:[dflmu]|lu|ul)?\b/i,operator:/>>=?|<<=?|[-=]>|([-+&|])\1|~|\?\?=?|[-+*/%&|^!=<>]=?/,punctuation:/\?\.?|::|[{}[\];(),.:]/}),e.languages.insertBefore("csharp","number",{range:{pattern:/\.\./,alias:"operator"}}),e.languages.insertBefore("csharp","punctuation",{"named-parameter":{pattern:n(/([(,]\s*)<<0>>(?=\s*:)/.source,[h]),lookbehind:!0,alias:"punctuation"}}),e.languages.insertBefore("csharp","class-name",{namespace:{pattern:n(/(\b(?:namespace|using)\s+)<<0>>(?:\s*\.\s*<<0>>)*(?=\s*[;{])/.source,[h]),lookbehind:!0,inside:{punctuation:/\./}},"type-expression":{pattern:n(/(\b(?:default|sizeof|typeof)\s*\(\s*(?!\s))(?:[^()\s]|\s(?!\s)|<<0>>)*(?=\s*\))/.source,[d]),lookbehind:!0,alias:"class-name",inside:v},"return-type":{pattern:n(/<<0>>(?=\s+(?:<<1>>\s*(?:=>|[({]|\.\s*this\s*\[)|this\s*\[))/.source,[y,f]),inside:v,alias:"class-name"},"constructor-invocation":{pattern:n(/(\bnew\s+)<<0>>(?=\s*[[({])/.source,[y]),lookbehind:!0,inside:v,alias:"class-name"},"generic-method":{pattern:n(/<<0>>\s*<<1>>(?=\s*\()/.source,[h,u]),inside:{function:n(/^<<0>>/.source,[h]),generic:{pattern:RegExp(u),alias:"class-name",inside:v}}},"type-list":{pattern:n(/\b((?:<<0>>\s+<<1>>|record\s+<<1>>\s*<<5>>|where\s+<<2>>)\s*:\s*)(?:<<3>>|<<4>>|<<1>>\s*<<5>>|<<6>>)(?:\s*,\s*(?:<<3>>|<<4>>|<<6>>))*(?=\s*(?:where|[{;]|=>|$))/.source,[o,p,h,y,a.source,d,/\bnew\s*\(\s*\)/.source]),lookbehind:!0,inside:{"record-arguments":{pattern:n(/(^(?!new\s*\()<<0>>\s*)<<1>>/.source,[p,d]),lookbehind:!0,greedy:!0,inside:e.languages.csharp},keyword:a,"class-name":{pattern:RegExp(y),greedy:!0,inside:v},punctuation:/[,()]/}},preprocessor:{pattern:/(^[\t ]*)#.*/m,lookbehind:!0,alias:"property",inside:{directive:{pattern:/(#)\b(?:define|elif|else|endif|endregion|error|if|line|nullable|pragma|region|undef|warning)\b/,lookbehind:!0,alias:"keyword"}}}});var w=x+"|"+E,T=t(/\/(?![*/])|\/\/[^\r\n]*[\r\n]|\/\*(?:[^*]|\*(?!\/))*\*\/|<<0>>/.source,[w]),A=r(t(/[^"'/()]|<<0>>|\(<>*\)/.source,[T]),2),C=/\b(?:assembly|event|field|method|module|param|property|return|type)\b/.source,I=t(/<<0>>(?:\s*\(<<1>>*\))?/.source,[f,A]);e.languages.insertBefore("csharp","class-name",{attribute:{pattern:n(/((?:^|[^\s\w>)?])\s*\[\s*)(?:<<0>>\s*:\s*)?<<1>>(?:\s*,\s*<<1>>)*(?=\s*\])/.source,[C,I]),lookbehind:!0,greedy:!0,inside:{target:{pattern:n(/^<<0>>(?=\s*:)/.source,[C]),alias:"keyword"},"attribute-arguments":{pattern:n(/\(<<0>>*\)/.source,[A]),inside:e.languages.csharp},"class-name":{pattern:RegExp(f),inside:{punctuation:/\./}},punctuation:/[:,]/}}});var R=/:[^}\r\n]+/.source,k=r(t(/[^"'/()]|<<0>>|\(<>*\)/.source,[T]),2),P=t(/\{(?!\{)(?:(?![}:])<<0>>)*<<1>>?\}/.source,[k,R]),O=r(t(/[^"'/()]|\/(?!\*)|\/\*(?:[^*]|\*(?!\/))*\*\/|<<0>>|\(<>*\)/.source,[w]),2),N=t(/\{(?!\{)(?:(?![}:])<<0>>)*<<1>>?\}/.source,[O,R]);function M(t,r){return{interpolation:{pattern:n(/((?:^|[^{])(?:\{\{)*)<<0>>/.source,[t]),lookbehind:!0,inside:{"format-string":{pattern:n(/(^\{(?:(?![}:])<<0>>)*)<<1>>(?=\}$)/.source,[r,R]),lookbehind:!0,inside:{punctuation:/^:/}},punctuation:/^\{|\}$/,expression:{pattern:/[\s\S]+/,alias:"language-csharp",inside:e.languages.csharp}}},string:/[\s\S]+/}}e.languages.insertBefore("csharp","string",{"interpolation-string":[{pattern:n(/(^|[^\\])(?:\$@|@\$)"(?:""|\\[\s\S]|\{\{|<<0>>|[^\\{"])*"/.source,[P]),lookbehind:!0,greedy:!0,inside:M(P,k)},{pattern:n(/(^|[^@\\])\$"(?:\\.|\{\{|<<0>>|[^\\"{])*"/.source,[N]),lookbehind:!0,greedy:!0,inside:M(N,O)}],char:{pattern:RegExp(E),greedy:!0}}),e.languages.dotnet=e.languages.cs=e.languages.csharp})(Prism)},3326:function(){(function(e){var t=/\/(?![/*])|\/\/.*[\r\n]|\/\*[^*]*(?:\*(?!\/)[^*]*)*\*\//.source,n=/@(?!")|"(?:[^\r\n\\"]|\\.)*"|@"(?:[^\\"]|""|\\[\s\S])*"(?!")/.source+"|"+/'(?:(?:[^\r\n'\\]|\\.|\\[Uux][\da-fA-F]{1,8})'|(?=[^\\](?!')))/.source;function r(e,r){for(var i=0;i/g,(function(){return"(?:"+e+")"}));return e.replace(//g,"[^\\s\\S]").replace(//g,"(?:"+n+")").replace(//g,"(?:"+t+")")}var i=r(/\((?:[^()'"@/]|||)*\)/.source,2),s=r(/\[(?:[^\[\]'"@/]|||)*\]/.source,1),o=r(/\{(?:[^{}'"@/]|||)*\}/.source,2),a=r(/<(?:[^<>'"@/]||)*>/.source,1),l=/@/.source+/(?:await\b\s*)?/.source+"(?:"+/(?!await\b)\w+\b/.source+"|"+i+")(?:"+/[?!]?\.\w+\b/.source+"|(?:"+a+")?"+i+"|"+s+")*"+/(?![?!\.(\[]|<(?!\/))/.source,c=/@(?![\w()])/.source+"|"+l,u="(?:"+/"[^"@]*"|'[^'@]*'|[^\s'"@>=]+(?=[\s>])/.source+"|[\"'][^\"'@]*(?:(?:"+c+")[^\"'@]*)+[\"'])",d=/(?:\s(?:\s*[^\s>\/=]+(?:\s*=\s*|(?=[\s/>])))+)?/.source.replace(//,u),h=/(?!\d)[^\s>\/=$<%]+/.source+d+/\s*\/?>/.source,p=/\B@?/.source+"(?:"+/<([a-zA-Z][\w:]*)/.source+d+/\s*>/.source+"(?:"+/[^<]/.source+"|"+/<\/?(?!\1\b)/.source+h+"|"+r(/<\1/.source+d+/\s*>/.source+"(?:"+/[^<]/.source+"|"+/<\/?(?!\1\b)/.source+h+"|)*"+/<\/\1\s*>/.source,2)+")*"+/<\/\1\s*>/.source+"|"+/|\+|~|\|\|/,punctuation:/[(),]/}},e.languages.css["atrule"].inside["selector-function-argument"].inside=t,e.languages.insertBefore("css","property",{variable:{pattern:/(^|[^-\w\xA0-\uFFFF])--(?!\s)[-_a-z\xA0-\uFFFF](?:(?!\s)[-\w\xA0-\uFFFF])*/i,lookbehind:!0}});var r={pattern:/(\b\d+)(?:%|[a-z]+(?![\w-]))/,lookbehind:!0},i={pattern:/(^|[^\w.-])-?(?:\d+(?:\.\d+)?|\.\d+)/,lookbehind:!0};e.languages.insertBefore("css","function",{operator:{pattern:/(\s)[+\-*\/](?=\s)/,lookbehind:!0},hexcode:{pattern:/\B#[\da-f]{3,8}\b/i,alias:"color"},color:[{pattern:/(^|[^\w-])(?:AliceBlue|AntiqueWhite|Aqua|Aquamarine|Azure|Beige|Bisque|Black|BlanchedAlmond|Blue|BlueViolet|Brown|BurlyWood|CadetBlue|Chartreuse|Chocolate|Coral|CornflowerBlue|Cornsilk|Crimson|Cyan|DarkBlue|DarkCyan|DarkGoldenRod|DarkGr[ae]y|DarkGreen|DarkKhaki|DarkMagenta|DarkOliveGreen|DarkOrange|DarkOrchid|DarkRed|DarkSalmon|DarkSeaGreen|DarkSlateBlue|DarkSlateGr[ae]y|DarkTurquoise|DarkViolet|DeepPink|DeepSkyBlue|DimGr[ae]y|DodgerBlue|FireBrick|FloralWhite|ForestGreen|Fuchsia|Gainsboro|GhostWhite|Gold|GoldenRod|Gr[ae]y|Green|GreenYellow|HoneyDew|HotPink|IndianRed|Indigo|Ivory|Khaki|Lavender|LavenderBlush|LawnGreen|LemonChiffon|LightBlue|LightCoral|LightCyan|LightGoldenRodYellow|LightGr[ae]y|LightGreen|LightPink|LightSalmon|LightSeaGreen|LightSkyBlue|LightSlateGr[ae]y|LightSteelBlue|LightYellow|Lime|LimeGreen|Linen|Magenta|Maroon|MediumAquaMarine|MediumBlue|MediumOrchid|MediumPurple|MediumSeaGreen|MediumSlateBlue|MediumSpringGreen|MediumTurquoise|MediumVioletRed|MidnightBlue|MintCream|MistyRose|Moccasin|NavajoWhite|Navy|OldLace|Olive|OliveDrab|Orange|OrangeRed|Orchid|PaleGoldenRod|PaleGreen|PaleTurquoise|PaleVioletRed|PapayaWhip|PeachPuff|Peru|Pink|Plum|PowderBlue|Purple|RebeccaPurple|Red|RosyBrown|RoyalBlue|SaddleBrown|Salmon|SandyBrown|SeaGreen|SeaShell|Sienna|Silver|SkyBlue|SlateBlue|SlateGr[ae]y|Snow|SpringGreen|SteelBlue|Tan|Teal|Thistle|Tomato|Transparent|Turquoise|Violet|Wheat|White|WhiteSmoke|Yellow|YellowGreen)(?![\w-])/i,lookbehind:!0},{pattern:/\b(?:hsl|rgb)\(\s*\d{1,3}\s*,\s*\d{1,3}%?\s*,\s*\d{1,3}%?\s*\)\B|\b(?:hsl|rgb)a\(\s*\d{1,3}\s*,\s*\d{1,3}%?\s*,\s*\d{1,3}%?\s*,\s*(?:0|0?\.\d+|1)\s*\)\B/i,inside:{unit:r,number:i,function:/[\w-]+(?=\()/,punctuation:/[(),]/}}],entity:/\\[\da-f]{1,8}/i,unit:r,number:i})})(Prism)},5251:function(){(function(e){var t=/(?:"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"|'(?:\\(?:\r\n|[\s\S])|[^'\\\r\n])*')/;e.languages.css={comment:/\/\*[\s\S]*?\*\//,atrule:{pattern:RegExp("@[\\w-](?:"+/[^;{\s"']|\s+(?!\s)/.source+"|"+t.source+")*?"+/(?:;|(?=\s*\{))/.source),inside:{rule:/^@[\w-]+/,"selector-function-argument":{pattern:/(\bselector\s*\(\s*(?![\s)]))(?:[^()\s]|\s+(?![\s)])|\((?:[^()]|\([^()]*\))*\))+(?=\s*\))/,lookbehind:!0,alias:"selector"},keyword:{pattern:/(^|[^\w-])(?:and|not|only|or)(?![\w-])/,lookbehind:!0}}},url:{pattern:RegExp("\\burl\\((?:"+t.source+"|"+/(?:[^\\\r\n()"']|\\[\s\S])*/.source+")\\)","i"),greedy:!0,inside:{function:/^url/i,punctuation:/^\(|\)$/,string:{pattern:RegExp("^"+t.source+"$"),alias:"url"}}},selector:{pattern:RegExp("(^|[{}\\s])[^{}\\s](?:[^{};\"'\\s]|\\s+(?![\\s{])|"+t.source+")*(?=\\s*\\{)"),lookbehind:!0},string:{pattern:t,greedy:!0},property:{pattern:/(^|[^-\w\xA0-\uFFFF])(?!\s)[-_a-z\xA0-\uFFFF](?:(?!\s)[-\w\xA0-\uFFFF])*(?=\s*:)/i,lookbehind:!0},important:/!important\b/i,function:{pattern:/(^|[^-a-z0-9])[-a-z0-9]+(?=\()/i,lookbehind:!0},punctuation:/[(){};:,]/},e.languages.css["atrule"].inside.rest=e.languages.css;var n=e.languages.markup;n&&(n.tag.addInlined("style","css"),n.tag.addAttribute("style","css"))})(Prism)},7899:function(){Prism.languages.csv={value:/[^\r\n,"]+|"(?:[^"]|"")*"(?!")/,punctuation:/,/}},2946:function(){(function(e){var t=/\\(?:(?!\2)|\2(?:[^()\r\n]|\([^()]*\)))/.source,n=/"""(?:[^\\"]|"(?!""\2)|)*"""/.source+"|"+/'''(?:[^\\']|'(?!''\2)|)*'''/.source+"|"+/"(?:[^\\\r\n"]|"(?!\2)|)*"/.source+"|"+/'(?:[^\\\r\n']|'(?!\2)|)*'/.source,r="(?:"+n.replace(//g,t)+")";e.languages.cue={comment:{pattern:/\/\/.*/,greedy:!0},"string-literal":{pattern:RegExp(/(^|[^#"'\\])(#*)/.source+r+/(?!["'])\2/.source),lookbehind:!0,greedy:!0,inside:{escape:{pattern:/(?=[\s\S]*["'](#*)$)\\\1(?:U[a-fA-F0-9]{1,8}|u[a-fA-F0-9]{1,4}|x[a-fA-F0-9]{1,2}|\d{2,3}|[^(])/,greedy:!0,alias:"string"},interpolation:{pattern:/(?=[\s\S]*["'](#*)$)\\\1\([^()]*\)/,greedy:!0,inside:{punctuation:/^\\#*\(|\)$/,expression:{pattern:/[\s\S]+/,inside:null}}},string:/[\s\S]+/}},keyword:{pattern:/(^|[^\w$])(?:for|if|import|in|let|null|package)(?![\w$])/,lookbehind:!0},boolean:{pattern:/(^|[^\w$])(?:false|true)(?![\w$])/,lookbehind:!0},builtin:{pattern:/(^|[^\w$])(?:bool|bytes|float|float(?:32|64)|u?int(?:8|16|32|64|128)?|number|rune|string)(?![\w$])/,lookbehind:!0},attribute:{pattern:/@[\w$]+(?=\s*\()/,alias:"function"},function:{pattern:/(^|[^\w$])[a-z_$][\w$]*(?=\s*\()/i,lookbehind:!0},number:{pattern:/(^|[^\w$.])(?:0b[01]+(?:_[01]+)*|0o[0-7]+(?:_[0-7]+)*|0[xX][0-9A-Fa-f]+(?:_[0-9A-Fa-f]+)*|(?:\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\.\d+(?:_\d+)*)(?:[eE][+-]?\d+(?:_\d+)*)?(?:[KMGTP]i?)?)(?![\w$])/,lookbehind:!0},operator:/\.{3}|_\|_|&&?|\|\|?|[=!]~|[<>=!]=?|[+\-*/?]/,punctuation:/[()[\]{},.:]/},e.languages.cue["string-literal"].inside.interpolation.inside.expression.inside=e.languages.cue})(Prism)},258:function(){Prism.languages.cypher={comment:/\/\/.*/,string:{pattern:/"(?:[^"\\\r\n]|\\.)*"|'(?:[^'\\\r\n]|\\.)*'/,greedy:!0},"class-name":{pattern:/(:\s*)(?:\w+|`(?:[^`\\\r\n])*`)(?=\s*[{):])/,lookbehind:!0,greedy:!0},relationship:{pattern:/(-\[\s*(?:\w+\s*|`(?:[^`\\\r\n])*`\s*)?:\s*|\|\s*:\s*)(?:\w+|`(?:[^`\\\r\n])*`)/,lookbehind:!0,greedy:!0,alias:"property"},identifier:{pattern:/`(?:[^`\\\r\n])*`/,greedy:!0},variable:/\$\w+/,keyword:/\b(?:ADD|ALL|AND|AS|ASC|ASCENDING|ASSERT|BY|CALL|CASE|COMMIT|CONSTRAINT|CONTAINS|CREATE|CSV|DELETE|DESC|DESCENDING|DETACH|DISTINCT|DO|DROP|ELSE|END|ENDS|EXISTS|FOR|FOREACH|IN|INDEX|IS|JOIN|KEY|LIMIT|LOAD|MANDATORY|MATCH|MERGE|NODE|NOT|OF|ON|OPTIONAL|OR|ORDER(?=\s+BY)|PERIODIC|REMOVE|REQUIRE|RETURN|SCALAR|SCAN|SET|SKIP|START|STARTS|THEN|UNION|UNIQUE|UNWIND|USING|WHEN|WHERE|WITH|XOR|YIELD)\b/i,function:/\b\w+\b(?=\s*\()/,boolean:/\b(?:false|null|true)\b/i,number:/\b(?:0x[\da-fA-F]+|\d+(?:\.\d+)?(?:[eE][+-]?\d+)?)\b/,operator:/:|<--?|--?>?|<>|=~?|[<>]=?|[+*/%^|]|\.\.\.?/,punctuation:/[()[\]{},;.]/}},8149:function(){Prism.languages.d=Prism.languages.extend("clike",{comment:[{pattern:/^\s*#!.+/,greedy:!0},{pattern:RegExp(/(^|[^\\])/.source+"(?:"+[/\/\+(?:\/\+(?:[^+]|\+(?!\/))*\+\/|(?!\/\+)[\s\S])*?\+\//.source,/\/\/.*/.source,/\/\*[\s\S]*?\*\//.source].join("|")+")"),lookbehind:!0,greedy:!0}],string:[{pattern:RegExp([/\b[rx]"(?:\\[\s\S]|[^\\"])*"[cwd]?/.source,/\bq"(?:\[[\s\S]*?\]|\([\s\S]*?\)|<[\s\S]*?>|\{[\s\S]*?\})"/.source,/\bq"((?!\d)\w+)$[\s\S]*?^\1"/.source,/\bq"(.)[\s\S]*?\2"/.source,/(["`])(?:\\[\s\S]|(?!\3)[^\\])*\3[cwd]?/.source].join("|"),"m"),greedy:!0},{pattern:/\bq\{(?:\{[^{}]*\}|[^{}])*\}/,greedy:!0,alias:"token-string"}],keyword:/\$|\b(?:__(?:(?:DATE|EOF|FILE|FUNCTION|LINE|MODULE|PRETTY_FUNCTION|TIMESTAMP|TIME|VENDOR|VERSION)__|gshared|parameters|traits|vector)|abstract|alias|align|asm|assert|auto|body|bool|break|byte|case|cast|catch|cdouble|cent|cfloat|char|class|const|continue|creal|dchar|debug|default|delegate|delete|deprecated|do|double|dstring|else|enum|export|extern|false|final|finally|float|for|foreach|foreach_reverse|function|goto|idouble|if|ifloat|immutable|import|inout|int|interface|invariant|ireal|lazy|long|macro|mixin|module|new|nothrow|null|out|override|package|pragma|private|protected|ptrdiff_t|public|pure|real|ref|return|scope|shared|short|size_t|static|string|struct|super|switch|synchronized|template|this|throw|true|try|typedef|typeid|typeof|ubyte|ucent|uint|ulong|union|unittest|ushort|version|void|volatile|wchar|while|with|wstring)\b/,number:[/\b0x\.?[a-f\d_]+(?:(?!\.\.)\.[a-f\d_]*)?(?:p[+-]?[a-f\d_]+)?[ulfi]{0,4}/i,{pattern:/((?:\.\.)?)(?:\b0b\.?|\b|\.)\d[\d_]*(?:(?!\.\.)\.[\d_]*)?(?:e[+-]?\d[\d_]*)?[ulfi]{0,4}/i,lookbehind:!0}],operator:/\|[|=]?|&[&=]?|\+[+=]?|-[-=]?|\.?\.\.|=[>=]?|!(?:i[ns]\b|<>?=?|>=?|=)?|\bi[ns]\b|(?:<[<>]?|>>?>?|\^\^|[*\/%^~])=?/}),Prism.languages.insertBefore("d","string",{char:/'(?:\\(?:\W|\w+)|[^\\])'/}),Prism.languages.insertBefore("d","keyword",{property:/\B@\w*/}),Prism.languages.insertBefore("d","function",{register:{pattern:/\b(?:[ABCD][LHX]|E?(?:BP|DI|SI|SP)|[BS]PL|[ECSDGF]S|CR[0234]|[DS]IL|DR[012367]|E[ABCD]X|X?MM[0-7]|R(?:1[0-5]|[89])[BWD]?|R[ABCD]X|R[BS]P|R[DS]I|TR[3-7]|XMM(?:1[0-5]|[89])|YMM(?:1[0-5]|\d))\b|\bST(?:\([0-7]\)|\b)/,alias:"variable"}})},7065:function(){(function(e){var t=[/\b(?:async|sync|yield)\*/,/\b(?:abstract|assert|async|await|break|case|catch|class|const|continue|covariant|default|deferred|do|dynamic|else|enum|export|extends|extension|external|factory|final|finally|for|get|hide|if|implements|import|in|interface|library|mixin|new|null|on|operator|part|rethrow|return|set|show|static|super|switch|sync|this|throw|try|typedef|var|void|while|with|yield)\b/],n=/(^|[^\w.])(?:[a-z]\w*\s*\.\s*)*(?:[A-Z]\w*\s*\.\s*)*/.source,r={pattern:RegExp(n+/[A-Z](?:[\d_A-Z]*[a-z]\w*)?\b/.source),lookbehind:!0,inside:{namespace:{pattern:/^[a-z]\w*(?:\s*\.\s*[a-z]\w*)*(?:\s*\.)?/,inside:{punctuation:/\./}}}};e.languages.dart=e.languages.extend("clike",{"class-name":[r,{pattern:RegExp(n+/[A-Z]\w*(?=\s+\w+\s*[;,=()])/.source),lookbehind:!0,inside:r.inside}],keyword:t,operator:/\bis!|\b(?:as|is)\b|\+\+|--|&&|\|\||<<=?|>>=?|~(?:\/=?)?|[+\-*\/%&^|=!<>]=?|\?/}),e.languages.insertBefore("dart","string",{"string-literal":{pattern:/r?(?:("""|''')[\s\S]*?\1|(["'])(?:\\.|(?!\2)[^\\\r\n])*\2(?!\2))/,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$(?:\w+|\{(?:[^{}]|\{[^{}]*\})*\})/,lookbehind:!0,inside:{punctuation:/^\$\{?|\}$/,expression:{pattern:/[\s\S]+/,inside:e.languages.dart}}},string:/[\s\S]+/}},string:void 0}),e.languages.insertBefore("dart","class-name",{metadata:{pattern:/@\w+/,alias:"function"}}),e.languages.insertBefore("dart","class-name",{generics:{pattern:/<(?:[\w\s,.&?]|<(?:[\w\s,.&?]|<(?:[\w\s,.&?]|<[\w\s,.&?]*>)*>)*>)*>/,inside:{"class-name":r,keyword:t,punctuation:/[<>(),.:]/,operator:/[?&|]/}}})})(Prism)},3162:function(){(function(e){e.languages.dataweave={url:/\b[A-Za-z]+:\/\/[\w/:.?=&-]+|\burn:[\w:.?=&-]+/,property:{pattern:/(?:\b\w+#)?(?:"(?:\\.|[^\\"\r\n])*"|\b\w+)(?=\s*[:@])/,greedy:!0},string:{pattern:/(["'`])(?:\\[\s\S]|(?!\1)[^\\])*\1/,greedy:!0},"mime-type":/\b(?:application|audio|image|multipart|text|video)\/[\w+-]+/,date:{pattern:/\|[\w:+-]+\|/,greedy:!0},comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?(?:\*\/|$)/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],regex:{pattern:/\/(?:[^\\\/\r\n]|\\[^\r\n])+\//,greedy:!0},keyword:/\b(?:and|as|at|case|do|else|fun|if|input|is|match|not|ns|null|or|output|type|unless|update|using|var)\b/,function:/\b[A-Z_]\w*(?=\s*\()/i,number:/-?\b\d+(?:\.\d+)?(?:e[+-]?\d+)?\b/i,punctuation:/[{}[\];(),.:@]/,operator:/<<|>>|->|[<>~=]=?|!=|--?-?|\+\+?|!|\?/,boolean:/\b(?:false|true)\b/}})(Prism)},827:function(){Prism.languages.dax={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|(?:--|\/\/).*)/,lookbehind:!0},"data-field":{pattern:/'(?:[^']|'')*'(?!')(?:\[[ \w\xA0-\uFFFF]+\])?|\w+\[[ \w\xA0-\uFFFF]+\]/,alias:"symbol"},measure:{pattern:/\[[ \w\xA0-\uFFFF]+\]/,alias:"constant"},string:{pattern:/"(?:[^"]|"")*"(?!")/,greedy:!0},function:/\b(?:ABS|ACOS|ACOSH|ACOT|ACOTH|ADDCOLUMNS|ADDMISSINGITEMS|ALL|ALLCROSSFILTERED|ALLEXCEPT|ALLNOBLANKROW|ALLSELECTED|AND|APPROXIMATEDISTINCTCOUNT|ASIN|ASINH|ATAN|ATANH|AVERAGE|AVERAGEA|AVERAGEX|BETA\.DIST|BETA\.INV|BLANK|CALCULATE|CALCULATETABLE|CALENDAR|CALENDARAUTO|CEILING|CHISQ\.DIST|CHISQ\.DIST\.RT|CHISQ\.INV|CHISQ\.INV\.RT|CLOSINGBALANCEMONTH|CLOSINGBALANCEQUARTER|CLOSINGBALANCEYEAR|COALESCE|COMBIN|COMBINA|COMBINEVALUES|CONCATENATE|CONCATENATEX|CONFIDENCE\.NORM|CONFIDENCE\.T|CONTAINS|CONTAINSROW|CONTAINSSTRING|CONTAINSSTRINGEXACT|CONVERT|COS|COSH|COT|COTH|COUNT|COUNTA|COUNTAX|COUNTBLANK|COUNTROWS|COUNTX|CROSSFILTER|CROSSJOIN|CURRENCY|CURRENTGROUP|CUSTOMDATA|DATATABLE|DATE|DATEADD|DATEDIFF|DATESBETWEEN|DATESINPERIOD|DATESMTD|DATESQTD|DATESYTD|DATEVALUE|DAY|DEGREES|DETAILROWS|DISTINCT|DISTINCTCOUNT|DISTINCTCOUNTNOBLANK|DIVIDE|EARLIER|EARLIEST|EDATE|ENDOFMONTH|ENDOFQUARTER|ENDOFYEAR|EOMONTH|ERROR|EVEN|EXACT|EXCEPT|EXP|EXPON\.DIST|FACT|FALSE|FILTER|FILTERS|FIND|FIRSTDATE|FIRSTNONBLANK|FIRSTNONBLANKVALUE|FIXED|FLOOR|FORMAT|GCD|GENERATE|GENERATEALL|GENERATESERIES|GEOMEAN|GEOMEANX|GROUPBY|HASONEFILTER|HASONEVALUE|HOUR|IF|IF\.EAGER|IFERROR|IGNORE|INT|INTERSECT|ISBLANK|ISCROSSFILTERED|ISEMPTY|ISERROR|ISEVEN|ISFILTERED|ISINSCOPE|ISLOGICAL|ISNONTEXT|ISNUMBER|ISO\.CEILING|ISODD|ISONORAFTER|ISSELECTEDMEASURE|ISSUBTOTAL|ISTEXT|KEEPFILTERS|KEYWORDMATCH|LASTDATE|LASTNONBLANK|LASTNONBLANKVALUE|LCM|LEFT|LEN|LN|LOG|LOG10|LOOKUPVALUE|LOWER|MAX|MAXA|MAXX|MEDIAN|MEDIANX|MID|MIN|MINA|MINUTE|MINX|MOD|MONTH|MROUND|NATURALINNERJOIN|NATURALLEFTOUTERJOIN|NEXTDAY|NEXTMONTH|NEXTQUARTER|NEXTYEAR|NONVISUAL|NORM\.DIST|NORM\.INV|NORM\.S\.DIST|NORM\.S\.INV|NOT|NOW|ODD|OPENINGBALANCEMONTH|OPENINGBALANCEQUARTER|OPENINGBALANCEYEAR|OR|PARALLELPERIOD|PATH|PATHCONTAINS|PATHITEM|PATHITEMREVERSE|PATHLENGTH|PERCENTILE\.EXC|PERCENTILE\.INC|PERCENTILEX\.EXC|PERCENTILEX\.INC|PERMUT|PI|POISSON\.DIST|POWER|PREVIOUSDAY|PREVIOUSMONTH|PREVIOUSQUARTER|PREVIOUSYEAR|PRODUCT|PRODUCTX|QUARTER|QUOTIENT|RADIANS|RAND|RANDBETWEEN|RANK\.EQ|RANKX|RELATED|RELATEDTABLE|REMOVEFILTERS|REPLACE|REPT|RIGHT|ROLLUP|ROLLUPADDISSUBTOTAL|ROLLUPGROUP|ROLLUPISSUBTOTAL|ROUND|ROUNDDOWN|ROUNDUP|ROW|SAMEPERIODLASTYEAR|SAMPLE|SEARCH|SECOND|SELECTCOLUMNS|SELECTEDMEASURE|SELECTEDMEASUREFORMATSTRING|SELECTEDMEASURENAME|SELECTEDVALUE|SIGN|SIN|SINH|SQRT|SQRTPI|STARTOFMONTH|STARTOFQUARTER|STARTOFYEAR|STDEV\.P|STDEV\.S|STDEVX\.P|STDEVX\.S|SUBSTITUTE|SUBSTITUTEWITHINDEX|SUM|SUMMARIZE|SUMMARIZECOLUMNS|SUMX|SWITCH|T\.DIST|T\.DIST\.2T|T\.DIST\.RT|T\.INV|T\.INV\.2T|TAN|TANH|TIME|TIMEVALUE|TODAY|TOPN|TOPNPERLEVEL|TOPNSKIP|TOTALMTD|TOTALQTD|TOTALYTD|TREATAS|TRIM|TRUE|TRUNC|UNICHAR|UNICODE|UNION|UPPER|USERELATIONSHIP|USERNAME|USEROBJECTID|USERPRINCIPALNAME|UTCNOW|UTCTODAY|VALUE|VALUES|VAR\.P|VAR\.S|VARX\.P|VARX\.S|WEEKDAY|WEEKNUM|XIRR|XNPV|YEAR|YEARFRAC)(?=\s*\()/i,keyword:/\b(?:DEFINE|EVALUATE|MEASURE|ORDER\s+BY|RETURN|VAR|START\s+AT|ASC|DESC)\b/i,boolean:{pattern:/\b(?:FALSE|NULL|TRUE)\b/i,alias:"constant"},number:/\b\d+(?:\.\d*)?|\B\.\d+\b/,operator:/:=|[-+*\/=^]|&&?|\|\||<(?:=>?|<|>)?|>[>=]?|\b(?:IN|NOT)\b/i,punctuation:/[;\[\](){}`,.]/}},4370:function(){Prism.languages.dhall={comment:/--.*|\{-(?:[^-{]|-(?!\})|\{(?!-)|\{-(?:[^-{]|-(?!\})|\{(?!-))*-\})*-\}/,string:{pattern:/"(?:[^"\\]|\\.)*"|''(?:[^']|'(?!')|'''|''\$\{)*''(?!'|\$)/,greedy:!0,inside:{interpolation:{pattern:/\$\{[^{}]*\}/,inside:{expression:{pattern:/(^\$\{)[\s\S]+(?=\}$)/,lookbehind:!0,alias:"language-dhall",inside:null},punctuation:/\$\{|\}/}}}},label:{pattern:/`[^`]*`/,greedy:!0},url:{pattern:/\bhttps?:\/\/[\w.:%!$&'*+;=@~-]+(?:\/[\w.:%!$&'*+;=@~-]*)*(?:\?[/?\w.:%!$&'*+;=@~-]*)?/,greedy:!0},env:{pattern:/\benv:(?:(?!\d)\w+|"(?:[^"\\=]|\\.)*")/,greedy:!0,inside:{function:/^env/,operator:/^:/,variable:/[\s\S]+/}},hash:{pattern:/\bsha256:[\da-fA-F]{64}\b/,inside:{function:/sha256/,operator:/:/,number:/[\da-fA-F]{64}/}},keyword:/\b(?:as|assert|else|forall|if|in|let|merge|missing|then|toMap|using|with)\b|\u2200/,builtin:/\b(?:None|Some)\b/,boolean:/\b(?:False|True)\b/,number:/\bNaN\b|-?\bInfinity\b|[+-]?\b(?:0x[\da-fA-F]+|\d+(?:\.\d+)?(?:e[+-]?\d+)?)\b/,operator:/\/\\|\/\/\\\\|&&|\|\||===|[!=]=|\/\/|->|\+\+|::|[+*#@=:?<>|\\\u2227\u2a53\u2261\u2afd\u03bb\u2192]/,punctuation:/\.\.|[{}\[\](),./]/,"class-name":/\b[A-Z]\w*\b/},Prism.languages.dhall.string.inside.interpolation.inside.expression.inside=Prism.languages.dhall},728:function(){(function(e){e.languages.diff={coord:[/^(?:\*{3}|-{3}|\+{3}).*$/m,/^@@.*@@$/m,/^\d.*$/m]};var t={"deleted-sign":"-","deleted-arrow":"<","inserted-sign":"+","inserted-arrow":">",unchanged:" ",diff:"!"};Object.keys(t).forEach((function(n){var r=t[n],i=[];/^\w+$/.test(n)||i.push(/\w+/.exec(n)[0]),"diff"===n&&i.push("bold"),e.languages.diff[n]={pattern:RegExp("^(?:["+r+"].*(?:\r\n?|\n|(?![\\s\\S])))+","m"),alias:i,inside:{line:{pattern:/(.)(?=[\s\S]).*(?:\r\n?|\n)?/,lookbehind:!0},prefix:{pattern:/[\s\S]/,alias:/\w+/.exec(n)[0]}}}})),Object.defineProperty(e.languages.diff,"PREFIXES",{value:t})})(Prism)},4409:function(){(function(e){e.languages.django={comment:/^\{#[\s\S]*?#\}$/,tag:{pattern:/(^\{%[+-]?\s*)\w+/,lookbehind:!0,alias:"keyword"},delimiter:{pattern:/^\{[{%][+-]?|[+-]?[}%]\}$/,alias:"punctuation"},string:{pattern:/("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0},filter:{pattern:/(\|)\w+/,lookbehind:!0,alias:"function"},test:{pattern:/(\bis\s+(?:not\s+)?)(?!not\b)\w+/,lookbehind:!0,alias:"function"},function:/\b[a-z_]\w+(?=\s*\()/i,keyword:/\b(?:and|as|by|else|for|if|import|in|is|loop|not|or|recursive|with|without)\b/,operator:/[-+%=]=?|!=|\*\*?=?|\/\/?=?|<[<=>]?|>[=>]?|[&|^~]/,number:/\b\d+(?:\.\d+)?\b/,boolean:/[Ff]alse|[Nn]one|[Tt]rue/,variable:/\b\w+\b/,punctuation:/[{}[\](),.:;]/};var t=/\{\{[\s\S]*?\}\}|\{%[\s\S]*?%\}|\{#[\s\S]*?#\}/g,n=e.languages["markup-templating"];e.hooks.add("before-tokenize",(function(e){n.buildPlaceholders(e,"django",t)})),e.hooks.add("after-tokenize",(function(e){n.tokenizePlaceholders(e,"django")})),e.languages.jinja2=e.languages.django,e.hooks.add("before-tokenize",(function(e){n.buildPlaceholders(e,"jinja2",t)})),e.hooks.add("after-tokenize",(function(e){n.tokenizePlaceholders(e,"jinja2")}))})(Prism)},8483:function(){Prism.languages["dns-zone-file"]={comment:/;.*/,string:{pattern:/"(?:\\.|[^"\\\r\n])*"/,greedy:!0},variable:[{pattern:/(^\$ORIGIN[ \t]+)\S+/m,lookbehind:!0},{pattern:/(^|\s)@(?=\s|$)/,lookbehind:!0}],keyword:/^\$(?:INCLUDE|ORIGIN|TTL)(?=\s|$)/m,class:{pattern:/(^|\s)(?:CH|CS|HS|IN)(?=\s|$)/,lookbehind:!0,alias:"keyword"},type:{pattern:/(^|\s)(?:A|A6|AAAA|AFSDB|APL|ATMA|CAA|CDNSKEY|CDS|CERT|CNAME|DHCID|DLV|DNAME|DNSKEY|DS|EID|GID|GPOS|HINFO|HIP|IPSECKEY|ISDN|KEY|KX|LOC|MAILA|MAILB|MB|MD|MF|MG|MINFO|MR|MX|NAPTR|NB|NBSTAT|NIMLOC|NINFO|NS|NSAP|NSAP-PTR|NSEC|NSEC3|NSEC3PARAM|NULL|NXT|OPENPGPKEY|PTR|PX|RKEY|RP|RRSIG|RT|SIG|SINK|SMIMEA|SOA|SPF|SRV|SSHFP|TA|TKEY|TLSA|TSIG|TXT|UID|UINFO|UNSPEC|URI|WKS|X25)(?=\s|$)/,lookbehind:!0,alias:"keyword"},punctuation:/[()]/},Prism.languages["dns-zone"]=Prism.languages["dns-zone-file"]},7158:function(){(function(e){var t=/\\[\r\n](?:\s|\\[\r\n]|#.*(?!.))*(?![\s#]|\\[\r\n])/.source,n=/(?:[ \t]+(?![ \t])(?:)?|)/.source.replace(//g,(function(){return t})),r=/"(?:[^"\\\r\n]|\\(?:\r\n|[\s\S]))*"|'(?:[^'\\\r\n]|\\(?:\r\n|[\s\S]))*'/.source,i=/--[\w-]+=(?:|(?!["'])(?:[^\s\\]|\\.)+)/.source.replace(//g,(function(){return r})),s={pattern:RegExp(r),greedy:!0},o={pattern:/(^[ \t]*)#.*/m,lookbehind:!0,greedy:!0};function a(e,t){return e=e.replace(//g,(function(){return i})).replace(//g,(function(){return n})),RegExp(e,t)}e.languages.docker={instruction:{pattern:/(^[ \t]*)(?:ADD|ARG|CMD|COPY|ENTRYPOINT|ENV|EXPOSE|FROM|HEALTHCHECK|LABEL|MAINTAINER|ONBUILD|RUN|SHELL|STOPSIGNAL|USER|VOLUME|WORKDIR)(?=\s)(?:\\.|[^\r\n\\])*(?:\\$(?:\s|#.*$)*(?![\s#])(?:\\.|[^\r\n\\])*)*/im,lookbehind:!0,greedy:!0,inside:{options:{pattern:a(/(^(?:ONBUILD)?\w+)(?:)*/.source,"i"),lookbehind:!0,greedy:!0,inside:{property:{pattern:/(^|\s)--[\w-]+/,lookbehind:!0},string:[s,{pattern:/(=)(?!["'])(?:[^\s\\]|\\.)+/,lookbehind:!0}],operator:/\\$/m,punctuation:/=/}},keyword:[{pattern:a(/(^(?:ONBUILD)?HEALTHCHECK(?:)*)(?:CMD|NONE)\b/.source,"i"),lookbehind:!0,greedy:!0},{pattern:a(/(^(?:ONBUILD)?FROM(?:)*(?!--)[^ \t\\]+)AS/.source,"i"),lookbehind:!0,greedy:!0},{pattern:a(/(^ONBUILD)\w+/.source,"i"),lookbehind:!0,greedy:!0},{pattern:/^\w+/,greedy:!0}],comment:o,string:s,variable:/\$(?:\w+|\{[^{}"'\\]*\})/,operator:/\\$/m}},comment:o},e.languages.dockerfile=e.languages.docker})(Prism)},397:function(){(function(e){var t="(?:"+[/[a-zA-Z_\x80-\uFFFF][\w\x80-\uFFFF]*/.source,/-?(?:\.\d+|\d+(?:\.\d*)?)/.source,/"[^"\\]*(?:\\[\s\S][^"\\]*)*"/.source,/<(?:[^<>]|(?!)*>/.source].join("|")+")",n={markup:{pattern:/(^<)[\s\S]+(?=>$)/,lookbehind:!0,alias:["language-markup","language-html","language-xml"],inside:e.languages.markup}};function r(e,n){return RegExp(e.replace(//g,(function(){return t})),n)}e.languages.dot={comment:{pattern:/\/\/.*|\/\*[\s\S]*?\*\/|^#.*/m,greedy:!0},"graph-name":{pattern:r(/(\b(?:digraph|graph|subgraph)[ \t\r\n]+)/.source,"i"),lookbehind:!0,greedy:!0,alias:"class-name",inside:n},"attr-value":{pattern:r(/(=[ \t\r\n]*)/.source),lookbehind:!0,greedy:!0,inside:n},"attr-name":{pattern:r(/([\[;, \t\r\n])(?=[ \t\r\n]*=)/.source),lookbehind:!0,greedy:!0,inside:n},keyword:/\b(?:digraph|edge|graph|node|strict|subgraph)\b/i,"compass-point":{pattern:/(:[ \t\r\n]*)(?:[ewc_]|[ns][ew]?)(?![\w\x80-\uFFFF])/,lookbehind:!0,alias:"builtin"},node:{pattern:r(/(^|[^-.\w\x80-\uFFFF\\])/.source),lookbehind:!0,greedy:!0,inside:n},operator:/[=:]|-[->]/,punctuation:/[\[\]{};,]/},e.languages.gv=e.languages.dot})(Prism)},8232:function(){Prism.languages.ebnf={comment:/\(\*[\s\S]*?\*\)/,string:{pattern:/"[^"\r\n]*"|'[^'\r\n]*'/,greedy:!0},special:{pattern:/\?[^?\r\n]*\?/,greedy:!0,alias:"class-name"},definition:{pattern:/^([\t ]*)[a-z]\w*(?:[ \t]+[a-z]\w*)*(?=\s*=)/im,lookbehind:!0,alias:["rule","keyword"]},rule:/\b[a-z]\w*(?:[ \t]+[a-z]\w*)*\b/i,punctuation:/\([:/]|[:/]\)|[.,;()[\]{}]/,operator:/[-=|*/!]/}},2456:function(){Prism.languages.editorconfig={comment:/[;#].*/,section:{pattern:/(^[ \t]*)\[.+\]/m,lookbehind:!0,alias:"selector",inside:{regex:/\\\\[\[\]{},!?.*]/,operator:/[!?]|\.\.|\*{1,2}/,punctuation:/[\[\]{},]/}},key:{pattern:/(^[ \t]*)[^\s=]+(?=[ \t]*=)/m,lookbehind:!0,alias:"attr-name"},value:{pattern:/=.*/,alias:"attr-value",inside:{punctuation:/^=/}}}},9979:function(){Prism.languages.eiffel={comment:/--.*/,string:[{pattern:/"([^[]*)\[[\s\S]*?\]\1"/,greedy:!0},{pattern:/"([^{]*)\{[\s\S]*?\}\1"/,greedy:!0},{pattern:/"(?:%(?:(?!\n)\s)*\n\s*%|%\S|[^%"\r\n])*"/,greedy:!0}],char:/'(?:%.|[^%'\r\n])+'/,keyword:/\b(?:across|agent|alias|all|and|as|assign|attached|attribute|check|class|convert|create|Current|debug|deferred|detachable|do|else|elseif|end|ensure|expanded|export|external|feature|from|frozen|if|implies|inherit|inspect|invariant|like|local|loop|not|note|obsolete|old|once|or|Precursor|redefine|rename|require|rescue|Result|retry|select|separate|some|then|undefine|until|variant|Void|when|xor)\b/i,boolean:/\b(?:False|True)\b/i,"class-name":/\b[A-Z][\dA-Z_]*\b/,number:[/\b0[xcb][\da-f](?:_*[\da-f])*\b/i,/(?:\b\d(?:_*\d)*)?\.(?:(?:\d(?:_*\d)*)?e[+-]?)?\d(?:_*\d)*\b|\b\d(?:_*\d)*\b\.?/i],punctuation:/:=|<<|>>|\(\||\|\)|->|\.(?=\w)|[{}[\];(),:?]/,operator:/\\\\|\|\.\.\||\.\.|\/[~\/=]?|[><]=?|[-+*^=~]/}},60:function(){(function(e){e.languages.ejs={delimiter:{pattern:/^<%[-_=]?|[-_]?%>$/,alias:"punctuation"},comment:/^#[\s\S]*/,"language-javascript":{pattern:/[\s\S]+/,inside:e.languages.javascript}},e.hooks.add("before-tokenize",(function(t){var n=/<%(?!%)[\s\S]+?%>/g;e.languages["markup-templating"].buildPlaceholders(t,"ejs",n)})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"ejs")})),e.languages.eta=e.languages.ejs})(Prism)},8805:function(){Prism.languages.elixir={doc:{pattern:/@(?:doc|moduledoc)\s+(?:("""|''')[\s\S]*?\1|("|')(?:\\(?:\r\n|[\s\S])|(?!\2)[^\\\r\n])*\2)/,inside:{attribute:/^@\w+/,string:/['"][\s\S]+/}},comment:{pattern:/#.*/,greedy:!0},regex:{pattern:/~[rR](?:("""|''')(?:\\[\s\S]|(?!\1)[^\\])+\1|([\/|"'])(?:\\.|(?!\2)[^\\\r\n])+\2|\((?:\\.|[^\\)\r\n])+\)|\[(?:\\.|[^\\\]\r\n])+\]|\{(?:\\.|[^\\}\r\n])+\}|<(?:\\.|[^\\>\r\n])+>)[uismxfr]*/,greedy:!0},string:[{pattern:/~[cCsSwW](?:("""|''')(?:\\[\s\S]|(?!\1)[^\\])+\1|([\/|"'])(?:\\.|(?!\2)[^\\\r\n])+\2|\((?:\\.|[^\\)\r\n])+\)|\[(?:\\.|[^\\\]\r\n])+\]|\{(?:\\.|#\{[^}]+\}|#(?!\{)|[^#\\}\r\n])+\}|<(?:\\.|[^\\>\r\n])+>)[csa]?/,greedy:!0,inside:{}},{pattern:/("""|''')[\s\S]*?\1/,greedy:!0,inside:{}},{pattern:/("|')(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0,inside:{}}],atom:{pattern:/(^|[^:]):\w+/,lookbehind:!0,alias:"symbol"},module:{pattern:/\b[A-Z]\w*\b/,alias:"class-name"},"attr-name":/\b\w+\??:(?!:)/,argument:{pattern:/(^|[^&])&\d+/,lookbehind:!0,alias:"variable"},attribute:{pattern:/@\w+/,alias:"variable"},function:/\b[_a-zA-Z]\w*[?!]?(?:(?=\s*(?:\.\s*)?\()|(?=\/\d))/,number:/\b(?:0[box][a-f\d_]+|\d[\d_]*)(?:\.[\d_]+)?(?:e[+-]?[\d_]+)?\b/i,keyword:/\b(?:after|alias|and|case|catch|cond|def(?:callback|delegate|exception|impl|macro|module|n|np|p|protocol|struct)?|do|else|end|fn|for|if|import|not|or|quote|raise|require|rescue|try|unless|unquote|use|when)\b/,boolean:/\b(?:false|nil|true)\b/,operator:[/\bin\b|&&?|\|[|>]?|\\\\|::|\.\.\.?|\+\+?|-[->]?|<[-=>]|>=|!==?|\B!|=(?:==?|[>~])?|[*\/^]/,{pattern:/([^<])<(?!<)/,lookbehind:!0},{pattern:/([^>])>(?!>)/,lookbehind:!0}],punctuation:/<<|>>|[.,%\[\]{}()]/},Prism.languages.elixir.string.forEach((function(e){e.inside={interpolation:{pattern:/#\{[^}]+\}/,inside:{delimiter:{pattern:/^#\{|\}$/,alias:"punctuation"},rest:Prism.languages.elixir}}}}))},5041:function(){Prism.languages.elm={comment:/--.*|\{-[\s\S]*?-\}/,char:{pattern:/'(?:[^\\'\r\n]|\\(?:[abfnrtv\\']|\d+|x[0-9a-fA-F]+|u\{[0-9a-fA-F]+\}))'/,greedy:!0},string:[{pattern:/"""[\s\S]*?"""/,greedy:!0},{pattern:/"(?:[^\\"\r\n]|\\.)*"/,greedy:!0}],"import-statement":{pattern:/(^[\t ]*)import\s+[A-Z]\w*(?:\.[A-Z]\w*)*(?:\s+as\s+(?:[A-Z]\w*)(?:\.[A-Z]\w*)*)?(?:\s+exposing\s+)?/m,lookbehind:!0,inside:{keyword:/\b(?:as|exposing|import)\b/}},keyword:/\b(?:alias|as|case|else|exposing|if|in|infixl|infixr|let|module|of|then|type)\b/,builtin:/\b(?:abs|acos|always|asin|atan|atan2|ceiling|clamp|compare|cos|curry|degrees|e|flip|floor|fromPolar|identity|isInfinite|isNaN|logBase|max|min|negate|never|not|pi|radians|rem|round|sin|sqrt|tan|toFloat|toPolar|toString|truncate|turns|uncurry|xor)\b/,number:/\b(?:\d+(?:\.\d+)?(?:e[+-]?\d+)?|0x[0-9a-f]+)\b/i,operator:/\s\.\s|[+\-/*=.$<>:&|^?%#@~!]{2,}|[+\-/*=$<>:&|^?%#@~!]/,hvariable:/\b(?:[A-Z]\w*\.)*[a-z]\w*\b/,constant:/\b(?:[A-Z]\w*\.)*[A-Z]\w*\b/,punctuation:/[{}[\]|(),.:]/}},6512:function(){(function(e){e.languages.erb={delimiter:{pattern:/^(\s*)<%=?|%>(?=\s*$)/,lookbehind:!0,alias:"punctuation"},ruby:{pattern:/\s*\S[\s\S]*/,alias:"language-ruby",inside:e.languages.ruby}},e.hooks.add("before-tokenize",(function(t){var n=/<%=?(?:[^\r\n]|[\r\n](?!=begin)|[\r\n]=begin\s(?:[^\r\n]|[\r\n](?!=end))*[\r\n]=end)+?%>/g;e.languages["markup-templating"].buildPlaceholders(t,"erb",n)})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"erb")}))})(Prism)},8956:function(){Prism.languages.erlang={comment:/%.+/,string:{pattern:/"(?:\\.|[^\\"\r\n])*"/,greedy:!0},"quoted-function":{pattern:/'(?:\\.|[^\\'\r\n])+'(?=\()/,alias:"function"},"quoted-atom":{pattern:/'(?:\\.|[^\\'\r\n])+'/,alias:"atom"},boolean:/\b(?:false|true)\b/,keyword:/\b(?:after|begin|case|catch|end|fun|if|of|receive|try|when)\b/,number:[/\$\\?./,/\b\d+#[a-z0-9]+/i,/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?/i],function:/\b[a-z][\w@]*(?=\()/,variable:{pattern:/(^|[^@])(?:\b|\?)[A-Z_][\w@]*/,lookbehind:!0},operator:[/[=\/<>:]=|=[:\/]=|\+\+?|--?|[=*\/!]|\b(?:and|andalso|band|bnot|bor|bsl|bsr|bxor|div|not|or|orelse|rem|xor)\b/,{pattern:/(^|[^<])<(?!<)/,lookbehind:!0},{pattern:/(^|[^>])>(?!>)/,lookbehind:!0}],atom:/\b[a-z][\w@]*/,punctuation:/[()[\]{}:;,.#|]|<<|>>/}},9958:function(){(function(e){e.languages.etlua={delimiter:{pattern:/^<%[-=]?|-?%>$/,alias:"punctuation"},"language-lua":{pattern:/[\s\S]+/,inside:e.languages.lua}},e.hooks.add("before-tokenize",(function(t){var n=/<%[\s\S]+?%>/g;e.languages["markup-templating"].buildPlaceholders(t,"etlua",n)})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"etlua")}))})(Prism)},1039:function(){Prism.languages["excel-formula"]={comment:{pattern:/(\bN\(\s*)"(?:[^"]|"")*"(?=\s*\))/i,lookbehind:!0,greedy:!0},string:{pattern:/"(?:[^"]|"")*"(?!")/,greedy:!0},reference:{pattern:/(?:'[^']*'|(?:[^\s()[\]{}<>*?"';,$&]*\[[^^\s()[\]{}<>*?"']+\])?\w+)!/,greedy:!0,alias:"string",inside:{operator:/!$/,punctuation:/'/,sheet:{pattern:/[^[\]]+$/,alias:"function"},file:{pattern:/\[[^[\]]+\]$/,inside:{punctuation:/[[\]]/}},path:/[\s\S]+/}},"function-name":{pattern:/\b[A-Z]\w*(?=\()/i,alias:"builtin"},range:{pattern:/\$?\b(?:[A-Z]+\$?\d+:\$?[A-Z]+\$?\d+|[A-Z]+:\$?[A-Z]+|\d+:\$?\d+)\b/i,alias:"selector",inside:{operator:/:/,cell:/\$?[A-Z]+\$?\d+/i,column:/\$?[A-Z]+/i,row:/\$?\d+/}},cell:{pattern:/\b[A-Z]+\d+\b|\$[A-Za-z]+\$?\d+\b|\b[A-Za-z]+\$\d+\b/,alias:"selector"},number:/(?:\b\d+(?:\.\d+)?|\B\.\d+)(?:e[+-]?\d+)?\b/i,boolean:/\b(?:FALSE|TRUE)\b/i,operator:/[-+*/^%=&,]|<[=>]?|>=?/,punctuation:/[[\]();{}|]/},Prism.languages["xlsx"]=Prism.languages["xls"]=Prism.languages["excel-formula"]},171:function(){(function(e){var t={function:/\b(?:BUGS?|FIX(?:MES?)?|NOTES?|TODOS?|XX+|HACKS?|WARN(?:ING)?|\?{2,}|!{2,})\b/},n={number:/\\[^\s']|%\w/},r={comment:[{pattern:/(^|\s)(?:! .*|!$)/,lookbehind:!0,inside:t},{pattern:/(^|\s)\/\*\s[\s\S]*?\*\/(?=\s|$)/,lookbehind:!0,greedy:!0,inside:t},{pattern:/(^|\s)!\[(={0,6})\[\s[\s\S]*?\]\2\](?=\s|$)/,lookbehind:!0,greedy:!0,inside:t}],number:[{pattern:/(^|\s)[+-]?\d+(?=\s|$)/,lookbehind:!0},{pattern:/(^|\s)[+-]?0(?:b[01]+|o[0-7]+|d\d+|x[\dA-F]+)(?=\s|$)/i,lookbehind:!0},{pattern:/(^|\s)[+-]?\d+\/\d+\.?(?=\s|$)/,lookbehind:!0},{pattern:/(^|\s)\+?\d+\+\d+\/\d+(?=\s|$)/,lookbehind:!0},{pattern:/(^|\s)-\d+-\d+\/\d+(?=\s|$)/,lookbehind:!0},{pattern:/(^|\s)[+-]?(?:\d*\.\d+|\d+\.\d*|\d+)(?:e[+-]?\d+)?(?=\s|$)/i,lookbehind:!0},{pattern:/(^|\s)NAN:\s+[\da-fA-F]+(?=\s|$)/,lookbehind:!0},{pattern:/(^|\s)[+-]?0(?:b1\.[01]*|o1\.[0-7]*|d1\.\d*|x1\.[\dA-F]*)p\d+(?=\s|$)/i,lookbehind:!0}],regexp:{pattern:/(^|\s)R\/\s(?:\\\S|[^\\/])*\/(?:[idmsr]*|[idmsr]+-[idmsr]+)(?=\s|$)/,lookbehind:!0,alias:"number",inside:{variable:/\\\S/,keyword:/[+?*\[\]^$(){}.|]/,operator:{pattern:/(\/)[idmsr]+(?:-[idmsr]+)?/,lookbehind:!0}}},boolean:{pattern:/(^|\s)[tf](?=\s|$)/,lookbehind:!0},"custom-string":{pattern:/(^|\s)[A-Z0-9\-]+"\s(?:\\\S|[^"\\])*"/,lookbehind:!0,greedy:!0,alias:"string",inside:{number:/\\\S|%\w|\//}},"multiline-string":[{pattern:/(^|\s)STRING:\s+\S+(?:\n|\r\n).*(?:\n|\r\n)\s*;(?=\s|$)/,lookbehind:!0,greedy:!0,alias:"string",inside:{number:n.number,"semicolon-or-setlocal":{pattern:/([\r\n][ \t]*);(?=\s|$)/,lookbehind:!0,alias:"function"}}},{pattern:/(^|\s)HEREDOC:\s+\S+(?:\n|\r\n).*(?:\n|\r\n)\s*\S+(?=\s|$)/,lookbehind:!0,greedy:!0,alias:"string",inside:n},{pattern:/(^|\s)\[(={0,6})\[\s[\s\S]*?\]\2\](?=\s|$)/,lookbehind:!0,greedy:!0,alias:"string",inside:n}],"special-using":{pattern:/(^|\s)USING:(?:\s\S+)*(?=\s+;(?:\s|$))/,lookbehind:!0,alias:"function",inside:{string:{pattern:/(\s)[^:\s]+/,lookbehind:!0}}},"stack-effect-delimiter":[{pattern:/(^|\s)(?:call|eval|execute)?\((?=\s)/,lookbehind:!0,alias:"operator"},{pattern:/(\s)--(?=\s)/,lookbehind:!0,alias:"operator"},{pattern:/(\s)\)(?=\s|$)/,lookbehind:!0,alias:"operator"}],combinators:{pattern:null,lookbehind:!0,alias:"keyword"},"kernel-builtin":{pattern:null,lookbehind:!0,alias:"variable"},"sequences-builtin":{pattern:null,lookbehind:!0,alias:"variable"},"math-builtin":{pattern:null,lookbehind:!0,alias:"variable"},"constructor-word":{pattern:/(^|\s)<(?!=+>|-+>)\S+>(?=\s|$)/,lookbehind:!0,alias:"keyword"},"other-builtin-syntax":{pattern:null,lookbehind:!0,alias:"operator"},"conventionally-named-word":{pattern:/(^|\s)(?!")(?:(?:change|new|set|with)-\S+|\$\S+|>[^>\s]+|[^:>\s]+>|[^>\s]+>[^>\s]+|\+[^+\s]+\+|[^?\s]+\?|\?[^?\s]+|[^>\s]+>>|>>[^>\s]+|[^<\s]+<<|\([^()\s]+\)|[^!\s]+!|[^*\s]\S*\*|[^.\s]\S*\.)(?=\s|$)/,lookbehind:!0,alias:"keyword"},"colon-syntax":{pattern:/(^|\s)(?:[A-Z0-9\-]+#?)?:{1,2}\s+(?:;\S+|(?!;)\S+)(?=\s|$)/,lookbehind:!0,greedy:!0,alias:"function"},"semicolon-or-setlocal":{pattern:/(\s)(?:;|:>)(?=\s|$)/,lookbehind:!0,alias:"function"},"curly-brace-literal-delimiter":[{pattern:/(^|\s)[a-z]*\{(?=\s)/i,lookbehind:!0,alias:"operator"},{pattern:/(\s)\}(?=\s|$)/,lookbehind:!0,alias:"operator"}],"quotation-delimiter":[{pattern:/(^|\s)\[(?=\s)/,lookbehind:!0,alias:"operator"},{pattern:/(\s)\](?=\s|$)/,lookbehind:!0,alias:"operator"}],"normal-word":{pattern:/(^|\s)[^"\s]\S*(?=\s|$)/,lookbehind:!0},string:{pattern:/"(?:\\\S|[^"\\])*"/,greedy:!0,inside:n}},i=function(e){return(e+"").replace(/([.?*+\^$\[\]\\(){}|\-])/g,"\\$1")},s=function(e){return new RegExp("(^|\\s)(?:"+e.map(i).join("|")+")(?=\\s|$)")},o={"kernel-builtin":["or","2nipd","4drop","tuck","wrapper","nip","wrapper?","callstack>array","die","dupd","callstack","callstack?","3dup","hashcode","pick","4nip","build",">boolean","nipd","clone","5nip","eq?","?","=","swapd","2over","clear","2dup","get-retainstack","not","tuple?","dup","3nipd","call","-rotd","object","drop","assert=","assert?","-rot","execute","boa","get-callstack","curried?","3drop","pickd","overd","over","roll","3nip","swap","and","2nip","rotd","throw","(clone)","hashcode*","spin","reach","4dup","equal?","get-datastack","assert","2drop","","boolean?","identity-hashcode","identity-tuple?","null","composed?","new","5drop","rot","-roll","xor","identity-tuple","boolean"],"other-builtin-syntax":["=======","recursive","flushable",">>","<<<<<<","M\\","B","PRIVATE>","\\","======","final","inline","delimiter","deprecated",">>>>>","<<<<<<<","parse-complex","malformed-complex","read-only",">>>>>>>","call-next-method","<<","foldable","$","$[","${"],"sequences-builtin":["member-eq?","mismatch","append","assert-sequence=","longer","repetition","clone-like","3sequence","assert-sequence?","last-index-from","reversed","index-from","cut*","pad-tail","join-as","remove-eq!","concat-as","but-last","snip","nths","nth","sequence","longest","slice?","","remove-nth","tail-slice","empty?","tail*","member?","virtual-sequence?","set-length","drop-prefix","iota","unclip","bounds-error?","unclip-last-slice","non-negative-integer-expected","non-negative-integer-expected?","midpoint@","longer?","?set-nth","?first","rest-slice","prepend-as","prepend","fourth","sift","subseq-start","new-sequence","?last","like","first4","1sequence","reverse","slice","virtual@","repetition?","set-last","index","4sequence","max-length","set-second","immutable-sequence","first2","first3","supremum","unclip-slice","suffix!","insert-nth","tail","3append","short","suffix","concat","flip","immutable?","reverse!","2sequence","sum","delete-all","indices","snip-slice","","check-slice","sequence?","head","append-as","halves","sequence=","collapse-slice","?second","slice-error?","product","bounds-check?","bounds-check","immutable","virtual-exemplar","harvest","remove","pad-head","last","set-fourth","cartesian-product","remove-eq","shorten","shorter","reversed?","shorter?","shortest","head-slice","pop*","tail-slice*","but-last-slice","iota?","append!","cut-slice","new-resizable","head-slice*","sequence-hashcode","pop","set-nth","?nth","second","join","immutable-sequence?","","3append-as","virtual-sequence","subseq?","remove-nth!","length","last-index","lengthen","assert-sequence","copy","move","third","first","tail?","set-first","prefix","bounds-error","","exchange","surround","cut","min-length","set-third","push-all","head?","subseq-start-from","delete-slice","rest","sum-lengths","head*","infimum","remove!","glue","slice-error","subseq","push","replace-slice","subseq-as","unclip-last"],"math-builtin":["number=","next-power-of-2","?1+","fp-special?","imaginary-part","float>bits","number?","fp-infinity?","bignum?","fp-snan?","denominator","gcd","*","+","fp-bitwise=","-","u>=","/",">=","bitand","power-of-2?","log2-expects-positive","neg?","<","log2",">","integer?","number","bits>double","2/","zero?","bits>float","float?","shift","ratio?","rect>","even?","ratio","fp-sign","bitnot",">fixnum","complex?","/i","integer>fixnum","/f","sgn",">bignum","next-float","u<","u>","mod","recip","rational",">float","2^","integer","fixnum?","neg","fixnum","sq","bignum",">rect","bit?","fp-qnan?","simple-gcd","complex","","real",">fraction","double>bits","bitor","rem","fp-nan-payload","real-part","log2-expects-positive?","prev-float","align","unordered?","float","fp-nan?","abs","bitxor","integer>fixnum-strict","u<=","odd?","<=","/mod",">integer","real?","rational?","numerator"]};Object.keys(o).forEach((function(e){r[e].pattern=s(o[e])}));var a=["2bi","while","2tri","bi*","4dip","both?","same?","tri@","curry","prepose","3bi","?if","tri*","2keep","3keep","curried","2keepd","when","2bi*","2tri*","4keep","bi@","keepdd","do","unless*","tri-curry","if*","loop","bi-curry*","when*","2bi@","2tri@","with","2with","either?","bi","until","3dip","3curry","tri-curry*","tri-curry@","bi-curry","keepd","compose","2dip","if","3tri","unless","tuple","keep","2curry","tri","most","while*","dip","composed","bi-curry@","find-last-from","trim-head-slice","map-as","each-from","none?","trim-tail","partition","if-empty","accumulate*","reject!","find-from","accumulate-as","collector-for-as","reject","map","map-sum","accumulate!","2each-from","follow","supremum-by","map!","unless-empty","collector","padding","reduce-index","replicate-as","infimum-by","trim-tail-slice","count","find-index","filter","accumulate*!","reject-as","map-integers","map-find","reduce","selector","interleave","2map","filter-as","binary-reduce","map-index-as","find","produce","filter!","replicate","cartesian-map","cartesian-each","find-index-from","map-find-last","3map-as","3map","find-last","selector-as","2map-as","2map-reduce","accumulate","each","each-index","accumulate*-as","when-empty","all?","collector-as","push-either","new-like","collector-for","2selector","push-if","2all?","map-reduce","3each","any?","trim-slice","2reduce","change-nth","produce-as","2each","trim","trim-head","cartesian-find","map-index","if-zero","each-integer","unless-zero","(find-integer)","when-zero","find-last-integer","(all-integers?)","times","(each-integer)","find-integer","all-integers?","unless-negative","if-positive","when-positive","when-negative","unless-positive","if-negative","case","2cleave","cond>quot","case>quot","3cleave","wrong-values","to-fixed-point","alist>quot","cond","cleave","call-effect","recursive-hashcode","spread","deep-spread>quot","2||","0||","n||","0&&","2&&","3||","1||","1&&","n&&","3&&","smart-unless*","keep-inputs","reduce-outputs","smart-when*","cleave>array","smart-with","smart-apply","smart-if","inputs/outputs","output>sequence-n","map-outputs","map-reduce-outputs","dropping","output>array","smart-map-reduce","smart-2map-reduce","output>array-n","nullary","inputsequence"];r.combinators.pattern=s(a),e.languages.factor=r})(Prism)},427:function(){(function(e){e.languages["false"]={comment:{pattern:/\{[^}]*\}/},string:{pattern:/"[^"]*"/,greedy:!0},"character-code":{pattern:/'(?:[^\r]|\r\n?)/,alias:"number"},"assembler-code":{pattern:/\d+`/,alias:"important"},number:/\d+/,operator:/[-!#$%&'*+,./:;=>?@\\^_`|~ßø]/,punctuation:/\[|\]/,variable:/[a-z]/,"non-standard":{pattern:/[()!=]=?|[-+*/%]|\b(?:in|is)\b/}),delete Prism.languages["firestore-security-rules"]["class-name"],Prism.languages.insertBefore("firestore-security-rules","keyword",{path:{pattern:/(^|[\s(),])(?:\/(?:[\w\xA0-\uFFFF]+|\{[\w\xA0-\uFFFF]+(?:=\*\*)?\}|\$\([\w\xA0-\uFFFF.]+\)))+/,lookbehind:!0,greedy:!0,inside:{variable:{pattern:/\{[\w\xA0-\uFFFF]+(?:=\*\*)?\}|\$\([\w\xA0-\uFFFF.]+\)/,inside:{operator:/=/,keyword:/\*\*/,punctuation:/[.$(){}]/}},punctuation:/\//}},method:{pattern:/(\ballow\s+)[a-z]+(?:\s*,\s*[a-z]+)*(?=\s*[:;])/,lookbehind:!0,alias:"builtin",inside:{punctuation:/,/}}})},9220:function(){(function(e){e.languages.flow=e.languages.extend("javascript",{}),e.languages.insertBefore("flow","keyword",{type:[{pattern:/\b(?:[Bb]oolean|Function|[Nn]umber|[Ss]tring|[Ss]ymbol|any|mixed|null|void)\b/,alias:"class-name"}]}),e.languages.flow["function-variable"].pattern=/(?!\s)[_$a-z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*=\s*(?:function\b|(?:\([^()]*\)(?:\s*:\s*\w+)?|(?!\s)[_$a-z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)\s*=>))/i,delete e.languages.flow["parameter"],e.languages.insertBefore("flow","operator",{"flow-punctuation":{pattern:/\{\||\|\}/,alias:"punctuation"}}),Array.isArray(e.languages.flow.keyword)||(e.languages.flow.keyword=[e.languages.flow.keyword]),e.languages.flow.keyword.unshift({pattern:/(^|[^$]\b)(?:Class|declare|opaque|type)\b(?!\$)/,lookbehind:!0},{pattern:/(^|[^$]\B)\$(?:Diff|Enum|Exact|Keys|ObjMap|PropertyType|Record|Shape|Subtype|Supertype|await)\b(?!\$)/,lookbehind:!0})})(Prism)},7915:function(){Prism.languages.fortran={"quoted-number":{pattern:/[BOZ](['"])[A-F0-9]+\1/i,alias:"number"},string:{pattern:/(?:\b\w+_)?(['"])(?:\1\1|&(?:\r\n?|\n)(?:[ \t]*!.*(?:\r\n?|\n)|(?![ \t]*!))|(?!\1).)*(?:\1|&)/,inside:{comment:{pattern:/(&(?:\r\n?|\n)\s*)!.*/,lookbehind:!0}}},comment:{pattern:/!.*/,greedy:!0},boolean:/\.(?:FALSE|TRUE)\.(?:_\w+)?/i,number:/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[ED][+-]?\d+)?(?:_\w+)?/i,keyword:[/\b(?:CHARACTER|COMPLEX|DOUBLE ?PRECISION|INTEGER|LOGICAL|REAL)\b/i,/\b(?:END ?)?(?:BLOCK ?DATA|DO|FILE|FORALL|FUNCTION|IF|INTERFACE|MODULE(?! PROCEDURE)|PROGRAM|SELECT|SUBROUTINE|TYPE|WHERE)\b/i,/\b(?:ALLOCATABLE|ALLOCATE|BACKSPACE|CALL|CASE|CLOSE|COMMON|CONTAINS|CONTINUE|CYCLE|DATA|DEALLOCATE|DIMENSION|DO|END|EQUIVALENCE|EXIT|EXTERNAL|FORMAT|GO ?TO|IMPLICIT(?: NONE)?|INQUIRE|INTENT|INTRINSIC|MODULE PROCEDURE|NAMELIST|NULLIFY|OPEN|OPTIONAL|PARAMETER|POINTER|PRINT|PRIVATE|PUBLIC|READ|RETURN|REWIND|SAVE|SELECT|STOP|TARGET|WHILE|WRITE)\b/i,/\b(?:ASSIGNMENT|DEFAULT|ELEMENTAL|ELSE|ELSEIF|ELSEWHERE|ENTRY|IN|INCLUDE|INOUT|KIND|NULL|ONLY|OPERATOR|OUT|PURE|RECURSIVE|RESULT|SEQUENCE|STAT|THEN|USE)\b/i],operator:[/\*\*|\/\/|=>|[=\/]=|[<>]=?|::|[+\-*=%]|\.[A-Z]+\./i,{pattern:/(^|(?!\().)\/(?!\))/,lookbehind:!0}],punctuation:/\(\/|\/\)|[(),;:&]/}},5045:function(){Prism.languages.fsharp=Prism.languages.extend("clike",{comment:[{pattern:/(^|[^\\])\(\*(?!\))[\s\S]*?\*\)/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],string:{pattern:/(?:"""[\s\S]*?"""|@"(?:""|[^"])*"|"(?:\\[\s\S]|[^\\"])*")B?/,greedy:!0},"class-name":{pattern:/(\b(?:exception|inherit|interface|new|of|type)\s+|\w\s*:\s*|\s:\??>\s*)[.\w]+\b(?:\s*(?:->|\*)\s*[.\w]+\b)*(?!\s*[:.])/,lookbehind:!0,inside:{operator:/->|\*/,punctuation:/\./}},keyword:/\b(?:let|return|use|yield)(?:!\B|\b)|\b(?:abstract|and|as|asr|assert|atomic|base|begin|break|checked|class|component|const|constraint|constructor|continue|default|delegate|do|done|downcast|downto|eager|elif|else|end|event|exception|extern|external|false|finally|fixed|for|fun|function|functor|global|if|in|include|inherit|inline|interface|internal|land|lazy|lor|lsl|lsr|lxor|match|member|method|mixin|mod|module|mutable|namespace|new|not|null|object|of|open|or|override|parallel|private|process|protected|public|pure|rec|sealed|select|sig|static|struct|tailcall|then|to|trait|true|try|type|upcast|val|virtual|void|volatile|when|while|with)\b/,number:[/\b0x[\da-fA-F]+(?:LF|lf|un)?\b/,/\b0b[01]+(?:uy|y)?\b/,/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[fm]|e[+-]?\d+)?\b/i,/\b\d+(?:[IlLsy]|UL|u[lsy]?)?\b/],operator:/([<>~&^])\1\1|([*.:<>&])\2|<-|->|[!=:]=|?|\??(?:<=|>=|<>|[-+*/%=<>])\??|[!?^&]|~[+~-]|:>|:\?>?/}),Prism.languages.insertBefore("fsharp","keyword",{preprocessor:{pattern:/(^[\t ]*)#.*/m,lookbehind:!0,alias:"property",inside:{directive:{pattern:/(^#)\b(?:else|endif|if|light|line|nowarn)\b/,lookbehind:!0,alias:"keyword"}}}}),Prism.languages.insertBefore("fsharp","punctuation",{"computation-expression":{pattern:/\b[_a-z]\w*(?=\s*\{)/i,alias:"keyword"}}),Prism.languages.insertBefore("fsharp","string",{annotation:{pattern:/\[<.+?>\]/,greedy:!0,inside:{punctuation:/^\[<|>\]$/,"class-name":{pattern:/^\w+$|(^|;\s*)[A-Z]\w*(?=\()/,lookbehind:!0},"annotation-content":{pattern:/[\s\S]+/,inside:Prism.languages.fsharp}}},char:{pattern:/'(?:[^\\']|\\(?:.|\d{3}|x[a-fA-F\d]{2}|u[a-fA-F\d]{4}|U[a-fA-F\d]{8}))'B?/,greedy:!0}})},2778:function(){(function(e){for(var t=/[^<()"']|\((?:)*\)|<(?!#--)|<#--(?:[^-]|-(?!->))*-->|"(?:[^\\"]|\\.)*"|'(?:[^\\']|\\.)*'/.source,n=0;n<2;n++)t=t.replace(//g,(function(){return t}));t=t.replace(//g,/[^\s\S]/.source);var r={comment:/<#--[\s\S]*?-->/,string:[{pattern:/\br("|')(?:(?!\1)[^\\]|\\.)*\1/,greedy:!0},{pattern:RegExp(/("|')(?:(?!\1|\$\{)[^\\]|\\.|\$\{(?:(?!\})(?:))*\})*\1/.source.replace(//g,(function(){return t}))),greedy:!0,inside:{interpolation:{pattern:RegExp(/((?:^|[^\\])(?:\\\\)*)\$\{(?:(?!\})(?:))*\}/.source.replace(//g,(function(){return t}))),lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"punctuation"},rest:null}}}}],keyword:/\b(?:as)\b/,boolean:/\b(?:false|true)\b/,"builtin-function":{pattern:/((?:^|[^?])\?\s*)\w+/,lookbehind:!0,alias:"function"},function:/\b\w+(?=\s*\()/,number:/\b\d+(?:\.\d+)?\b/,operator:/\.\.[<*!]?|->|--|\+\+|&&|\|\||\?{1,2}|[-+*/%!=<>]=?|\b(?:gt|gte|lt|lte)\b/,punctuation:/[,;.:()[\]{}]/};r.string[1].inside.interpolation.inside.rest=r,e.languages.ftl={"ftl-comment":{pattern:/^<#--[\s\S]*/,alias:"comment"},"ftl-directive":{pattern:/^<[\s\S]+>$/,inside:{directive:{pattern:/(^<\/?)[#@][a-z]\w*/i,lookbehind:!0,alias:"keyword"},punctuation:/^<\/?|\/?>$/,content:{pattern:/\s*\S[\s\S]*/,alias:"ftl",inside:r}}},"ftl-interpolation":{pattern:/^\$\{[\s\S]*\}$/,inside:{punctuation:/^\$\{|\}$/,content:{pattern:/\s*\S[\s\S]*/,alias:"ftl",inside:r}}}},e.hooks.add("before-tokenize",(function(n){var r=RegExp(/<#--[\s\S]*?-->|<\/?[#@][a-zA-Z](?:)*?>|\$\{(?:)*?\}/.source.replace(//g,(function(){return t})),"gi");e.languages["markup-templating"].buildPlaceholders(n,"ftl",r)})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"ftl")}))})(Prism)},1709:function(){Prism.languages.gap={shell:{pattern:/^gap>[\s\S]*?(?=^gap>|$(?![\s\S]))/m,greedy:!0,inside:{gap:{pattern:/^(gap>).+(?:(?:\r(?:\n|(?!\n))|\n)>.*)*/,lookbehind:!0,inside:null},punctuation:/^gap>/}},comment:{pattern:/#.*/,greedy:!0},string:{pattern:/(^|[^\\'"])(?:'(?:[^\r\n\\']|\\.){1,10}'|"(?:[^\r\n\\"]|\\.)*"(?!")|"""[\s\S]*?""")/,lookbehind:!0,greedy:!0,inside:{continuation:{pattern:/([\r\n])>/,lookbehind:!0,alias:"punctuation"}}},keyword:/\b(?:Assert|Info|IsBound|QUIT|TryNextMethod|Unbind|and|atomic|break|continue|do|elif|else|end|fi|for|function|if|in|local|mod|not|od|or|quit|readonly|readwrite|rec|repeat|return|then|until|while)\b/,boolean:/\b(?:false|true)\b/,function:/\b[a-z_]\w*(?=\s*\()/i,number:{pattern:/(^|[^\w.]|\.\.)(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?(?:_[a-z]?)?(?=$|[^\w.]|\.\.)/,lookbehind:!0},continuation:{pattern:/([\r\n])>/,lookbehind:!0,alias:"punctuation"},operator:/->|[-+*/^~=!]|<>|[<>]=?|:=|\.\./,punctuation:/[()[\]{},;.:]/},Prism.languages.gap.shell.inside.gap.inside=Prism.languages.gap},8407:function(){Prism.languages.gcode={comment:/;.*|\B\(.*?\)\B/,string:{pattern:/"(?:""|[^"])*"/,greedy:!0},keyword:/\b[GM]\d+(?:\.\d+)?\b/,property:/\b[A-Z]/,checksum:{pattern:/(\*)\d+/,lookbehind:!0,alias:"number"},punctuation:/[:*]/}},5276:function(){Prism.languages.gdscript={comment:/#.*/,string:{pattern:/@?(?:("|')(?:(?!\1)[^\n\\]|\\[\s\S])*\1(?!"|')|"""(?:[^\\]|\\[\s\S])*?""")/,greedy:!0},"class-name":{pattern:/(^(?:class|class_name|extends)[ \t]+|^export\([ \t]*|\bas[ \t]+|(?:\b(?:const|var)[ \t]|[,(])[ \t]*\w+[ \t]*:[ \t]*|->[ \t]*)[a-zA-Z_]\w*/m,lookbehind:!0},keyword:/\b(?:and|as|assert|break|breakpoint|class|class_name|const|continue|elif|else|enum|export|extends|for|func|if|in|is|master|mastersync|match|not|null|onready|or|pass|preload|puppet|puppetsync|remote|remotesync|return|self|setget|signal|static|tool|var|while|yield)\b/,function:/\b[a-z_]\w*(?=[ \t]*\()/i,variable:/\$\w+/,number:[/\b0b[01_]+\b|\b0x[\da-fA-F_]+\b|(?:\b\d[\d_]*(?:\.[\d_]*)?|\B\.[\d_]+)(?:e[+-]?[\d_]+)?\b/,/\b(?:INF|NAN|PI|TAU)\b/],constant:/\b[A-Z][A-Z_\d]*\b/,boolean:/\b(?:false|true)\b/,operator:/->|:=|&&|\|\||<<|>>|[-+*/%&|!<>=]=?|[~^]/,punctuation:/[.:,;()[\]{}]/}},6857:function(){Prism.languages.gedcom={"line-value":{pattern:/(^[\t ]*\d+ +(?:@\w[\w!"$%&'()*+,\-./:;<=>?[\\\]^`{|}~\x80-\xfe #]*@ +)?\w+ ).+/m,lookbehind:!0,inside:{pointer:{pattern:/^@\w[\w!"$%&'()*+,\-./:;<=>?[\\\]^`{|}~\x80-\xfe #]*@$/,alias:"variable"}}},record:{pattern:/(^[\t ]*\d+ +(?:@\w[\w!"$%&'()*+,\-./:;<=>?[\\\]^`{|}~\x80-\xfe #]*@ +)?)\w+/m,lookbehind:!0,alias:"tag"},level:{pattern:/(^[\t ]*)\d+/m,lookbehind:!0,alias:"number"},pointer:{pattern:/@\w[\w!"$%&'()*+,\-./:;<=>?[\\\]^`{|}~\x80-\xfe #]*@/,alias:"variable"}}},1315:function(){Prism.languages.gettext={comment:[{pattern:/# .*/,greedy:!0,alias:"translator-comment"},{pattern:/#\..*/,greedy:!0,alias:"extracted-comment"},{pattern:/#:.*/,greedy:!0,alias:"reference-comment"},{pattern:/#,.*/,greedy:!0,alias:"flag-comment"},{pattern:/#\|.*/,greedy:!0,alias:"previously-untranslated-comment"},{pattern:/#.*/,greedy:!0}],string:{pattern:/(^|[^\\])"(?:[^"\\]|\\.)*"/,lookbehind:!0,greedy:!0},keyword:/^msg(?:ctxt|id|id_plural|str)\b/m,number:/\b\d+\b/,punctuation:/[\[\]]/},Prism.languages.po=Prism.languages.gettext},9472:function(){(function(e){var t=/(?:\r?\n|\r)[ \t]*\|.+\|(?:(?!\|).)*/.source;e.languages.gherkin={pystring:{pattern:/("""|''')[\s\S]+?\1/,alias:"string"},comment:{pattern:/(^[ \t]*)#.*/m,lookbehind:!0},tag:{pattern:/(^[ \t]*)@\S*/m,lookbehind:!0},feature:{pattern:/((?:^|\r?\n|\r)[ \t]*)(?:Ability|Ahoy matey!|Arwedd|Aspekt|Besigheid Behoefte|Business Need|Caracteristica|Característica|Egenskab|Egenskap|Eiginleiki|Feature|Fīča|Fitur|Fonctionnalité|Fonksyonalite|Funcionalidade|Funcionalitat|Functionalitate|Funcţionalitate|Funcționalitate|Functionaliteit|Fungsi|Funkcia|Funkcija|Funkcionalitāte|Funkcionalnost|Funkcja|Funksie|Funktionalität|Funktionalitéit|Funzionalità|Hwaet|Hwæt|Jellemző|Karakteristik|Lastnost|Mak|Mogucnost|laH|Mogućnost|Moznosti|Možnosti|OH HAI|Omadus|Ominaisuus|Osobina|Özellik|Potrzeba biznesowa|perbogh|poQbogh malja'|Požadavek|Požiadavka|Pretty much|Qap|Qu'meH 'ut|Savybė|Tính năng|Trajto|Vermoë|Vlastnosť|Właściwość|Značilnost|Δυνατότητα|Λειτουργία|Могућност|Мөмкинлек|Особина|Свойство|Үзенчәлеклелек|Функционал|Функционалност|Функция|Функціонал|תכונה|خاصية|خصوصیت|صلاحیت|کاروبار کی ضرورت|وِیژگی|रूप लेख|ਖਾਸੀਅਤ|ਨਕਸ਼ ਨੁਹਾਰ|ਮੁਹਾਂਦਰਾ|గుణము|ಹೆಚ್ಚಳ|ความต้องการทางธุรกิจ|ความสามารถ|โครงหลัก|기능|フィーチャ|功能|機能):(?:[^:\r\n]+(?:\r?\n|\r|$))*/,lookbehind:!0,inside:{important:{pattern:/(:)[^\r\n]+/,lookbehind:!0},keyword:/[^:\r\n]+:/}},scenario:{pattern:/(^[ \t]*)(?:Abstract Scenario|Abstrakt Scenario|Achtergrond|Aer|Ær|Agtergrond|All y'all|Antecedentes|Antecedents|Atburðarás|Atburðarásir|Awww, look mate|B4|Background|Baggrund|Bakgrund|Bakgrunn|Bakgrunnur|Beispiele|Beispiller|Bối cảnh|Cefndir|Cenario|Cenário|Cenario de Fundo|Cenário de Fundo|Cenarios|Cenários|Contesto|Context|Contexte|Contexto|Conto|Contoh|Contone|Dæmi|Dasar|Dead men tell no tales|Delineacao do Cenario|Delineação do Cenário|Dis is what went down|Dữ liệu|Dyagram Senaryo|Dyagram senaryo|Egzanp|Ejemplos|Eksempler|Ekzemploj|Enghreifftiau|Esbozo do escenario|Escenari|Escenario|Esempi|Esquema de l'escenari|Esquema del escenario|Esquema do Cenario|Esquema do Cenário|EXAMPLZ|Examples|Exempel|Exemple|Exemples|Exemplos|First off|Fono|Forgatókönyv|Forgatókönyv vázlat|Fundo|Geçmiş|Grundlage|Hannergrond|ghantoH|Háttér|Heave to|Istorik|Juhtumid|Keadaan|Khung kịch bản|Khung tình huống|Kịch bản|Koncept|Konsep skenario|Kontèks|Kontekst|Kontekstas|Konteksts|Kontext|Konturo de la scenaro|Latar Belakang|lut chovnatlh|lut|lutmey|Lýsing Atburðarásar|Lýsing Dæma|MISHUN SRSLY|MISHUN|Menggariskan Senario|mo'|Náčrt Scenára|Náčrt Scénáře|Náčrt Scenáru|Oris scenarija|Örnekler|Osnova|Osnova Scenára|Osnova scénáře|Osnutek|Ozadje|Paraugs|Pavyzdžiai|Példák|Piemēri|Plan du scénario|Plan du Scénario|Plan Senaryo|Plan senaryo|Plang vum Szenario|Pozadí|Pozadie|Pozadina|Príklady|Příklady|Primer|Primeri|Primjeri|Przykłady|Raamstsenaarium|Reckon it's like|Rerefons|Scenár|Scénář|Scenarie|Scenarij|Scenarijai|Scenarijaus šablonas|Scenariji|Scenārijs|Scenārijs pēc parauga|Scenarijus|Scenario|Scénario|Scenario Amlinellol|Scenario Outline|Scenario Template|Scenariomal|Scenariomall|Scenarios|Scenariu|Scenariusz|Scenaro|Schema dello scenario|Se ðe|Se the|Se þe|Senario|Senaryo Deskripsyon|Senaryo deskripsyon|Senaryo|Senaryo taslağı|Shiver me timbers|Situācija|Situai|Situasie Uiteensetting|Situasie|Skenario konsep|Skenario|Skica|Structura scenariu|Structură scenariu|Struktura scenarija|Stsenaarium|Swa hwaer swa|Swa|Swa hwær swa|Szablon scenariusza|Szenario|Szenariogrundriss|Tapaukset|Tapaus|Tapausaihio|Taust|Tausta|Template Keadaan|Template Senario|Template Situai|The thing of it is|Tình huống|Variantai|Voorbeelde|Voorbeelden|Wharrimean is|Yo-ho-ho|You'll wanna|Założenia|Παραδείγματα|Περιγραφή Σεναρίου|Σενάρια|Σενάριο|Υπόβαθρο|Кереш|Контекст|Концепт|Мисаллар|Мисоллар|Основа|Передумова|Позадина|Предистория|Предыстория|Приклади|Пример|Примери|Примеры|Рамка на сценарий|Скица|Структура сценарија|Структура сценария|Структура сценарію|Сценарий|Сценарий структураси|Сценарийның төзелеше|Сценарији|Сценарио|Сценарій|Тарих|Үрнәкләр|דוגמאות|רקע|תבנית תרחיש|תרחיש|الخلفية|الگوی سناریو|امثلة|پس منظر|زمینه|سناریو|سيناريو|سيناريو مخطط|مثالیں|منظر نامے کا خاکہ|منظرنامہ|نمونه ها|उदाहरण|परिदृश्य|परिदृश्य रूपरेखा|पृष्ठभूमि|ਉਦਾਹਰਨਾਂ|ਪਟਕਥਾ|ਪਟਕਥਾ ਢਾਂਚਾ|ਪਟਕਥਾ ਰੂਪ ਰੇਖਾ|ਪਿਛੋਕੜ|ఉదాహరణలు|కథనం|నేపథ్యం|సన్నివేశం|ಉದಾಹರಣೆಗಳು|ಕಥಾಸಾರಾಂಶ|ವಿವರಣೆ|ಹಿನ್ನೆಲೆ|โครงสร้างของเหตุการณ์|ชุดของตัวอย่าง|ชุดของเหตุการณ์|แนวคิด|สรุปเหตุการณ์|เหตุการณ์|배경|시나리오|시나리오 개요|예|サンプル|シナリオ|シナリオアウトライン|シナリオテンプレ|シナリオテンプレート|テンプレ|例|例子|剧本|剧本大纲|劇本|劇本大綱|场景|场景大纲|場景|場景大綱|背景):[^:\r\n]*/m,lookbehind:!0,inside:{important:{pattern:/(:)[^\r\n]*/,lookbehind:!0},keyword:/[^:\r\n]+:/}},"table-body":{pattern:RegExp("("+t+")(?:"+t+")+"),lookbehind:!0,inside:{outline:{pattern:/<[^>]+>/,alias:"variable"},td:{pattern:/\s*[^\s|][^|]*/,alias:"string"},punctuation:/\|/}},"table-head":{pattern:RegExp(t),inside:{th:{pattern:/\s*[^\s|][^|]*/,alias:"variable"},punctuation:/\|/}},atrule:{pattern:/(^[ \t]+)(?:'a|'ach|'ej|7|a|A také|A taktiež|A tiež|A zároveň|Aber|Ac|Adott|Akkor|Ak|Aleshores|Ale|Ali|Allora|Alors|Als|Ama|Amennyiben|Amikor|Ampak|an|AN|Ananging|And y'all|And|Angenommen|Anrhegedig a|An|Apabila|Atès|Atesa|Atunci|Avast!|Aye|A|awer|Bagi|Banjur|Bet|Biết|Blimey!|Buh|But at the end of the day I reckon|But y'all|But|BUT|Cal|Când|Cand|Cando|Ce|Cuando|Če|Ða ðe|Ða|Dadas|Dada|Dados|Dado|DaH ghu' bejlu'|dann|Dann|Dano|Dan|Dar|Dat fiind|Data|Date fiind|Date|Dati fiind|Dati|Daţi fiind|Dați fiind|DEN|Dato|De|Den youse gotta|Dengan|Diberi|Diyelim ki|Donada|Donat|Donitaĵo|Do|Dun|Duota|Ðurh|Eeldades|Ef|Eğer ki|Entao|Então|Entón|E|En|Entonces|Epi|És|Etant donnée|Etant donné|Et|Étant données|Étant donnée|Étant donné|Etant données|Etant donnés|Étant donnés|Fakat|Gangway!|Gdy|Gegeben seien|Gegeben sei|Gegeven|Gegewe|ghu' noblu'|Gitt|Given y'all|Given|Givet|Givun|Ha|Cho|I CAN HAZ|In|Ir|It's just unbelievable|I|Ja|Jeśli|Jeżeli|Kad|Kada|Kadar|Kai|Kaj|Když|Keď|Kemudian|Ketika|Khi|Kiedy|Ko|Kuid|Kui|Kun|Lan|latlh|Le sa a|Let go and haul|Le|Lè sa a|Lè|Logo|Lorsqu'<|Lorsque|mä|Maar|Mais|Mając|Ma|Majd|Maka|Manawa|Mas|Men|Menawa|Mutta|Nalika|Nalikaning|Nanging|Når|När|Nato|Nhưng|Niin|Njuk|O zaman|Och|Og|Oletetaan|Ond|Onda|Oraz|Pak|Pero|Però|Podano|Pokiaľ|Pokud|Potem|Potom|Privzeto|Pryd|Quan|Quand|Quando|qaSDI'|Så|Sed|Se|Siis|Sipoze ke|Sipoze Ke|Sipoze|Si|Şi|Și|Soit|Stel|Tada|Tad|Takrat|Tak|Tapi|Ter|Tetapi|Tha the|Tha|Then y'all|Then|Thì|Thurh|Toda|Too right|Un|Und|ugeholl|Và|vaj|Vendar|Ve|wann|Wanneer|WEN|Wenn|When y'all|When|Wtedy|Wun|Y'know|Yeah nah|Yna|Youse know like when|Youse know when youse got|Y|Za predpokladu|Za předpokladu|Zadan|Zadani|Zadano|Zadate|Zadato|Zakładając|Zaradi|Zatati|Þa þe|Þa|Þá|Þegar|Þurh|Αλλά|Δεδομένου|Και|Όταν|Τότε|А також|Агар|Але|Али|Аммо|А|Әгәр|Әйтик|Әмма|Бирок|Ва|Вә|Дадено|Дано|Допустим|Если|Задате|Задати|Задато|И|І|К тому же|Када|Кад|Когато|Когда|Коли|Ләкин|Лекин|Нәтиҗәдә|Нехай|Но|Онда|Припустимо, що|Припустимо|Пусть|Также|Та|Тогда|Тоді|То|Унда|Һәм|Якщо|אבל|אזי|אז|בהינתן|וגם|כאשר|آنگاه|اذاً|اگر|اما|اور|با فرض|بالفرض|بفرض|پھر|تب|ثم|جب|عندما|فرض کیا|لكن|لیکن|متى|هنگامی|و|अगर|और|कदा|किन्तु|चूंकि|जब|तथा|तदा|तब|परन्तु|पर|यदि|ਅਤੇ|ਜਦੋਂ|ਜਿਵੇਂ ਕਿ|ਜੇਕਰ|ਤਦ|ਪਰ|అప్పుడు|ఈ పరిస్థితిలో|కాని|చెప్పబడినది|మరియు|ಆದರೆ|ನಂತರ|ನೀಡಿದ|ಮತ್ತು|ಸ್ಥಿತಿಯನ್ನು|กำหนดให้|ดังนั้น|แต่|เมื่อ|และ|그러면<|그리고<|단<|만약<|만일<|먼저<|조건<|하지만<|かつ<|しかし<|ただし<|ならば<|もし<|並且<|但し<|但是<|假如<|假定<|假設<|假设<|前提<|同时<|同時<|并且<|当<|當<|而且<|那么<|那麼<)(?=[ \t])/m,lookbehind:!0},string:{pattern:/"(?:\\.|[^"\\\r\n])*"|'(?:\\.|[^'\\\r\n])*'/,inside:{outline:{pattern:/<[^>]+>/,alias:"variable"}}},outline:{pattern:/<[^>]+>/,alias:"variable"}}})(Prism)},9787:function(){Prism.languages.git={comment:/^#.*/m,deleted:/^[-–].*/m,inserted:/^\+.*/m,string:/("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,command:{pattern:/^.*\$ git .*$/m,inside:{parameter:/\s--?\w+/}},coord:/^@@.*@@$/m,"commit-sha1":/^commit \w{40}$/m}},9812:function(){Prism.languages.glsl=Prism.languages.extend("c",{keyword:/\b(?:active|asm|atomic_uint|attribute|[ibdu]?vec[234]|bool|break|buffer|case|cast|centroid|class|coherent|common|const|continue|d?mat[234](?:x[234])?|default|discard|do|double|else|enum|extern|external|false|filter|fixed|flat|float|for|fvec[234]|goto|half|highp|hvec[234]|[iu]?sampler2DMS(?:Array)?|[iu]?sampler2DRect|[iu]?samplerBuffer|[iu]?samplerCube|[iu]?samplerCubeArray|[iu]?sampler[123]D|[iu]?sampler[12]DArray|[iu]?image2DMS(?:Array)?|[iu]?image2DRect|[iu]?imageBuffer|[iu]?imageCube|[iu]?imageCubeArray|[iu]?image[123]D|[iu]?image[12]DArray|if|in|inline|inout|input|int|interface|invariant|layout|long|lowp|mediump|namespace|noinline|noperspective|out|output|partition|patch|precise|precision|public|readonly|resource|restrict|return|sample|sampler[12]DArrayShadow|sampler[12]DShadow|sampler2DRectShadow|sampler3DRect|samplerCubeArrayShadow|samplerCubeShadow|shared|short|sizeof|smooth|static|struct|subroutine|superp|switch|template|this|true|typedef|uint|uniform|union|unsigned|using|varying|void|volatile|while|writeonly)\b/})},1828:function(){Prism.languages.gamemakerlanguage=Prism.languages.gml=Prism.languages.extend("clike",{keyword:/\b(?:break|case|continue|default|do|else|enum|exit|for|globalvar|if|repeat|return|switch|until|var|while)\b/,number:/(?:\b0x[\da-f]+|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?)[ulf]{0,4}/i,operator:/--|\+\+|[-+%/=]=?|!=|\*\*?=?|<[<=>]?|>[=>]?|&&?|\^\^?|\|\|?|~|\b(?:and|at|not|or|with|xor)\b/,constant:/\b(?:GM_build_date|GM_version|action_(?:continue|restart|reverse|stop)|all|gamespeed_(?:fps|microseconds)|global|local|noone|other|pi|pointer_(?:invalid|null)|self|timezone_(?:local|utc)|undefined|ev_(?:create|destroy|step|alarm|keyboard|mouse|collision|other|draw|draw_(?:begin|end|post|pre)|keypress|keyrelease|trigger|(?:left|middle|no|right)_button|(?:left|middle|right)_press|(?:left|middle|right)_release|mouse_(?:enter|leave|wheel_down|wheel_up)|global_(?:left|middle|right)_button|global_(?:left|middle|right)_press|global_(?:left|middle|right)_release|joystick(?:1|2)_(?:button1|button2|button3|button4|button5|button6|button7|button8|down|left|right|up)|outside|boundary|game_start|game_end|room_start|room_end|no_more_lives|animation_end|end_of_path|no_more_health|user\d|gui|gui_begin|gui_end|step_(?:begin|end|normal))|vk_(?:alt|anykey|backspace|control|delete|down|end|enter|escape|home|insert|left|nokey|pagedown|pageup|pause|printscreen|return|right|shift|space|tab|up|f\d|numpad\d|add|decimal|divide|lalt|lcontrol|lshift|multiply|ralt|rcontrol|rshift|subtract)|achievement_(?:filter_(?:all_players|favorites_only|friends_only)|friends_info|info|leaderboard_info|our_info|pic_loaded|show_(?:achievement|bank|friend_picker|leaderboard|profile|purchase_prompt|ui)|type_challenge|type_score_challenge)|asset_(?:font|object|path|room|script|shader|sound|sprite|tiles|timeline|unknown)|audio_(?:3d|falloff_(?:exponent_distance|exponent_distance_clamped|inverse_distance|inverse_distance_clamped|linear_distance|linear_distance_clamped|none)|mono|new_system|old_system|stereo)|bm_(?:add|complex|dest_alpha|dest_color|dest_colour|inv_dest_alpha|inv_dest_color|inv_dest_colour|inv_src_alpha|inv_src_color|inv_src_colour|max|normal|one|src_alpha|src_alpha_sat|src_color|src_colour|subtract|zero)|browser_(?:chrome|firefox|ie|ie_mobile|not_a_browser|opera|safari|safari_mobile|tizen|unknown|windows_store)|buffer_(?:bool|f16|f32|f64|fast|fixed|generalerror|grow|invalidtype|network|outofbounds|outofspace|s16|s32|s8|seek_end|seek_relative|seek_start|string|text|u16|u32|u64|u8|vbuffer|wrap)|c_(?:aqua|black|blue|dkgray|fuchsia|gray|green|lime|ltgray|maroon|navy|olive|orange|purple|red|silver|teal|white|yellow)|cmpfunc_(?:always|equal|greater|greaterequal|less|lessequal|never|notequal)|cr_(?:appstart|arrow|beam|cross|default|drag|handpoint|hourglass|none|size_all|size_nesw|size_ns|size_nwse|size_we|uparrow)|cull_(?:clockwise|counterclockwise|noculling)|device_(?:emulator|tablet)|device_ios_(?:ipad|ipad_retina|iphone|iphone5|iphone6|iphone6plus|iphone_retina|unknown)|display_(?:landscape|landscape_flipped|portrait|portrait_flipped)|dll_(?:cdecl|cdel|stdcall)|ds_type_(?:grid|list|map|priority|queue|stack)|ef_(?:cloud|ellipse|explosion|firework|flare|rain|ring|smoke|smokeup|snow|spark|star)|fa_(?:archive|bottom|center|directory|hidden|left|middle|readonly|right|sysfile|top|volumeid)|fb_login_(?:default|fallback_to_webview|forcing_safari|forcing_webview|no_fallback_to_webview|use_system_account)|iap_(?:available|canceled|ev_consume|ev_product|ev_purchase|ev_restore|ev_storeload|failed|purchased|refunded|status_available|status_loading|status_processing|status_restoring|status_unavailable|status_uninitialised|storeload_failed|storeload_ok|unavailable)|leaderboard_type_(?:number|time_mins_secs)|lighttype_(?:dir|point)|matrix_(?:projection|view|world)|mb_(?:any|left|middle|none|right)|network_(?:config_(?:connect_timeout|disable_reliable_udp|enable_reliable_udp|use_non_blocking_socket)|socket_(?:bluetooth|tcp|udp)|type_(?:connect|data|disconnect|non_blocking_connect))|of_challenge_(?:lose|tie|win)|os_(?:android|ios|linux|macosx|ps3|ps4|psvita|unknown|uwp|win32|win8native|windows|winphone|xboxone)|phy_debug_render_(?:aabb|collision_pairs|coms|core_shapes|joints|obb|shapes)|phy_joint_(?:anchor_1_x|anchor_1_y|anchor_2_x|anchor_2_y|angle|angle_limits|damping_ratio|frequency|length_1|length_2|lower_angle_limit|max_force|max_length|max_motor_force|max_motor_torque|max_torque|motor_force|motor_speed|motor_torque|reaction_force_x|reaction_force_y|reaction_torque|speed|translation|upper_angle_limit)|phy_particle_data_flag_(?:category|color|colour|position|typeflags|velocity)|phy_particle_flag_(?:colormixing|colourmixing|elastic|powder|spring|tensile|viscous|wall|water|zombie)|phy_particle_group_flag_(?:rigid|solid)|pr_(?:linelist|linestrip|pointlist|trianglefan|trianglelist|trianglestrip)|ps_(?:distr|shape)_(?:diamond|ellipse|gaussian|invgaussian|line|linear|rectangle)|pt_shape_(?:circle|cloud|disk|explosion|flare|line|pixel|ring|smoke|snow|spark|sphere|square|star)|ty_(?:real|string)|gp_(?:face\d|axislh|axislv|axisrh|axisrv|padd|padl|padr|padu|select|shoulderl|shoulderlb|shoulderr|shoulderrb|start|stickl|stickr)|lb_disp_(?:none|numeric|time_ms|time_sec)|lb_sort_(?:ascending|descending|none)|ov_(?:achievements|community|friends|gamegroup|players|settings)|ugc_(?:filetype_(?:community|microtrans)|list_(?:Favorited|Followed|Published|Subscribed|UsedOrPlayed|VotedDown|VotedOn|VotedUp|WillVoteLater)|match_(?:AllGuides|Artwork|Collections|ControllerBindings|IntegratedGuides|Items|Items_Mtx|Items_ReadyToUse|Screenshots|UsableInGame|Videos|WebGuides)|query_(?:AcceptedForGameRankedByAcceptanceDate|CreatedByFriendsRankedByPublicationDate|FavoritedByFriendsRankedByPublicationDate|NotYetRated)|query_RankedBy(?:NumTimesReported|PublicationDate|TextSearch|TotalVotesAsc|Trend|Vote|VotesUp)|result_success|sortorder_CreationOrder(?:Asc|Desc)|sortorder_(?:ForModeration|LastUpdatedDesc|SubscriptionDateDesc|TitleAsc|VoteScoreDesc)|visibility_(?:friends_only|private|public))|vertex_usage_(?:binormal|blendindices|blendweight|color|colour|depth|fog|normal|position|psize|sample|tangent|texcoord|textcoord)|vertex_type_(?:float\d|color|colour|ubyte4)|input_type|layerelementtype_(?:background|instance|oldtilemap|particlesystem|sprite|tile|tilemap|undefined)|se_(?:chorus|compressor|echo|equalizer|flanger|gargle|none|reverb)|text_type|tile_(?:flip|index_mask|mirror|rotate)|(?:obj|rm|scr|spr)\w+)\b/,variable:/\b(?:alarm|application_surface|async_load|background_(?:alpha|blend|color|colour|foreground|height|hspeed|htiled|index|showcolor|showcolour|visible|vspeed|vtiled|width|x|xscale|y|yscale)|bbox_(?:bottom|left|right|top)|browser_(?:height|width)|caption_(?:health|lives|score)|current_(?:day|hour|minute|month|second|time|weekday|year)|cursor_sprite|debug_mode|delta_time|direction|display_aa|error_(?:last|occurred)|event_(?:action|number|object|type)|fps|fps_real|friction|game_(?:display|project|save)_(?:id|name)|gamemaker_(?:pro|registered|version)|gravity|gravity_direction|(?:h|v)speed|health|iap_data|id|image_(?:alpha|angle|blend|depth|index|number|speed|xscale|yscale)|instance_(?:count|id)|keyboard_(?:key|lastchar|lastkey|string)|layer|lives|mask_index|mouse_(?:button|lastbutton|x|y)|object_index|os_(?:browser|device|type|version)|path_(?:endaction|index|orientation|position|positionprevious|scale|speed)|persistent|phy_(?:rotation|(?:col_normal|collision|com|linear_velocity|position|speed)_(?:x|y)|angular_(?:damping|velocity)|position_(?:x|y)previous|speed|linear_damping|bullet|fixed_rotation|active|mass|inertia|dynamic|kinematic|sleeping|collision_points)|pointer_(?:invalid|null)|room|room_(?:caption|first|height|last|persistent|speed|width)|score|secure_mode|show_(?:health|lives|score)|solid|speed|sprite_(?:height|index|width|xoffset|yoffset)|temp_directory|timeline_(?:index|loop|position|running|speed)|transition_(?:color|kind|steps)|undefined|view_(?:angle|current|enabled|(?:h|v)(?:border|speed)|(?:h|w|x|y)port|(?:h|w|x|y)view|object|surface_id|visible)|visible|webgl_enabled|working_directory|(?:x|y)(?:previous|start)|x|y|argument(?:_relitive|_count|\d)|argument|global|local|other|self)\b/})},1415:function(){Prism.languages.gn={comment:{pattern:/#.*/,greedy:!0},"string-literal":{pattern:/(^|[^\\"])"(?:[^\r\n"\\]|\\.)*"/,lookbehind:!0,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$(?:\{[\s\S]*?\}|[a-zA-Z_]\w*|0x[a-fA-F0-9]{2})/,lookbehind:!0,inside:{number:/^\$0x[\s\S]{2}$/,variable:/^\$\w+$/,"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"punctuation"},expression:{pattern:/[\s\S]+/,inside:null}}},string:/[\s\S]+/}},keyword:/\b(?:else|if)\b/,boolean:/\b(?:false|true)\b/,"builtin-function":{pattern:/\b(?:assert|defined|foreach|import|pool|print|template|tool|toolchain)(?=\s*\()/i,alias:"keyword"},function:/\b[a-z_]\w*(?=\s*\()/i,constant:/\b(?:current_cpu|current_os|current_toolchain|default_toolchain|host_cpu|host_os|root_build_dir|root_gen_dir|root_out_dir|target_cpu|target_gen_dir|target_os|target_out_dir)\b/,number:/-?\b\d+\b/,operator:/[-+!=<>]=?|&&|\|\|/,punctuation:/[(){}[\],.]/},Prism.languages.gn["string-literal"].inside["interpolation"].inside["expression"].inside=Prism.languages.gn,Prism.languages.gni=Prism.languages.gn},7346:function(){Prism.languages["go-mod"]=Prism.languages["go-module"]={comment:{pattern:/\/\/.*/,greedy:!0},version:{pattern:/(^|[\s()[\],])v\d+\.\d+\.\d+(?:[+-][-+.\w]*)?(?![^\s()[\],])/,lookbehind:!0,alias:"number"},"go-version":{pattern:/((?:^|\s)go\s+)\d+(?:\.\d+){1,2}/,lookbehind:!0,alias:"number"},keyword:{pattern:/^([ \t]*)(?:exclude|go|module|replace|require|retract)\b/m,lookbehind:!0},operator:/=>/,punctuation:/[()[\],]/}},7046:function(){Prism.languages.go=Prism.languages.extend("clike",{string:{pattern:/(^|[^\\])"(?:\\.|[^"\\\r\n])*"|`[^`]*`/,lookbehind:!0,greedy:!0},keyword:/\b(?:break|case|chan|const|continue|default|defer|else|fallthrough|for|func|go(?:to)?|if|import|interface|map|package|range|return|select|struct|switch|type|var)\b/,boolean:/\b(?:_|false|iota|nil|true)\b/,number:[/\b0(?:b[01_]+|o[0-7_]+)i?\b/i,/\b0x(?:[a-f\d_]+(?:\.[a-f\d_]*)?|\.[a-f\d_]+)(?:p[+-]?\d+(?:_\d+)*)?i?(?!\w)/i,/(?:\b\d[\d_]*(?:\.[\d_]*)?|\B\.\d[\d_]*)(?:e[+-]?[\d_]+)?i?(?!\w)/i],operator:/[*\/%^!=]=?|\+[=+]?|-[=-]?|\|[=|]?|&(?:=|&|\^=?)?|>(?:>=?|=)?|<(?:<=?|=|-)?|:=|\.\.\./,builtin:/\b(?:append|bool|byte|cap|close|complex|complex(?:64|128)|copy|delete|error|float(?:32|64)|u?int(?:8|16|32|64)?|imag|len|make|new|panic|print(?:ln)?|real|recover|rune|string|uintptr)\b/}),Prism.languages.insertBefore("go","string",{char:{pattern:/'(?:\\.|[^'\\\r\n]){0,10}'/,greedy:!0}}),delete Prism.languages.go["class-name"]},1565:function(){(function(e){var t={pattern:/((?:^|[^\\$])(?:\\{2})*)\$(?:\w+|\{[^{}]*\})/,lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^\$\{?|\}$/,alias:"punctuation"},expression:{pattern:/[\s\S]+/,inside:null}}};e.languages.gradle=e.languages.extend("clike",{string:{pattern:/'''(?:[^\\]|\\[\s\S])*?'''|'(?:\\.|[^\\'\r\n])*'/,greedy:!0},keyword:/\b(?:apply|def|dependencies|else|if|implementation|import|plugin|plugins|project|repositories|repository|sourceSets|tasks|val)\b/,number:/\b(?:0b[01_]+|0x[\da-f_]+(?:\.[\da-f_p\-]+)?|[\d_]+(?:\.[\d_]+)?(?:e[+-]?\d+)?)[glidf]?\b/i,operator:{pattern:/(^|[^.])(?:~|==?~?|\?[.:]?|\*(?:[.=]|\*=?)?|\.[@&]|\.\.<|\.\.(?!\.)|-[-=>]?|\+[+=]?|!=?|<(?:<=?|=>?)?|>(?:>>?=?|=)?|&[&=]?|\|[|=]?|\/=?|\^=?|%=?)/,lookbehind:!0},punctuation:/\.+|[{}[\];(),:$]/}),e.languages.insertBefore("gradle","string",{shebang:{pattern:/#!.+/,alias:"comment",greedy:!0},"interpolation-string":{pattern:/"""(?:[^\\]|\\[\s\S])*?"""|(["/])(?:\\.|(?!\1)[^\\\r\n])*\1|\$\/(?:[^/$]|\$(?:[/$]|(?![/$]))|\/(?!\$))*\/\$/,greedy:!0,inside:{interpolation:t,string:/[\s\S]+/}}}),e.languages.insertBefore("gradle","punctuation",{"spock-block":/\b(?:and|cleanup|expect|given|setup|then|when|where):/}),e.languages.insertBefore("gradle","function",{annotation:{pattern:/(^|[^.])@\w+/,lookbehind:!0,alias:"punctuation"}}),t.inside.expression.inside=e.languages.gradle})(Prism)},7117:function(){Prism.languages.graphql={comment:/#.*/,description:{pattern:/(?:"""(?:[^"]|(?!""")")*"""|"(?:\\.|[^\\"\r\n])*")(?=\s*[a-z_])/i,greedy:!0,alias:"string",inside:{"language-markdown":{pattern:/(^"(?:"")?)(?!\1)[\s\S]+(?=\1$)/,lookbehind:!0,inside:Prism.languages.markdown}}},string:{pattern:/"""(?:[^"]|(?!""")")*"""|"(?:\\.|[^\\"\r\n])*"/,greedy:!0},number:/(?:\B-|\b)\d+(?:\.\d+)?(?:e[+-]?\d+)?\b/i,boolean:/\b(?:false|true)\b/,variable:/\$[a-z_]\w*/i,directive:{pattern:/@[a-z_]\w*/i,alias:"function"},"attr-name":{pattern:/\b[a-z_]\w*(?=\s*(?:\((?:[^()"]|"(?:\\.|[^\\"\r\n])*")*\))?:)/i,greedy:!0},"atom-input":{pattern:/\b[A-Z]\w*Input\b/,alias:"class-name"},scalar:/\b(?:Boolean|Float|ID|Int|String)\b/,constant:/\b[A-Z][A-Z_\d]*\b/,"class-name":{pattern:/(\b(?:enum|implements|interface|on|scalar|type|union)\s+|&\s*|:\s*|\[)[A-Z_]\w*/,lookbehind:!0},fragment:{pattern:/(\bfragment\s+|\.{3}\s*(?!on\b))[a-zA-Z_]\w*/,lookbehind:!0,alias:"function"},"definition-mutation":{pattern:/(\bmutation\s+)[a-zA-Z_]\w*/,lookbehind:!0,alias:"function"},"definition-query":{pattern:/(\bquery\s+)[a-zA-Z_]\w*/,lookbehind:!0,alias:"function"},keyword:/\b(?:directive|enum|extend|fragment|implements|input|interface|mutation|on|query|repeatable|scalar|schema|subscription|type|union)\b/,operator:/[!=|&]|\.{3}/,"property-query":/\w+(?=\s*\()/,object:/\w+(?=\s*\{)/,punctuation:/[!(){}\[\]:=,]/,property:/\w+/},Prism.hooks.add("after-tokenize",(function(e){if("graphql"===e.language)for(var t=e.tokens.filter((function(e){return"string"!==typeof e&&"comment"!==e.type&&"scalar"!==e.type})),n=0;n0)){var a=h(/^\{$/,/^\}$/);if(-1===a)continue;for(var l=n;l=0&&p(c,"variable-input")}}}}function u(e){return t[n+e]}function d(e,t){t=t||0;for(var n=0;n]?|\+[+=]?|!=?|<(?:<=?|=>?)?|>(?:>>?=?|=)?|&[&=]?|\|[|=]?|\/=?|\^=?|%=?)/,lookbehind:!0},punctuation:/\.+|[{}[\];(),:$]/}),e.languages.insertBefore("groovy","string",{shebang:{pattern:/#!.+/,alias:"comment",greedy:!0},"interpolation-string":{pattern:/"""(?:[^\\]|\\[\s\S])*?"""|(["/])(?:\\.|(?!\1)[^\\\r\n])*\1|\$\/(?:[^/$]|\$(?:[/$]|(?![/$]))|\/(?!\$))*\/\$/,greedy:!0,inside:{interpolation:t,string:/[\s\S]+/}}}),e.languages.insertBefore("groovy","punctuation",{"spock-block":/\b(?:and|cleanup|expect|given|setup|then|when|where):/}),e.languages.insertBefore("groovy","function",{annotation:{pattern:/(^|[^.])@\w+/,lookbehind:!0,alias:"punctuation"}}),t.inside.expression.inside=e.languages.groovy})(Prism)},9181:function(){(function(e){e.languages.haml={"multiline-comment":{pattern:/((?:^|\r?\n|\r)([\t ]*))(?:\/|-#).*(?:(?:\r?\n|\r)\2[\t ].+)*/,lookbehind:!0,alias:"comment"},"multiline-code":[{pattern:/((?:^|\r?\n|\r)([\t ]*)(?:[~-]|[&!]?=)).*,[\t ]*(?:(?:\r?\n|\r)\2[\t ].*,[\t ]*)*(?:(?:\r?\n|\r)\2[\t ].+)/,lookbehind:!0,inside:e.languages.ruby},{pattern:/((?:^|\r?\n|\r)([\t ]*)(?:[~-]|[&!]?=)).*\|[\t ]*(?:(?:\r?\n|\r)\2[\t ].*\|[\t ]*)*/,lookbehind:!0,inside:e.languages.ruby}],filter:{pattern:/((?:^|\r?\n|\r)([\t ]*)):[\w-]+(?:(?:\r?\n|\r)(?:\2[\t ].+|\s*?(?=\r?\n|\r)))+/,lookbehind:!0,inside:{"filter-name":{pattern:/^:[\w-]+/,alias:"symbol"}}},markup:{pattern:/((?:^|\r?\n|\r)[\t ]*)<.+/,lookbehind:!0,inside:e.languages.markup},doctype:{pattern:/((?:^|\r?\n|\r)[\t ]*)!!!(?: .+)?/,lookbehind:!0},tag:{pattern:/((?:^|\r?\n|\r)[\t ]*)[%.#][\w\-#.]*[\w\-](?:\([^)]+\)|\{(?:\{[^}]+\}|[^{}])+\}|\[[^\]]+\])*[\/<>]*/,lookbehind:!0,inside:{attributes:[{pattern:/(^|[^#])\{(?:\{[^}]+\}|[^{}])+\}/,lookbehind:!0,inside:e.languages.ruby},{pattern:/\([^)]+\)/,inside:{"attr-value":{pattern:/(=\s*)(?:"(?:\\.|[^\\"\r\n])*"|[^)\s]+)/,lookbehind:!0},"attr-name":/[\w:-]+(?=\s*!?=|\s*[,)])/,punctuation:/[=(),]/}},{pattern:/\[[^\]]+\]/,inside:e.languages.ruby}],punctuation:/[<>]/}},code:{pattern:/((?:^|\r?\n|\r)[\t ]*(?:[~-]|[&!]?=)).+/,lookbehind:!0,inside:e.languages.ruby},interpolation:{pattern:/#\{[^}]+\}/,inside:{delimiter:{pattern:/^#\{|\}$/,alias:"punctuation"},ruby:{pattern:/[\s\S]+/,inside:e.languages.ruby}}},punctuation:{pattern:/((?:^|\r?\n|\r)[\t ]*)[~=\-&!]+/,lookbehind:!0}};for(var t="((?:^|\\r?\\n|\\r)([\\t ]*)):{{filter_name}}(?:(?:\\r?\\n|\\r)(?:\\2[\\t ].+|\\s*?(?=\\r?\\n|\\r)))+",n=["css",{filter:"coffee",language:"coffeescript"},"erb","javascript","less","markdown","ruby","scss","textile"],r={},i=0,s=n.length;i@\[\\\]^`{|}~]/,variable:/[^!"#%&'()*+,\/;<=>@\[\\\]^`{|}~\s]+/},e.hooks.add("before-tokenize",(function(t){var n=/\{\{\{[\s\S]+?\}\}\}|\{\{[\s\S]+?\}\}/g;e.languages["markup-templating"].buildPlaceholders(t,"handlebars",n)})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"handlebars")})),e.languages.hbs=e.languages.handlebars,e.languages.mustache=e.languages.handlebars})(Prism)},1295:function(){Prism.languages.haskell={comment:{pattern:/(^|[^-!#$%*+=?&@|~.:<>^\\\/])(?:--(?:(?=.)[^-!#$%*+=?&@|~.:<>^\\\/].*|$)|\{-[\s\S]*?-\})/m,lookbehind:!0},char:{pattern:/'(?:[^\\']|\\(?:[abfnrtv\\"'&]|\^[A-Z@[\]^_]|ACK|BEL|BS|CAN|CR|DC1|DC2|DC3|DC4|DEL|DLE|EM|ENQ|EOT|ESC|ETB|ETX|FF|FS|GS|HT|LF|NAK|NUL|RS|SI|SO|SOH|SP|STX|SUB|SYN|US|VT|\d+|o[0-7]+|x[0-9a-fA-F]+))'/,alias:"string"},string:{pattern:/"(?:[^\\"]|\\(?:\S|\s+\\))*"/,greedy:!0},keyword:/\b(?:case|class|data|deriving|do|else|if|in|infixl|infixr|instance|let|module|newtype|of|primitive|then|type|where)\b/,"import-statement":{pattern:/(^[\t ]*)import\s+(?:qualified\s+)?(?:[A-Z][\w']*)(?:\.[A-Z][\w']*)*(?:\s+as\s+(?:[A-Z][\w']*)(?:\.[A-Z][\w']*)*)?(?:\s+hiding\b)?/m,lookbehind:!0,inside:{keyword:/\b(?:as|hiding|import|qualified)\b/,punctuation:/\./}},builtin:/\b(?:abs|acos|acosh|all|and|any|appendFile|approxRational|asTypeOf|asin|asinh|atan|atan2|atanh|basicIORun|break|catch|ceiling|chr|compare|concat|concatMap|const|cos|cosh|curry|cycle|decodeFloat|denominator|digitToInt|div|divMod|drop|dropWhile|either|elem|encodeFloat|enumFrom|enumFromThen|enumFromThenTo|enumFromTo|error|even|exp|exponent|fail|filter|flip|floatDigits|floatRadix|floatRange|floor|fmap|foldl|foldl1|foldr|foldr1|fromDouble|fromEnum|fromInt|fromInteger|fromIntegral|fromRational|fst|gcd|getChar|getContents|getLine|group|head|id|inRange|index|init|intToDigit|interact|ioError|isAlpha|isAlphaNum|isAscii|isControl|isDenormalized|isDigit|isHexDigit|isIEEE|isInfinite|isLower|isNaN|isNegativeZero|isOctDigit|isPrint|isSpace|isUpper|iterate|last|lcm|length|lex|lexDigits|lexLitChar|lines|log|logBase|lookup|map|mapM|mapM_|max|maxBound|maximum|maybe|min|minBound|minimum|mod|negate|not|notElem|null|numerator|odd|or|ord|otherwise|pack|pi|pred|primExitWith|print|product|properFraction|putChar|putStr|putStrLn|quot|quotRem|range|rangeSize|read|readDec|readFile|readFloat|readHex|readIO|readInt|readList|readLitChar|readLn|readOct|readParen|readSigned|reads|readsPrec|realToFrac|recip|rem|repeat|replicate|return|reverse|round|scaleFloat|scanl|scanl1|scanr|scanr1|seq|sequence|sequence_|show|showChar|showInt|showList|showLitChar|showParen|showSigned|showString|shows|showsPrec|significand|signum|sin|sinh|snd|sort|span|splitAt|sqrt|subtract|succ|sum|tail|take|takeWhile|tan|tanh|threadToIOResult|toEnum|toInt|toInteger|toLower|toRational|toUpper|truncate|uncurry|undefined|unlines|until|unwords|unzip|unzip3|userError|words|writeFile|zip|zip3|zipWith|zipWith3)\b/,number:/\b(?:\d+(?:\.\d+)?(?:e[+-]?\d+)?|0o[0-7]+|0x[0-9a-f]+)\b/i,operator:[{pattern:/`(?:[A-Z][\w']*\.)*[_a-z][\w']*`/,greedy:!0},{pattern:/(\s)\.(?=\s)/,lookbehind:!0},/[-!#$%*+=?&@|~:<>^\\\/][-!#$%*+=?&@|~.:<>^\\\/]*|\.[-!#$%*+=?&@|~.:<>^\\\/]+/],hvariable:{pattern:/\b(?:[A-Z][\w']*\.)*[_a-z][\w']*/,inside:{punctuation:/\./}},constant:{pattern:/\b(?:[A-Z][\w']*\.)*[A-Z][\w']*/,inside:{punctuation:/\./}},punctuation:/[{}[\];(),.:]/},Prism.languages.hs=Prism.languages.haskell},4324:function(){Prism.languages.haxe=Prism.languages.extend("clike",{string:{pattern:/"(?:[^"\\]|\\[\s\S])*"/,greedy:!0},"class-name":[{pattern:/(\b(?:abstract|class|enum|extends|implements|interface|new|typedef)\s+)[A-Z_]\w*/,lookbehind:!0},/\b[A-Z]\w*/],keyword:/\bthis\b|\b(?:abstract|as|break|case|cast|catch|class|continue|default|do|dynamic|else|enum|extends|extern|final|for|from|function|if|implements|import|in|inline|interface|macro|new|null|operator|overload|override|package|private|public|return|static|super|switch|throw|to|try|typedef|untyped|using|var|while)(?!\.)\b/,function:{pattern:/\b[a-z_]\w*(?=\s*(?:<[^<>]*>\s*)?\()/i,greedy:!0},operator:/\.{3}|\+\+|--|&&|\|\||->|=>|(?:<{1,3}|[-+*/%!=&|^])=?|[?:~]/}),Prism.languages.insertBefore("haxe","string",{"string-interpolation":{pattern:/'(?:[^'\\]|\\[\s\S])*'/,greedy:!0,inside:{interpolation:{pattern:/(^|[^\\])\$(?:\w+|\{[^{}]+\})/,lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^\$\{?|\}$/,alias:"punctuation"},expression:{pattern:/[\s\S]+/,inside:Prism.languages.haxe}}},string:/[\s\S]+/}}}),Prism.languages.insertBefore("haxe","class-name",{regex:{pattern:/~\/(?:[^\/\\\r\n]|\\.)+\/[a-z]*/,greedy:!0,inside:{"regex-flags":/\b[a-z]+$/,"regex-source":{pattern:/^(~\/)[\s\S]+(?=\/$)/,lookbehind:!0,alias:"language-regex",inside:Prism.languages.regex},"regex-delimiter":/^~\/|\/$/}}}),Prism.languages.insertBefore("haxe","keyword",{preprocessor:{pattern:/#(?:else|elseif|end|if)\b.*/,alias:"property"},metadata:{pattern:/@:?[\w.]+/,alias:"symbol"},reification:{pattern:/\$(?:\w+|(?=\{))/,alias:"important"}})},9337:function(){Prism.languages.hcl={comment:/(?:\/\/|#).*|\/\*[\s\S]*?(?:\*\/|$)/,heredoc:{pattern:/<<-?(\w+\b)[\s\S]*?^[ \t]*\1/m,greedy:!0,alias:"string"},keyword:[{pattern:/(?:data|resource)\s+(?:"(?:\\[\s\S]|[^\\"])*")(?=\s+"[\w-]+"\s+\{)/i,inside:{type:{pattern:/(resource|data|\s+)(?:"(?:\\[\s\S]|[^\\"])*")/i,lookbehind:!0,alias:"variable"}}},{pattern:/(?:backend|module|output|provider|provisioner|variable)\s+(?:[\w-]+|"(?:\\[\s\S]|[^\\"])*")\s+(?=\{)/i,inside:{type:{pattern:/(backend|module|output|provider|provisioner|variable)\s+(?:[\w-]+|"(?:\\[\s\S]|[^\\"])*")\s+/i,lookbehind:!0,alias:"variable"}}},/[\w-]+(?=\s+\{)/],property:[/[-\w\.]+(?=\s*=(?!=))/,/"(?:\\[\s\S]|[^\\"])+"(?=\s*[:=])/],string:{pattern:/"(?:[^\\$"]|\\[\s\S]|\$(?:(?=")|\$+(?!\$)|[^"${])|\$\{(?:[^{}"]|"(?:[^\\"]|\\[\s\S])*")*\})*"/,greedy:!0,inside:{interpolation:{pattern:/(^|[^$])\$\{(?:[^{}"]|"(?:[^\\"]|\\[\s\S])*")*\}/,lookbehind:!0,inside:{type:{pattern:/(\b(?:count|data|local|module|path|self|terraform|var)\b\.)[\w\*]+/i,lookbehind:!0,alias:"variable"},keyword:/\b(?:count|data|local|module|path|self|terraform|var)\b/i,function:/\w+(?=\()/,string:{pattern:/"(?:\\[\s\S]|[^\\"])*"/,greedy:!0},number:/\b0x[\da-f]+\b|\b\d+(?:\.\d*)?(?:e[+-]?\d+)?/i,punctuation:/[!\$#%&'()*+,.\/;<=>@\[\\\]^`{|}~?:]/}}}},number:/\b0x[\da-f]+\b|\b\d+(?:\.\d*)?(?:e[+-]?\d+)?/i,boolean:/\b(?:false|true)\b/i,punctuation:/[=\[\]{}]/}},5578:function(){Prism.languages.hlsl=Prism.languages.extend("c",{"class-name":[Prism.languages.c["class-name"],/\b(?:AppendStructuredBuffer|BlendState|Buffer|ByteAddressBuffer|CompileShader|ComputeShader|ConsumeStructuredBuffer|DepthStencilState|DepthStencilView|DomainShader|GeometryShader|Hullshader|InputPatch|LineStream|OutputPatch|PixelShader|PointStream|RWBuffer|RWByteAddressBuffer|RWStructuredBuffer|RWTexture(?:1D|1DArray|2D|2DArray|3D)|RasterizerState|RenderTargetView|SamplerComparisonState|SamplerState|StructuredBuffer|Texture(?:1D|1DArray|2D|2DArray|2DMS|2DMSArray|3D|Cube|CubeArray)|TriangleStream|VertexShader)\b/],keyword:[/\b(?:asm|asm_fragment|auto|break|case|catch|cbuffer|centroid|char|class|column_major|compile|compile_fragment|const|const_cast|continue|default|delete|discard|do|dynamic_cast|else|enum|explicit|export|extern|for|friend|fxgroup|goto|groupshared|if|in|inline|inout|interface|line|lineadj|linear|long|matrix|mutable|namespace|new|nointerpolation|noperspective|operator|out|packoffset|pass|pixelfragment|point|precise|private|protected|public|register|reinterpret_cast|return|row_major|sample|sampler|shared|short|signed|sizeof|snorm|stateblock|stateblock_state|static|static_cast|string|struct|switch|tbuffer|technique|technique10|technique11|template|texture|this|throw|triangle|triangleadj|try|typedef|typename|uniform|union|unorm|unsigned|using|vector|vertexfragment|virtual|void|volatile|while)\b/,/\b(?:bool|double|dword|float|half|int|min(?:10float|12int|16(?:float|int|uint))|uint)(?:[1-4](?:x[1-4])?)?\b/],number:/(?:(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[eE][+-]?\d+)?|\b0x[\da-fA-F]+)[fFhHlLuU]?\b/,boolean:/\b(?:false|true)\b/})},8161:function(){Prism.languages.hoon={comment:{pattern:/::.*/,greedy:!0},string:{pattern:/"(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*'/,greedy:!0},constant:/%(?:\.[ny]|[\w-]+)/,"class-name":/@(?:[a-z0-9-]*[a-z0-9])?|\*/i,function:/(?:\+[-+] {2})?(?:[a-z](?:[a-z0-9-]*[a-z0-9])?)/,keyword:/\.[\^\+\*=\?]|![><:\.=\?!]|=[>|:,\.\-\^<+;/~\*\?]|\?[>|:\.\-\^<\+&~=@!]|\|[\$_%:\.\-\^~\*=@\?]|\+[|\$\+\*]|:[_\-\^\+~\*]|%[_:\.\-\^\+~\*=]|\^[|:\.\-\+&~\*=\?]|\$[|_%:<>\-\^&~@=\?]|;[:<\+;\/~\*=]|~[>|\$_%<\+\/&=\?!]|--|==/}},6203:function(){Prism.languages.hpkp={directive:{pattern:/\b(?:includeSubDomains|max-age|pin-sha256|preload|report-to|report-uri|strict)(?=[\s;=]|$)/i,alias:"property"},operator:/=/,punctuation:/;/}},7786:function(){Prism.languages.hsts={directive:{pattern:/\b(?:includeSubDomains|max-age|preload)(?=[\s;=]|$)/i,alias:"property"},operator:/=/,punctuation:/;/}},57:function(){(function(e){function t(e){return RegExp("(^(?:"+e+"):[ \t]*(?![ \t]))[^]+","i")}e.languages.http={"request-line":{pattern:/^(?:CONNECT|DELETE|GET|HEAD|OPTIONS|PATCH|POST|PRI|PUT|SEARCH|TRACE)\s(?:https?:\/\/|\/)\S*\sHTTP\/[\d.]+/m,inside:{method:{pattern:/^[A-Z]+\b/,alias:"property"},"request-target":{pattern:/^(\s)(?:https?:\/\/|\/)\S*(?=\s)/,lookbehind:!0,alias:"url",inside:e.languages.uri},"http-version":{pattern:/^(\s)HTTP\/[\d.]+/,lookbehind:!0,alias:"property"}}},"response-status":{pattern:/^HTTP\/[\d.]+ \d+ .+/m,inside:{"http-version":{pattern:/^HTTP\/[\d.]+/,alias:"property"},"status-code":{pattern:/^(\s)\d+(?=\s)/,lookbehind:!0,alias:"number"},"reason-phrase":{pattern:/^(\s).+/,lookbehind:!0,alias:"string"}}},header:{pattern:/^[\w-]+:.+(?:(?:\r\n?|\n)[ \t].+)*/m,inside:{"header-value":[{pattern:t(/Content-Security-Policy/.source),lookbehind:!0,alias:["csp","languages-csp"],inside:e.languages.csp},{pattern:t(/Public-Key-Pins(?:-Report-Only)?/.source),lookbehind:!0,alias:["hpkp","languages-hpkp"],inside:e.languages.hpkp},{pattern:t(/Strict-Transport-Security/.source),lookbehind:!0,alias:["hsts","languages-hsts"],inside:e.languages.hsts},{pattern:t(/[^:]+/.source),lookbehind:!0}],"header-name":{pattern:/^[^:]+/,alias:"keyword"},punctuation:/^:/}}};var n,r=e.languages,i={"application/javascript":r.javascript,"application/json":r.json||r.javascript,"application/xml":r.xml,"text/xml":r.xml,"text/html":r.html,"text/css":r.css,"text/plain":r.plain},s={"application/json":!0,"application/xml":!0};function o(e){var t=e.replace(/^[a-z]+\//,""),n="\\w+/(?:[\\w.-]+\\+)+"+t+"(?![+\\w.-])";return"(?:"+e+"|"+n+")"}for(var a in i)if(i[a]){n=n||{};var l=s[a]?o(a):a;n[a.replace(/\//g,"-")]={pattern:RegExp("("+/content-type:\s*/.source+l+/(?:(?:\r\n?|\n)[\w-].*)*(?:\r(?:\n|(?!\n))|\n)/.source+")"+/[^ \t\w-][\s\S]*/.source,"i"),lookbehind:!0,inside:i[a]}}n&&e.languages.insertBefore("http","header",n)})(Prism)},7460:function(){Prism.languages.ichigojam={comment:/(?:\B'|REM)(?:[^\n\r]*)/i,string:{pattern:/"(?:""|[!#$%&'()*,\/:;<=>?^\w +\-.])*"/,greedy:!0},number:/\B#[0-9A-F]+|\B`[01]+|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:E[+-]?\d+)?/i,keyword:/\b(?:BEEP|BPS|CASE|CLEAR|CLK|CLO|CLP|CLS|CLT|CLV|CONT|COPY|ELSE|END|FILE|FILES|FOR|GOSUB|GOTO|GSB|IF|INPUT|KBD|LED|LET|LIST|LOAD|LOCATE|LRUN|NEW|NEXT|OUT|PLAY|POKE|PRINT|PWM|REM|RENUM|RESET|RETURN|RIGHT|RTN|RUN|SAVE|SCROLL|SLEEP|SRND|STEP|STOP|SUB|TEMPO|THEN|TO|UART|VIDEO|WAIT)(?:\$|\b)/i,function:/\b(?:ABS|ANA|ASC|BIN|BTN|DEC|END|FREE|HELP|HEX|I2CR|I2CW|IN|INKEY|LEN|LINE|PEEK|RND|SCR|SOUND|STR|TICK|USR|VER|VPEEK|ZER)(?:\$|\b)/i,label:/(?:\B@\S+)/,operator:/<[=>]?|>=?|\|\||&&|[+\-*\/=|&^~!]|\b(?:AND|NOT|OR)\b/i,punctuation:/[\[,;:()\]]/}},4263:function(){Prism.languages.icon={comment:/#.*/,string:{pattern:/(["'])(?:(?!\1)[^\\\r\n_]|\\.|_(?!\1)(?:\r\n|[\s\S]))*\1/,greedy:!0},number:/\b(?:\d+r[a-z\d]+|\d+(?:\.\d+)?(?:e[+-]?\d+)?)\b|\.\d+\b/i,"builtin-keyword":{pattern:/&(?:allocated|ascii|clock|collections|cset|current|date|dateline|digits|dump|e|error(?:number|text|value)?|errout|fail|features|file|host|input|lcase|letters|level|line|main|null|output|phi|pi|pos|progname|random|regions|source|storage|subject|time|trace|ucase|version)\b/,alias:"variable"},directive:{pattern:/\$\w+/,alias:"builtin"},keyword:/\b(?:break|by|case|create|default|do|else|end|every|fail|global|if|initial|invocable|link|local|next|not|of|procedure|record|repeat|return|static|suspend|then|to|until|while)\b/,function:/\b(?!\d)\w+(?=\s*[({]|\s*!\s*\[)/,operator:/[+-]:(?!=)|(?:[\/?@^%&]|\+\+?|--?|==?=?|~==?=?|\*\*?|\|\|\|?|<(?:->?|>?=?)(?::=)?|:(?:=:?)?|[!.\\|~]/,punctuation:/[\[\](){},;]/}},175:function(){(function(e){function t(e,n){return n<=0?/[]/.source:e.replace(//g,(function(){return t(e,n-1)}))}var n=/'[{}:=,](?:[^']|'')*'(?!')/,r={pattern:/''/,greedy:!0,alias:"operator"},i={pattern:n,greedy:!0,inside:{escape:r}},s=t(/\{(?:[^{}']|'(?![{},'])|''||)*\}/.source.replace(//g,(function(){return n.source})),8),o={pattern:RegExp(s),inside:{message:{pattern:/^(\{)[\s\S]+(?=\}$)/,lookbehind:!0,inside:null},"message-delimiter":{pattern:/./,alias:"punctuation"}}};e.languages["icu-message-format"]={argument:{pattern:RegExp(s),greedy:!0,inside:{content:{pattern:/^(\{)[\s\S]+(?=\}$)/,lookbehind:!0,inside:{"argument-name":{pattern:/^(\s*)[^{}:=,\s]+/,lookbehind:!0},"choice-style":{pattern:/^(\s*,\s*choice\s*,\s*)\S(?:[\s\S]*\S)?/,lookbehind:!0,inside:{punctuation:/\|/,range:{pattern:/^(\s*)[+-]?(?:\d+(?:\.\d*)?|\u221e)\s*[<#\u2264]/,lookbehind:!0,inside:{operator:/[<#\u2264]/,number:/\S+/}},rest:null}},"plural-style":{pattern:/^(\s*,\s*(?:plural|selectordinal)\s*,\s*)\S(?:[\s\S]*\S)?/,lookbehind:!0,inside:{offset:/^offset:\s*\d+/,"nested-message":o,selector:{pattern:/=\d+|[^{}:=,\s]+/,inside:{keyword:/^(?:few|many|one|other|two|zero)$/}}}},"select-style":{pattern:/^(\s*,\s*select\s*,\s*)\S(?:[\s\S]*\S)?/,lookbehind:!0,inside:{"nested-message":o,selector:{pattern:/[^{}:=,\s]+/,inside:{keyword:/^other$/}}}},keyword:/\b(?:choice|plural|select|selectordinal)\b/,"arg-type":{pattern:/\b(?:date|duration|number|ordinal|spellout|time)\b/,alias:"keyword"},"arg-skeleton":{pattern:/(,\s*)::[^{}:=,\s]+/,lookbehind:!0},"arg-style":{pattern:/(,\s*)(?:currency|full|integer|long|medium|percent|short)(?=\s*$)/,lookbehind:!0},"arg-style-text":{pattern:RegExp(/(^\s*,\s*(?=\S))/.source+t(/(?:[^{}']|'[^']*'|\{(?:)?\})+/.source,8)+"$"),lookbehind:!0,alias:"string"},punctuation:/,/}},"argument-delimiter":{pattern:/./,alias:"operator"}}},escape:r,string:i},o.inside.message.inside=e.languages["icu-message-format"],e.languages["icu-message-format"].argument.inside.content.inside["choice-style"].inside.rest=e.languages["icu-message-format"]})(Prism)},6150:function(){Prism.languages.idris=Prism.languages.extend("haskell",{comment:{pattern:/(?:(?:--|\|\|\|).*$|\{-[\s\S]*?-\})/m},keyword:/\b(?:Type|case|class|codata|constructor|corecord|data|do|dsl|else|export|if|implementation|implicit|import|impossible|in|infix|infixl|infixr|instance|interface|let|module|mutual|namespace|of|parameters|partial|postulate|private|proof|public|quoteGoal|record|rewrite|syntax|then|total|using|where|with)\b/,builtin:void 0}),Prism.languages.insertBefore("idris","keyword",{"import-statement":{pattern:/(^\s*import\s+)(?:[A-Z][\w']*)(?:\.[A-Z][\w']*)*/m,lookbehind:!0,inside:{punctuation:/\./}}}),Prism.languages.idr=Prism.languages.idris},5689:function(){Prism.languages.iecst={comment:[{pattern:/(^|[^\\])(?:\/\*[\s\S]*?(?:\*\/|$)|\(\*[\s\S]*?(?:\*\)|$)|\{[\s\S]*?(?:\}|$))/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},keyword:[/\b(?:END_)?(?:PROGRAM|CONFIGURATION|INTERFACE|FUNCTION_BLOCK|FUNCTION|ACTION|TRANSITION|TYPE|STRUCT|(?:INITIAL_)?STEP|NAMESPACE|LIBRARY|CHANNEL|FOLDER|RESOURCE|VAR_(?:ACCESS|CONFIG|EXTERNAL|GLOBAL|INPUT|IN_OUT|OUTPUT|TEMP)|VAR|METHOD|PROPERTY)\b/i,/\b(?:AT|BY|(?:END_)?(?:CASE|FOR|IF|REPEAT|WHILE)|CONSTANT|CONTINUE|DO|ELSE|ELSIF|EXIT|EXTENDS|FROM|GET|GOTO|IMPLEMENTS|JMP|NON_RETAIN|OF|PRIVATE|PROTECTED|PUBLIC|RETAIN|RETURN|SET|TASK|THEN|TO|UNTIL|USING|WITH|__CATCH|__ENDTRY|__FINALLY|__TRY)\b/],"class-name":/\b(?:ANY|ARRAY|BOOL|BYTE|U?(?:D|L|S)?INT|(?:D|L)?WORD|DATE(?:_AND_TIME)?|DT|L?REAL|POINTER|STRING|TIME(?:_OF_DAY)?|TOD)\b/,address:{pattern:/%[IQM][XBWDL][\d.]*|%[IQ][\d.]*/,alias:"symbol"},number:/\b(?:16#[\da-f]+|2#[01_]+|0x[\da-f]+)\b|\b(?:D|DT|T|TOD)#[\d_shmd:]*|\b[A-Z]*#[\d.,_]*|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?/i,boolean:/\b(?:FALSE|NULL|TRUE)\b/,operator:/S?R?:?=>?|&&?|\*\*?|<[=>]?|>=?|[-:^/+#]|\b(?:AND|EQ|EXPT|GE|GT|LE|LT|MOD|NE|NOT|OR|XOR)\b/,function:/\b[a-z_]\w*(?=\s*\()/i,punctuation:/[()[\].,;]/}},880:function(){(function(e){e.languages.ignore={comment:/^#.*/m,entry:{pattern:/\S(?:.*(?:(?:\\ )|\S))?/,alias:"string",inside:{operator:/^!|\*\*?|\?/,regex:{pattern:/(^|[^\\])\[[^\[\]]*\]/,lookbehind:!0},punctuation:/\//}}},e.languages.gitignore=e.languages.ignore,e.languages.hgignore=e.languages.ignore,e.languages.npmignore=e.languages.ignore})(Prism)},6521:function(){Prism.languages.inform7={string:{pattern:/"[^"]*"/,inside:{substitution:{pattern:/\[[^\[\]]+\]/,inside:{delimiter:{pattern:/\[|\]/,alias:"punctuation"}}}}},comment:{pattern:/\[[^\[\]]+\]/,greedy:!0},title:{pattern:/^[ \t]*(?:book|chapter|part(?! of)|section|table|volume)\b.+/im,alias:"important"},number:{pattern:/(^|[^-])(?:\b\d+(?:\.\d+)?(?:\^\d+)?(?:(?!\d)\w+)?|\b(?:eight|eleven|five|four|nine|one|seven|six|ten|three|twelve|two))\b(?!-)/i,lookbehind:!0},verb:{pattern:/(^|[^-])\b(?:answering|applying to|are|asking|attacking|be(?:ing)?|burning|buying|called|carries|carry(?! out)|carrying|climbing|closing|conceal(?:ing|s)?|consulting|contain(?:ing|s)?|cutting|drinking|dropping|eating|enclos(?:es?|ing)|entering|examining|exiting|getting|giving|going|ha(?:s|ve|ving)|hold(?:ing|s)?|impl(?:ies|y)|incorporat(?:es?|ing)|inserting|is|jumping|kissing|listening|locking|looking|mean(?:ing|s)?|opening|provid(?:es?|ing)|pulling|pushing|putting|relat(?:es?|ing)|removing|searching|see(?:ing|s)?|setting|showing|singing|sleeping|smelling|squeezing|support(?:ing|s)?|swearing|switching|taking|tasting|telling|thinking|throwing|touching|turning|tying|unlock(?:ing|s)?|var(?:ies|y|ying)|waiting|waking|waving|wear(?:ing|s)?)\b(?!-)/i,lookbehind:!0,alias:"operator"},keyword:{pattern:/(^|[^-])\b(?:after|before|carry out|check|continue the action|definition(?= *:)|do nothing|else|end (?:if|the story|unless)|every turn|if|include|instead(?: of)?|let|move|no|now|otherwise|repeat|report|resume the story|rule for|running through|say(?:ing)?|stop the action|test|try(?:ing)?|understand|unless|use|when|while|yes)\b(?!-)/i,lookbehind:!0},property:{pattern:/(^|[^-])\b(?:adjacent(?! to)|carried|closed|concealed|contained|dark|described|edible|empty|enclosed|enterable|even|female|fixed in place|full|handled|held|improper-named|incorporated|inedible|invisible|lighted|lit|lock(?:able|ed)|male|marked for listing|mentioned|negative|neuter|non-(?:empty|full|recurring)|odd|opaque|open(?:able)?|plural-named|portable|positive|privately-named|proper-named|provided|publically-named|pushable between rooms|recurring|related|rubbing|scenery|seen|singular-named|supported|swinging|switch(?:able|ed(?: off| on)?)|touch(?:able|ed)|transparent|unconcealed|undescribed|unlit|unlocked|unmarked for listing|unmentioned|unopenable|untouchable|unvisited|variable|visible|visited|wearable|worn)\b(?!-)/i,lookbehind:!0,alias:"symbol"},position:{pattern:/(^|[^-])\b(?:above|adjacent to|back side of|below|between|down|east|everywhere|front side|here|in|inside(?: from)?|north(?:east|west)?|nowhere|on(?: top of)?|other side|outside(?: from)?|parts? of|regionally in|south(?:east|west)?|through|up|west|within)\b(?!-)/i,lookbehind:!0,alias:"keyword"},type:{pattern:/(^|[^-])\b(?:actions?|activit(?:ies|y)|actors?|animals?|backdrops?|containers?|devices?|directions?|doors?|holders?|kinds?|lists?|m[ae]n|nobody|nothing|nouns?|numbers?|objects?|people|persons?|player(?:'s holdall)?|regions?|relations?|rooms?|rule(?:book)?s?|scenes?|someone|something|supporters?|tables?|texts?|things?|time|vehicles?|wom[ae]n)\b(?!-)/i,lookbehind:!0,alias:"variable"},punctuation:/[.,:;(){}]/},Prism.languages.inform7["string"].inside["substitution"].inside.rest=Prism.languages.inform7,Prism.languages.inform7["string"].inside["substitution"].inside.rest.text={pattern:/\S(?:\s*\S)*/,alias:"comment"}},9525:function(){Prism.languages.ini={comment:{pattern:/(^[ \f\t\v]*)[#;][^\n\r]*/m,lookbehind:!0},section:{pattern:/(^[ \f\t\v]*)\[[^\n\r\]]*\]?/m,lookbehind:!0,inside:{"section-name":{pattern:/(^\[[ \f\t\v]*)[^ \f\t\v\]]+(?:[ \f\t\v]+[^ \f\t\v\]]+)*/,lookbehind:!0,alias:"selector"},punctuation:/\[|\]/}},key:{pattern:/(^[ \f\t\v]*)[^ \f\n\r\t\v=]+(?:[ \f\t\v]+[^ \f\n\r\t\v=]+)*(?=[ \f\t\v]*=)/m,lookbehind:!0,alias:"attr-name"},value:{pattern:/(=[ \f\t\v]*)[^ \f\n\r\t\v]+(?:[ \f\t\v]+[^ \f\n\r\t\v]+)*/,lookbehind:!0,alias:"attr-value",inside:{"inner-value":{pattern:/^("|').+(?=\1$)/,lookbehind:!0}}},punctuation:/=/}},8942:function(){Prism.languages.io={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?(?:\*\/|$)|\/\/.*|#.*)/,lookbehind:!0,greedy:!0},"triple-quoted-string":{pattern:/"""(?:\\[\s\S]|(?!""")[^\\])*"""/,greedy:!0,alias:"string"},string:{pattern:/"(?:\\.|[^\\\r\n"])*"/,greedy:!0},keyword:/\b(?:activate|activeCoroCount|asString|block|break|call|catch|clone|collectGarbage|compileString|continue|do|doFile|doMessage|doString|else|elseif|exit|for|foreach|forward|getEnvironmentVariable|getSlot|hasSlot|if|ifFalse|ifNil|ifNilEval|ifTrue|isActive|isNil|isResumable|list|message|method|parent|pass|pause|perform|performWithArgList|print|println|proto|raise|raiseResumable|removeSlot|resend|resume|schedulerSleepSeconds|self|sender|setSchedulerSleepSeconds|setSlot|shallowCopy|slotNames|super|system|then|thisBlock|thisContext|try|type|uniqueId|updateSlot|wait|while|write|yield)\b/,builtin:/\b(?:Array|AudioDevice|AudioMixer|BigNum|Block|Box|Buffer|CFunction|CGI|Color|Curses|DBM|DNSResolver|DOConnection|DOProxy|DOServer|Date|Directory|Duration|DynLib|Error|Exception|FFT|File|Fnmatch|Font|Future|GL|GLE|GLScissor|GLU|GLUCylinder|GLUQuadric|GLUSphere|GLUT|Host|Image|Importer|LinkList|List|Lobby|Locals|MD5|MP3Decoder|MP3Encoder|Map|Message|Movie|Notification|Number|Object|OpenGL|Point|Protos|Random|Regex|SGML|SGMLElement|SGMLParser|SQLite|Sequence|Server|ShowMessage|SleepyCat|SleepyCatCursor|Socket|SocketManager|Sound|Soup|Store|String|Tree|UDPSender|UPDReceiver|URL|User|Warning|WeakLink)\b/,boolean:/\b(?:false|nil|true)\b/,number:/\b0x[\da-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e-?\d+)?/i,operator:/[=!*/%+\-^&|]=|>>?=?|<+*\-%$|,#][.:]?|[?^]\.?|[;\[]:?|[~}"i][.:]|[ACeEIjLor]\.|(?:[_\/\\qsux]|_?\d):)/,alias:"keyword"},number:/\b_?(?:(?!\d:)\d+(?:\.\d+)?(?:(?:ad|ar|[ejpx])_?\d+(?:\.\d+)?)*(?:b_?[\da-z]+(?:\.[\da-z]+)?)?|_\b(?!\.))/,adverb:{pattern:/[~}]|[\/\\]\.?|[bfM]\.|t[.:]/,alias:"builtin"},operator:/[=a][.:]|_\./,conjunction:{pattern:/&(?:\.:?|:)?|[.:@][.:]?|[!D][.:]|[;dHT]\.|`:?|[\^LS]:|"/,alias:"variable"},punctuation:/[()]/}},2503:function(){(function(e){var t=/\b(?:abstract|assert|boolean|break|byte|case|catch|char|class|const|continue|default|do|double|else|enum|exports|extends|final|finally|float|for|goto|if|implements|import|instanceof|int|interface|long|module|native|new|non-sealed|null|open|opens|package|permits|private|protected|provides|public|record(?!\s*[(){}[\]<>=%~.:,;?+\-*/&|^])|requires|return|sealed|short|static|strictfp|super|switch|synchronized|this|throw|throws|to|transient|transitive|try|uses|var|void|volatile|while|with|yield)\b/,n=/(?:[a-z]\w*\s*\.\s*)*(?:[A-Z]\w*\s*\.\s*)*/.source,r={pattern:RegExp(/(^|[^\w.])/.source+n+/[A-Z](?:[\d_A-Z]*[a-z]\w*)?\b/.source),lookbehind:!0,inside:{namespace:{pattern:/^[a-z]\w*(?:\s*\.\s*[a-z]\w*)*(?:\s*\.)?/,inside:{punctuation:/\./}},punctuation:/\./}};e.languages.java=e.languages.extend("clike",{string:{pattern:/(^|[^\\])"(?:\\.|[^"\\\r\n])*"/,lookbehind:!0,greedy:!0},"class-name":[r,{pattern:RegExp(/(^|[^\w.])/.source+n+/[A-Z]\w*(?=\s+\w+\s*[;,=()]|\s*(?:\[[\s,]*\]\s*)?::\s*new\b)/.source),lookbehind:!0,inside:r.inside},{pattern:RegExp(/(\b(?:class|enum|extends|implements|instanceof|interface|new|record|throws)\s+)/.source+n+/[A-Z]\w*\b/.source),lookbehind:!0,inside:r.inside}],keyword:t,function:[e.languages.clike.function,{pattern:/(::\s*)[a-z_]\w*/,lookbehind:!0}],number:/\b0b[01][01_]*L?\b|\b0x(?:\.[\da-f_p+-]+|[\da-f_]+(?:\.[\da-f_p+-]+)?)\b|(?:\b\d[\d_]*(?:\.[\d_]*)?|\B\.\d[\d_]*)(?:e[+-]?\d[\d_]*)?[dfl]?/i,operator:{pattern:/(^|[^.])(?:<<=?|>>>?=?|->|--|\+\+|&&|\|\||::|[?:~]|[-+*/%&|^!=<>]=?)/m,lookbehind:!0},constant:/\b[A-Z][A-Z_\d]+\b/}),e.languages.insertBefore("java","string",{"triple-quoted-string":{pattern:/"""[ \t]*[\r\n](?:(?:"|"")?(?:\\.|[^"\\]))*"""/,greedy:!0,alias:"string"},char:{pattern:/'(?:\\.|[^'\\\r\n]){1,6}'/,greedy:!0}}),e.languages.insertBefore("java","class-name",{annotation:{pattern:/(^|[^.])@\w+(?:\s*\.\s*\w+)*/,lookbehind:!0,alias:"punctuation"},generics:{pattern:/<(?:[\w\s,.?]|&(?!&)|<(?:[\w\s,.?]|&(?!&)|<(?:[\w\s,.?]|&(?!&)|<(?:[\w\s,.?]|&(?!&))*>)*>)*>)*>/,inside:{"class-name":r,keyword:t,punctuation:/[<>(),.:]/,operator:/[?&|]/}},import:[{pattern:RegExp(/(\bimport\s+)/.source+n+/(?:[A-Z]\w*|\*)(?=\s*;)/.source),lookbehind:!0,inside:{namespace:r.inside.namespace,punctuation:/\./,operator:/\*/,"class-name":/\w+/}},{pattern:RegExp(/(\bimport\s+static\s+)/.source+n+/(?:\w+|\*)(?=\s*;)/.source),lookbehind:!0,alias:"static",inside:{namespace:r.inside.namespace,static:/\b\w+$/,punctuation:/\./,operator:/\*/,"class-name":/\w+/}}],namespace:{pattern:RegExp(/(\b(?:exports|import(?:\s+static)?|module|open|opens|package|provides|requires|to|transitive|uses|with)\s+)(?!)[a-z]\w*(?:\.[a-z]\w*)*\.?/.source.replace(//g,(function(){return t.source}))),lookbehind:!0,inside:{punctuation:/\./}}})})(Prism)},2008:function(){(function(e){var t=/(^(?:[\t ]*(?:\*\s*)*))[^*\s].*$/m,n=/#\s*\w+(?:\s*\([^()]*\))?/.source,r=/(?:\b[a-zA-Z]\w+\s*\.\s*)*\b[A-Z]\w*(?:\s*)?|/.source.replace(//g,(function(){return n}));e.languages.javadoc=e.languages.extend("javadoclike",{}),e.languages.insertBefore("javadoc","keyword",{reference:{pattern:RegExp(/(@(?:exception|link|linkplain|see|throws|value)\s+(?:\*\s*)?)/.source+"(?:"+r+")"),lookbehind:!0,inside:{function:{pattern:/(#\s*)\w+(?=\s*\()/,lookbehind:!0},field:{pattern:/(#\s*)\w+/,lookbehind:!0},namespace:{pattern:/\b(?:[a-z]\w*\s*\.\s*)+/,inside:{punctuation:/\./}},"class-name":/\b[A-Z]\w*/,keyword:e.languages.java.keyword,punctuation:/[#()[\],.]/}},"class-name":{pattern:/(@param\s+)<[A-Z]\w*>/,lookbehind:!0,inside:{punctuation:/[.<>]/}},"code-section":[{pattern:/(\{@code\s+(?!\s))(?:[^\s{}]|\s+(?![\s}])|\{(?:[^{}]|\{(?:[^{}]|\{(?:[^{}]|\{[^{}]*\})*\})*\})*\})+(?=\s*\})/,lookbehind:!0,inside:{code:{pattern:t,lookbehind:!0,inside:e.languages.java,alias:"language-java"}}},{pattern:/(<(code|pre|tt)>(?!)\s*)\S(?:\S|\s+\S)*?(?=\s*<\/\2>)/,lookbehind:!0,inside:{line:{pattern:t,lookbehind:!0,inside:{tag:e.languages.markup.tag,entity:e.languages.markup.entity,code:{pattern:/.+/,inside:e.languages.java,alias:"language-java"}}}}}],tag:e.languages.markup.tag,entity:e.languages.markup.entity}),e.languages.javadoclike.addSupport("java",e.languages.javadoc)})(Prism)},4884:function(){(function(e){var t=e.languages.javadoclike={parameter:{pattern:/(^[\t ]*(?:\/{3}|\*|\/\*\*)\s*@(?:arg|arguments|param)\s+)\w+/m,lookbehind:!0},keyword:{pattern:/(^[\t ]*(?:\/{3}|\*|\/\*\*)\s*|\{)@[a-z][a-zA-Z-]+\b/m,lookbehind:!0},punctuation:/[{}]/};function n(t,n){var r="doc-comment",i=e.languages[t];if(i){var s=i[r];if(!s){var o={};o[r]={pattern:/(^|[^\\])\/\*\*[^/][\s\S]*?(?:\*\/|$)/,lookbehind:!0,alias:"comment"},i=e.languages.insertBefore(t,"comment",o),s=i[r]}if(s instanceof RegExp&&(s=i[r]={pattern:s}),Array.isArray(s))for(var a=0,l=s.length;a|&&=?|\|\|=?|[!=]==|<<=?|>>>?=?|[-+*/%&|^!=<>]=?|\.{3}|\?\?=?|\?\.?|[~:]/}),Prism.languages.javascript["class-name"][0].pattern=/(\b(?:class|extends|implements|instanceof|interface|new)\s+)[\w.\\]+/,Prism.languages.insertBefore("javascript","keyword",{regex:{pattern:RegExp(/((?:^|[^$\w\xA0-\uFFFF."'\])\s]|\b(?:return|yield))\s*)/.source+/\//.source+"(?:"+/(?:\[(?:[^\]\\\r\n]|\\.)*\]|\\.|[^/\\\[\r\n])+\/[dgimyus]{0,7}/.source+"|"+/(?:\[(?:[^[\]\\\r\n]|\\.|\[(?:[^[\]\\\r\n]|\\.|\[(?:[^[\]\\\r\n]|\\.)*\])*\])*\]|\\.|[^/\\\[\r\n])+\/[dgimyus]{0,7}v[dgimyus]{0,7}/.source+")"+/(?=(?:\s|\/\*(?:[^*]|\*(?!\/))*\*\/)*(?:$|[\r\n,.;:})\]]|\/\/))/.source),lookbehind:!0,greedy:!0,inside:{"regex-source":{pattern:/^(\/)[\s\S]+(?=\/[a-z]*$)/,lookbehind:!0,alias:"language-regex",inside:Prism.languages.regex},"regex-delimiter":/^\/|\/$/,"regex-flags":/^[a-z]+$/}},"function-variable":{pattern:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*[=:]\s*(?:async\s*)?(?:\bfunction\b|(?:\((?:[^()]|\([^()]*\))*\)|(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)\s*=>))/,alias:"function"},parameter:[{pattern:/(function(?:\s+(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)?\s*\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\))/,lookbehind:!0,inside:Prism.languages.javascript},{pattern:/(^|[^$\w\xA0-\uFFFF])(?!\s)[_$a-z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*=>)/i,lookbehind:!0,inside:Prism.languages.javascript},{pattern:/(\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\)\s*=>)/,lookbehind:!0,inside:Prism.languages.javascript},{pattern:/((?:\b|\s|^)(?!(?:as|async|await|break|case|catch|class|const|continue|debugger|default|delete|do|else|enum|export|extends|finally|for|from|function|get|if|implements|import|in|instanceof|interface|let|new|null|of|package|private|protected|public|return|set|static|super|switch|this|throw|try|typeof|undefined|var|void|while|with|yield)(?![$\w\xA0-\uFFFF]))(?:(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*\s*)\(\s*|\]\s*\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\)\s*\{)/,lookbehind:!0,inside:Prism.languages.javascript}],constant:/\b[A-Z](?:[A-Z_]|\dx?)*\b/}),Prism.languages.insertBefore("javascript","string",{hashbang:{pattern:/^#!.*/,greedy:!0,alias:"comment"},"template-string":{pattern:/`(?:\\[\s\S]|\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}|(?!\$\{)[^\\`])*`/,greedy:!0,inside:{"template-punctuation":{pattern:/^`|`$/,alias:"string"},interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}/,lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"punctuation"},rest:Prism.languages.javascript}},string:/[\s\S]+/}},"string-property":{pattern:/((?:^|[,{])[ \t]*)(["'])(?:\\(?:\r\n|[\s\S])|(?!\2)[^\\\r\n])*\2(?=\s*:)/m,lookbehind:!0,greedy:!0,alias:"property"}}),Prism.languages.insertBefore("javascript","operator",{"literal-property":{pattern:/((?:^|[,{])[ \t]*)(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*:)/m,lookbehind:!0,alias:"property"}}),Prism.languages.markup&&(Prism.languages.markup.tag.addInlined("script","javascript"),Prism.languages.markup.tag.addAttribute(/on(?:abort|blur|change|click|composition(?:end|start|update)|dblclick|error|focus(?:in|out)?|key(?:down|up)|load|mouse(?:down|enter|leave|move|out|over|up)|reset|resize|scroll|select|slotchange|submit|unload|wheel)/.source,"javascript")),Prism.languages.js=Prism.languages.javascript},1454:function(){Prism.languages.javastacktrace={summary:{pattern:/^([\t ]*)(?:(?:Caused by:|Suppressed:|Exception in thread "[^"]*")[\t ]+)?[\w$.]+(?::.*)?$/m,lookbehind:!0,inside:{keyword:{pattern:/^([\t ]*)(?:(?:Caused by|Suppressed)(?=:)|Exception in thread)/m,lookbehind:!0},string:{pattern:/^(\s*)"[^"]*"/,lookbehind:!0},exceptions:{pattern:/^(:?\s*)[\w$.]+(?=:|$)/,lookbehind:!0,inside:{"class-name":/[\w$]+$/,namespace:/\b[a-z]\w*\b/,punctuation:/\./}},message:{pattern:/(:\s*)\S.*/,lookbehind:!0,alias:"string"},punctuation:/:/}},"stack-frame":{pattern:/^([\t ]*)at (?:[\w$./]|@[\w$.+-]*\/)+(?:)?\([^()]*\)/m,lookbehind:!0,inside:{keyword:{pattern:/^(\s*)at(?= )/,lookbehind:!0},source:[{pattern:/(\()\w+\.\w+:\d+(?=\))/,lookbehind:!0,inside:{file:/^\w+\.\w+/,punctuation:/:/,"line-number":{pattern:/\b\d+\b/,alias:"number"}}},{pattern:/(\()[^()]*(?=\))/,lookbehind:!0,inside:{keyword:/^(?:Native Method|Unknown Source)$/}}],"class-name":/[\w$]+(?=\.(?:|[\w$]+)\()/,function:/(?:|[\w$]+)(?=\()/,"class-loader":{pattern:/(\s)[a-z]\w*(?:\.[a-z]\w*)*(?=\/[\w@$.]*\/)/,lookbehind:!0,alias:"namespace",inside:{punctuation:/\./}},module:{pattern:/([\s/])[a-z]\w*(?:\.[a-z]\w*)*(?:@[\w$.+-]*)?(?=\/)/,lookbehind:!0,inside:{version:{pattern:/(@)[\s\S]+/,lookbehind:!0,alias:"number"},punctuation:/[@.]/}},namespace:{pattern:/(?:\b[a-z]\w*\.)+/,inside:{punctuation:/\./}},punctuation:/[()/.]/}},more:{pattern:/^([\t ]*)\.{3} \d+ [a-z]+(?: [a-z]+)*/m,lookbehind:!0,inside:{punctuation:/\.{3}/,number:/\d+/,keyword:/\b[a-z]+(?: [a-z]+)*\b/}}}},5314:function(){Prism.languages.jexl={string:/(["'])(?:\\[\s\S]|(?!\1)[^\\])*\1/,transform:{pattern:/(\|\s*)[a-zA-Zа-яА-Я_\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF$][\wа-яА-Я\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF$]*/,alias:"function",lookbehind:!0},function:/[a-zA-Zа-яА-Я_\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF$][\wа-яА-Я\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF$]*\s*(?=\()/,number:/\b\d+(?:\.\d+)?\b|\B\.\d+\b/,operator:/[<>!]=?|-|\+|&&|==|\|\|?|\/\/?|[?:*^%]/,boolean:/\b(?:false|true)\b/,keyword:/\bin\b/,punctuation:/[{}[\](),.]/}},8874:function(){Prism.languages.jolie=Prism.languages.extend("clike",{string:{pattern:/(^|[^\\])"(?:\\[\s\S]|[^"\\])*"/,lookbehind:!0,greedy:!0},"class-name":{pattern:/((?:\b(?:as|courier|embed|in|inputPort|outputPort|service)\b|@)[ \t]*)\w+/,lookbehind:!0},keyword:/\b(?:as|cH|comp|concurrent|constants|courier|cset|csets|default|define|else|embed|embedded|execution|exit|extender|for|foreach|forward|from|global|if|import|in|include|init|inputPort|install|instanceof|interface|is_defined|linkIn|linkOut|main|new|nullProcess|outputPort|over|private|provide|public|scope|sequential|service|single|spawn|synchronized|this|throw|throws|type|undef|until|while|with)\b/,function:/\b[a-z_]\w*(?=[ \t]*[@(])/i,number:/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?l?/i,operator:/-[-=>]?|\+[+=]?|<[<=]?|[>=*!]=?|&&|\|\||[?\/%^@|]/,punctuation:/[()[\]{},;.:]/,builtin:/\b(?:Byte|any|bool|char|double|enum|float|int|length|long|ranges|regex|string|undefined|void)\b/}),Prism.languages.insertBefore("jolie","keyword",{aggregates:{pattern:/(\bAggregates\s*:\s*)(?:\w+(?:\s+with\s+\w+)?\s*,\s*)*\w+(?:\s+with\s+\w+)?/,lookbehind:!0,inside:{keyword:/\bwith\b/,"class-name":/\w+/,punctuation:/,/}},redirects:{pattern:/(\bRedirects\s*:\s*)(?:\w+\s*=>\s*\w+\s*,\s*)*(?:\w+\s*=>\s*\w+)/,lookbehind:!0,inside:{punctuation:/,/,"class-name":/\w+/,operator:/=>/}},property:{pattern:/\b(?:Aggregates|[Ii]nterfaces|Java|Javascript|Jolie|[Ll]ocation|OneWay|[Pp]rotocol|Redirects|RequestResponse)\b(?=[ \t]*:)/}})},6342:function(){(function(e){var t=/\\\((?:[^()]|\([^()]*\))*\)/.source,n=RegExp(/(^|[^\\])"(?:[^"\r\n\\]|\\[^\r\n(]|__)*"/.source.replace(/__/g,(function(){return t}))),r={interpolation:{pattern:RegExp(/((?:^|[^\\])(?:\\{2})*)/.source+t),lookbehind:!0,inside:{content:{pattern:/^(\\\()[\s\S]+(?=\)$)/,lookbehind:!0,inside:null},punctuation:/^\\\(|\)$/}}},i=e.languages.jq={comment:/#.*/,property:{pattern:RegExp(n.source+/(?=\s*:(?!:))/.source),lookbehind:!0,greedy:!0,inside:r},string:{pattern:n,lookbehind:!0,greedy:!0,inside:r},function:{pattern:/(\bdef\s+)[a-z_]\w+/i,lookbehind:!0},variable:/\B\$\w+/,"property-literal":{pattern:/\b[a-z_]\w*(?=\s*:(?!:))/i,alias:"property"},keyword:/\b(?:as|break|catch|def|elif|else|end|foreach|if|import|include|label|module|modulemeta|null|reduce|then|try|while)\b/,boolean:/\b(?:false|true)\b/,number:/(?:\b\d+\.|\B\.)?\b\d+(?:[eE][+-]?\d+)?\b/,operator:[{pattern:/\|=?/,alias:"pipe"},/\.\.|[!=<>]?=|\?\/\/|\/\/=?|[-+*/%]=?|[<>?]|\b(?:and|not|or)\b/],"c-style-function":{pattern:/\b[a-z_]\w*(?=\s*\()/i,alias:"function"},punctuation:/::|[()\[\]{},:;]|\.(?=\s*[\[\w$])/,dot:{pattern:/\./,alias:"important"}};r.interpolation.inside.content.inside=i})(Prism)},6690:function(){(function(e){function t(e,t){return RegExp(e.replace(//g,(function(){return/(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*/.source})),t)}e.languages.insertBefore("javascript","function-variable",{"method-variable":{pattern:RegExp("(\\.\\s*)"+e.languages.javascript["function-variable"].pattern.source),lookbehind:!0,alias:["function-variable","method","function","property-access"]}}),e.languages.insertBefore("javascript","function",{method:{pattern:RegExp("(\\.\\s*)"+e.languages.javascript["function"].source),lookbehind:!0,alias:["function","property-access"]}}),e.languages.insertBefore("javascript","constant",{"known-class-name":[{pattern:/\b(?:(?:Float(?:32|64)|(?:Int|Uint)(?:8|16|32)|Uint8Clamped)?Array|ArrayBuffer|BigInt|Boolean|DataView|Date|Error|Function|Intl|JSON|(?:Weak)?(?:Map|Set)|Math|Number|Object|Promise|Proxy|Reflect|RegExp|String|Symbol|WebAssembly)\b/,alias:"class-name"},{pattern:/\b(?:[A-Z]\w*)Error\b/,alias:"class-name"}]}),e.languages.insertBefore("javascript","keyword",{imports:{pattern:t(/(\bimport\b\s*)(?:(?:\s*,\s*(?:\*\s*as\s+|\{[^{}]*\}))?|\*\s*as\s+|\{[^{}]*\})(?=\s*\bfrom\b)/.source),lookbehind:!0,inside:e.languages.javascript},exports:{pattern:t(/(\bexport\b\s*)(?:\*(?:\s*as\s+)?(?=\s*\bfrom\b)|\{[^{}]*\})/.source),lookbehind:!0,inside:e.languages.javascript}}),e.languages.javascript["keyword"].unshift({pattern:/\b(?:as|default|export|from|import)\b/,alias:"module"},{pattern:/\b(?:await|break|catch|continue|do|else|finally|for|if|return|switch|throw|try|while|yield)\b/,alias:"control-flow"},{pattern:/\bnull\b/,alias:["null","nil"]},{pattern:/\bundefined\b/,alias:"nil"}),e.languages.insertBefore("javascript","operator",{spread:{pattern:/\.{3}/,alias:"operator"},arrow:{pattern:/=>/,alias:"operator"}}),e.languages.insertBefore("javascript","punctuation",{"property-access":{pattern:t(/(\.\s*)#?/.source),lookbehind:!0},"maybe-class-name":{pattern:/(^|[^$\w\xA0-\uFFFF])[A-Z][$\w\xA0-\uFFFF]+/,lookbehind:!0},dom:{pattern:/\b(?:document|(?:local|session)Storage|location|navigator|performance|window)\b/,alias:"variable"},console:{pattern:/\bconsole(?=\s*\.)/,alias:"class-name"}});for(var n=["function","function-variable","method","method-variable","property-access"],r=0;r=p.length)return;var n=e[t];if("string"===typeof n||"string"===typeof n.content){var r=p[o],i="string"===typeof n?n:n.content,s=i.indexOf(r);if(-1!==s){++o;var a=i.substring(0,s),l=c(u[r]),d=i.substring(s+r.length),h=[];if(a&&h.push(a),h.push(l),d){var g=[d];f(g),h.push.apply(h,g)}"string"===typeof n?(e.splice.apply(e,[t,1].concat(h)),t+=h.length-1):n.content=h}}else{var m=n.content;Array.isArray(m)?f(m):f([m])}}}return o=0,f(h),new e.Token(r,h,"language-"+r,t)}e.languages.javascript["template-string"]=[o("css",/\b(?:styled(?:\([^)]*\))?(?:\s*\.\s*\w+(?:\([^)]*\))*)*|css(?:\s*\.\s*(?:global|resolve))?|createGlobalStyle|keyframes)/.source),o("html",/\bhtml|\.\s*(?:inner|outer)HTML\s*\+?=/.source),o("svg",/\bsvg/.source),o("markdown",/\b(?:markdown|md)/.source),o("graphql",/\b(?:gql|graphql(?:\s*\.\s*experimental)?)/.source),o("sql",/\bsql/.source),t].filter(Boolean);var d={javascript:!0,js:!0,typescript:!0,ts:!0,jsx:!0,tsx:!0};function h(e){return"string"===typeof e?e:Array.isArray(e)?e.map(h).join(""):h(e.content)}e.hooks.add("after-tokenize",(function(t){function n(t){for(var r=0,i=t.length;r\s+)?)[A-Z]\w*(?:\.[A-Z]\w*)*/.source.replace(//g,(function(){return n}))),lookbehind:!0,inside:{punctuation:/\./}},{pattern:RegExp("(@[a-z]+\\s+)"+n),lookbehind:!0,inside:{string:t.string,number:t.number,boolean:t.boolean,keyword:e.languages.typescript.keyword,operator:/=>|\.\.\.|[&|?:*]/,punctuation:/[.,;=<>{}()[\]]/}}],example:{pattern:/(@example\s+(?!\s))(?:[^@\s]|\s+(?!\s))+?(?=\s*(?:\*\s*)?(?:@\w|\*\/))/,lookbehind:!0,inside:{code:{pattern:/^([\t ]*(?:\*\s*)?)\S.*$/m,lookbehind:!0,inside:t,alias:"language-javascript"}}}}),e.languages.javadoclike.addSupport("javascript",e.languages.jsdoc)})(Prism)},4277:function(){Prism.languages.json={property:{pattern:/(^|[^\\])"(?:\\.|[^\\"\r\n])*"(?=\s*:)/,lookbehind:!0,greedy:!0},string:{pattern:/(^|[^\\])"(?:\\.|[^\\"\r\n])*"(?!\s*:)/,lookbehind:!0,greedy:!0},comment:{pattern:/\/\/.*|\/\*[\s\S]*?(?:\*\/|$)/,greedy:!0},number:/-?\b\d+(?:\.\d+)?(?:e[+-]?\d+)?\b/i,punctuation:/[{}[\],]/,operator:/:/,boolean:/\b(?:false|true)\b/,null:{pattern:/\bnull\b/,alias:"keyword"}},Prism.languages.webmanifest=Prism.languages.json},2444:function(){(function(e){var t=/("|')(?:\\(?:\r\n?|\n|.)|(?!\1)[^\\\r\n])*\1/;e.languages.json5=e.languages.extend("json",{property:[{pattern:RegExp(t.source+"(?=\\s*:)"),greedy:!0},{pattern:/(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*:)/,alias:"unquoted"}],string:{pattern:t,greedy:!0},number:/[+-]?\b(?:NaN|Infinity|0x[a-fA-F\d]+)\b|[+-]?(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[eE][+-]?\d+\b)?/})})(Prism)},8393:function(){Prism.languages.jsonp=Prism.languages.extend("json",{punctuation:/[{}[\]();,.]/}),Prism.languages.insertBefore("jsonp","punctuation",{function:/(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*\()/})},1917:function(){Prism.languages.jsstacktrace={"error-message":{pattern:/^\S.*/m,alias:"string"},"stack-frame":{pattern:/(^[ \t]+)at[ \t].*/m,lookbehind:!0,inside:{"not-my-code":{pattern:/^at[ \t]+(?!\s)(?:node\.js||.*(?:node_modules|\(\)|\(|$|\(internal\/|\(node\.js)).*/m,alias:"comment"},filename:{pattern:/(\bat\s+(?!\s)|\()(?:[a-zA-Z]:)?[^():]+(?=:)/,lookbehind:!0,alias:"url"},function:{pattern:/(\bat\s+(?:new\s+)?)(?!\s)[_$a-zA-Z\xA0-\uFFFF<][.$\w\xA0-\uFFFF<>]*/,lookbehind:!0,inside:{punctuation:/\./}},punctuation:/[()]/,keyword:/\b(?:at|new)\b/,alias:{pattern:/\[(?:as\s+)?(?!\s)[_$a-zA-Z\xA0-\uFFFF][$\w\xA0-\uFFFF]*\]/,alias:"variable"},"line-number":{pattern:/:\d+(?::\d+)?\b/,alias:"number",inside:{punctuation:/:/}}}}}},2356:function(){(function(e){var t=e.util.clone(e.languages.javascript),n=/(?:\s|\/\/.*(?!.)|\/\*(?:[^*]|\*(?!\/))\*\/)/.source,r=/(?:\{(?:\{(?:\{[^{}]*\}|[^{}])*\}|[^{}])*\})/.source,i=/(?:\{*\.{3}(?:[^{}]|)*\})/.source;function s(e,t){return e=e.replace(//g,(function(){return n})).replace(//g,(function(){return r})).replace(//g,(function(){return i})),RegExp(e,t)}i=s(i).source,e.languages.jsx=e.languages.extend("markup",t),e.languages.jsx.tag.pattern=s(/<\/?(?:[\w.:-]+(?:+(?:[\w.:$-]+(?:=(?:"(?:\\[\s\S]|[^\\"])*"|'(?:\\[\s\S]|[^\\'])*'|[^\s{'"/>=]+|))?|))**\/?)?>/.source),e.languages.jsx.tag.inside["tag"].pattern=/^<\/?[^\s>\/]*/,e.languages.jsx.tag.inside["attr-value"].pattern=/=(?!\{)(?:"(?:\\[\s\S]|[^\\"])*"|'(?:\\[\s\S]|[^\\'])*'|[^\s'">]+)/,e.languages.jsx.tag.inside["tag"].inside["class-name"]=/^[A-Z]\w*(?:\.[A-Z]\w*)*$/,e.languages.jsx.tag.inside["comment"]=t["comment"],e.languages.insertBefore("inside","attr-name",{spread:{pattern:s(//.source),inside:e.languages.jsx}},e.languages.jsx.tag),e.languages.insertBefore("inside","special-attr",{script:{pattern:s(/=/.source),alias:"language-javascript",inside:{"script-punctuation":{pattern:/^=(?=\{)/,alias:"punctuation"},rest:e.languages.jsx}}},e.languages.jsx.tag);var o=function(e){return e?"string"===typeof e?e:"string"===typeof e.content?e.content:e.content.map(o).join(""):""},a=function(t){for(var n=[],r=0;r0&&n[n.length-1].tagName===o(i.content[0].content[1])&&n.pop():"/>"===i.content[i.content.length-1].content||n.push({tagName:o(i.content[0].content[1]),openedBraces:0}):n.length>0&&"punctuation"===i.type&&"{"===i.content?n[n.length-1].openedBraces++:n.length>0&&n[n.length-1].openedBraces>0&&"punctuation"===i.type&&"}"===i.content?n[n.length-1].openedBraces--:s=!0),(s||"string"===typeof i)&&n.length>0&&0===n[n.length-1].openedBraces){var l=o(i);r0&&("string"===typeof t[r-1]||"plain-text"===t[r-1].type)&&(l=o(t[r-1])+l,t.splice(r-1,1),r--),t[r]=new e.Token("plain-text",l,null,l)}i.content&&"string"!==typeof i.content&&a(i.content)}};e.hooks.add("after-tokenize",(function(e){"jsx"!==e.language&&"tsx"!==e.language||a(e.tokens)}))})(Prism)},6543:function(){Prism.languages.julia={comment:{pattern:/(^|[^\\])(?:#=(?:[^#=]|=(?!#)|#(?!=)|#=(?:[^#=]|=(?!#)|#(?!=))*=#)*=#|#.*)/,lookbehind:!0},regex:{pattern:/r"(?:\\.|[^"\\\r\n])*"[imsx]{0,4}/,greedy:!0},string:{pattern:/"""[\s\S]+?"""|(?:\b\w+)?"(?:\\.|[^"\\\r\n])*"|`(?:[^\\`\r\n]|\\.)*`/,greedy:!0},char:{pattern:/(^|[^\w'])'(?:\\[^\r\n][^'\r\n]*|[^\\\r\n])'/,lookbehind:!0,greedy:!0},keyword:/\b(?:abstract|baremodule|begin|bitstype|break|catch|ccall|const|continue|do|else|elseif|end|export|finally|for|function|global|if|immutable|import|importall|in|let|local|macro|module|print|println|quote|return|struct|try|type|typealias|using|while)\b/,boolean:/\b(?:false|true)\b/,number:/(?:\b(?=\d)|\B(?=\.))(?:0[box])?(?:[\da-f]+(?:_[\da-f]+)*(?:\.(?:\d+(?:_\d+)*)?)?|\.\d+(?:_\d+)*)(?:[efp][+-]?\d+(?:_\d+)*)?j?/i,operator:/&&|\|\||[-+*^%÷⊻&$\\]=?|\/[\/=]?|!=?=?|\|[=>]?|<(?:<=?|[=:|])?|>(?:=|>>?=?)?|==?=?|[~≠≤≥'√∛]/,punctuation:/::?|[{}[\]();,.?]/,constant:/\b(?:(?:Inf|NaN)(?:16|32|64)?|im|pi)\b|[πℯ]/}},1643:function(){Prism.languages.keepalived={comment:{pattern:/[#!].*/,greedy:!0},string:{pattern:/(^|[^\\])(?:"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"|'(?:\\(?:\r\n|[\s\S])|[^'\\\r\n])*')/,lookbehind:!0,greedy:!0},ip:{pattern:RegExp(/\b(?:(?:(?:[\da-f]{1,4}:){7}[\da-f]{1,4}|(?:[\da-f]{1,4}:){6}:[\da-f]{1,4}|(?:[\da-f]{1,4}:){5}:(?:[\da-f]{1,4}:)?[\da-f]{1,4}|(?:[\da-f]{1,4}:){4}:(?:[\da-f]{1,4}:){0,2}[\da-f]{1,4}|(?:[\da-f]{1,4}:){3}:(?:[\da-f]{1,4}:){0,3}[\da-f]{1,4}|(?:[\da-f]{1,4}:){2}:(?:[\da-f]{1,4}:){0,4}[\da-f]{1,4}|(?:[\da-f]{1,4}:){6}|(?:[\da-f]{1,4}:){0,5}:|::(?:[\da-f]{1,4}:){0,5}|[\da-f]{1,4}::(?:[\da-f]{1,4}:){0,5}[\da-f]{1,4}|::(?:[\da-f]{1,4}:){0,6}[\da-f]{1,4}|(?:[\da-f]{1,4}:){1,7}:)(?:\/\d{1,3})?|(?:\/\d{1,2})?)\b/.source.replace(//g,(function(){return/(?:(?:(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]\d|\d)\.){3}(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]\d|\d))/.source})),"i"),alias:"number"},path:{pattern:/(\s)\/(?:[^\/\s]+\/)*[^\/\s]*|\b[a-zA-Z]:\\(?:[^\\\s]+\\)*[^\\\s]*/,lookbehind:!0,alias:"string"},variable:/\$\{?\w+\}?/,email:{pattern:/[\w-]+@[\w-]+(?:\.[\w-]{2,3}){1,2}/,alias:"string"},"conditional-configuration":{pattern:/@\^?[\w-]+/,alias:"variable"},operator:/=/,property:/\b(?:BFD_CHECK|DNS_CHECK|FILE_CHECK|HTTP_GET|MISC_CHECK|NAME|PING_CHECK|SCRIPTS|SMTP_CHECK|SSL|SSL_GET|TCP_CHECK|UDP_CHECK|accept|advert_int|alpha|auth_pass|auth_type|authentication|bfd_cpu_affinity|bfd_instance|bfd_no_swap|bfd_priority|bfd_process_name|bfd_rlimit_rttime|bfd_rt_priority|bind_if|bind_port|bindto|ca|certificate|check_unicast_src|checker|checker_cpu_affinity|checker_log_all_failures|checker_no_swap|checker_priority|checker_rlimit_rttime|checker_rt_priority|child_wait_time|connect_ip|connect_port|connect_timeout|dbus_service_name|debug|default_interface|delay|delay_before_retry|delay_loop|digest|dont_track_primary|dynamic|dynamic_interfaces|enable_(?:dbus|script_security|sni|snmp_checker|snmp_rfc|snmp_rfcv2|snmp_rfcv3|snmp_vrrp|traps)|end|fall|fast_recovery|file|flag-[123]|fork_delay|full_command|fwmark|garp_group|garp_interval|garp_lower_prio_delay|garp_lower_prio_repeat|garp_master_delay|garp_master_refresh|garp_master_refresh_repeat|garp_master_repeat|global_defs|global_tracking|gna_interval|group|ha_suspend|hashed|helo_name|higher_prio_send_advert|hoplimit|http_protocol|hysteresis|idle_tx|include|inhibit_on_failure|init_fail|init_file|instance|interface|interfaces|interval|ip_family|ipvs_process_name|keepalived.conf|kernel_rx_buf_size|key|linkbeat_interfaces|linkbeat_use_polling|log_all_failures|log_unknown_vrids|lower_prio_no_advert|lthreshold|lvs_flush|lvs_flush_onstop|lvs_method|lvs_netlink_cmd_rcv_bufs|lvs_netlink_cmd_rcv_bufs_force|lvs_netlink_monitor_rcv_bufs|lvs_netlink_monitor_rcv_bufs_force|lvs_notify_fifo|lvs_notify_fifo_script|lvs_sched|lvs_sync_daemon|max_auto_priority|max_hops|mcast_src_ip|mh-fallback|mh-port|min_auto_priority_delay|min_rx|min_tx|misc_dynamic|misc_path|misc_timeout|multiplier|name|namespace_with_ipsets|native_ipv6|neighbor_ip|net_namespace|net_namespace_ipvs|nftables|nftables_counters|nftables_ifindex|nftables_priority|no_accept|no_checker_emails|no_email_faults|nopreempt|notification_email|notification_email_from|notify|notify_backup|notify_deleted|notify_down|notify_fault|notify_fifo|notify_fifo_script|notify_master|notify_master_rx_lower_pri|notify_priority_changes|notify_stop|notify_up|old_unicast_checksum|omega|ops|param_match|passive|password|path|persistence_engine|persistence_granularity|persistence_timeout|preempt|preempt_delay|priority|process|process_monitor_rcv_bufs|process_monitor_rcv_bufs_force|process_name|process_names|promote_secondaries|protocol|proxy_arp|proxy_arp_pvlan|quorum|quorum_down|quorum_max|quorum_up|random_seed|real_server|regex|regex_max_offset|regex_min_offset|regex_no_match|regex_options|regex_stack|reload_repeat|reload_time_file|require_reply|retry|rise|router_id|rs_init_notifies|script|script_user|sh-fallback|sh-port|shutdown_script|shutdown_script_timeout|skip_check_adv_addr|smtp_alert|smtp_alert_checker|smtp_alert_vrrp|smtp_connect_timeout|smtp_helo_name|smtp_server|snmp_socket|sorry_server|sorry_server_inhibit|sorry_server_lvs_method|source_ip|start|startup_script|startup_script_timeout|state|static_ipaddress|static_routes|static_rules|status_code|step|strict_mode|sync_group_tracking_weight|terminate_delay|timeout|track_bfd|track_file|track_group|track_interface|track_process|track_script|track_src_ip|ttl|type|umask|unicast_peer|unicast_src_ip|unicast_ttl|url|use_ipvlan|use_pid_dir|use_vmac|user|uthreshold|val[123]|version|virtual_ipaddress|virtual_ipaddress_excluded|virtual_router_id|virtual_routes|virtual_rules|virtual_server|virtual_server_group|virtualhost|vmac_xmit_base|vrrp|vrrp_(?:check_unicast_src|cpu_affinity|garp_interval|garp_lower_prio_delay|garp_lower_prio_repeat|garp_master_delay|garp_master_refresh|garp_master_refresh_repeat|garp_master_repeat|gna_interval|higher_prio_send_advert|instance|ipsets|iptables|lower_prio_no_advert|mcast_group4|mcast_group6|min_garp|netlink_cmd_rcv_bufs|netlink_cmd_rcv_bufs_force|netlink_monitor_rcv_bufs|netlink_monitor_rcv_bufs_force|no_swap|notify_fifo|notify_fifo_script|notify_priority_changes|priority|process_name|rlimit_rttime|rt_priority|rx_bufs_multiplier|rx_bufs_policy|script|skip_check_adv_addr|startup_delay|strict|sync_group|track_process|version)|warmup|weight)\b/,constant:/\b(?:A|AAAA|AH|BACKUP|CNAME|DR|MASTER|MX|NAT|NS|PASS|SCTP|SOA|TCP|TUN|TXT|UDP|dh|fo|lblc|lblcr|lc|mh|nq|ovf|rr|sed|sh|wlc|wrr)\b/,number:{pattern:/(^|[^\w.-])-?\d+(?:\.\d+)?/,lookbehind:!0},boolean:/\b(?:false|no|off|on|true|yes)\b/,punctuation:/[\{\}]/}},2821:function(){Prism.languages.keyman={comment:{pattern:/\bc .*/i,greedy:!0},string:{pattern:/"[^"\r\n]*"|'[^'\r\n]*'/,greedy:!0},"virtual-key":{pattern:/\[\s*(?:(?:ALT|CAPS|CTRL|LALT|LCTRL|NCAPS|RALT|RCTRL|SHIFT)\s+)*(?:[TKU]_[\w?]+|[A-E]\d\d?|"[^"\r\n]*"|'[^'\r\n]*')\s*\]/i,greedy:!0,alias:"function"},"header-keyword":{pattern:/&\w+/,alias:"bold"},"header-statement":{pattern:/\b(?:bitmap|bitmaps|caps always off|caps on only|copyright|hotkey|language|layout|message|name|shift frees caps|version)\b/i,alias:"bold"},"rule-keyword":{pattern:/\b(?:any|baselayout|beep|call|context|deadkey|dk|if|index|layer|notany|nul|outs|platform|reset|return|save|set|store|use)\b/i,alias:"keyword"},"structural-keyword":{pattern:/\b(?:ansi|begin|group|match|newcontext|nomatch|postkeystroke|readonly|unicode|using keys)\b/i,alias:"keyword"},"compile-target":{pattern:/\$(?:keyman|keymanonly|keymanweb|kmfl|weaver):/i,alias:"property"},number:/\b(?:U\+[\dA-F]+|d\d+|x[\da-f]+|\d+)\b/i,operator:/[+>\\$]|\.\./,punctuation:/[()=,]/}},2334:function(){(function(e){e.languages.kotlin=e.languages.extend("clike",{keyword:{pattern:/(^|[^.])\b(?:abstract|actual|annotation|as|break|by|catch|class|companion|const|constructor|continue|crossinline|data|do|dynamic|else|enum|expect|external|final|finally|for|fun|get|if|import|in|infix|init|inline|inner|interface|internal|is|lateinit|noinline|null|object|open|operator|out|override|package|private|protected|public|reified|return|sealed|set|super|suspend|tailrec|this|throw|to|try|typealias|val|var|vararg|when|where|while)\b/,lookbehind:!0},function:[{pattern:/(?:`[^\r\n`]+`|\b\w+)(?=\s*\()/,greedy:!0},{pattern:/(\.)(?:`[^\r\n`]+`|\w+)(?=\s*\{)/,lookbehind:!0,greedy:!0}],number:/\b(?:0[xX][\da-fA-F]+(?:_[\da-fA-F]+)*|0[bB][01]+(?:_[01]+)*|\d+(?:_\d+)*(?:\.\d+(?:_\d+)*)?(?:[eE][+-]?\d+(?:_\d+)*)?[fFL]?)\b/,operator:/\+[+=]?|-[-=>]?|==?=?|!(?:!|==?)?|[\/*%<>]=?|[?:]:?|\.\.|&&|\|\||\b(?:and|inv|or|shl|shr|ushr|xor)\b/}),delete e.languages.kotlin["class-name"];var t={"interpolation-punctuation":{pattern:/^\$\{?|\}$/,alias:"punctuation"},expression:{pattern:/[\s\S]+/,inside:e.languages.kotlin}};e.languages.insertBefore("kotlin","string",{"string-literal":[{pattern:/"""(?:[^$]|\$(?:(?!\{)|\{[^{}]*\}))*?"""/,alias:"multiline",inside:{interpolation:{pattern:/\$(?:[a-z_]\w*|\{[^{}]*\})/i,inside:t},string:/[\s\S]+/}},{pattern:/"(?:[^"\\\r\n$]|\\.|\$(?:(?!\{)|\{[^{}]*\}))*"/,alias:"singleline",inside:{interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$(?:[a-z_]\w*|\{[^{}]*\})/i,lookbehind:!0,inside:t},string:/[\s\S]+/}}],char:{pattern:/'(?:[^'\\\r\n]|\\(?:.|u[a-fA-F0-9]{0,4}))'/,greedy:!0}}),delete e.languages.kotlin["string"],e.languages.insertBefore("kotlin","keyword",{annotation:{pattern:/\B@(?:\w+:)?(?:[A-Z]\w*|\[[^\]]+\])/,alias:"builtin"}}),e.languages.insertBefore("kotlin","function",{label:{pattern:/\b\w+@|@\w+\b/,alias:"symbol"}}),e.languages.kt=e.languages.kotlin,e.languages.kts=e.languages.kotlin})(Prism)},9486:function(){(function(e){var t=/\s\x00-\x1f\x22-\x2f\x3a-\x3f\x5b-\x5e\x60\x7b-\x7e/.source;function n(e,n){return RegExp(e.replace(//g,t),n)}e.languages.kumir={comment:{pattern:/\|.*/},prolog:{pattern:/#.*/,greedy:!0},string:{pattern:/"[^\n\r"]*"|'[^\n\r']*'/,greedy:!0},boolean:{pattern:n(/(^|[])(?:да|нет)(?=[]|$)/.source),lookbehind:!0},"operator-word":{pattern:n(/(^|[])(?:и|или|не)(?=[]|$)/.source),lookbehind:!0,alias:"keyword"},"system-variable":{pattern:n(/(^|[])знач(?=[]|$)/.source),lookbehind:!0,alias:"keyword"},type:[{pattern:n(/(^|[])(?:вещ|лит|лог|сим|цел)(?:\x20*таб)?(?=[]|$)/.source),lookbehind:!0,alias:"builtin"},{pattern:n(/(^|[])(?:компл|сканкод|файл|цвет)(?=[]|$)/.source),lookbehind:!0,alias:"important"}],keyword:{pattern:n(/(^|[])(?:алг|арг(?:\x20*рез)?|ввод|ВКЛЮЧИТЬ|вс[её]|выбор|вывод|выход|дано|для|до|дс|если|иначе|исп|использовать|кон(?:(?:\x20+|_)исп)?|кц(?:(?:\x20+|_)при)?|надо|нач|нс|нц|от|пауза|пока|при|раза?|рез|стоп|таб|то|утв|шаг)(?=[]|$)/.source),lookbehind:!0},name:{pattern:n(/(^|[])[^\d][^]*(?:\x20+[^]+)*(?=[]|$)/.source),lookbehind:!0},number:{pattern:n(/(^|[])(?:\B\$[\da-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?)(?=[]|$)/.source,"i"),lookbehind:!0},punctuation:/:=|[(),:;\[\]]/,"operator-char":{pattern:/\*\*?|<[=>]?|>=?|[-+/=]/,alias:"operator"}},e.languages.kum=e.languages.kumir})(Prism)},1634:function(){Prism.languages.kusto={comment:{pattern:/\/\/.*/,greedy:!0},string:{pattern:/```[\s\S]*?```|[hH]?(?:"(?:[^\r\n\\"]|\\.)*"|'(?:[^\r\n\\']|\\.)*'|@(?:"[^\r\n"]*"|'[^\r\n']*'))/,greedy:!0},verb:{pattern:/(\|\s*)[a-z][\w-]*/i,lookbehind:!0,alias:"keyword"},command:{pattern:/\.[a-z][a-z\d-]*\b/,alias:"keyword"},"class-name":/\b(?:bool|datetime|decimal|dynamic|guid|int|long|real|string|timespan)\b/,keyword:/\b(?:access|alias|and|anti|as|asc|auto|between|by|(?:contains|(?:ends|starts)with|has(?:perfix|suffix)?)(?:_cs)?|database|declare|desc|external|from|fullouter|has_all|in|ingestion|inline|inner|innerunique|into|(?:left|right)(?:anti(?:semi)?|inner|outer|semi)?|let|like|local|not|of|on|or|pattern|print|query_parameters|range|restrict|schema|set|step|table|tables|to|view|where|with|matches\s+regex|nulls\s+(?:first|last))(?![\w-])/,boolean:/\b(?:false|null|true)\b/,function:/\b[a-z_]\w*(?=\s*\()/,datetime:[{pattern:/\b(?:(?:Fri|Friday|Mon|Monday|Sat|Saturday|Sun|Sunday|Thu|Thursday|Tue|Tuesday|Wed|Wednesday)\s*,\s*)?\d{1,2}(?:\s+|-)(?:Apr|Aug|Dec|Feb|Jan|Jul|Jun|Mar|May|Nov|Oct|Sep)(?:\s+|-)\d{2}\s+\d{2}:\d{2}(?::\d{2})?(?:\s*(?:\b(?:[A-Z]|(?:[ECMT][DS]|GM|U)T)|[+-]\d{4}))?\b/,alias:"number"},{pattern:/[+-]?\b(?:\d{4}-\d{2}-\d{2}(?:[ T]\d{2}:\d{2}(?::\d{2}(?:\.\d+)?)?)?|\d{2}:\d{2}(?::\d{2}(?:\.\d+)?)?)Z?/,alias:"number"}],number:/\b(?:0x[0-9A-Fa-f]+|\d+(?:\.\d+)?(?:[Ee][+-]?\d+)?)(?:(?:min|sec|[mnµ]s|[dhms]|microsecond|tick)\b)?|[+-]?\binf\b/,operator:/=>|[!=]~|[!=<>]=?|[-+*/%|]|\.\./,punctuation:/[()\[\]{},;.:]/}},319:function(){(function(e){var t=/\\(?:[^a-z()[\]]|[a-z*]+)/i,n={"equation-command":{pattern:t,alias:"regex"}};e.languages.latex={comment:/%.*/,cdata:{pattern:/(\\begin\{((?:lstlisting|verbatim)\*?)\})[\s\S]*?(?=\\end\{\2\})/,lookbehind:!0},equation:[{pattern:/\$\$(?:\\[\s\S]|[^\\$])+\$\$|\$(?:\\[\s\S]|[^\\$])+\$|\\\([\s\S]*?\\\)|\\\[[\s\S]*?\\\]/,inside:n,alias:"string"},{pattern:/(\\begin\{((?:align|eqnarray|equation|gather|math|multline)\*?)\})[\s\S]*?(?=\\end\{\2\})/,lookbehind:!0,inside:n,alias:"string"}],keyword:{pattern:/(\\(?:begin|cite|documentclass|end|label|ref|usepackage)(?:\[[^\]]+\])?\{)[^}]+(?=\})/,lookbehind:!0},url:{pattern:/(\\url\{)[^}]+(?=\})/,lookbehind:!0},headline:{pattern:/(\\(?:chapter|frametitle|paragraph|part|section|subparagraph|subsection|subsubparagraph|subsubsection|subsubsubparagraph)\*?(?:\[[^\]]+\])?\{)[^}]+(?=\})/,lookbehind:!0,alias:"class-name"},function:{pattern:t,alias:"selector"},punctuation:/[[\]{}&]/},e.languages.tex=e.languages.latex,e.languages.context=e.languages.latex})(Prism)},7442:function(){(function(e){e.languages.latte={comment:/^\{\*[\s\S]*/,"latte-tag":{pattern:/(^\{(?:\/(?=[a-z]))?)(?:[=_]|[a-z]\w*\b(?!\())/i,lookbehind:!0,alias:"important"},delimiter:{pattern:/^\{\/?|\}$/,alias:"punctuation"},php:{pattern:/\S(?:[\s\S]*\S)?/,alias:"language-php",inside:e.languages.php}};var t=e.languages.extend("markup",{});e.languages.insertBefore("inside","attr-value",{"n-attr":{pattern:/n:[\w-]+(?:\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+))?/,inside:{"attr-name":{pattern:/^[^\s=]+/,alias:"important"},"attr-value":{pattern:/=[\s\S]+/,inside:{punctuation:[/^=/,{pattern:/^(\s*)["']|["']$/,lookbehind:!0}],php:{pattern:/\S(?:[\s\S]*\S)?/,inside:e.languages.php}}}}}},t.tag),e.hooks.add("before-tokenize",(function(n){if("latte"===n.language){var r=/\{\*[\s\S]*?\*\}|\{[^'"\s{}*](?:[^"'/{}]|\/(?![*/])|("|')(?:\\[\s\S]|(?!\1)[^\\])*\1|\/\*(?:[^*]|\*(?!\/))*\*\/)*\}/g;e.languages["markup-templating"].buildPlaceholders(n,"latte",r),n.grammar=t}})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"latte")}))})(Prism)},7802:function(){Prism.languages.less=Prism.languages.extend("css",{comment:[/\/\*[\s\S]*?\*\//,{pattern:/(^|[^\\])\/\/.*/,lookbehind:!0}],atrule:{pattern:/@[\w-](?:\((?:[^(){}]|\([^(){}]*\))*\)|[^(){};\s]|\s+(?!\s))*?(?=\s*\{)/,inside:{punctuation:/[:()]/}},selector:{pattern:/(?:@\{[\w-]+\}|[^{};\s@])(?:@\{[\w-]+\}|\((?:[^(){}]|\([^(){}]*\))*\)|[^(){};@\s]|\s+(?!\s))*?(?=\s*\{)/,inside:{variable:/@+[\w-]+/}},property:/(?:@\{[\w-]+\}|[\w-])+(?:\+_?)?(?=\s*:)/,operator:/[+\-*\/]/}),Prism.languages.insertBefore("less","property",{variable:[{pattern:/@[\w-]+\s*:/,inside:{punctuation:/:/}},/@@?[\w-]+/],"mixin-usage":{pattern:/([{;]\s*)[.#](?!\d)[\w-].*?(?=[(;])/,lookbehind:!0,alias:"function"}})},1719:function(){(function(e){for(var t=/\((?:[^();"#\\]|\\[\s\S]|;.*(?!.)|"(?:[^"\\]|\\.)*"|#(?:\{(?:(?!#\})[\s\S])*#\}|[^{])|)*\)/.source,n=5,r=0;r/g,(function(){return t}));t=t.replace(//g,/[^\s\S]/.source);var i=e.languages.lilypond={comment:/%(?:(?!\{).*|\{[\s\S]*?%\})/,"embedded-scheme":{pattern:RegExp(/(^|[=\s])#(?:"(?:[^"\\]|\\.)*"|[^\s()"]*(?:[^\s()]|))/.source.replace(//g,(function(){return t})),"m"),lookbehind:!0,greedy:!0,inside:{scheme:{pattern:/^(#)[\s\S]+$/,lookbehind:!0,alias:"language-scheme",inside:{"embedded-lilypond":{pattern:/#\{[\s\S]*?#\}/,greedy:!0,inside:{punctuation:/^#\{|#\}$/,lilypond:{pattern:/[\s\S]+/,alias:"language-lilypond",inside:null}}},rest:e.languages.scheme}},punctuation:/#/}},string:{pattern:/"(?:[^"\\]|\\.)*"/,greedy:!0},"class-name":{pattern:/(\\new\s+)[\w-]+/,lookbehind:!0},keyword:{pattern:/\\[a-z][-\w]*/i,inside:{punctuation:/^\\/}},operator:/[=|]|<<|>>/,punctuation:{pattern:/(^|[a-z\d])(?:'+|,+|[_^]?-[_^]?(?:[-+^!>._]|(?=\d))|[_^]\.?|[.!])|[{}()[\]<>^~]|\\[()[\]<>\\!]|--|__/,lookbehind:!0},number:/\b\d+(?:\/\d+)?\b/};i["embedded-scheme"].inside["scheme"].inside["embedded-lilypond"].inside["lilypond"].inside=i,e.languages.ly=i})(Prism)},7362:function(){Prism.languages["linker-script"]={comment:{pattern:/(^|\s)\/\*[\s\S]*?(?:$|\*\/)/,lookbehind:!0,greedy:!0},identifier:{pattern:/"[^"\r\n]*"/,greedy:!0},"location-counter":{pattern:/\B\.\B/,alias:"important"},section:{pattern:/(^|[^\w*])\.\w+\b/,lookbehind:!0,alias:"keyword"},function:/\b[A-Z][A-Z_]*(?=\s*\()/,number:/\b(?:0[xX][a-fA-F0-9]+|\d+)[KM]?\b/,operator:/>>=?|<<=?|->|\+\+|--|&&|\|\||::|[?:~]|[-+*/%&|^!=<>]=?/,punctuation:/[(){},;]/},Prism.languages["ld"]=Prism.languages["linker-script"]},150:function(){Prism.languages.liquid={comment:{pattern:/(^\{%\s*comment\s*%\})[\s\S]+(?=\{%\s*endcomment\s*%\}$)/,lookbehind:!0},delimiter:{pattern:/^\{(?:\{\{|[%\{])-?|-?(?:\}\}|[%\}])\}$/,alias:"punctuation"},string:{pattern:/"[^"]*"|'[^']*'/,greedy:!0},keyword:/\b(?:as|assign|break|(?:end)?(?:capture|case|comment|for|form|if|paginate|raw|style|tablerow|unless)|continue|cycle|decrement|echo|else|elsif|in|include|increment|limit|liquid|offset|range|render|reversed|section|when|with)\b/,object:/\b(?:address|all_country_option_tags|article|block|blog|cart|checkout|collection|color|country|country_option_tags|currency|current_page|current_tags|customer|customer_address|date|discount_allocation|discount_application|external_video|filter|filter_value|font|forloop|fulfillment|generic_file|gift_card|group|handle|image|line_item|link|linklist|localization|location|measurement|media|metafield|model|model_source|order|page|page_description|page_image|page_title|part|policy|product|product_option|recommendations|request|robots|routes|rule|script|search|selling_plan|selling_plan_allocation|selling_plan_group|shipping_method|shop|shop_locale|sitemap|store_availability|tax_line|template|theme|transaction|unit_price_measurement|user_agent|variant|video|video_source)\b/,function:[{pattern:/(\|\s*)\w+/,lookbehind:!0,alias:"filter"},{pattern:/(\.\s*)(?:first|last|size)/,lookbehind:!0}],boolean:/\b(?:false|nil|true)\b/,range:{pattern:/\.\./,alias:"operator"},number:/\b\d+(?:\.\d+)?\b/,operator:/[!=]=|<>|[<>]=?|[|?:=-]|\b(?:and|contains(?=\s)|or)\b/,punctuation:/[.,\[\]()]/,empty:{pattern:/\bempty\b/,alias:"keyword"}},Prism.hooks.add("before-tokenize",(function(e){var t=/\{%\s*comment\s*%\}[\s\S]*?\{%\s*endcomment\s*%\}|\{(?:%[\s\S]*?%|\{\{[\s\S]*?\}\}|\{[\s\S]*?\})\}/g,n=!1;Prism.languages["markup-templating"].buildPlaceholders(e,"liquid",t,(function(e){var t=/^\{%-?\s*(\w+)/.exec(e);if(t){var r=t[1];if("raw"===r&&!n)return n=!0,!0;if("endraw"===r)return n=!1,!0}return!n}))})),Prism.hooks.add("after-tokenize",(function(e){Prism.languages["markup-templating"].tokenizePlaceholders(e,"liquid")}))},5520:function(){(function(e){function t(e){return RegExp(/(\()/.source+"(?:"+e+")"+/(?=[\s\)])/.source)}function n(e){return RegExp(/([\s([])/.source+"(?:"+e+")"+/(?=[\s)])/.source)}var r=/(?!\d)[-+*/~!@$%^=<>{}\w]+/.source,i="&"+r,s="(\\()",o="(?=\\))",a="(?=\\s)",l=/(?:[^()]|\((?:[^()]|\((?:[^()]|\((?:[^()]|\((?:[^()]|\([^()]*\))*\))*\))*\))*\))*/.source,c={heading:{pattern:/;;;.*/,alias:["comment","title"]},comment:/;.*/,string:{pattern:/"(?:[^"\\]|\\.)*"/,greedy:!0,inside:{argument:/[-A-Z]+(?=[.,\s])/,symbol:RegExp("`"+r+"'")}},"quoted-symbol":{pattern:RegExp("#?'"+r),alias:["variable","symbol"]},"lisp-property":{pattern:RegExp(":"+r),alias:"property"},splice:{pattern:RegExp(",@?"+r),alias:["symbol","variable"]},keyword:[{pattern:RegExp(s+"(?:and|(?:cl-)?letf|cl-loop|cond|cons|error|if|(?:lexical-)?let\\*?|message|not|null|or|provide|require|setq|unless|use-package|when|while)"+a),lookbehind:!0},{pattern:RegExp(s+"(?:append|by|collect|concat|do|finally|for|in|return)"+a),lookbehind:!0}],declare:{pattern:t(/declare/.source),lookbehind:!0,alias:"keyword"},interactive:{pattern:t(/interactive/.source),lookbehind:!0,alias:"keyword"},boolean:{pattern:n(/nil|t/.source),lookbehind:!0},number:{pattern:n(/[-+]?\d+(?:\.\d*)?/.source),lookbehind:!0},defvar:{pattern:RegExp(s+"def(?:const|custom|group|var)\\s+"+r),lookbehind:!0,inside:{keyword:/^def[a-z]+/,variable:RegExp(r)}},defun:{pattern:RegExp(s+/(?:cl-)?(?:defmacro|defun\*?)\s+/.source+r+/\s+\(/.source+l+/\)/.source),lookbehind:!0,greedy:!0,inside:{keyword:/^(?:cl-)?def\S+/,arguments:null,function:{pattern:RegExp("(^\\s)"+r),lookbehind:!0},punctuation:/[()]/}},lambda:{pattern:RegExp(s+"lambda\\s+\\(\\s*(?:&?"+r+"(?:\\s+&?"+r+")*\\s*)?\\)"),lookbehind:!0,greedy:!0,inside:{keyword:/^lambda/,arguments:null,punctuation:/[()]/}},car:{pattern:RegExp(s+r),lookbehind:!0},punctuation:[/(?:['`,]?\(|[)\[\]])/,{pattern:/(\s)\.(?=\s)/,lookbehind:!0}]},u={"lisp-marker":RegExp(i),varform:{pattern:RegExp(/\(/.source+r+/\s+(?=\S)/.source+l+/\)/.source),inside:c},argument:{pattern:RegExp(/(^|[\s(])/.source+r),lookbehind:!0,alias:"variable"},rest:c},d="\\S+(?:\\s+\\S+)*",h={pattern:RegExp(s+l+o),lookbehind:!0,inside:{"rest-vars":{pattern:RegExp("&(?:body|rest)\\s+"+d),inside:u},"other-marker-vars":{pattern:RegExp("&(?:aux|optional)\\s+"+d),inside:u},keys:{pattern:RegExp("&key\\s+"+d+"(?:\\s+&allow-other-keys)?"),inside:u},argument:{pattern:RegExp(r),alias:"variable"},punctuation:/[()]/}};c["lambda"].inside.arguments=h,c["defun"].inside.arguments=e.util.clone(h),c["defun"].inside.arguments.inside.sublist=h,e.languages.lisp=c,e.languages.elisp=c,e.languages.emacs=c,e.languages["emacs-lisp"]=c})(Prism)},6347:function(){Prism.languages.livescript={comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?\*\//,lookbehind:!0},{pattern:/(^|[^\\])#.*/,lookbehind:!0}],"interpolated-string":{pattern:/(^|[^"])("""|")(?:\\[\s\S]|(?!\2)[^\\])*\2(?!")/,lookbehind:!0,greedy:!0,inside:{variable:{pattern:/(^|[^\\])#[a-z_](?:-?[a-z]|[\d_])*/m,lookbehind:!0},interpolation:{pattern:/(^|[^\\])#\{[^}]+\}/m,lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^#\{|\}$/,alias:"variable"}}},string:/[\s\S]+/}},string:[{pattern:/('''|')(?:\\[\s\S]|(?!\1)[^\\])*\1/,greedy:!0},{pattern:/<\[[\s\S]*?\]>/,greedy:!0},/\\[^\s,;\])}]+/],regex:[{pattern:/\/\/(?:\[[^\r\n\]]*\]|\\.|(?!\/\/)[^\\\[])+\/\/[gimyu]{0,5}/,greedy:!0,inside:{comment:{pattern:/(^|[^\\])#.*/,lookbehind:!0}}},{pattern:/\/(?:\[[^\r\n\]]*\]|\\.|[^/\\\r\n\[])+\/[gimyu]{0,5}/,greedy:!0}],keyword:{pattern:/(^|(?!-).)\b(?:break|case|catch|class|const|continue|default|do|else|extends|fallthrough|finally|for(?: ever)?|function|if|implements|it|let|loop|new|null|otherwise|own|return|super|switch|that|then|this|throw|try|unless|until|var|void|when|while|yield)(?!-)\b/m,lookbehind:!0},"keyword-operator":{pattern:/(^|[^-])\b(?:(?:delete|require|typeof)!|(?:and|by|delete|export|from|import(?: all)?|in|instanceof|is(?: not|nt)?|not|of|or|til|to|typeof|with|xor)(?!-)\b)/m,lookbehind:!0,alias:"operator"},boolean:{pattern:/(^|[^-])\b(?:false|no|off|on|true|yes)(?!-)\b/m,lookbehind:!0},argument:{pattern:/(^|(?!\.&\.)[^&])&(?!&)\d*/m,lookbehind:!0,alias:"variable"},number:/\b(?:\d+~[\da-z]+|\d[\d_]*(?:\.\d[\d_]*)?(?:[a-z]\w*)?)/i,identifier:/[a-z_](?:-?[a-z]|[\d_])*/i,operator:[{pattern:/( )\.(?= )/,lookbehind:!0},/\.(?:[=~]|\.\.?)|\.(?:[&|^]|<<|>>>?)\.|:(?:=|:=?)|&&|\|[|>]|<(?:<[>=?]?|-(?:->?|>)?|\+\+?|@@?|%%?|\*\*?|!(?:~?=|--?>|~?~>)?|~(?:~?>|=)?|==?|\^\^?|[\/?]/],punctuation:/[(){}\[\]|.,:;`]/},Prism.languages.livescript["interpolated-string"].inside["interpolation"].inside.rest=Prism.languages.livescript},5153:function(){(function(e){e.languages.llvm={comment:/;.*/,string:{pattern:/"[^"]*"/,greedy:!0},boolean:/\b(?:false|true)\b/,variable:/[%@!#](?:(?!\d)(?:[-$.\w]|\\[a-f\d]{2})+|\d+)/i,label:/(?!\d)(?:[-$.\w]|\\[a-f\d]{2})+:/i,type:{pattern:/\b(?:double|float|fp128|half|i[1-9]\d*|label|metadata|ppc_fp128|token|void|x86_fp80|x86_mmx)\b/,alias:"class-name"},keyword:/\b[a-z_][a-z_0-9]*\b/,number:/[+-]?\b\d+(?:\.\d+)?(?:[eE][+-]?\d+)?\b|\b0x[\dA-Fa-f]+\b|\b0xK[\dA-Fa-f]{20}\b|\b0x[ML][\dA-Fa-f]{32}\b|\b0xH[\dA-Fa-f]{4}\b/,punctuation:/[{}[\];(),.!*=<>]/}})(Prism)},3335:function(){Prism.languages.log={string:{pattern:/"(?:[^"\\\r\n]|\\.)*"|'(?![st] | \w)(?:[^'\\\r\n]|\\.)*'/,greedy:!0},exception:{pattern:/(^|[^\w.])[a-z][\w.]*(?:Error|Exception):.*(?:(?:\r\n?|\n)[ \t]*(?:at[ \t].+|\.{3}.*|Caused by:.*))+(?:(?:\r\n?|\n)[ \t]*\.\.\. .*)?/,lookbehind:!0,greedy:!0,alias:["javastacktrace","language-javastacktrace"],inside:Prism.languages["javastacktrace"]||{keyword:/\bat\b/,function:/[a-z_][\w$]*(?=\()/,punctuation:/[.:()]/}},level:[{pattern:/\b(?:ALERT|CRIT|CRITICAL|EMERG|EMERGENCY|ERR|ERROR|FAILURE|FATAL|SEVERE)\b/,alias:["error","important"]},{pattern:/\b(?:WARN|WARNING|WRN)\b/,alias:["warning","important"]},{pattern:/\b(?:DISPLAY|INF|INFO|NOTICE|STATUS)\b/,alias:["info","keyword"]},{pattern:/\b(?:DBG|DEBUG|FINE)\b/,alias:["debug","keyword"]},{pattern:/\b(?:FINER|FINEST|TRACE|TRC|VERBOSE|VRB)\b/,alias:["trace","comment"]}],property:{pattern:/((?:^|[\]|])[ \t]*)[a-z_](?:[\w-]|\b\/\b)*(?:[. ]\(?\w(?:[\w-]|\b\/\b)*\)?)*:(?=\s)/im,lookbehind:!0},separator:{pattern:/(^|[^-+])-{3,}|={3,}|\*{3,}|- - /m,lookbehind:!0,alias:"comment"},url:/\b(?:file|ftp|https?):\/\/[^\s|,;'"]*[^\s|,;'">.]/,email:{pattern:/(^|\s)[-\w+.]+@[a-z][a-z0-9-]*(?:\.[a-z][a-z0-9-]*)+(?=\s)/,lookbehind:!0,alias:"url"},"ip-address":{pattern:/\b(?:\d{1,3}(?:\.\d{1,3}){3})\b/,alias:"constant"},"mac-address":{pattern:/\b[a-f0-9]{2}(?::[a-f0-9]{2}){5}\b/i,alias:"constant"},domain:{pattern:/(^|\s)[a-z][a-z0-9-]*(?:\.[a-z][a-z0-9-]*)*\.[a-z][a-z0-9-]+(?=\s)/,lookbehind:!0,alias:"constant"},uuid:{pattern:/\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b/i,alias:"constant"},hash:{pattern:/\b(?:[a-f0-9]{32}){1,2}\b/i,alias:"constant"},"file-path":{pattern:/\b[a-z]:[\\/][^\s|,;:(){}\[\]"']+|(^|[\s:\[\](>|])\.{0,2}\/\w[^\s|,;:(){}\[\]"']*/i,lookbehind:!0,greedy:!0,alias:"string"},date:{pattern:RegExp(/\b\d{4}[-/]\d{2}[-/]\d{2}(?:T(?=\d{1,2}:)|(?=\s\d{1,2}:))/.source+"|"+/\b\d{1,4}[-/ ](?:\d{1,2}|Apr|Aug|Dec|Feb|Jan|Jul|Jun|Mar|May|Nov|Oct|Sep)[-/ ]\d{2,4}T?\b/.source+"|"+/\b(?:(?:Fri|Mon|Sat|Sun|Thu|Tue|Wed)(?:\s{1,2}(?:Apr|Aug|Dec|Feb|Jan|Jul|Jun|Mar|May|Nov|Oct|Sep))?|Apr|Aug|Dec|Feb|Jan|Jul|Jun|Mar|May|Nov|Oct|Sep)\s{1,2}\d{1,2}\b/.source,"i"),alias:"number"},time:{pattern:/\b\d{1,2}:\d{1,2}:\d{1,2}(?:[.,:]\d+)?(?:\s?[+-]\d{2}:?\d{2}|Z)?\b/,alias:"number"},boolean:/\b(?:false|null|true)\b/i,number:{pattern:/(^|[^.\w])(?:0x[a-f0-9]+|0o[0-7]+|0b[01]+|v?\d[\da-f]*(?:\.\d+)*(?:e[+-]?\d+)?[a-z]{0,3}\b)\b(?!\.\w)/i,lookbehind:!0},operator:/[;:?<=>~/@!$%&+\-|^(){}*#]/,punctuation:/[\[\].,]/}},6555:function(){Prism.languages.lolcode={comment:[/\bOBTW\s[\s\S]*?\sTLDR\b/,/\bBTW.+/],string:{pattern:/"(?::.|[^":])*"/,inside:{variable:/:\{[^}]+\}/,symbol:[/:\([a-f\d]+\)/i,/:\[[^\]]+\]/,/:[)>o":]/]},greedy:!0},number:/(?:\B-)?(?:\b\d+(?:\.\d*)?|\B\.\d+)/,symbol:{pattern:/(^|\s)(?:A )?(?:BUKKIT|NOOB|NUMBAR|NUMBR|TROOF|YARN)(?=\s|,|$)/,lookbehind:!0,inside:{keyword:/A(?=\s)/}},label:{pattern:/((?:^|\s)(?:IM IN YR|IM OUTTA YR) )[a-zA-Z]\w*/,lookbehind:!0,alias:"string"},function:{pattern:/((?:^|\s)(?:HOW IZ I|I IZ|IZ) )[a-zA-Z]\w*/,lookbehind:!0},keyword:[{pattern:/(^|\s)(?:AN|FOUND YR|GIMMEH|GTFO|HAI|HAS A|HOW IZ I|I HAS A|I IZ|IF U SAY SO|IM IN YR|IM OUTTA YR|IS NOW(?: A)?|ITZ(?: A)?|IZ|KTHX|KTHXBYE|LIEK(?: A)?|MAEK|MEBBE|MKAY|NERFIN|NO WAI|O HAI IM|O RLY\?|OIC|OMG|OMGWTF|R|SMOOSH|SRS|TIL|UPPIN|VISIBLE|WILE|WTF\?|YA RLY|YR)(?=\s|,|$)/,lookbehind:!0},/'Z(?=\s|,|$)/],boolean:{pattern:/(^|\s)(?:FAIL|WIN)(?=\s|,|$)/,lookbehind:!0},variable:{pattern:/(^|\s)IT(?=\s|,|$)/,lookbehind:!0},operator:{pattern:/(^|\s)(?:NOT|BOTH SAEM|DIFFRINT|(?:ALL|ANY|BIGGR|BOTH|DIFF|EITHER|MOD|PRODUKT|QUOSHUNT|SMALLR|SUM|WON) OF)(?=\s|,|$)/,lookbehind:!0},punctuation:/\.{3}|…|,|!/}},6841:function(){Prism.languages.lua={comment:/^#!.+|--(?:\[(=*)\[[\s\S]*?\]\1\]|.*)/m,string:{pattern:/(["'])(?:(?!\1)[^\\\r\n]|\\z(?:\r\n|\s)|\\(?:\r\n|[^z]))*\1|\[(=*)\[[\s\S]*?\]\2\]/,greedy:!0},number:/\b0x[a-f\d]+(?:\.[a-f\d]*)?(?:p[+-]?\d+)?\b|\b\d+(?:\.\B|(?:\.\d*)?(?:e[+-]?\d+)?\b)|\B\.\d+(?:e[+-]?\d+)?\b/i,keyword:/\b(?:and|break|do|else|elseif|end|false|for|function|goto|if|in|local|nil|not|or|repeat|return|then|true|until|while)\b/,function:/(?!\d)\w+(?=\s*(?:[({]))/,operator:[/[-+*%^&|#]|\/\/?|<[<=]?|>[>=]?|[=~]=?/,{pattern:/(^|[^.])\.\.(?!\.)/,lookbehind:!0}],punctuation:/[\[\](){},;]|\.+|:+/}},6004:function(){Prism.languages.magma={output:{pattern:/^(>.*(?:\r(?:\n|(?!\n))|\n))(?!>)(?:.+|(?:\r(?:\n|(?!\n))|\n)(?!>).*)(?:(?:\r(?:\n|(?!\n))|\n)(?!>).*)*/m,lookbehind:!0,greedy:!0},comment:{pattern:/\/\/.*|\/\*[\s\S]*?\*\//,greedy:!0},string:{pattern:/(^|[^\\"])"(?:[^\r\n\\"]|\\.)*"/,lookbehind:!0,greedy:!0},keyword:/\b(?:_|adj|and|assert|assert2|assert3|assigned|break|by|case|cat|catch|clear|cmpeq|cmpne|continue|declare|default|delete|diff|div|do|elif|else|end|eq|error|eval|exists|exit|for|forall|forward|fprintf|freeze|function|ge|gt|if|iload|import|in|intrinsic|is|join|le|load|local|lt|meet|mod|ne|not|notadj|notin|notsubset|or|print|printf|procedure|quit|random|read|readi|repeat|require|requirege|requirerange|restore|return|save|sdiff|select|subset|then|time|to|try|until|vprint|vprintf|vtime|when|where|while|xor)\b/,boolean:/\b(?:false|true)\b/,generator:{pattern:/\b[a-z_]\w*(?=\s*<)/i,alias:"class-name"},function:/\b[a-z_]\w*(?=\s*\()/i,number:{pattern:/(^|[^\w.]|\.\.)(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?(?:_[a-z]?)?(?=$|[^\w.]|\.\.)/,lookbehind:!0},operator:/->|[-+*/^~!|#=]|:=|\.\./,punctuation:/[()[\]{}<>,;.:]/}},8443:function(){Prism.languages.makefile={comment:{pattern:/(^|[^\\])#(?:\\(?:\r\n|[\s\S])|[^\\\r\n])*/,lookbehind:!0},string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},"builtin-target":{pattern:/\.[A-Z][^:#=\s]+(?=\s*:(?!=))/,alias:"builtin"},target:{pattern:/^(?:[^:=\s]|[ \t]+(?![\s:]))+(?=\s*:(?!=))/m,alias:"symbol",inside:{variable:/\$+(?:(?!\$)[^(){}:#=\s]+|(?=[({]))/}},variable:/\$+(?:(?!\$)[^(){}:#=\s]+|\([@*%<^+?][DF]\)|(?=[({]))/,keyword:/-include\b|\b(?:define|else|endef|endif|export|ifn?def|ifn?eq|include|override|private|sinclude|undefine|unexport|vpath)\b/,function:{pattern:/(\()(?:abspath|addsuffix|and|basename|call|dir|error|eval|file|filter(?:-out)?|findstring|firstword|flavor|foreach|guile|if|info|join|lastword|load|notdir|or|origin|patsubst|realpath|shell|sort|strip|subst|suffix|value|warning|wildcard|word(?:list|s)?)(?=[ \t])/,lookbehind:!0},operator:/(?:::|[?:+!])?=|[|@]/,punctuation:/[:;(){}]/}},4064:function(){(function(e){var t=/(?:\\.|[^\\\n\r]|(?:\n|\r\n?)(?![\r\n]))/.source;function n(e){return e=e.replace(//g,(function(){return t})),RegExp(/((?:^|[^\\])(?:\\{2})*)/.source+"(?:"+e+")")}var r=/(?:\\.|``(?:[^`\r\n]|`(?!`))+``|`[^`\r\n]+`|[^\\|\r\n`])+/.source,i=/\|?__(?:\|__)+\|?(?:(?:\n|\r\n?)|(?![\s\S]))/.source.replace(/__/g,(function(){return r})),s=/\|?[ \t]*:?-{3,}:?[ \t]*(?:\|[ \t]*:?-{3,}:?[ \t]*)+\|?(?:\n|\r\n?)/.source;e.languages.markdown=e.languages.extend("markup",{}),e.languages.insertBefore("markdown","prolog",{"front-matter-block":{pattern:/(^(?:\s*[\r\n])?)---(?!.)[\s\S]*?[\r\n]---(?!.)/,lookbehind:!0,greedy:!0,inside:{punctuation:/^---|---$/,"front-matter":{pattern:/\S+(?:\s+\S+)*/,alias:["yaml","language-yaml"],inside:e.languages.yaml}}},blockquote:{pattern:/^>(?:[\t ]*>)*/m,alias:"punctuation"},table:{pattern:RegExp("^"+i+s+"(?:"+i+")*","m"),inside:{"table-data-rows":{pattern:RegExp("^("+i+s+")(?:"+i+")*$"),lookbehind:!0,inside:{"table-data":{pattern:RegExp(r),inside:e.languages.markdown},punctuation:/\|/}},"table-line":{pattern:RegExp("^("+i+")"+s+"$"),lookbehind:!0,inside:{punctuation:/\||:?-{3,}:?/}},"table-header-row":{pattern:RegExp("^"+i+"$"),inside:{"table-header":{pattern:RegExp(r),alias:"important",inside:e.languages.markdown},punctuation:/\|/}}}},code:[{pattern:/((?:^|\n)[ \t]*\n|(?:^|\r\n?)[ \t]*\r\n?)(?: {4}|\t).+(?:(?:\n|\r\n?)(?: {4}|\t).+)*/,lookbehind:!0,alias:"keyword"},{pattern:/^```[\s\S]*?^```$/m,greedy:!0,inside:{"code-block":{pattern:/^(```.*(?:\n|\r\n?))[\s\S]+?(?=(?:\n|\r\n?)^```$)/m,lookbehind:!0},"code-language":{pattern:/^(```).+/,lookbehind:!0},punctuation:/```/}}],title:[{pattern:/\S.*(?:\n|\r\n?)(?:==+|--+)(?=[ \t]*$)/m,alias:"important",inside:{punctuation:/==+$|--+$/}},{pattern:/(^\s*)#.+/m,lookbehind:!0,alias:"important",inside:{punctuation:/^#+|#+$/}}],hr:{pattern:/(^\s*)([*-])(?:[\t ]*\2){2,}(?=\s*$)/m,lookbehind:!0,alias:"punctuation"},list:{pattern:/(^\s*)(?:[*+-]|\d+\.)(?=[\t ].)/m,lookbehind:!0,alias:"punctuation"},"url-reference":{pattern:/!?\[[^\]]+\]:[\t ]+(?:\S+|<(?:\\.|[^>\\])+>)(?:[\t ]+(?:"(?:\\.|[^"\\])*"|'(?:\\.|[^'\\])*'|\((?:\\.|[^)\\])*\)))?/,inside:{variable:{pattern:/^(!?\[)[^\]]+/,lookbehind:!0},string:/(?:"(?:\\.|[^"\\])*"|'(?:\\.|[^'\\])*'|\((?:\\.|[^)\\])*\))$/,punctuation:/^[\[\]!:]|[<>]/},alias:"url"},bold:{pattern:n(/\b__(?:(?!_)|_(?:(?!_))+_)+__\b|\*\*(?:(?!\*)|\*(?:(?!\*))+\*)+\*\*/.source),lookbehind:!0,greedy:!0,inside:{content:{pattern:/(^..)[\s\S]+(?=..$)/,lookbehind:!0,inside:{}},punctuation:/\*\*|__/}},italic:{pattern:n(/\b_(?:(?!_)|__(?:(?!_))+__)+_\b|\*(?:(?!\*)|\*\*(?:(?!\*))+\*\*)+\*/.source),lookbehind:!0,greedy:!0,inside:{content:{pattern:/(^.)[\s\S]+(?=.$)/,lookbehind:!0,inside:{}},punctuation:/[*_]/}},strike:{pattern:n(/(~~?)(?:(?!~))+\2/.source),lookbehind:!0,greedy:!0,inside:{content:{pattern:/(^~~?)[\s\S]+(?=\1$)/,lookbehind:!0,inside:{}},punctuation:/~~?/}},"code-snippet":{pattern:/(^|[^\\`])(?:``[^`\r\n]+(?:`[^`\r\n]+)*``(?!`)|`[^`\r\n]+`(?!`))/,lookbehind:!0,greedy:!0,alias:["code","keyword"]},url:{pattern:n(/!?\[(?:(?!\]))+\](?:\([^\s)]+(?:[\t ]+"(?:\\.|[^"\\])*")?\)|[ \t]?\[(?:(?!\]))+\])/.source),lookbehind:!0,greedy:!0,inside:{operator:/^!/,content:{pattern:/(^\[)[^\]]+(?=\])/,lookbehind:!0,inside:{}},variable:{pattern:/(^\][ \t]?\[)[^\]]+(?=\]$)/,lookbehind:!0},url:{pattern:/(^\]\()[^\s)]+/,lookbehind:!0},string:{pattern:/(^[ \t]+)"(?:\\.|[^"\\])*"(?=\)$)/,lookbehind:!0}}}}),["url","bold","italic","strike"].forEach((function(t){["url","bold","italic","strike","code-snippet"].forEach((function(n){t!==n&&(e.languages.markdown[t].inside.content.inside[n]=e.languages.markdown[n])}))})),e.hooks.add("after-tokenize",(function(e){function t(e){if(e&&"string"!==typeof e)for(var n=0,r=e.length;n",quot:'"'},l=String.fromCodePoint||String.fromCharCode;function c(e){var t=e.replace(o,"");return t=t.replace(/&(\w{1,8}|#x?[\da-f]{1,8});/gi,(function(e,t){var n;if(t=t.toLowerCase(),"#"===t[0])return n="x"===t[1]?parseInt(t.slice(2),16):Number(t.slice(1)),l(n);var r=a[t];return r||e})),t}e.languages.md=e.languages.markdown})(Prism)},6854:function(){(function(e){function t(e,t){return"___"+e.toUpperCase()+t+"___"}Object.defineProperties(e.languages["markup-templating"]={},{buildPlaceholders:{value:function(n,r,i,s){if(n.language===r){var o=n.tokenStack=[];n.code=n.code.replace(i,(function(e){if("function"===typeof s&&!s(e))return e;var i,a=o.length;while(-1!==n.code.indexOf(i=t(r,a)))++a;return o[a]=e,i})),n.grammar=e.languages.markup}}},tokenizePlaceholders:{value:function(n,r){if(n.language===r&&n.tokenStack){n.grammar=e.languages[r];var i=0,s=Object.keys(n.tokenStack);o(n.tokens)}function o(a){for(var l=0;l=s.length)break;var c=a[l];if("string"===typeof c||c.content&&"string"===typeof c.content){var u=s[i],d=n.tokenStack[u],h="string"===typeof c?c:c.content,p=t(r,u),f=h.indexOf(p);if(f>-1){++i;var g=h.substring(0,f),m=new e.Token(r,e.tokenize(d,n.grammar),"language-"+r,d),b=h.substring(f+p.length),_=[];g&&_.push.apply(_,o([g])),_.push(m),b&&_.push.apply(_,o([b])),"string"===typeof c?a.splice.apply(a,[l,1].concat(_)):c.content=_}}else c.content&&o(c.content)}return a}}}})})(Prism)},4335:function(){Prism.languages.markup={comment:{pattern://,greedy:!0},prolog:{pattern:/<\?[\s\S]+?\?>/,greedy:!0},doctype:{pattern:/"'[\]]|"[^"]*"|'[^']*')+(?:\[(?:[^<"'\]]|"[^"]*"|'[^']*'|<(?!!--)|)*\]\s*)?>/i,greedy:!0,inside:{"internal-subset":{pattern:/(^[^\[]*\[)[\s\S]+(?=\]>$)/,lookbehind:!0,greedy:!0,inside:null},string:{pattern:/"[^"]*"|'[^']*'/,greedy:!0},punctuation:/^$|[[\]]/,"doctype-tag":/^DOCTYPE/i,name:/[^\s<>'"]+/}},cdata:{pattern://i,greedy:!0},tag:{pattern:/<\/?(?!\d)[^\s>\/=$<%]+(?:\s(?:\s*[^\s>\/=]+(?:\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+(?=[\s>]))|(?=[\s/>])))+)?\s*\/?>/,greedy:!0,inside:{tag:{pattern:/^<\/?[^\s>\/]+/,inside:{punctuation:/^<\/?/,namespace:/^[^\s>\/:]+:/}},"special-attr":[],"attr-value":{pattern:/=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+)/,inside:{punctuation:[{pattern:/^=/,alias:"attr-equals"},{pattern:/^(\s*)["']|["']$/,lookbehind:!0}]}},punctuation:/\/?>/,"attr-name":{pattern:/[^\s>\/]+/,inside:{namespace:/^[^\s>\/:]+:/}}}},entity:[{pattern:/&[\da-z]{1,8};/i,alias:"named-entity"},/&#x?[\da-f]{1,8};/i]},Prism.languages.markup["tag"].inside["attr-value"].inside["entity"]=Prism.languages.markup["entity"],Prism.languages.markup["doctype"].inside["internal-subset"].inside=Prism.languages.markup,Prism.hooks.add("wrap",(function(e){"entity"===e.type&&(e.attributes["title"]=e.content.replace(/&/,"&"))})),Object.defineProperty(Prism.languages.markup.tag,"addInlined",{value:function(e,t){var n={};n["language-"+t]={pattern:/(^$)/i,lookbehind:!0,inside:Prism.languages[t]},n["cdata"]=/^$/i;var r={"included-cdata":{pattern://i,inside:n}};r["language-"+t]={pattern:/[\s\S]+/,inside:Prism.languages[t]};var i={};i[e]={pattern:RegExp(/(<__[^>]*>)(?:))*\]\]>|(?!)/.source.replace(/__/g,(function(){return e})),"i"),lookbehind:!0,greedy:!0,inside:r},Prism.languages.insertBefore("markup","cdata",i)}}),Object.defineProperty(Prism.languages.markup.tag,"addAttribute",{value:function(e,t){Prism.languages.markup.tag.inside["special-attr"].push({pattern:RegExp(/(^|["'\s])/.source+"(?:"+e+")"+/\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+(?=[\s>]))/.source,"i"),lookbehind:!0,inside:{"attr-name":/^[^\s=]+/,"attr-value":{pattern:/=[\s\S]+/,inside:{value:{pattern:/(^=\s*(["']|(?!["'])))\S[\s\S]*(?=\2$)/,lookbehind:!0,alias:[t,"language-"+t],inside:Prism.languages[t]},punctuation:[{pattern:/^=/,alias:"attr-equals"},/"|'/]}}}})}}),Prism.languages.html=Prism.languages.markup,Prism.languages.mathml=Prism.languages.markup,Prism.languages.svg=Prism.languages.markup,Prism.languages.xml=Prism.languages.extend("markup",{}),Prism.languages.ssml=Prism.languages.xml,Prism.languages.atom=Prism.languages.xml,Prism.languages.rss=Prism.languages.xml},6268:function(){(function(e){var t=/\b(?:(?:col|row)?vector|matrix|scalar)\b/.source,n=/\bvoid\b||\b(?:complex|numeric|pointer(?:\s*\([^()]*\))?|real|string|(?:class|struct)\s+\w+|transmorphic)(?:\s*)?/.source.replace(//g,t);e.languages.mata={comment:{pattern:/\/\/.*|\/\*(?:[^*/]|\*(?!\/)|\/(?!\*)|\/\*(?:[^*]|\*(?!\/))*\*\/)*\*\//,greedy:!0},string:{pattern:/"[^"\r\n]*"|[‘`']".*?"[’`']/,greedy:!0},"class-name":{pattern:/(\b(?:class|extends|struct)\s+)\w+(?=\s*(?:\{|\bextends\b))/,lookbehind:!0},type:{pattern:RegExp(n),alias:"class-name",inside:{punctuation:/[()]/,keyword:/\b(?:class|function|struct|void)\b/}},keyword:/\b(?:break|class|continue|do|else|end|extends|external|final|for|function|goto|if|pragma|private|protected|public|return|static|struct|unset|unused|version|virtual|while)\b/,constant:/\bNULL\b/,number:{pattern:/(^|[^\w.])(?:\d+(?:\.\d+)?(?:e[+-]?\d+)?|\d[a-f0-9]*(?:\.[a-f0-9]+)?x[+-]?\d+)i?(?![\w.])/i,lookbehind:!0},missing:{pattern:/(^|[^\w.])(?:\.[a-z]?)(?![\w.])/,lookbehind:!0,alias:"symbol"},function:/\b[a-z_]\w*(?=\s*\()/i,operator:/\.\.|\+\+|--|&&|\|\||:?(?:[!=<>]=|[+\-*/^<>&|:])|[!?=\\#’`']/,punctuation:/[()[\]{},;.]/}})(Prism)},1169:function(){Prism.languages.matlab={comment:[/%\{[\s\S]*?\}%/,/%.+/],string:{pattern:/\B'(?:''|[^'\r\n])*'/,greedy:!0},number:/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[eE][+-]?\d+)?(?:[ij])?|\b[ij]\b/,keyword:/\b(?:NaN|break|case|catch|continue|else|elseif|end|for|function|if|inf|otherwise|parfor|pause|pi|return|switch|try|while)\b/,function:/\b(?!\d)\w+(?=\s*\()/,operator:/\.?[*^\/\\']|[+\-:@]|[<>=~]=?|&&?|\|\|?/,punctuation:/\.{3}|[.,;\[\](){}!]/}},3965:function(){(function(e){var t=/\b(?:about|and|animate|as|at|attributes|by|case|catch|collect|continue|coordsys|do|else|exit|fn|for|from|function|global|if|in|local|macroscript|mapped|max|not|of|off|on|or|parameters|persistent|plugin|rcmenu|return|rollout|set|struct|then|throw|to|tool|try|undo|utility|when|where|while|with)\b/i;e.languages.maxscript={comment:{pattern:/\/\*[\s\S]*?(?:\*\/|$)|--.*/,greedy:!0},string:{pattern:/(^|[^"\\@])(?:"(?:[^"\\]|\\[\s\S])*"|@"[^"]*")/,lookbehind:!0,greedy:!0},path:{pattern:/\$(?:[\w/\\.*?]|'[^']*')*/,greedy:!0,alias:"string"},"function-call":{pattern:RegExp("((?:"+/^/.source+"|"+/[;=<>+\-*/^({\[]/.source+"|"+/\b(?:and|by|case|catch|collect|do|else|if|in|not|or|return|then|to|try|where|while|with)\b/.source+")[ \t]*)(?!"+t.source+")"+/[a-z_]\w*\b/.source+"(?=[ \t]*(?:(?!"+t.source+")"+/[a-z_]/.source+"|"+/\d|-\.?\d/.source+"|"+/[({'"$@#?]/.source+"))","im"),lookbehind:!0,greedy:!0,alias:"function"},"function-definition":{pattern:/(\b(?:fn|function)\s+)\w+\b/i,lookbehind:!0,alias:"function"},argument:{pattern:/\b[a-z_]\w*(?=:)/i,alias:"attr-name"},keyword:t,boolean:/\b(?:false|true)\b/,time:{pattern:/(^|[^\w.])(?:(?:(?:\d+(?:\.\d*)?|\.\d+)(?:[eEdD][+-]\d+|[LP])?[msft])+|\d+:\d+(?:\.\d*)?)(?![\w.:])/,lookbehind:!0,alias:"number"},number:[{pattern:/(^|[^\w.])(?:(?:\d+(?:\.\d*)?|\.\d+)(?:[eEdD][+-]\d+|[LP])?|0x[a-fA-F0-9]+)(?![\w.:])/,lookbehind:!0},/\b(?:e|pi)\b/],constant:/\b(?:dontcollect|ok|silentValue|undefined|unsupplied)\b/,color:{pattern:/\b(?:black|blue|brown|gray|green|orange|red|white|yellow)\b/i,alias:"constant"},operator:/[-+*/<>=!]=?|[&^?]|#(?!\()/,punctuation:/[()\[\]{}.:,;]|#(?=\()|\\$/m}})(Prism)},6185:function(){Prism.languages.mel={comment:{pattern:/\/\/.*|\/\*[\s\S]*?\*\//,greedy:!0},code:{pattern:/`(?:\\.|[^\\`])*`/,greedy:!0,alias:"italic",inside:{delimiter:{pattern:/^`|`$/,alias:"punctuation"},statement:{pattern:/[\s\S]+/,inside:null}}},string:{pattern:/"(?:\\.|[^\\"\r\n])*"/,greedy:!0},variable:/\$\w+/,number:/\b0x[\da-fA-F]+\b|\b\d+(?:\.\d*)?|\B\.\d+/,flag:{pattern:/-[^\d\W]\w*/,alias:"operator"},keyword:/\b(?:break|case|continue|default|do|else|float|for|global|if|in|int|matrix|proc|return|string|switch|vector|while)\b/,function:{pattern:/((?:^|[{;])[ \t]*)[a-z_]\w*\b(?!\s*(?:\.(?!\.)|[[{=]))|\b[a-z_]\w*(?=[ \t]*\()/im,lookbehind:!0,greedy:!0},"tensor-punctuation":{pattern:/<<|>>/,alias:"punctuation"},operator:/\+[+=]?|-[-=]?|&&|\|\||[<>]=?|[*\/!=]=?|[%^]/,punctuation:/[.,:;?\[\](){}]/},Prism.languages.mel["code"].inside["statement"].inside=Prism.languages.mel},3099:function(){Prism.languages.mermaid={comment:{pattern:/%%.*/,greedy:!0},style:{pattern:/^([ \t]*(?:classDef|linkStyle|style)[ \t]+[\w$-]+[ \t]+)\w.*[^\s;]/m,lookbehind:!0,inside:{property:/\b\w[\w-]*(?=[ \t]*:)/,operator:/:/,punctuation:/,/}},"inter-arrow-label":{pattern:/([^<>ox.=-])(?:-[-.]|==)(?![<>ox.=-])[ \t]*(?:"[^"\r\n]*"|[^\s".=-](?:[^\r\n.=-]*[^\s.=-])?)[ \t]*(?:\.+->?|--+[->]|==+[=>])(?![<>ox.=-])/,lookbehind:!0,greedy:!0,inside:{arrow:{pattern:/(?:\.+->?|--+[->]|==+[=>])$/,alias:"operator"},label:{pattern:/^([\s\S]{2}[ \t]*)\S(?:[\s\S]*\S)?/,lookbehind:!0,alias:"property"},"arrow-head":{pattern:/^\S+/,alias:["arrow","operator"]}}},arrow:[{pattern:/(^|[^{}|o.-])[|}][|o](?:--|\.\.)[|o][|{](?![{}|o.-])/,lookbehind:!0,alias:"operator"},{pattern:/(^|[^<>ox.=-])(?:[ox]?|(?:==+|--+|-\.*-)[>ox]|===+|---+|-\.+-)(?![<>ox.=-])/,lookbehind:!0,alias:"operator"},{pattern:/(^|[^<>()x-])(?:--?(?:>>|[x>)])(?![<>()x])|(?:<<|[x<(])--?(?!-))/,lookbehind:!0,alias:"operator"},{pattern:/(^|[^<>|*o.-])(?:[*o]--|--[*o]|<\|?(?:--|\.\.)|(?:--|\.\.)\|?>|--|\.\.)(?![<>|*o.-])/,lookbehind:!0,alias:"operator"}],label:{pattern:/(^|[^|<])\|(?:[^\r\n"|]|"[^"\r\n]*")+\|/,lookbehind:!0,greedy:!0,alias:"property"},text:{pattern:/(?:[(\[{]+|\b>)(?:[^\r\n"()\[\]{}]|"[^"\r\n]*")+(?:[)\]}]+|>)/,alias:"string"},string:{pattern:/"[^"\r\n]*"/,greedy:!0},annotation:{pattern:/<<(?:abstract|choice|enumeration|fork|interface|join|service)>>|\[\[(?:choice|fork|join)\]\]/i,alias:"important"},keyword:[{pattern:/(^[ \t]*)(?:action|callback|class|classDef|classDiagram|click|direction|erDiagram|flowchart|gantt|gitGraph|graph|journey|link|linkStyle|pie|requirementDiagram|sequenceDiagram|stateDiagram|stateDiagram-v2|style|subgraph)(?![\w$-])/m,lookbehind:!0,greedy:!0},{pattern:/(^[ \t]*)(?:activate|alt|and|as|autonumber|deactivate|else|end(?:[ \t]+note)?|loop|opt|par|participant|rect|state|note[ \t]+(?:over|(?:left|right)[ \t]+of))(?![\w$-])/im,lookbehind:!0,greedy:!0}],entity:/#[a-z0-9]+;/,operator:{pattern:/(\w[ \t]*)&(?=[ \t]*\w)|:::|:/,lookbehind:!0},punctuation:/[(){};]/}},6554:function(){Prism.languages.metafont={comment:{pattern:/%.*/,greedy:!0},string:{pattern:/"[^\r\n"]*"/,greedy:!0},number:/\d*\.?\d+/,boolean:/\b(?:false|true)\b/,punctuation:[/[,;()]/,{pattern:/(^|[^{}])(?:\{|\})(?![{}])/,lookbehind:!0},{pattern:/(^|[^[])\[(?!\[)/,lookbehind:!0},{pattern:/(^|[^\]])\](?!\])/,lookbehind:!0}],constant:[{pattern:/(^|[^!?])\?\?\?(?![!?])/,lookbehind:!0},{pattern:/(^|[^/*\\])(?:\\|\\\\)(?![/*\\])/,lookbehind:!0},/\b(?:_|blankpicture|bp|cc|cm|dd|ditto|down|eps|epsilon|fullcircle|halfcircle|identity|in|infinity|left|mm|nullpen|nullpicture|origin|pc|penrazor|penspeck|pensquare|penstroke|proof|pt|quartercircle|relax|right|smoke|unitpixel|unitsquare|up)\b/],quantity:{pattern:/\b(?:autorounding|blacker|boundarychar|charcode|chardp|chardx|chardy|charext|charht|charic|charwd|currentwindow|day|designsize|displaying|fillin|fontmaking|granularity|hppp|join_radius|month|o_correction|pausing|pen_(?:bot|lft|rt|top)|pixels_per_inch|proofing|showstopping|smoothing|time|tolerance|tracingcapsules|tracingchoices|tracingcommands|tracingedges|tracingequations|tracingmacros|tracingonline|tracingoutput|tracingpens|tracingrestores|tracingspecs|tracingstats|tracingtitles|turningcheck|vppp|warningcheck|xoffset|year|yoffset)\b/,alias:"keyword"},command:{pattern:/\b(?:addto|batchmode|charlist|cull|display|errhelp|errmessage|errorstopmode|everyjob|extensible|fontdimen|headerbyte|inner|interim|let|ligtable|message|newinternal|nonstopmode|numspecial|openwindow|outer|randomseed|save|scrollmode|shipout|show|showdependencies|showstats|showtoken|showvariable|special)\b/,alias:"builtin"},operator:[{pattern:/(^|[^>=<:|])(?:<|<=|=|=:|\|=:|\|=:>|=:\|>|=:\||\|=:\||\|=:\|>|\|=:\|>>|>|>=|:|:=|<>|::|\|\|:)(?![>=<:|])/,lookbehind:!0},{pattern:/(^|[^+-])(?:\+|\+\+|-{1,3}|\+-\+)(?![+-])/,lookbehind:!0},{pattern:/(^|[^/*\\])(?:\*|\*\*|\/)(?![/*\\])/,lookbehind:!0},{pattern:/(^|[^.])(?:\.{2,3})(?!\.)/,lookbehind:!0},{pattern:/(^|[^@#&$])&(?![@#&$])/,lookbehind:!0},/\b(?:and|not|or)\b/],macro:{pattern:/\b(?:abs|beginchar|bot|byte|capsule_def|ceiling|change_width|clear_pen_memory|clearit|clearpen|clearxy|counterclockwise|cullit|cutdraw|cutoff|decr|define_blacker_pixels|define_corrected_pixels|define_good_x_pixels|define_good_y_pixels|define_horizontal_corrected_pixels|define_pixels|define_whole_blacker_pixels|define_whole_pixels|define_whole_vertical_blacker_pixels|define_whole_vertical_pixels|dir|direction|directionpoint|div|dotprod|downto|draw|drawdot|endchar|erase|fill|filldraw|fix_units|flex|font_coding_scheme|font_extra_space|font_identifier|font_normal_shrink|font_normal_space|font_normal_stretch|font_quad|font_size|font_slant|font_x_height|gfcorners|gobble|gobbled|good\.(?:bot|lft|rt|top|x|y)|grayfont|hide|hround|imagerules|incr|interact|interpath|intersectionpoint|inverse|italcorr|killtext|labelfont|labels|lft|loggingall|lowres_fix|makegrid|makelabel(?:\.(?:bot|lft|rt|top)(?:\.nodot)?)?|max|min|mod|mode_def|mode_setup|nodisplays|notransforms|numtok|openit|penlabels|penpos|pickup|proofoffset|proofrule|proofrulethickness|range|reflectedabout|rotatedabout|rotatedaround|round|rt|savepen|screenchars|screenrule|screenstrokes|shipit|showit|slantfont|softjoin|solve|stop|superellipse|tensepath|thru|titlefont|top|tracingall|tracingnone|undraw|undrawdot|unfill|unfilldraw|upto|vround)\b/,alias:"function"},builtin:/\b(?:ASCII|angle|char|cosd|decimal|directiontime|floor|hex|intersectiontimes|jobname|known|length|makepath|makepen|mexp|mlog|normaldeviate|oct|odd|pencircle|penoffset|point|postcontrol|precontrol|reverse|rotated|sind|sqrt|str|subpath|substring|totalweight|turningnumber|uniformdeviate|unknown|xpart|xxpart|xypart|ypart|yxpart|yypart)\b/,keyword:/\b(?:also|at|atleast|begingroup|charexists|contour|controls|curl|cycle|def|delimiters|doublepath|dropping|dump|else|elseif|end|enddef|endfor|endgroup|endinput|exitif|exitunless|expandafter|fi|for|forever|forsuffixes|from|if|input|inwindow|keeping|kern|of|primarydef|quote|readstring|scaled|scantokens|secondarydef|shifted|skipto|slanted|step|tension|tertiarydef|to|transformed|until|vardef|withpen|withweight|xscaled|yscaled|zscaled)\b/,type:{pattern:/\b(?:boolean|expr|numeric|pair|path|pen|picture|primary|secondary|string|suffix|tertiary|text|transform)\b/,alias:"property"},variable:{pattern:/(^|[^@#&$])(?:@#|#@|#|@)(?![@#&$])|\b(?:aspect_ratio|currentpen|currentpicture|currenttransform|d|extra_beginchar|extra_endchar|extra_setup|h|localfont|mag|mode|screen_cols|screen_rows|w|whatever|x|y|z)\b/,lookbehind:!0}}},5101:function(){Prism.languages.mizar={comment:/::.+/,keyword:/@proof\b|\b(?:according|aggregate|all|and|antonym|are|as|associativity|assume|asymmetry|attr|be|begin|being|by|canceled|case|cases|clusters?|coherence|commutativity|compatibility|connectedness|consider|consistency|constructors|contradiction|correctness|def|deffunc|define|definitions?|defpred|do|does|end|environ|equals|ex|exactly|existence|for|from|func|given|hence|hereby|holds|idempotence|identity|iff?|implies|involutiveness|irreflexivity|is|it|let|means|mode|non|not|notations?|now|of|or|otherwise|over|per|pred|prefix|projectivity|proof|provided|qua|reconsider|redefine|reduce|reducibility|reflexivity|registrations?|requirements|reserve|sch|schemes?|section|selector|set|sethood|st|struct|such|suppose|symmetry|synonym|take|that|the|then|theorems?|thesis|thus|to|transitivity|uniqueness|vocabular(?:ies|y)|when|where|with|wrt)\b/,parameter:{pattern:/\$(?:10|\d)/,alias:"variable"},variable:/\b\w+(?=:)/,number:/(?:\b|-)\d+\b/,operator:/\.\.\.|->|&|\.?=/,punctuation:/\(#|#\)|[,:;\[\](){}]/}},9134:function(){(function(e){var t=["$eq","$gt","$gte","$in","$lt","$lte","$ne","$nin","$and","$not","$nor","$or","$exists","$type","$expr","$jsonSchema","$mod","$regex","$text","$where","$geoIntersects","$geoWithin","$near","$nearSphere","$all","$elemMatch","$size","$bitsAllClear","$bitsAllSet","$bitsAnyClear","$bitsAnySet","$comment","$elemMatch","$meta","$slice","$currentDate","$inc","$min","$max","$mul","$rename","$set","$setOnInsert","$unset","$addToSet","$pop","$pull","$push","$pullAll","$each","$position","$slice","$sort","$bit","$addFields","$bucket","$bucketAuto","$collStats","$count","$currentOp","$facet","$geoNear","$graphLookup","$group","$indexStats","$limit","$listLocalSessions","$listSessions","$lookup","$match","$merge","$out","$planCacheStats","$project","$redact","$replaceRoot","$replaceWith","$sample","$set","$skip","$sort","$sortByCount","$unionWith","$unset","$unwind","$setWindowFields","$abs","$accumulator","$acos","$acosh","$add","$addToSet","$allElementsTrue","$and","$anyElementTrue","$arrayElemAt","$arrayToObject","$asin","$asinh","$atan","$atan2","$atanh","$avg","$binarySize","$bsonSize","$ceil","$cmp","$concat","$concatArrays","$cond","$convert","$cos","$dateFromParts","$dateToParts","$dateFromString","$dateToString","$dayOfMonth","$dayOfWeek","$dayOfYear","$degreesToRadians","$divide","$eq","$exp","$filter","$first","$floor","$function","$gt","$gte","$hour","$ifNull","$in","$indexOfArray","$indexOfBytes","$indexOfCP","$isArray","$isNumber","$isoDayOfWeek","$isoWeek","$isoWeekYear","$last","$last","$let","$literal","$ln","$log","$log10","$lt","$lte","$ltrim","$map","$max","$mergeObjects","$meta","$min","$millisecond","$minute","$mod","$month","$multiply","$ne","$not","$objectToArray","$or","$pow","$push","$radiansToDegrees","$range","$reduce","$regexFind","$regexFindAll","$regexMatch","$replaceOne","$replaceAll","$reverseArray","$round","$rtrim","$second","$setDifference","$setEquals","$setIntersection","$setIsSubset","$setUnion","$size","$sin","$slice","$split","$sqrt","$stdDevPop","$stdDevSamp","$strcasecmp","$strLenBytes","$strLenCP","$substr","$substrBytes","$substrCP","$subtract","$sum","$switch","$tan","$toBool","$toDate","$toDecimal","$toDouble","$toInt","$toLong","$toObjectId","$toString","$toLower","$toUpper","$trim","$trunc","$type","$week","$year","$zip","$count","$dateAdd","$dateDiff","$dateSubtract","$dateTrunc","$getField","$rand","$sampleRate","$setField","$unsetField","$comment","$explain","$hint","$max","$maxTimeMS","$min","$orderby","$query","$returnKey","$showDiskLoc","$natural"],n=["ObjectId","Code","BinData","DBRef","Timestamp","NumberLong","NumberDecimal","MaxKey","MinKey","RegExp","ISODate","UUID"];t=t.map((function(e){return e.replace("$","\\$")}));var r="(?:"+t.join("|")+")\\b";e.languages.mongodb=e.languages.extend("javascript",{}),e.languages.insertBefore("mongodb","string",{property:{pattern:/(?:(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1|(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)(?=\s*:)/,greedy:!0,inside:{keyword:RegExp("^(['\"])?"+r+"(?:\\1)?$")}}}),e.languages.mongodb.string.inside={url:{pattern:/https?:\/\/[-\w@:%.+~#=]{1,256}\.[a-z0-9()]{1,6}\b[-\w()@:%+.~#?&/=]*/i,greedy:!0},entity:{pattern:/\b(?:(?:[01]?\d\d?|2[0-4]\d|25[0-5])\.){3}(?:[01]?\d\d?|2[0-4]\d|25[0-5])\b/,greedy:!0}},e.languages.insertBefore("mongodb","constant",{builtin:{pattern:RegExp("\\b(?:"+n.join("|")+")\\b"),alias:"keyword"}})})(Prism)},676:function(){Prism.languages.monkey={comment:{pattern:/^#Rem\s[\s\S]*?^#End|'.+/im,greedy:!0},string:{pattern:/"[^"\r\n]*"/,greedy:!0},preprocessor:{pattern:/(^[ \t]*)#.+/m,lookbehind:!0,greedy:!0,alias:"property"},function:/\b\w+(?=\()/,"type-char":{pattern:/\b[?%#$]/,alias:"class-name"},number:{pattern:/((?:\.\.)?)(?:(?:\b|\B-\.?|\B\.)\d+(?:(?!\.\.)\.\d*)?|\$[\da-f]+)/i,lookbehind:!0},keyword:/\b(?:Abstract|Array|Bool|Case|Catch|Class|Const|Continue|Default|Eachin|Else|ElseIf|End|EndIf|Exit|Extends|Extern|False|Field|Final|Float|For|Forever|Function|Global|If|Implements|Import|Inline|Int|Interface|Local|Method|Module|New|Next|Null|Object|Private|Property|Public|Repeat|Return|Select|Self|Step|Strict|String|Super|Then|Throw|To|True|Try|Until|Void|Wend|While)\b/i,operator:/\.\.|<[=>]?|>=?|:?=|(?:[+\-*\/&~|]|\b(?:Mod|Shl|Shr)\b)=?|\b(?:And|Not|Or)\b/i,punctuation:/[.,:;()\[\]]/}},1899:function(){Prism.languages.moonscript={comment:/--.*/,string:[{pattern:/'[^']*'|\[(=*)\[[\s\S]*?\]\1\]/,greedy:!0},{pattern:/"[^"]*"/,greedy:!0,inside:{interpolation:{pattern:/#\{[^{}]*\}/,inside:{moonscript:{pattern:/(^#\{)[\s\S]+(?=\})/,lookbehind:!0,inside:null},"interpolation-punctuation":{pattern:/#\{|\}/,alias:"punctuation"}}}}}],"class-name":[{pattern:/(\b(?:class|extends)[ \t]+)\w+/,lookbehind:!0},/\b[A-Z]\w*/],keyword:/\b(?:class|continue|do|else|elseif|export|extends|for|from|if|import|in|local|nil|return|self|super|switch|then|unless|using|when|while|with)\b/,variable:/@@?\w*/,property:{pattern:/\b(?!\d)\w+(?=:)|(:)(?!\d)\w+/,lookbehind:!0},function:{pattern:/\b(?:_G|_VERSION|assert|collectgarbage|coroutine\.(?:create|resume|running|status|wrap|yield)|debug\.(?:debug|getfenv|gethook|getinfo|getlocal|getmetatable|getregistry|getupvalue|setfenv|sethook|setlocal|setmetatable|setupvalue|traceback)|dofile|error|getfenv|getmetatable|io\.(?:close|flush|input|lines|open|output|popen|read|stderr|stdin|stdout|tmpfile|type|write)|ipairs|load|loadfile|loadstring|math\.(?:abs|acos|asin|atan|atan2|ceil|cos|cosh|deg|exp|floor|fmod|frexp|ldexp|log|log10|max|min|modf|pi|pow|rad|random|randomseed|sin|sinh|sqrt|tan|tanh)|module|next|os\.(?:clock|date|difftime|execute|exit|getenv|remove|rename|setlocale|time|tmpname)|package\.(?:cpath|loaded|loadlib|path|preload|seeall)|pairs|pcall|print|rawequal|rawget|rawset|require|select|setfenv|setmetatable|string\.(?:byte|char|dump|find|format|gmatch|gsub|len|lower|match|rep|reverse|sub|upper)|table\.(?:concat|insert|maxn|remove|sort)|tonumber|tostring|type|unpack|xpcall)\b/,inside:{punctuation:/\./}},boolean:/\b(?:false|true)\b/,number:/(?:\B\.\d+|\b\d+\.\d+|\b\d+(?=[eE]))(?:[eE][-+]?\d+)?\b|\b(?:0x[a-fA-F\d]+|\d+)(?:U?LL)?\b/,operator:/\.{3}|[-=]>|~=|(?:[-+*/%<>!=]|\.\.)=?|[:#^]|\b(?:and|or)\b=?|\b(?:not)\b/,punctuation:/[.,()[\]{}\\]/},Prism.languages.moonscript.string[1].inside.interpolation.inside.moonscript.inside=Prism.languages.moonscript,Prism.languages.moon=Prism.languages.moonscript},5949:function(){Prism.languages.n1ql={comment:{pattern:/\/\*[\s\S]*?(?:$|\*\/)|--.*/,greedy:!0},string:{pattern:/(["'])(?:\\[\s\S]|(?!\1)[^\\]|\1\1)*\1/,greedy:!0},identifier:{pattern:/`(?:\\[\s\S]|[^\\`]|``)*`/,greedy:!0},parameter:/\$[\w.]+/,keyword:/\b(?:ADVISE|ALL|ALTER|ANALYZE|AS|ASC|AT|BEGIN|BINARY|BOOLEAN|BREAK|BUCKET|BUILD|BY|CALL|CAST|CLUSTER|COLLATE|COLLECTION|COMMIT|COMMITTED|CONNECT|CONTINUE|CORRELATE|CORRELATED|COVER|CREATE|CURRENT|DATABASE|DATASET|DATASTORE|DECLARE|DECREMENT|DELETE|DERIVED|DESC|DESCRIBE|DISTINCT|DO|DROP|EACH|ELEMENT|EXCEPT|EXCLUDE|EXECUTE|EXPLAIN|FETCH|FILTER|FLATTEN|FLUSH|FOLLOWING|FOR|FORCE|FROM|FTS|FUNCTION|GOLANG|GRANT|GROUP|GROUPS|GSI|HASH|HAVING|IF|IGNORE|ILIKE|INCLUDE|INCREMENT|INDEX|INFER|INLINE|INNER|INSERT|INTERSECT|INTO|IS|ISOLATION|JAVASCRIPT|JOIN|KEY|KEYS|KEYSPACE|KNOWN|LANGUAGE|LAST|LEFT|LET|LETTING|LEVEL|LIMIT|LSM|MAP|MAPPING|MATCHED|MATERIALIZED|MERGE|MINUS|MISSING|NAMESPACE|NEST|NL|NO|NTH_VALUE|NULL|NULLS|NUMBER|OBJECT|OFFSET|ON|OPTION|OPTIONS|ORDER|OTHERS|OUTER|OVER|PARSE|PARTITION|PASSWORD|PATH|POOL|PRECEDING|PREPARE|PRIMARY|PRIVATE|PRIVILEGE|PROBE|PROCEDURE|PUBLIC|RANGE|RAW|REALM|REDUCE|RENAME|RESPECT|RETURN|RETURNING|REVOKE|RIGHT|ROLE|ROLLBACK|ROW|ROWS|SATISFIES|SAVEPOINT|SCHEMA|SCOPE|SELECT|SELF|SEMI|SET|SHOW|SOME|START|STATISTICS|STRING|SYSTEM|TIES|TO|TRAN|TRANSACTION|TRIGGER|TRUNCATE|UNBOUNDED|UNDER|UNION|UNIQUE|UNKNOWN|UNNEST|UNSET|UPDATE|UPSERT|USE|USER|USING|VALIDATE|VALUE|VALUES|VIA|VIEW|WHERE|WHILE|WINDOW|WITH|WORK|XOR)\b/i,function:/\b[a-z_]\w*(?=\s*\()/i,boolean:/\b(?:FALSE|TRUE)\b/i,number:/(?:\b\d+\.|\B\.)\d+e[+\-]?\d+\b|\b\d+(?:\.\d*)?|\B\.\d+\b/i,operator:/[-+*\/%]|!=|==?|\|\||<[>=]?|>=?|\b(?:AND|ANY|ARRAY|BETWEEN|CASE|ELSE|END|EVERY|EXISTS|FIRST|IN|LIKE|NOT|OR|THEN|VALUED|WHEN|WITHIN)\b/i,punctuation:/[;[\](),.{}:]/}},8651:function(){Prism.languages.n4js=Prism.languages.extend("javascript",{keyword:/\b(?:Array|any|boolean|break|case|catch|class|const|constructor|continue|debugger|declare|default|delete|do|else|enum|export|extends|false|finally|for|from|function|get|if|implements|import|in|instanceof|interface|let|module|new|null|number|package|private|protected|public|return|set|static|string|super|switch|this|throw|true|try|typeof|var|void|while|with|yield)\b/}),Prism.languages.insertBefore("n4js","constant",{annotation:{pattern:/@+\w+/,alias:"operator"}}),Prism.languages.n4jsd=Prism.languages.n4js},454:function(){Prism.languages["nand2tetris-hdl"]={comment:/\/\/.*|\/\*[\s\S]*?(?:\*\/|$)/,keyword:/\b(?:BUILTIN|CHIP|CLOCKED|IN|OUT|PARTS)\b/,boolean:/\b(?:false|true)\b/,function:/\b[A-Za-z][A-Za-z0-9]*(?=\()/,number:/\b\d+\b/,operator:/=|\.\./,punctuation:/[{}[\];(),:]/}},7898:function(){(function(e){var t=/\{[^\r\n\[\]{}]*\}/,n={"quoted-string":{pattern:/"(?:[^"\\]|\\.)*"/,alias:"operator"},"command-param-id":{pattern:/(\s)\w+:/,lookbehind:!0,alias:"property"},"command-param-value":[{pattern:t,alias:"selector"},{pattern:/([\t ])\S+/,lookbehind:!0,greedy:!0,alias:"operator"},{pattern:/\S(?:.*\S)?/,alias:"operator"}]};function r(e){for(var t="[]{}",n=[],r=0;r.+/m,alias:"tag",inside:{value:{pattern:/(^>\w+[\t ]+)(?!\s)[^{}\r\n]+/,lookbehind:!0,alias:"operator"},key:{pattern:/(^>)\w+/,lookbehind:!0}}},label:{pattern:/^([\t ]*)#[\t ]*\w+[\t ]*$/m,lookbehind:!0,alias:"regex"},command:{pattern:/^([\t ]*)@\w+(?=[\t ]|$).*/m,lookbehind:!0,alias:"function",inside:{"command-name":/^@\w+/,expression:{pattern:t,greedy:!0,alias:"selector"},"command-params":{pattern:/\s*\S[\s\S]*/,inside:n}}},"generic-text":{pattern:/(^[ \t]*)[^#@>;\s].*/m,lookbehind:!0,alias:"punctuation",inside:{"escaped-char":/\\[{}\[\]"]/,expression:{pattern:t,greedy:!0,alias:"selector"},"inline-command":{pattern:/\[[\t ]*\w[^\r\n\[\]]*\]/,greedy:!0,alias:"function",inside:{"command-params":{pattern:/(^\[[\t ]*\w+\b)[\s\S]+(?=\]$)/,lookbehind:!0,inside:n},"command-param-name":{pattern:/^(\[[\t ]*)\w+/,lookbehind:!0,alias:"name"},"start-stop-char":/[\[\]]/}}}}},e.languages.nani=e.languages["naniscript"],e.hooks.add("after-tokenize",(function(e){var t=e.tokens;t.forEach((function(e){if("string"!==typeof e&&"generic-text"===e.type){var t=i(e);r(t)||(e.type="bad-line",e.content=t)}}))}))})(Prism)},2353:function(){Prism.languages.nasm={comment:/;.*$/m,string:/(["'`])(?:\\.|(?!\1)[^\\\r\n])*\1/,label:{pattern:/(^\s*)[A-Za-z._?$][\w.?$@~#]*:/m,lookbehind:!0,alias:"function"},keyword:[/\[?BITS (?:16|32|64)\]?/,{pattern:/(^\s*)section\s*[a-z.]+:?/im,lookbehind:!0},/(?:extern|global)[^;\r\n]*/i,/(?:CPU|DEFAULT|FLOAT).*$/m],register:{pattern:/\b(?:st\d|[xyz]mm\d\d?|[cdt]r\d|r\d\d?[bwd]?|[er]?[abcd]x|[abcd][hl]|[er]?(?:bp|di|si|sp)|[cdefgs]s)\b/i,alias:"variable"},number:/(?:\b|(?=\$))(?:0[hx](?:\.[\da-f]+|[\da-f]+(?:\.[\da-f]+)?)(?:p[+-]?\d+)?|\d[\da-f]+[hx]|\$\d[\da-f]*|0[oq][0-7]+|[0-7]+[oq]|0[by][01]+|[01]+[by]|0[dt]\d+|(?:\d+(?:\.\d+)?|\.\d+)(?:\.?e[+-]?\d+)?[dt]?)\b/i,operator:/[\[\]*+\-\/%<>=&|$!]/}},7661:function(){Prism.languages.neon={comment:{pattern:/#.*/,greedy:!0},datetime:{pattern:/(^|[[{(=:,\s])\d\d\d\d-\d\d?-\d\d?(?:(?:[Tt]| +)\d\d?:\d\d:\d\d(?:\.\d*)? *(?:Z|[-+]\d\d?(?::?\d\d)?)?)?(?=$|[\]}),\s])/,lookbehind:!0,alias:"number"},key:{pattern:/(^|[[{(,\s])[^,:=[\]{}()'"\s]+(?=\s*:(?:$|[\]}),\s])|\s*=)/,lookbehind:!0,alias:"property"},number:{pattern:/(^|[[{(=:,\s])[+-]?(?:0x[\da-fA-F]+|0o[0-7]+|0b[01]+|(?:\d+(?:\.\d*)?|\.?\d+)(?:[eE][+-]?\d+)?)(?=$|[\]}),:=\s])/,lookbehind:!0},boolean:{pattern:/(^|[[{(=:,\s])(?:false|no|true|yes)(?=$|[\]}),:=\s])/i,lookbehind:!0},null:{pattern:/(^|[[{(=:,\s])(?:null)(?=$|[\]}),:=\s])/i,lookbehind:!0,alias:"keyword"},string:{pattern:/(^|[[{(=:,\s])(?:('''|""")\r?\n(?:(?:[^\r\n]|\r?\n(?![\t ]*\2))*\r?\n)?[\t ]*\2|'[^'\r\n]*'|"(?:\\.|[^\\"\r\n])*")/,lookbehind:!0,greedy:!0},literal:{pattern:/(^|[[{(=:,\s])(?:[^#"',:=[\]{}()\s`-]|[:-][^"',=[\]{}()\s])(?:[^,:=\]})(\s]|:(?![\s,\]})]|$)|[ \t]+[^#,:=\]})(\s])*/,lookbehind:!0,alias:"string"},punctuation:/[,:=[\]{}()-]/}},677:function(){Prism.languages.nevod={comment:/\/\/.*|(?:\/\*[\s\S]*?(?:\*\/|$))/,string:{pattern:/(?:"(?:""|[^"])*"(?!")|'(?:''|[^'])*'(?!'))!?\*?/,greedy:!0,inside:{"string-attrs":/!$|!\*$|\*$/}},namespace:{pattern:/(@namespace\s+)[a-zA-Z0-9\-.]+(?=\s*\{)/,lookbehind:!0},pattern:{pattern:/(@pattern\s+)?#?[a-zA-Z0-9\-.]+(?:\s*\(\s*(?:~\s*)?[a-zA-Z0-9\-.]+\s*(?:,\s*(?:~\s*)?[a-zA-Z0-9\-.]*)*\))?(?=\s*=)/,lookbehind:!0,inside:{"pattern-name":{pattern:/^#?[a-zA-Z0-9\-.]+/,alias:"class-name"},fields:{pattern:/\(.*\)/,inside:{"field-name":{pattern:/[a-zA-Z0-9\-.]+/,alias:"variable"},punctuation:/[,()]/,operator:{pattern:/~/,alias:"field-hidden-mark"}}}}},search:{pattern:/(@search\s+|#)[a-zA-Z0-9\-.]+(?:\.\*)?(?=\s*;)/,alias:"function",lookbehind:!0},keyword:/@(?:having|inside|namespace|outside|pattern|require|search|where)\b/,"standard-pattern":{pattern:/\b(?:Alpha|AlphaNum|Any|Blank|End|LineBreak|Num|NumAlpha|Punct|Space|Start|Symbol|Word|WordBreak)\b(?:\([a-zA-Z0-9\-.,\s+]*\))?/,inside:{"standard-pattern-name":{pattern:/^[a-zA-Z0-9\-.]+/,alias:"builtin"},quantifier:{pattern:/\b\d+(?:\s*\+|\s*-\s*\d+)?(?!\w)/,alias:"number"},"standard-pattern-attr":{pattern:/[a-zA-Z0-9\-.]+/,alias:"builtin"},punctuation:/[,()]/}},quantifier:{pattern:/\b\d+(?:\s*\+|\s*-\s*\d+)?(?!\w)/,alias:"number"},operator:[{pattern:/=/,alias:"pattern-def"},{pattern:/&/,alias:"conjunction"},{pattern:/~/,alias:"exception"},{pattern:/\?/,alias:"optionality"},{pattern:/[[\]]/,alias:"repetition"},{pattern:/[{}]/,alias:"variation"},{pattern:/[+_]/,alias:"sequence"},{pattern:/\.{2,3}/,alias:"span"}],"field-capture":[{pattern:/([a-zA-Z0-9\-.]+\s*\()\s*[a-zA-Z0-9\-.]+\s*:\s*[a-zA-Z0-9\-.]+(?:\s*,\s*[a-zA-Z0-9\-.]+\s*:\s*[a-zA-Z0-9\-.]+)*(?=\s*\))/,lookbehind:!0,inside:{"field-name":{pattern:/[a-zA-Z0-9\-.]+/,alias:"variable"},colon:/:/}},{pattern:/[a-zA-Z0-9\-.]+\s*:/,inside:{"field-name":{pattern:/[a-zA-Z0-9\-.]+/,alias:"variable"},colon:/:/}}],punctuation:/[:;,()]/,name:/[a-zA-Z0-9\-.]+/}},3436:function(){(function(e){var t=/\$(?:\w[a-z\d]*(?:_[^\x00-\x1F\s"'\\()$]*)?|\{[^}\s"'\\]+\})/i;e.languages.nginx={comment:{pattern:/(^|[\s{};])#.*/,lookbehind:!0,greedy:!0},directive:{pattern:/(^|\s)\w(?:[^;{}"'\\\s]|\\.|"(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*'|\s+(?:#.*(?!.)|(?![#\s])))*?(?=\s*[;{])/,lookbehind:!0,greedy:!0,inside:{string:{pattern:/((?:^|[^\\])(?:\\\\)*)(?:"(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*')/,lookbehind:!0,greedy:!0,inside:{escape:{pattern:/\\["'\\nrt]/,alias:"entity"},variable:t}},comment:{pattern:/(\s)#.*/,lookbehind:!0,greedy:!0},keyword:{pattern:/^\S+/,greedy:!0},boolean:{pattern:/(\s)(?:off|on)(?!\S)/,lookbehind:!0},number:{pattern:/(\s)\d+[a-z]*(?!\S)/i,lookbehind:!0},variable:t}},punctuation:/[{};]/}})(Prism)},5743:function(){Prism.languages.nim={comment:{pattern:/#.*/,greedy:!0},string:{pattern:/(?:\b(?!\d)(?:\w|\\x[89a-fA-F][0-9a-fA-F])+)?(?:"""[\s\S]*?"""(?!")|"(?:\\[\s\S]|""|[^"\\])*")/,greedy:!0},char:{pattern:/'(?:\\(?:\d+|x[\da-fA-F]{0,2}|.)|[^'])'/,greedy:!0},function:{pattern:/(?:(?!\d)(?:\w|\\x[89a-fA-F][0-9a-fA-F])+|`[^`\r\n]+`)\*?(?:\[[^\]]+\])?(?=\s*\()/,greedy:!0,inside:{operator:/\*$/}},identifier:{pattern:/`[^`\r\n]+`/,greedy:!0,inside:{punctuation:/`/}},number:/\b(?:0[xXoObB][\da-fA-F_]+|\d[\d_]*(?:(?!\.\.)\.[\d_]*)?(?:[eE][+-]?\d[\d_]*)?)(?:'?[iuf]\d*)?/,keyword:/\b(?:addr|as|asm|atomic|bind|block|break|case|cast|concept|const|continue|converter|defer|discard|distinct|do|elif|else|end|enum|except|export|finally|for|from|func|generic|if|import|include|interface|iterator|let|macro|method|mixin|nil|object|out|proc|ptr|raise|ref|return|static|template|try|tuple|type|using|var|when|while|with|without|yield)\b/,operator:{pattern:/(^|[({\[](?=\.\.)|(?![({\[]\.).)(?:(?:[=+\-*\/<>@$~&%|!?^:\\]|\.\.|\.(?![)}\]]))+|\b(?:and|div|in|is|isnot|mod|not|notin|of|or|shl|shr|xor)\b)/m,lookbehind:!0},punctuation:/[({\[]\.|\.[)}\]]|[`(){}\[\],:]/}},8704:function(){Prism.languages.nix={comment:{pattern:/\/\*[\s\S]*?\*\/|#.*/,greedy:!0},string:{pattern:/"(?:[^"\\]|\\[\s\S])*"|''(?:(?!'')[\s\S]|''(?:'|\\|\$\{))*''/,greedy:!0,inside:{interpolation:{pattern:/(^|(?:^|(?!'').)[^\\])\$\{(?:[^{}]|\{[^}]*\})*\}/,lookbehind:!0,inside:null}}},url:[/\b(?:[a-z]{3,7}:\/\/)[\w\-+%~\/.:#=?&]+/,{pattern:/([^\/])(?:[\w\-+%~.:#=?&]*(?!\/\/)[\w\-+%~\/.:#=?&])?(?!\/\/)\/[\w\-+%~\/.:#=?&]*/,lookbehind:!0}],antiquotation:{pattern:/\$(?=\{)/,alias:"important"},number:/\b\d+\b/,keyword:/\b(?:assert|builtins|else|if|in|inherit|let|null|or|then|with)\b/,function:/\b(?:abort|add|all|any|attrNames|attrValues|baseNameOf|compareVersions|concatLists|currentSystem|deepSeq|derivation|dirOf|div|elem(?:At)?|fetch(?:Tarball|url)|filter(?:Source)?|fromJSON|genList|getAttr|getEnv|hasAttr|hashString|head|import|intersectAttrs|is(?:Attrs|Bool|Function|Int|List|Null|String)|length|lessThan|listToAttrs|map|mul|parseDrvName|pathExists|read(?:Dir|File)|removeAttrs|replaceStrings|seq|sort|stringLength|sub(?:string)?|tail|throw|to(?:File|JSON|Path|String|XML)|trace|typeOf)\b|\bfoldl'\B/,boolean:/\b(?:false|true)\b/,operator:/[=!<>]=?|\+\+?|\|\||&&|\/\/|->?|[?@]/,punctuation:/[{}()[\].,:;]/},Prism.languages.nix.string.inside.interpolation.inside=Prism.languages.nix},4876:function(){Prism.languages.nsis={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|[#;].*)/,lookbehind:!0,greedy:!0},string:{pattern:/("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0},keyword:{pattern:/(^[\t ]*)(?:Abort|Add(?:BrandingImage|Size)|AdvSplash|Allow(?:RootDirInstall|SkipFiles)|AutoCloseWindow|BG(?:Font|Gradient|Image)|Banner|BrandingText|BringToFront|CRCCheck|Call(?:InstDLL)?|Caption|ChangeUI|CheckBitmap|ClearErrors|CompletedText|ComponentText|CopyFiles|Create(?:Directory|Font|ShortCut)|Delete(?:INISec|INIStr|RegKey|RegValue)?|Detail(?:Print|sButtonText)|Dialer|Dir(?:Text|Var|Verify)|EnableWindow|Enum(?:RegKey|RegValue)|Exch|Exec(?:Shell(?:Wait)?|Wait)?|ExpandEnvStrings|File(?:BufSize|Close|ErrorText|Open|Read|ReadByte|ReadUTF16LE|ReadWord|Seek|Write|WriteByte|WriteUTF16LE|WriteWord)?|Find(?:Close|First|Next|Window)|FlushINI|Get(?:CurInstType|CurrentAddress|DLLVersion(?:Local)?|DlgItem|ErrorLevel|FileTime(?:Local)?|FullPathName|Function(?:Address|End)?|InstDirError|KnownFolderPath|LabelAddress|TempFileName|WinVer)|Goto|HideWindow|Icon|If(?:Abort|Errors|FileExists|RebootFlag|RtlLanguage|ShellVarContextAll|Silent)|InitPluginsDir|InstProgressFlags|Inst(?:Type(?:GetText|SetText)?)|Install(?:ButtonText|Colors|Dir(?:RegKey)?)|Int(?:64|Ptr)?CmpU?|Int(?:64)?Fmt|Int(?:Ptr)?Op|IsWindow|Lang(?:DLL|String)|License(?:BkColor|Data|ForceSelection|LangString|Text)|LoadLanguageFile|LockWindow|Log(?:Set|Text)|Manifest(?:DPIAware|SupportedOS)|Math|MessageBox|MiscButtonText|NSISdl|Name|Nop|OutFile|PE(?:DllCharacteristics|SubsysVer)|Page(?:Callbacks)?|Pop|Push|Quit|RMDir|Read(?:EnvStr|INIStr|RegDWORD|RegStr)|Reboot|RegDLL|Rename|RequestExecutionLevel|ReserveFile|Return|SearchPath|Section(?:End|GetFlags|GetInstTypes|GetSize|GetText|Group|In|SetFlags|SetInstTypes|SetSize|SetText)?|SendMessage|Set(?:AutoClose|BrandingImage|Compress|Compressor(?:DictSize)?|CtlColors|CurInstType|DatablockOptimize|DateSave|Details(?:Print|View)|ErrorLevel|Errors|FileAttributes|Font|OutPath|Overwrite|PluginUnload|RebootFlag|RegView|ShellVarContext|Silent)|Show(?:InstDetails|UninstDetails|Window)|Silent(?:Install|UnInstall)|Sleep|SpaceTexts|Splash|StartMenu|Str(?:CmpS?|Cpy|Len)|SubCaption|System|Target|UnRegDLL|Unicode|UninstPage|Uninstall(?:ButtonText|Caption|Icon|SubCaption|Text)|UserInfo|VI(?:AddVersionKey|FileVersion|ProductVersion)|VPatch|Var|WindowIcon|Write(?:INIStr|Reg(?:Bin|DWORD|ExpandStr|MultiStr|None|Str)|Uninstaller)|XPStyle|ns(?:Dialogs|Exec))\b/m,lookbehind:!0},property:/\b(?:ARCHIVE|FILE_(?:ATTRIBUTE_ARCHIVE|ATTRIBUTE_NORMAL|ATTRIBUTE_OFFLINE|ATTRIBUTE_READONLY|ATTRIBUTE_SYSTEM|ATTRIBUTE_TEMPORARY)|HK(?:(?:CR|CU|LM)(?:32|64)?|DD|PD|U)|HKEY_(?:CLASSES_ROOT|CURRENT_CONFIG|CURRENT_USER|DYN_DATA|LOCAL_MACHINE|PERFORMANCE_DATA|USERS)|ID(?:ABORT|CANCEL|IGNORE|NO|OK|RETRY|YES)|MB_(?:ABORTRETRYIGNORE|DEFBUTTON1|DEFBUTTON2|DEFBUTTON3|DEFBUTTON4|ICONEXCLAMATION|ICONINFORMATION|ICONQUESTION|ICONSTOP|OK|OKCANCEL|RETRYCANCEL|RIGHT|RTLREADING|SETFOREGROUND|TOPMOST|USERICON|YESNO)|NORMAL|OFFLINE|READONLY|SHCTX|SHELL_CONTEXT|SYSTEM|TEMPORARY|admin|all|auto|both|colored|false|force|hide|highest|lastused|leave|listonly|none|normal|notset|off|on|open|print|show|silent|silentlog|smooth|textonly|true|user)\b/,constant:/\$\{[!\w\.:\^-]+\}|\$\([!\w\.:\^-]+\)/,variable:/\$\w[\w\.]*/,number:/\b0x[\dA-Fa-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[Ee]-?\d+)?/,operator:/--?|\+\+?|<=?|>=?|==?=?|&&?|\|\|?|[?*\/~^%]/,punctuation:/[{}[\];(),.:]/,important:{pattern:/(^[\t ]*)!(?:addincludedir|addplugindir|appendfile|cd|define|delfile|echo|else|endif|error|execute|finalize|getdllversion|gettlbversion|if|ifdef|ifmacrodef|ifmacrondef|ifndef|include|insertmacro|macro|macroend|makensis|packhdr|pragma|searchparse|searchreplace|system|tempfile|undef|verbose|warning)\b/im,lookbehind:!0}}},1426:function(){Prism.languages.objectivec=Prism.languages.extend("c",{string:{pattern:/@?"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"/,greedy:!0},keyword:/\b(?:asm|auto|break|case|char|const|continue|default|do|double|else|enum|extern|float|for|goto|if|in|inline|int|long|register|return|self|short|signed|sizeof|static|struct|super|switch|typedef|typeof|union|unsigned|void|volatile|while)\b|(?:@interface|@end|@implementation|@protocol|@class|@public|@protected|@private|@property|@try|@catch|@finally|@throw|@synthesize|@dynamic|@selector)\b/,operator:/-[->]?|\+\+?|!=?|<>?=?|==?|&&?|\|\|?|[~^%?*\/@]/}),delete Prism.languages.objectivec["class-name"],Prism.languages.objc=Prism.languages.objectivec},4371:function(){Prism.languages.ocaml={comment:{pattern:/\(\*[\s\S]*?\*\)/,greedy:!0},char:{pattern:/'(?:[^\\\r\n']|\\(?:.|[ox]?[0-9a-f]{1,3}))'/i,greedy:!0},string:[{pattern:/"(?:\\(?:[\s\S]|\r\n)|[^\\\r\n"])*"/,greedy:!0},{pattern:/\{([a-z_]*)\|[\s\S]*?\|\1\}/,greedy:!0}],number:[/\b(?:0b[01][01_]*|0o[0-7][0-7_]*)\b/i,/\b0x[a-f0-9][a-f0-9_]*(?:\.[a-f0-9_]*)?(?:p[+-]?\d[\d_]*)?(?!\w)/i,/\b\d[\d_]*(?:\.[\d_]*)?(?:e[+-]?\d[\d_]*)?(?!\w)/i],directive:{pattern:/\B#\w+/,alias:"property"},label:{pattern:/\B~\w+/,alias:"property"},"type-variable":{pattern:/\B'\w+/,alias:"function"},variant:{pattern:/`\w+/,alias:"symbol"},keyword:/\b(?:as|assert|begin|class|constraint|do|done|downto|else|end|exception|external|for|fun|function|functor|if|in|include|inherit|initializer|lazy|let|match|method|module|mutable|new|nonrec|object|of|open|private|rec|sig|struct|then|to|try|type|val|value|virtual|when|where|while|with)\b/,boolean:/\b(?:false|true)\b/,"operator-like-punctuation":{pattern:/\[[<>|]|[>|]\]|\{<|>\}/,alias:"punctuation"},operator:/\.[.~]|:[=>]|[=<>@^|&+\-*\/$%!?~][!$%&*+\-.\/:<=>?@^|~]*|\b(?:and|asr|land|lor|lsl|lsr|lxor|mod|or)\b/,punctuation:/;;|::|[(){}\[\].,:;#]|\b_\b/}},5577:function(){(function(e){var t=/\\(?:["'\\abefnrtv]|0[0-7]{2}|U[\dA-Fa-f]{6}|u[\dA-Fa-f]{4}|x[\dA-Fa-f]{2})/;e.languages.odin={comment:[{pattern:/\/\*(?:[^/*]|\/(?!\*)|\*(?!\/)|\/\*(?:\*(?!\/)|[^*])*(?:\*\/|$))*(?:\*\/|$)/,greedy:!0},{pattern:/#![^\n\r]*/,greedy:!0},{pattern:/\/\/[^\n\r]*/,greedy:!0}],char:{pattern:/'(?:\\(?:.|[0Uux][0-9A-Fa-f]{1,6})|[^\n\r'\\])'/,greedy:!0,inside:{symbol:t}},string:[{pattern:/`[^`]*`/,greedy:!0},{pattern:/"(?:\\.|[^\n\r"\\])*"/,greedy:!0,inside:{symbol:t}}],directive:{pattern:/#\w+/,alias:"property"},number:/\b0(?:b[01_]+|d[\d_]+|h_*(?:(?:(?:[\dA-Fa-f]_*){8}){1,2}|(?:[\dA-Fa-f]_*){4})|o[0-7_]+|x[\dA-F_a-f]+|z[\dAB_ab]+)\b|(?:\b\d+(?:\.(?!\.)\d*)?|\B\.\d+)(?:[Ee][+-]?\d*)?[ijk]?(?!\w)/,discard:{pattern:/\b_\b/,alias:"keyword"},"procedure-definition":{pattern:/\b\w+(?=[ \t]*(?::\s*){2}proc\b)/,alias:"function"},keyword:/\b(?:asm|auto_cast|bit_set|break|case|cast|context|continue|defer|distinct|do|dynamic|else|enum|fallthrough|for|foreign|if|import|in|map|matrix|not_in|or_else|or_return|package|proc|return|struct|switch|transmute|typeid|union|using|when|where)\b/,"procedure-name":{pattern:/\b\w+(?=[ \t]*\()/,alias:"function"},boolean:/\b(?:false|nil|true)\b/,"constant-parameter-sign":{pattern:/\$/,alias:"important"},undefined:{pattern:/---/,alias:"operator"},arrow:{pattern:/->/,alias:"punctuation"},operator:/\+\+|--|\.\.[<=]?|(?:&~|[-!*+/=~]|[%&<>|]{1,2})=?|[?^]/,punctuation:/[(),.:;@\[\]{}]/}})(Prism)},3144:function(){(function(e){e.languages.opencl=e.languages.extend("c",{keyword:/\b(?:(?:__)?(?:constant|global|kernel|local|private|read_only|read_write|write_only)|__attribute__|auto|(?:bool|u?(?:char|int|long|short)|half|quad)(?:2|3|4|8|16)?|break|case|complex|const|continue|(?:double|float)(?:16(?:x(?:1|2|4|8|16))?|1x(?:1|2|4|8|16)|2(?:x(?:1|2|4|8|16))?|3|4(?:x(?:1|2|4|8|16))?|8(?:x(?:1|2|4|8|16))?)?|default|do|else|enum|extern|for|goto|if|imaginary|inline|packed|pipe|register|restrict|return|signed|sizeof|static|struct|switch|typedef|uniform|union|unsigned|void|volatile|while)\b/,number:/(?:\b0x(?:[\da-f]+(?:\.[\da-f]*)?|\.[\da-f]+)(?:p[+-]?\d+)?|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?)[fuhl]{0,4}/i,boolean:/\b(?:false|true)\b/,"constant-opencl-kernel":{pattern:/\b(?:CHAR_(?:BIT|MAX|MIN)|CLK_(?:ADDRESS_(?:CLAMP(?:_TO_EDGE)?|NONE|REPEAT)|FILTER_(?:LINEAR|NEAREST)|(?:GLOBAL|LOCAL)_MEM_FENCE|NORMALIZED_COORDS_(?:FALSE|TRUE))|CL_(?:BGRA|(?:HALF_)?FLOAT|INTENSITY|LUMINANCE|A?R?G?B?[Ax]?|(?:(?:UN)?SIGNED|[US]NORM)_(?:INT(?:8|16|32))|UNORM_(?:INT_101010|SHORT_(?:555|565)))|(?:DBL|FLT|HALF)_(?:DIG|EPSILON|(?:MAX|MIN)(?:(?:_10)?_EXP)?|MANT_DIG)|FLT_RADIX|HUGE_VALF?|(?:INT|LONG|SCHAR|SHRT)_(?:MAX|MIN)|INFINITY|MAXFLOAT|M_(?:[12]_PI|2_SQRTPI|E|LN(?:2|10)|LOG(?:2|10)E?|PI(?:_[24])?|SQRT(?:1_2|2))(?:_F|_H)?|NAN|(?:UCHAR|UINT|ULONG|USHRT)_MAX)\b/,alias:"constant"}}),e.languages.insertBefore("opencl","class-name",{"builtin-type":{pattern:/\b(?:_cl_(?:command_queue|context|device_id|event|kernel|mem|platform_id|program|sampler)|cl_(?:image_format|mem_fence_flags)|clk_event_t|event_t|image(?:1d_(?:array_|buffer_)?t|2d_(?:array_(?:depth_|msaa_depth_|msaa_)?|depth_|msaa_depth_|msaa_)?t|3d_t)|intptr_t|ndrange_t|ptrdiff_t|queue_t|reserve_id_t|sampler_t|size_t|uintptr_t)\b/,alias:"keyword"}});var t={"type-opencl-host":{pattern:/\b(?:cl_(?:GLenum|GLint|GLuin|addressing_mode|bitfield|bool|buffer_create_type|build_status|channel_(?:order|type)|(?:u?(?:char|int|long|short)|double|float)(?:2|3|4|8|16)?|command_(?:queue(?:_info|_properties)?|type)|context(?:_info|_properties)?|device_(?:exec_capabilities|fp_config|id|info|local_mem_type|mem_cache_type|type)|(?:event|sampler)(?:_info)?|filter_mode|half|image_info|kernel(?:_info|_work_group_info)?|map_flags|mem(?:_flags|_info|_object_type)?|platform_(?:id|info)|profiling_info|program(?:_build_info|_info)?))\b/,alias:"keyword"},"boolean-opencl-host":{pattern:/\bCL_(?:FALSE|TRUE)\b/,alias:"boolean"},"constant-opencl-host":{pattern:/\bCL_(?:A|ABGR|ADDRESS_(?:CLAMP(?:_TO_EDGE)?|MIRRORED_REPEAT|NONE|REPEAT)|ARGB|BGRA|BLOCKING|BUFFER_CREATE_TYPE_REGION|BUILD_(?:ERROR|IN_PROGRESS|NONE|PROGRAM_FAILURE|SUCCESS)|COMMAND_(?:ACQUIRE_GL_OBJECTS|BARRIER|COPY_(?:BUFFER(?:_RECT|_TO_IMAGE)?|IMAGE(?:_TO_BUFFER)?)|FILL_(?:BUFFER|IMAGE)|MAP(?:_BUFFER|_IMAGE)|MARKER|MIGRATE(?:_SVM)?_MEM_OBJECTS|NATIVE_KERNEL|NDRANGE_KERNEL|READ_(?:BUFFER(?:_RECT)?|IMAGE)|RELEASE_GL_OBJECTS|SVM_(?:FREE|MAP|MEMCPY|MEMFILL|UNMAP)|TASK|UNMAP_MEM_OBJECT|USER|WRITE_(?:BUFFER(?:_RECT)?|IMAGE))|COMPILER_NOT_AVAILABLE|COMPILE_PROGRAM_FAILURE|COMPLETE|CONTEXT_(?:DEVICES|INTEROP_USER_SYNC|NUM_DEVICES|PLATFORM|PROPERTIES|REFERENCE_COUNT)|DEPTH(?:_STENCIL)?|DEVICE_(?:ADDRESS_BITS|AFFINITY_DOMAIN_(?:L[1-4]_CACHE|NEXT_PARTITIONABLE|NUMA)|AVAILABLE|BUILT_IN_KERNELS|COMPILER_AVAILABLE|DOUBLE_FP_CONFIG|ENDIAN_LITTLE|ERROR_CORRECTION_SUPPORT|EXECUTION_CAPABILITIES|EXTENSIONS|GLOBAL_(?:MEM_(?:CACHELINE_SIZE|CACHE_SIZE|CACHE_TYPE|SIZE)|VARIABLE_PREFERRED_TOTAL_SIZE)|HOST_UNIFIED_MEMORY|IL_VERSION|IMAGE(?:2D_MAX_(?:HEIGHT|WIDTH)|3D_MAX_(?:DEPTH|HEIGHT|WIDTH)|_BASE_ADDRESS_ALIGNMENT|_MAX_ARRAY_SIZE|_MAX_BUFFER_SIZE|_PITCH_ALIGNMENT|_SUPPORT)|LINKER_AVAILABLE|LOCAL_MEM_SIZE|LOCAL_MEM_TYPE|MAX_(?:CLOCK_FREQUENCY|COMPUTE_UNITS|CONSTANT_ARGS|CONSTANT_BUFFER_SIZE|GLOBAL_VARIABLE_SIZE|MEM_ALLOC_SIZE|NUM_SUB_GROUPS|ON_DEVICE_(?:EVENTS|QUEUES)|PARAMETER_SIZE|PIPE_ARGS|READ_IMAGE_ARGS|READ_WRITE_IMAGE_ARGS|SAMPLERS|WORK_GROUP_SIZE|WORK_ITEM_DIMENSIONS|WORK_ITEM_SIZES|WRITE_IMAGE_ARGS)|MEM_BASE_ADDR_ALIGN|MIN_DATA_TYPE_ALIGN_SIZE|NAME|NATIVE_VECTOR_WIDTH_(?:CHAR|DOUBLE|FLOAT|HALF|INT|LONG|SHORT)|NOT_(?:AVAILABLE|FOUND)|OPENCL_C_VERSION|PARENT_DEVICE|PARTITION_(?:AFFINITY_DOMAIN|BY_AFFINITY_DOMAIN|BY_COUNTS|BY_COUNTS_LIST_END|EQUALLY|FAILED|MAX_SUB_DEVICES|PROPERTIES|TYPE)|PIPE_MAX_(?:ACTIVE_RESERVATIONS|PACKET_SIZE)|PLATFORM|PREFERRED_(?:GLOBAL_ATOMIC_ALIGNMENT|INTEROP_USER_SYNC|LOCAL_ATOMIC_ALIGNMENT|PLATFORM_ATOMIC_ALIGNMENT|VECTOR_WIDTH_(?:CHAR|DOUBLE|FLOAT|HALF|INT|LONG|SHORT))|PRINTF_BUFFER_SIZE|PROFILE|PROFILING_TIMER_RESOLUTION|QUEUE_(?:ON_(?:DEVICE_(?:MAX_SIZE|PREFERRED_SIZE|PROPERTIES)|HOST_PROPERTIES)|PROPERTIES)|REFERENCE_COUNT|SINGLE_FP_CONFIG|SUB_GROUP_INDEPENDENT_FORWARD_PROGRESS|SVM_(?:ATOMICS|CAPABILITIES|COARSE_GRAIN_BUFFER|FINE_GRAIN_BUFFER|FINE_GRAIN_SYSTEM)|TYPE(?:_ACCELERATOR|_ALL|_CPU|_CUSTOM|_DEFAULT|_GPU)?|VENDOR(?:_ID)?|VERSION)|DRIVER_VERSION|EVENT_(?:COMMAND_(?:EXECUTION_STATUS|QUEUE|TYPE)|CONTEXT|REFERENCE_COUNT)|EXEC_(?:KERNEL|NATIVE_KERNEL|STATUS_ERROR_FOR_EVENTS_IN_WAIT_LIST)|FILTER_(?:LINEAR|NEAREST)|FLOAT|FP_(?:CORRECTLY_ROUNDED_DIVIDE_SQRT|DENORM|FMA|INF_NAN|ROUND_TO_INF|ROUND_TO_NEAREST|ROUND_TO_ZERO|SOFT_FLOAT)|GLOBAL|HALF_FLOAT|IMAGE_(?:ARRAY_SIZE|BUFFER|DEPTH|ELEMENT_SIZE|FORMAT|FORMAT_MISMATCH|FORMAT_NOT_SUPPORTED|HEIGHT|NUM_MIP_LEVELS|NUM_SAMPLES|ROW_PITCH|SLICE_PITCH|WIDTH)|INTENSITY|INVALID_(?:ARG_INDEX|ARG_SIZE|ARG_VALUE|BINARY|BUFFER_SIZE|BUILD_OPTIONS|COMMAND_QUEUE|COMPILER_OPTIONS|CONTEXT|DEVICE|DEVICE_PARTITION_COUNT|DEVICE_QUEUE|DEVICE_TYPE|EVENT|EVENT_WAIT_LIST|GLOBAL_OFFSET|GLOBAL_WORK_SIZE|GL_OBJECT|HOST_PTR|IMAGE_DESCRIPTOR|IMAGE_FORMAT_DESCRIPTOR|IMAGE_SIZE|KERNEL|KERNEL_ARGS|KERNEL_DEFINITION|KERNEL_NAME|LINKER_OPTIONS|MEM_OBJECT|MIP_LEVEL|OPERATION|PIPE_SIZE|PLATFORM|PROGRAM|PROGRAM_EXECUTABLE|PROPERTY|QUEUE_PROPERTIES|SAMPLER|VALUE|WORK_DIMENSION|WORK_GROUP_SIZE|WORK_ITEM_SIZE)|KERNEL_(?:ARG_(?:ACCESS_(?:NONE|QUALIFIER|READ_ONLY|READ_WRITE|WRITE_ONLY)|ADDRESS_(?:CONSTANT|GLOBAL|LOCAL|PRIVATE|QUALIFIER)|INFO_NOT_AVAILABLE|NAME|TYPE_(?:CONST|NAME|NONE|PIPE|QUALIFIER|RESTRICT|VOLATILE))|ATTRIBUTES|COMPILE_NUM_SUB_GROUPS|COMPILE_WORK_GROUP_SIZE|CONTEXT|EXEC_INFO_SVM_FINE_GRAIN_SYSTEM|EXEC_INFO_SVM_PTRS|FUNCTION_NAME|GLOBAL_WORK_SIZE|LOCAL_MEM_SIZE|LOCAL_SIZE_FOR_SUB_GROUP_COUNT|MAX_NUM_SUB_GROUPS|MAX_SUB_GROUP_SIZE_FOR_NDRANGE|NUM_ARGS|PREFERRED_WORK_GROUP_SIZE_MULTIPLE|PRIVATE_MEM_SIZE|PROGRAM|REFERENCE_COUNT|SUB_GROUP_COUNT_FOR_NDRANGE|WORK_GROUP_SIZE)|LINKER_NOT_AVAILABLE|LINK_PROGRAM_FAILURE|LOCAL|LUMINANCE|MAP_(?:FAILURE|READ|WRITE|WRITE_INVALIDATE_REGION)|MEM_(?:ALLOC_HOST_PTR|ASSOCIATED_MEMOBJECT|CONTEXT|COPY_HOST_PTR|COPY_OVERLAP|FLAGS|HOST_NO_ACCESS|HOST_PTR|HOST_READ_ONLY|HOST_WRITE_ONLY|KERNEL_READ_AND_WRITE|MAP_COUNT|OBJECT_(?:ALLOCATION_FAILURE|BUFFER|IMAGE1D|IMAGE1D_ARRAY|IMAGE1D_BUFFER|IMAGE2D|IMAGE2D_ARRAY|IMAGE3D|PIPE)|OFFSET|READ_ONLY|READ_WRITE|REFERENCE_COUNT|SIZE|SVM_ATOMICS|SVM_FINE_GRAIN_BUFFER|TYPE|USES_SVM_POINTER|USE_HOST_PTR|WRITE_ONLY)|MIGRATE_MEM_OBJECT_(?:CONTENT_UNDEFINED|HOST)|MISALIGNED_SUB_BUFFER_OFFSET|NONE|NON_BLOCKING|OUT_OF_(?:HOST_MEMORY|RESOURCES)|PIPE_(?:MAX_PACKETS|PACKET_SIZE)|PLATFORM_(?:EXTENSIONS|HOST_TIMER_RESOLUTION|NAME|PROFILE|VENDOR|VERSION)|PROFILING_(?:COMMAND_(?:COMPLETE|END|QUEUED|START|SUBMIT)|INFO_NOT_AVAILABLE)|PROGRAM_(?:BINARIES|BINARY_SIZES|BINARY_TYPE(?:_COMPILED_OBJECT|_EXECUTABLE|_LIBRARY|_NONE)?|BUILD_(?:GLOBAL_VARIABLE_TOTAL_SIZE|LOG|OPTIONS|STATUS)|CONTEXT|DEVICES|IL|KERNEL_NAMES|NUM_DEVICES|NUM_KERNELS|REFERENCE_COUNT|SOURCE)|QUEUED|QUEUE_(?:CONTEXT|DEVICE|DEVICE_DEFAULT|ON_DEVICE|ON_DEVICE_DEFAULT|OUT_OF_ORDER_EXEC_MODE_ENABLE|PROFILING_ENABLE|PROPERTIES|REFERENCE_COUNT|SIZE)|R|RA|READ_(?:ONLY|WRITE)_CACHE|RG|RGB|RGBA|RGBx|RGx|RUNNING|Rx|SAMPLER_(?:ADDRESSING_MODE|CONTEXT|FILTER_MODE|LOD_MAX|LOD_MIN|MIP_FILTER_MODE|NORMALIZED_COORDS|REFERENCE_COUNT)|(?:UN)?SIGNED_INT(?:8|16|32)|SNORM_INT(?:8|16)|SUBMITTED|SUCCESS|UNORM_INT(?:8|16|24|_101010|_101010_2)|UNORM_SHORT_(?:555|565)|VERSION_(?:1_0|1_1|1_2|2_0|2_1)|sBGRA|sRGB|sRGBA|sRGBx)\b/,alias:"constant"},"function-opencl-host":{pattern:/\bcl(?:BuildProgram|CloneKernel|CompileProgram|Create(?:Buffer|CommandQueue(?:WithProperties)?|Context|ContextFromType|Image|Image2D|Image3D|Kernel|KernelsInProgram|Pipe|ProgramWith(?:Binary|BuiltInKernels|IL|Source)|Sampler|SamplerWithProperties|SubBuffer|SubDevices|UserEvent)|Enqueue(?:(?:Barrier|Marker)(?:WithWaitList)?|Copy(?:Buffer(?:Rect|ToImage)?|Image(?:ToBuffer)?)|(?:Fill|Map)(?:Buffer|Image)|MigrateMemObjects|NDRangeKernel|NativeKernel|(?:Read|Write)(?:Buffer(?:Rect)?|Image)|SVM(?:Free|Map|MemFill|Memcpy|MigrateMem|Unmap)|Task|UnmapMemObject|WaitForEvents)|Finish|Flush|Get(?:CommandQueueInfo|ContextInfo|Device(?:AndHostTimer|IDs|Info)|Event(?:Profiling)?Info|ExtensionFunctionAddress(?:ForPlatform)?|HostTimer|ImageInfo|Kernel(?:ArgInfo|Info|SubGroupInfo|WorkGroupInfo)|MemObjectInfo|PipeInfo|Platform(?:IDs|Info)|Program(?:Build)?Info|SamplerInfo|SupportedImageFormats)|LinkProgram|(?:Release|Retain)(?:CommandQueue|Context|Device|Event|Kernel|MemObject|Program|Sampler)|SVM(?:Alloc|Free)|Set(?:CommandQueueProperty|DefaultDeviceCommandQueue|EventCallback|Kernel|Kernel(?:Arg(?:SVMPointer)?|ExecInfo)|MemObjectDestructorCallback|UserEventStatus)|Unload(?:Platform)?Compiler|WaitForEvents)\b/,alias:"function"}};e.languages.insertBefore("c","keyword",t),e.languages.cpp&&(t["type-opencl-host-cpp"]={pattern:/\b(?:Buffer|BufferGL|BufferRenderGL|CommandQueue|Context|Device|DeviceCommandQueue|EnqueueArgs|Event|Image|Image1D|Image1DArray|Image1DBuffer|Image2D|Image2DArray|Image2DGL|Image3D|Image3DGL|ImageFormat|ImageGL|Kernel|KernelFunctor|LocalSpaceArg|Memory|NDRange|Pipe|Platform|Program|SVMAllocator|SVMTraitAtomic|SVMTraitCoarse|SVMTraitFine|SVMTraitReadOnly|SVMTraitReadWrite|SVMTraitWriteOnly|Sampler|UserEvent)\b/,alias:"keyword"},e.languages.insertBefore("cpp","keyword",t))})(Prism)},5513:function(){Prism.languages.openqasm={comment:/\/\*[\s\S]*?\*\/|\/\/.*/,string:{pattern:/"[^"\r\n\t]*"|'[^'\r\n\t]*'/,greedy:!0},keyword:/\b(?:CX|OPENQASM|U|barrier|boxas|boxto|break|const|continue|ctrl|def|defcal|defcalgrammar|delay|else|end|for|gate|gphase|if|in|include|inv|kernel|lengthof|let|measure|pow|reset|return|rotary|stretchinf|while)\b|#pragma\b/,"class-name":/\b(?:angle|bit|bool|creg|fixed|float|int|length|qreg|qubit|stretch|uint)\b/,function:/\b(?:cos|exp|ln|popcount|rotl|rotr|sin|sqrt|tan)\b(?=\s*\()/,constant:/\b(?:euler|pi|tau)\b|π|𝜏|ℇ/,number:{pattern:/(^|[^.\w$])(?:\d+(?:\.\d*)?|\.\d+)(?:e[+-]?\d+)?(?:dt|ns|us|µs|ms|s)?/i,lookbehind:!0},operator:/->|>>=?|<<=?|&&|\|\||\+\+|--|[!=<>&|~^+\-*/%]=?|@/,punctuation:/[(){}\[\];,:.]/},Prism.languages.qasm=Prism.languages.openqasm},903:function(){Prism.languages.oz={comment:{pattern:/\/\*[\s\S]*?\*\/|%.*/,greedy:!0},string:{pattern:/"(?:[^"\\]|\\[\s\S])*"/,greedy:!0},atom:{pattern:/'(?:[^'\\]|\\[\s\S])*'/,greedy:!0,alias:"builtin"},keyword:/\$|\[\]|\b(?:_|at|attr|case|catch|choice|class|cond|declare|define|dis|else(?:case|if)?|end|export|fail|false|feat|finally|from|fun|functor|if|import|in|local|lock|meth|nil|not|of|or|prepare|proc|prop|raise|require|self|skip|then|thread|true|try|unit)\b/,function:[/\b[a-z][A-Za-z\d]*(?=\()/,{pattern:/(\{)[A-Z][A-Za-z\d]*\b/,lookbehind:!0}],number:/\b(?:0[bx][\da-f]+|\d+(?:\.\d*)?(?:e~?\d+)?)\b|&(?:[^\\]|\\(?:\d{3}|.))/i,variable:/`(?:[^`\\]|\\.)+`/,"attr-name":/\b\w+(?=[ \t]*:(?![:=]))/,operator:/:(?:=|::?)|<[-:=]?|=(?:=|=?:?|\\=:?|!!?|[|#+\-*\/,~^@]|\b(?:andthen|div|mod|orelse)\b/,punctuation:/[\[\](){}.:;?]/}},7511:function(){Prism.languages.parigp={comment:/\/\*[\s\S]*?\*\/|\\\\.*/,string:{pattern:/"(?:[^"\\\r\n]|\\.)*"/,greedy:!0},keyword:function(){var e=["breakpoint","break","dbg_down","dbg_err","dbg_up","dbg_x","forcomposite","fordiv","forell","forpart","forprime","forstep","forsubgroup","forvec","for","iferr","if","local","my","next","return","until","while"];return e=e.map((function(e){return e.split("").join(" *")})).join("|"),RegExp("\\b(?:"+e+")\\b")}(),function:/\b\w(?:[\w ]*\w)?(?= *\()/,number:{pattern:/((?:\. *\. *)?)(?:\b\d(?: *\d)*(?: *(?!\. *\.)\.(?: *\d)*)?|\. *\d(?: *\d)*)(?: *e *(?:[+-] *)?\d(?: *\d)*)?/i,lookbehind:!0},operator:/\. *\.|[*\/!](?: *=)?|%(?: *=|(?: *#)?(?: *')*)?|\+(?: *[+=])?|-(?: *[-=>])?|<(?: *>|(?: *<)?(?: *=)?)?|>(?: *>)?(?: *=)?|=(?: *=){0,2}|\\(?: *\/)?(?: *=)?|&(?: *&)?|\| *\||['#~^]/,punctuation:/[\[\]{}().,:;|]/}},780:function(){(function(e){var t=e.languages.parser=e.languages.extend("markup",{keyword:{pattern:/(^|[^^])(?:\^(?:case|eval|for|if|switch|throw)\b|@(?:BASE|CLASS|GET(?:_DEFAULT)?|OPTIONS|SET_DEFAULT|USE)\b)/,lookbehind:!0},variable:{pattern:/(^|[^^])\B\$(?:\w+|(?=[.{]))(?:(?:\.|::?)\w+)*(?:\.|::?)?/,lookbehind:!0,inside:{punctuation:/\.|:+/}},function:{pattern:/(^|[^^])\B[@^]\w+(?:(?:\.|::?)\w+)*(?:\.|::?)?/,lookbehind:!0,inside:{keyword:{pattern:/(^@)(?:GET_|SET_)/,lookbehind:!0},punctuation:/\.|:+/}},escape:{pattern:/\^(?:[$^;@()\[\]{}"':]|#[a-f\d]*)/i,alias:"builtin"},punctuation:/[\[\](){};]/});t=e.languages.insertBefore("parser","keyword",{"parser-comment":{pattern:/(\s)#.*/,lookbehind:!0,alias:"comment"},expression:{pattern:/(^|[^^])\((?:[^()]|\((?:[^()]|\((?:[^()])*\))*\))*\)/,greedy:!0,lookbehind:!0,inside:{string:{pattern:/(^|[^^])(["'])(?:(?!\2)[^^]|\^[\s\S])*\2/,lookbehind:!0},keyword:t.keyword,variable:t.variable,function:t.function,boolean:/\b(?:false|true)\b/,number:/\b(?:0x[a-f\d]+|\d+(?:\.\d*)?(?:e[+-]?\d+)?)\b/i,escape:t.escape,operator:/[~+*\/\\%]|!(?:\|\|?|=)?|&&?|\|\|?|==|<[<=]?|>[>=]?|-[fd]?|\b(?:def|eq|ge|gt|in|is|le|lt|ne)\b/,punctuation:t.punctuation}}}),e.languages.insertBefore("inside","punctuation",{expression:t.expression,keyword:t.keyword,variable:t.variable,function:t.function,escape:t.escape,"parser-punctuation":{pattern:t.punctuation,alias:"punctuation"}},t["tag"].inside["attr-value"])})(Prism)},3210:function(){Prism.languages.pascal={directive:{pattern:/\{\$[\s\S]*?\}/,greedy:!0,alias:["marco","property"]},comment:{pattern:/\(\*[\s\S]*?\*\)|\{[\s\S]*?\}|\/\/.*/,greedy:!0},string:{pattern:/(?:'(?:''|[^'\r\n])*'(?!')|#[&$%]?[a-f\d]+)+|\^[a-z]/i,greedy:!0},asm:{pattern:/(\basm\b)[\s\S]+?(?=\bend\s*[;[])/i,lookbehind:!0,greedy:!0,inside:null},keyword:[{pattern:/(^|[^&])\b(?:absolute|array|asm|begin|case|const|constructor|destructor|do|downto|else|end|file|for|function|goto|if|implementation|inherited|inline|interface|label|nil|object|of|operator|packed|procedure|program|record|reintroduce|repeat|self|set|string|then|to|type|unit|until|uses|var|while|with)\b/i,lookbehind:!0},{pattern:/(^|[^&])\b(?:dispose|exit|false|new|true)\b/i,lookbehind:!0},{pattern:/(^|[^&])\b(?:class|dispinterface|except|exports|finalization|finally|initialization|inline|library|on|out|packed|property|raise|resourcestring|threadvar|try)\b/i,lookbehind:!0},{pattern:/(^|[^&])\b(?:absolute|abstract|alias|assembler|bitpacked|break|cdecl|continue|cppdecl|cvar|default|deprecated|dynamic|enumerator|experimental|export|external|far|far16|forward|generic|helper|implements|index|interrupt|iochecks|local|message|name|near|nodefault|noreturn|nostackframe|oldfpccall|otherwise|overload|override|pascal|platform|private|protected|public|published|read|register|reintroduce|result|safecall|saveregisters|softfloat|specialize|static|stdcall|stored|strict|unaligned|unimplemented|varargs|virtual|write)\b/i,lookbehind:!0}],number:[/(?:[&%]\d+|\$[a-f\d]+)/i,/\b\d+(?:\.\d+)?(?:e[+-]?\d+)?/i],operator:[/\.\.|\*\*|:=|<[<=>]?|>[>=]?|[+\-*\/]=?|[@^=]/,{pattern:/(^|[^&])\b(?:and|as|div|exclude|in|include|is|mod|not|or|shl|shr|xor)\b/,lookbehind:!0}],punctuation:/\(\.|\.\)|[()\[\]:;,.]/},Prism.languages.pascal.asm.inside=Prism.languages.extend("pascal",{asm:void 0,keyword:void 0,operator:void 0}),Prism.languages.objectpascal=Prism.languages.pascal},4332:function(){(function(e){var t=/\((?:[^()]|\((?:[^()]|\([^()]*\))*\))*\)/.source,n=/(?:\b\w+(?:)?|)/.source.replace(//g,(function(){return t})),r=e.languages.pascaligo={comment:/\(\*[\s\S]+?\*\)|\/\/.*/,string:{pattern:/(["'`])(?:\\[\s\S]|(?!\1)[^\\])*\1|\^[a-z]/i,greedy:!0},"class-name":[{pattern:RegExp(/(\btype\s+\w+\s+is\s+)/.source.replace(//g,(function(){return n})),"i"),lookbehind:!0,inside:null},{pattern:RegExp(/(?=\s+is\b)/.source.replace(//g,(function(){return n})),"i"),inside:null},{pattern:RegExp(/(:\s*)/.source.replace(//g,(function(){return n}))),lookbehind:!0,inside:null}],keyword:{pattern:/(^|[^&])\b(?:begin|block|case|const|else|end|fail|for|from|function|if|is|nil|of|remove|return|skip|then|type|var|while|with)\b/i,lookbehind:!0},boolean:{pattern:/(^|[^&])\b(?:False|True)\b/i,lookbehind:!0},builtin:{pattern:/(^|[^&])\b(?:bool|int|list|map|nat|record|string|unit)\b/i,lookbehind:!0},function:/\b\w+(?=\s*\()/,number:[/%[01]+|&[0-7]+|\$[a-f\d]+/i,/\b\d+(?:\.\d+)?(?:e[+-]?\d+)?(?:mtz|n)?/i],operator:/->|=\/=|\.\.|\*\*|:=|<[<=>]?|>[>=]?|[+\-*\/]=?|[@^=|]|\b(?:and|mod|or)\b/,punctuation:/\(\.|\.\)|[()\[\]:;,.{}]/},i=["comment","keyword","builtin","operator","punctuation"].reduce((function(e,t){return e[t]=r[t],e}),{});r["class-name"].forEach((function(e){e.inside=i}))})(Prism)},2892:function(){Prism.languages.pcaxis={string:/"[^"]*"/,keyword:{pattern:/((?:^|;)\s*)[-A-Z\d]+(?:\s*\[[-\w]+\])?(?:\s*\("[^"]*"(?:,\s*"[^"]*")*\))?(?=\s*=)/,lookbehind:!0,greedy:!0,inside:{keyword:/^[-A-Z\d]+/,language:{pattern:/^(\s*)\[[-\w]+\]/,lookbehind:!0,inside:{punctuation:/^\[|\]$/,property:/[-\w]+/}},"sub-key":{pattern:/^(\s*)\S[\s\S]*/,lookbehind:!0,inside:{parameter:{pattern:/"[^"]*"/,alias:"property"},punctuation:/^\(|\)$|,/}}}},operator:/=/,tlist:{pattern:/TLIST\s*\(\s*\w+(?:(?:\s*,\s*"[^"]*")+|\s*,\s*"[^"]*"-"[^"]*")?\s*\)/,greedy:!0,inside:{function:/^TLIST/,property:{pattern:/^(\s*\(\s*)\w+/,lookbehind:!0},string:/"[^"]*"/,punctuation:/[(),]/,operator:/-/}},punctuation:/[;,]/,number:{pattern:/(^|\s)\d+(?:\.\d+)?(?!\S)/,lookbehind:!0},boolean:/NO|YES/},Prism.languages.px=Prism.languages.pcaxis},4984:function(){Prism.languages.peoplecode={comment:RegExp([/\/\*[\s\S]*?\*\//.source,/\bREM[^;]*;/.source,/<\*(?:[^<*]|\*(?!>)|<(?!\*)|<\*(?:(?!\*>)[\s\S])*\*>)*\*>/.source,/\/\+[\s\S]*?\+\//.source].join("|")),string:{pattern:/'(?:''|[^'\r\n])*'(?!')|"(?:""|[^"\r\n])*"(?!")/,greedy:!0},variable:/%\w+/,"function-definition":{pattern:/((?:^|[^\w-])(?:function|method)\s+)\w+/i,lookbehind:!0,alias:"function"},"class-name":{pattern:/((?:^|[^-\w])(?:as|catch|class|component|create|extends|global|implements|instance|local|of|property|returns)\s+)\w+(?::\w+)*/i,lookbehind:!0,inside:{punctuation:/:/}},keyword:/\b(?:abstract|alias|as|catch|class|component|constant|create|declare|else|end-(?:class|evaluate|for|function|get|if|method|set|try|while)|evaluate|extends|for|function|get|global|if|implements|import|instance|library|local|method|null|of|out|peopleCode|private|program|property|protected|readonly|ref|repeat|returns?|set|step|then|throw|to|try|until|value|when(?:-other)?|while)\b/i,"operator-keyword":{pattern:/\b(?:and|not|or)\b/i,alias:"operator"},function:/[_a-z]\w*(?=\s*\()/i,boolean:/\b(?:false|true)\b/i,number:/\b\d+(?:\.\d+)?\b/,operator:/<>|[<>]=?|!=|\*\*|[-+*/|=@]/,punctuation:/[:.;,()[\]]/},Prism.languages.pcode=Prism.languages.peoplecode},288:function(){(function(e){var t=/(?:\((?:[^()\\]|\\[\s\S])*\)|\{(?:[^{}\\]|\\[\s\S])*\}|\[(?:[^[\]\\]|\\[\s\S])*\]|<(?:[^<>\\]|\\[\s\S])*>)/.source;e.languages.perl={comment:[{pattern:/(^\s*)=\w[\s\S]*?=cut.*/m,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\$])#.*/,lookbehind:!0,greedy:!0}],string:[{pattern:RegExp(/\b(?:q|qq|qw|qx)(?![a-zA-Z0-9])\s*/.source+"(?:"+[/([^a-zA-Z0-9\s{(\[<])(?:(?!\1)[^\\]|\\[\s\S])*\1/.source,/([a-zA-Z0-9])(?:(?!\2)[^\\]|\\[\s\S])*\2/.source,t].join("|")+")"),greedy:!0},{pattern:/("|`)(?:(?!\1)[^\\]|\\[\s\S])*\1/,greedy:!0},{pattern:/'(?:[^'\\\r\n]|\\.)*'/,greedy:!0}],regex:[{pattern:RegExp(/\b(?:m|qr)(?![a-zA-Z0-9])\s*/.source+"(?:"+[/([^a-zA-Z0-9\s{(\[<])(?:(?!\1)[^\\]|\\[\s\S])*\1/.source,/([a-zA-Z0-9])(?:(?!\2)[^\\]|\\[\s\S])*\2/.source,t].join("|")+")"+/[msixpodualngc]*/.source),greedy:!0},{pattern:RegExp(/(^|[^-])\b(?:s|tr|y)(?![a-zA-Z0-9])\s*/.source+"(?:"+[/([^a-zA-Z0-9\s{(\[<])(?:(?!\2)[^\\]|\\[\s\S])*\2(?:(?!\2)[^\\]|\\[\s\S])*\2/.source,/([a-zA-Z0-9])(?:(?!\3)[^\\]|\\[\s\S])*\3(?:(?!\3)[^\\]|\\[\s\S])*\3/.source,t+/\s*/.source+t].join("|")+")"+/[msixpodualngcer]*/.source),lookbehind:!0,greedy:!0},{pattern:/\/(?:[^\/\\\r\n]|\\.)*\/[msixpodualngc]*(?=\s*(?:$|[\r\n,.;})&|\-+*~<>!?^]|(?:and|cmp|eq|ge|gt|le|lt|ne|not|or|x|xor)\b))/,greedy:!0}],variable:[/[&*$@%]\{\^[A-Z]+\}/,/[&*$@%]\^[A-Z_]/,/[&*$@%]#?(?=\{)/,/[&*$@%]#?(?:(?:::)*'?(?!\d)[\w$]+(?![\w$]))+(?:::)*/,/[&*$@%]\d+/,/(?!%=)[$@%][!"#$%&'()*+,\-.\/:;<=>?@[\\\]^_`{|}~]/],filehandle:{pattern:/<(?![<=])\S*?>|\b_\b/,alias:"symbol"},"v-string":{pattern:/v\d+(?:\.\d+)*|\d+(?:\.\d+){2,}/,alias:"string"},function:{pattern:/(\bsub[ \t]+)\w+/,lookbehind:!0},keyword:/\b(?:any|break|continue|default|delete|die|do|else|elsif|eval|for|foreach|given|goto|if|last|local|my|next|our|package|print|redo|require|return|say|state|sub|switch|undef|unless|until|use|when|while)\b/,number:/\b(?:0x[\dA-Fa-f](?:_?[\dA-Fa-f])*|0b[01](?:_?[01])*|(?:(?:\d(?:_?\d)*)?\.)?\d(?:_?\d)*(?:[Ee][+-]?\d+)?)\b/,operator:/-[rwxoRWXOezsfdlpSbctugkTBMAC]\b|\+[+=]?|-[-=>]?|\*\*?=?|\/\/?=?|=[=~>]?|~[~=]?|\|\|?=?|&&?=?|<(?:=>?|<=?)?|>>?=?|![~=]?|[%^]=?|\.(?:=|\.\.?)?|[\\?]|\bx(?:=|\b)|\b(?:and|cmp|eq|ge|gt|le|lt|ne|not|or|xor)\b/,punctuation:/[{}[\];(),:]/}})(Prism)},9425:function(){Prism.languages.insertBefore("php","variable",{this:{pattern:/\$this\b/,alias:"keyword"},global:/\$(?:GLOBALS|HTTP_RAW_POST_DATA|_(?:COOKIE|ENV|FILES|GET|POST|REQUEST|SERVER|SESSION)|argc|argv|http_response_header|php_errormsg)\b/,scope:{pattern:/\b[\w\\]+::/,inside:{keyword:/\b(?:parent|self|static)\b/,punctuation:/::|\\/}}})},9945:function(){(function(e){var t=/\/\*[\s\S]*?\*\/|\/\/.*|#(?!\[).*/,n=[{pattern:/\b(?:false|true)\b/i,alias:"boolean"},{pattern:/(::\s*)\b[a-z_]\w*\b(?!\s*\()/i,greedy:!0,lookbehind:!0},{pattern:/(\b(?:case|const)\s+)\b[a-z_]\w*(?=\s*[;=])/i,greedy:!0,lookbehind:!0},/\b(?:null)\b/i,/\b[A-Z_][A-Z0-9_]*\b(?!\s*\()/],r=/\b0b[01]+(?:_[01]+)*\b|\b0o[0-7]+(?:_[0-7]+)*\b|\b0x[\da-f]+(?:_[\da-f]+)*\b|(?:\b\d+(?:_\d+)*\.?(?:\d+(?:_\d+)*)?|\B\.\d+)(?:e[+-]?\d+)?/i,i=/|\?\?=?|\.{3}|\??->|[!=]=?=?|::|\*\*=?|--|\+\+|&&|\|\||<<|>>|[?~]|[/^|%*&<>.+-]=?/,s=/[{}\[\](),:;]/;e.languages.php={delimiter:{pattern:/\?>$|^<\?(?:php(?=\s)|=)?/i,alias:"important"},comment:t,variable:/\$+(?:\w+\b|(?=\{))/,package:{pattern:/(namespace\s+|use\s+(?:function\s+)?)(?:\\?\b[a-z_]\w*)+\b(?!\\)/i,lookbehind:!0,inside:{punctuation:/\\/}},"class-name-definition":{pattern:/(\b(?:class|enum|interface|trait)\s+)\b[a-z_]\w*(?!\\)\b/i,lookbehind:!0,alias:"class-name"},"function-definition":{pattern:/(\bfunction\s+)[a-z_]\w*(?=\s*\()/i,lookbehind:!0,alias:"function"},keyword:[{pattern:/(\(\s*)\b(?:array|bool|boolean|float|int|integer|object|string)\b(?=\s*\))/i,alias:"type-casting",greedy:!0,lookbehind:!0},{pattern:/([(,?]\s*)\b(?:array(?!\s*\()|bool|callable|(?:false|null)(?=\s*\|)|float|int|iterable|mixed|object|self|static|string)\b(?=\s*\$)/i,alias:"type-hint",greedy:!0,lookbehind:!0},{pattern:/(\)\s*:\s*(?:\?\s*)?)\b(?:array(?!\s*\()|bool|callable|(?:false|null)(?=\s*\|)|float|int|iterable|mixed|never|object|self|static|string|void)\b/i,alias:"return-type",greedy:!0,lookbehind:!0},{pattern:/\b(?:array(?!\s*\()|bool|float|int|iterable|mixed|object|string|void)\b/i,alias:"type-declaration",greedy:!0},{pattern:/(\|\s*)(?:false|null)\b|\b(?:false|null)(?=\s*\|)/i,alias:"type-declaration",greedy:!0,lookbehind:!0},{pattern:/\b(?:parent|self|static)(?=\s*::)/i,alias:"static-context",greedy:!0},{pattern:/(\byield\s+)from\b/i,lookbehind:!0},/\bclass\b/i,{pattern:/((?:^|[^\s>:]|(?:^|[^-])>|(?:^|[^:]):)\s*)\b(?:abstract|and|array|as|break|callable|case|catch|clone|const|continue|declare|default|die|do|echo|else|elseif|empty|enddeclare|endfor|endforeach|endif|endswitch|endwhile|enum|eval|exit|extends|final|finally|fn|for|foreach|function|global|goto|if|implements|include|include_once|instanceof|insteadof|interface|isset|list|match|namespace|never|new|or|parent|print|private|protected|public|readonly|require|require_once|return|self|static|switch|throw|trait|try|unset|use|var|while|xor|yield|__halt_compiler)\b/i,lookbehind:!0}],"argument-name":{pattern:/([(,]\s*)\b[a-z_]\w*(?=\s*:(?!:))/i,lookbehind:!0},"class-name":[{pattern:/(\b(?:extends|implements|instanceof|new(?!\s+self|\s+static))\s+|\bcatch\s*\()\b[a-z_]\w*(?!\\)\b/i,greedy:!0,lookbehind:!0},{pattern:/(\|\s*)\b[a-z_]\w*(?!\\)\b/i,greedy:!0,lookbehind:!0},{pattern:/\b[a-z_]\w*(?!\\)\b(?=\s*\|)/i,greedy:!0},{pattern:/(\|\s*)(?:\\?\b[a-z_]\w*)+\b/i,alias:"class-name-fully-qualified",greedy:!0,lookbehind:!0,inside:{punctuation:/\\/}},{pattern:/(?:\\?\b[a-z_]\w*)+\b(?=\s*\|)/i,alias:"class-name-fully-qualified",greedy:!0,inside:{punctuation:/\\/}},{pattern:/(\b(?:extends|implements|instanceof|new(?!\s+self\b|\s+static\b))\s+|\bcatch\s*\()(?:\\?\b[a-z_]\w*)+\b(?!\\)/i,alias:"class-name-fully-qualified",greedy:!0,lookbehind:!0,inside:{punctuation:/\\/}},{pattern:/\b[a-z_]\w*(?=\s*\$)/i,alias:"type-declaration",greedy:!0},{pattern:/(?:\\?\b[a-z_]\w*)+(?=\s*\$)/i,alias:["class-name-fully-qualified","type-declaration"],greedy:!0,inside:{punctuation:/\\/}},{pattern:/\b[a-z_]\w*(?=\s*::)/i,alias:"static-context",greedy:!0},{pattern:/(?:\\?\b[a-z_]\w*)+(?=\s*::)/i,alias:["class-name-fully-qualified","static-context"],greedy:!0,inside:{punctuation:/\\/}},{pattern:/([(,?]\s*)[a-z_]\w*(?=\s*\$)/i,alias:"type-hint",greedy:!0,lookbehind:!0},{pattern:/([(,?]\s*)(?:\\?\b[a-z_]\w*)+(?=\s*\$)/i,alias:["class-name-fully-qualified","type-hint"],greedy:!0,lookbehind:!0,inside:{punctuation:/\\/}},{pattern:/(\)\s*:\s*(?:\?\s*)?)\b[a-z_]\w*(?!\\)\b/i,alias:"return-type",greedy:!0,lookbehind:!0},{pattern:/(\)\s*:\s*(?:\?\s*)?)(?:\\?\b[a-z_]\w*)+\b(?!\\)/i,alias:["class-name-fully-qualified","return-type"],greedy:!0,lookbehind:!0,inside:{punctuation:/\\/}}],constant:n,function:{pattern:/(^|[^\\\w])\\?[a-z_](?:[\w\\]*\w)?(?=\s*\()/i,lookbehind:!0,inside:{punctuation:/\\/}},property:{pattern:/(->\s*)\w+/,lookbehind:!0},number:r,operator:i,punctuation:s};var o={pattern:/\{\$(?:\{(?:\{[^{}]+\}|[^{}]+)\}|[^{}])+\}|(^|[^\\{])\$+(?:\w+(?:\[[^\r\n\[\]]+\]|->\w+)?)/,lookbehind:!0,inside:e.languages.php},a=[{pattern:/<<<'([^']+)'[\r\n](?:.*[\r\n])*?\1;/,alias:"nowdoc-string",greedy:!0,inside:{delimiter:{pattern:/^<<<'[^']+'|[a-z_]\w*;$/i,alias:"symbol",inside:{punctuation:/^<<<'?|[';]$/}}}},{pattern:/<<<(?:"([^"]+)"[\r\n](?:.*[\r\n])*?\1;|([a-z_]\w*)[\r\n](?:.*[\r\n])*?\2;)/i,alias:"heredoc-string",greedy:!0,inside:{delimiter:{pattern:/^<<<(?:"[^"]+"|[a-z_]\w*)|[a-z_]\w*;$/i,alias:"symbol",inside:{punctuation:/^<<<"?|[";]$/}},interpolation:o}},{pattern:/`(?:\\[\s\S]|[^\\`])*`/,alias:"backtick-quoted-string",greedy:!0},{pattern:/'(?:\\[\s\S]|[^\\'])*'/,alias:"single-quoted-string",greedy:!0},{pattern:/"(?:\\[\s\S]|[^\\"])*"/,alias:"double-quoted-string",greedy:!0,inside:{interpolation:o}}];e.languages.insertBefore("php","variable",{string:a,attribute:{pattern:/#\[(?:[^"'\/#]|\/(?![*/])|\/\/.*$|#(?!\[).*$|\/\*(?:[^*]|\*(?!\/))*\*\/|"(?:\\[\s\S]|[^\\"])*"|'(?:\\[\s\S]|[^\\'])*')+\](?=\s*[a-z$#])/im,greedy:!0,inside:{"attribute-content":{pattern:/^(#\[)[\s\S]+(?=\]$)/,lookbehind:!0,inside:{comment:t,string:a,"attribute-class-name":[{pattern:/([^:]|^)\b[a-z_]\w*(?!\\)\b/i,alias:"class-name",greedy:!0,lookbehind:!0},{pattern:/([^:]|^)(?:\\?\b[a-z_]\w*)+/i,alias:["class-name","class-name-fully-qualified"],greedy:!0,lookbehind:!0,inside:{punctuation:/\\/}}],constant:n,number:r,operator:i,punctuation:s}},delimiter:{pattern:/^#\[|\]$/,alias:"punctuation"}}}}),e.hooks.add("before-tokenize",(function(t){if(/<\?/.test(t.code)){var n=/<\?(?:[^"'/#]|\/(?![*/])|("|')(?:\\[\s\S]|(?!\1)[^\\])*\1|(?:\/\/|#(?!\[))(?:[^?\n\r]|\?(?!>))*(?=$|\?>|[\r\n])|#\[|\/\*(?:[^*]|\*(?!\/))*(?:\*\/|$))*?(?:\?>|$)/g;e.languages["markup-templating"].buildPlaceholders(t,"php",n)}})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"php")}))})(Prism)},6280:function(){(function(e){var t=/(?:\b[a-zA-Z]\w*|[|\\[\]])+/.source;e.languages.phpdoc=e.languages.extend("javadoclike",{parameter:{pattern:RegExp("(@(?:global|param|property(?:-read|-write)?|var)\\s+(?:"+t+"\\s+)?)\\$\\w+"),lookbehind:!0}}),e.languages.insertBefore("phpdoc","keyword",{"class-name":[{pattern:RegExp("(@(?:global|package|param|property(?:-read|-write)?|return|subpackage|throws|var)\\s+)"+t),lookbehind:!0,inside:{keyword:/\b(?:array|bool|boolean|callback|double|false|float|int|integer|mixed|null|object|resource|self|string|true|void)\b/,punctuation:/[|\\[\]()]/}}]}),e.languages.javadoclike.addSupport("php",e.languages.phpdoc)})(Prism)},9457:function(){(function(e){var t=/\$\w+|%[a-z]+%/,n=/\[[^[\]]*\]/.source,r=/(?:[drlu]|do|down|le|left|ri|right|up)/.source,i="(?:-+"+r+"-+|\\.+"+r+"\\.+|-+(?:"+n+"-*)?|"+n+"-+|\\.+(?:"+n+"\\.*)?|"+n+"\\.+)",s=/(?:<{1,2}|\/{1,2}|\\{1,2}|<\||[#*^+}xo])/.source,o=/(?:>{1,2}|\/{1,2}|\\{1,2}|\|>|[#*^+{xo])/.source,a=/[[?]?[ox]?/.source,l=/[ox]?[\]?]?/.source,c=a+"(?:"+i+o+"|"+s+i+"(?:"+o+")?)"+l;e.languages["plant-uml"]={comment:{pattern:/(^[ \t]*)(?:'.*|\/'[\s\S]*?'\/)/m,lookbehind:!0,greedy:!0},preprocessor:{pattern:/(^[ \t]*)!.*/m,lookbehind:!0,greedy:!0,alias:"property",inside:{variable:t}},delimiter:{pattern:/(^[ \t]*)@(?:end|start)uml\b/m,lookbehind:!0,greedy:!0,alias:"punctuation"},arrow:{pattern:RegExp(/(^|[^-.<>?|\\[\]ox])/.source+c+/(?![-.<>?|\\\]ox])/.source),lookbehind:!0,greedy:!0,alias:"operator",inside:{expression:{pattern:/(\[)[^[\]]+(?=\])/,lookbehind:!0,inside:null},punctuation:/\[(?=$|\])|^\]/}},string:{pattern:/"[^"]*"/,greedy:!0},text:{pattern:/(\[[ \t]*[\r\n]+(?![\r\n]))[^\]]*(?=\])/,lookbehind:!0,greedy:!0,alias:"string"},keyword:[{pattern:/^([ \t]*)(?:abstract\s+class|end\s+(?:box|fork|group|merge|note|ref|split|title)|(?:fork|split)(?:\s+again)?|activate|actor|agent|alt|annotation|artifact|autoactivate|autonumber|backward|binary|boundary|box|break|caption|card|case|circle|class|clock|cloud|collections|component|concise|control|create|critical|database|deactivate|destroy|detach|diamond|else|elseif|end|end[hr]note|endif|endswitch|endwhile|entity|enum|file|folder|footer|frame|group|[hr]?note|header|hexagon|hide|if|interface|label|legend|loop|map|namespace|network|newpage|node|nwdiag|object|opt|package|page|par|participant|person|queue|rectangle|ref|remove|repeat|restore|return|robust|scale|set|show|skinparam|stack|start|state|stop|storage|switch|title|together|usecase|usecase\/|while)(?=\s|$)/m,lookbehind:!0,greedy:!0},/\b(?:elseif|equals|not|while)(?=\s*\()/,/\b(?:as|is|then)\b/],divider:{pattern:/^==.+==$/m,greedy:!0,alias:"important"},time:{pattern:/@(?:\d+(?:[:/]\d+){2}|[+-]?\d+|:[a-z]\w*(?:[+-]\d+)?)\b/i,greedy:!0,alias:"number"},color:{pattern:/#(?:[a-z_]+|[a-fA-F0-9]+)\b/,alias:"symbol"},variable:t,punctuation:/[:,;()[\]{}]|\.{3}/},e.languages["plant-uml"].arrow.inside.expression.inside=e.languages["plant-uml"],e.languages["plantuml"]=e.languages["plant-uml"]})(Prism)},2927:function(){Prism.languages.plsql=Prism.languages.extend("sql",{comment:{pattern:/\/\*[\s\S]*?\*\/|--.*/,greedy:!0},keyword:/\b(?:A|ACCESSIBLE|ADD|AGENT|AGGREGATE|ALL|ALTER|AND|ANY|ARRAY|AS|ASC|AT|ATTRIBUTE|AUTHID|AVG|BEGIN|BETWEEN|BFILE_BASE|BINARY|BLOB_BASE|BLOCK|BODY|BOTH|BOUND|BULK|BY|BYTE|C|CALL|CALLING|CASCADE|CASE|CHAR|CHARACTER|CHARSET|CHARSETFORM|CHARSETID|CHAR_BASE|CHECK|CLOB_BASE|CLONE|CLOSE|CLUSTER|CLUSTERS|COLAUTH|COLLECT|COLUMNS|COMMENT|COMMIT|COMMITTED|COMPILED|COMPRESS|CONNECT|CONSTANT|CONSTRUCTOR|CONTEXT|CONTINUE|CONVERT|COUNT|CRASH|CREATE|CREDENTIAL|CURRENT|CURSOR|CUSTOMDATUM|DANGLING|DATA|DATE|DATE_BASE|DAY|DECLARE|DEFAULT|DEFINE|DELETE|DESC|DETERMINISTIC|DIRECTORY|DISTINCT|DOUBLE|DROP|DURATION|ELEMENT|ELSE|ELSIF|EMPTY|END|ESCAPE|EXCEPT|EXCEPTION|EXCEPTIONS|EXCLUSIVE|EXECUTE|EXISTS|EXIT|EXTERNAL|FETCH|FINAL|FIRST|FIXED|FLOAT|FOR|FORALL|FORCE|FROM|FUNCTION|GENERAL|GOTO|GRANT|GROUP|HASH|HAVING|HEAP|HIDDEN|HOUR|IDENTIFIED|IF|IMMEDIATE|IMMUTABLE|IN|INCLUDING|INDEX|INDEXES|INDICATOR|INDICES|INFINITE|INSERT|INSTANTIABLE|INT|INTERFACE|INTERSECT|INTERVAL|INTO|INVALIDATE|IS|ISOLATION|JAVA|LANGUAGE|LARGE|LEADING|LENGTH|LEVEL|LIBRARY|LIKE|LIKE2|LIKE4|LIKEC|LIMIT|LIMITED|LOCAL|LOCK|LONG|LOOP|MAP|MAX|MAXLEN|MEMBER|MERGE|MIN|MINUS|MINUTE|MOD|MODE|MODIFY|MONTH|MULTISET|MUTABLE|NAME|NAN|NATIONAL|NATIVE|NCHAR|NEW|NOCOMPRESS|NOCOPY|NOT|NOWAIT|NULL|NUMBER_BASE|OBJECT|OCICOLL|OCIDATE|OCIDATETIME|OCIDURATION|OCIINTERVAL|OCILOBLOCATOR|OCINUMBER|OCIRAW|OCIREF|OCIREFCURSOR|OCIROWID|OCISTRING|OCITYPE|OF|OLD|ON|ONLY|OPAQUE|OPEN|OPERATOR|OPTION|OR|ORACLE|ORADATA|ORDER|ORGANIZATION|ORLANY|ORLVARY|OTHERS|OUT|OVERLAPS|OVERRIDING|PACKAGE|PARALLEL_ENABLE|PARAMETER|PARAMETERS|PARENT|PARTITION|PASCAL|PERSISTABLE|PIPE|PIPELINED|PLUGGABLE|POLYMORPHIC|PRAGMA|PRECISION|PRIOR|PRIVATE|PROCEDURE|PUBLIC|RAISE|RANGE|RAW|READ|RECORD|REF|REFERENCE|RELIES_ON|REM|REMAINDER|RENAME|RESOURCE|RESULT|RESULT_CACHE|RETURN|RETURNING|REVERSE|REVOKE|ROLLBACK|ROW|SAMPLE|SAVE|SAVEPOINT|SB1|SB2|SB4|SECOND|SEGMENT|SELECT|SELF|SEPARATE|SEQUENCE|SERIALIZABLE|SET|SHARE|SHORT|SIZE|SIZE_T|SOME|SPARSE|SQL|SQLCODE|SQLDATA|SQLNAME|SQLSTATE|STANDARD|START|STATIC|STDDEV|STORED|STRING|STRUCT|STYLE|SUBMULTISET|SUBPARTITION|SUBSTITUTABLE|SUBTYPE|SUM|SYNONYM|TABAUTH|TABLE|TDO|THE|THEN|TIME|TIMESTAMP|TIMEZONE_ABBR|TIMEZONE_HOUR|TIMEZONE_MINUTE|TIMEZONE_REGION|TO|TRAILING|TRANSACTION|TRANSACTIONAL|TRUSTED|TYPE|UB1|UB2|UB4|UNDER|UNION|UNIQUE|UNPLUG|UNSIGNED|UNTRUSTED|UPDATE|USE|USING|VALIST|VALUE|VALUES|VARIABLE|VARIANCE|VARRAY|VARYING|VIEW|VIEWS|VOID|WHEN|WHERE|WHILE|WITH|WORK|WRAPPED|WRITE|YEAR|ZONE)\b/i,operator:/:=?|=>|[<>^~!]=|\.\.|\|\||\*\*|[-+*/%<>=@]/}),Prism.languages.insertBefore("plsql","operator",{label:{pattern:/<<\s*\w+\s*>>/,alias:"symbol"}})},8281:function(){Prism.languages.powerquery={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|\/\/.*)/,lookbehind:!0,greedy:!0},"quoted-identifier":{pattern:/#"(?:[^"\r\n]|"")*"(?!")/,greedy:!0},string:{pattern:/(?:#!)?"(?:[^"\r\n]|"")*"(?!")/,greedy:!0},constant:[/\bDay\.(?:Friday|Monday|Saturday|Sunday|Thursday|Tuesday|Wednesday)\b/,/\bTraceLevel\.(?:Critical|Error|Information|Verbose|Warning)\b/,/\bOccurrence\.(?:All|First|Last)\b/,/\bOrder\.(?:Ascending|Descending)\b/,/\bRoundingMode\.(?:AwayFromZero|Down|ToEven|TowardZero|Up)\b/,/\bMissingField\.(?:Error|Ignore|UseNull)\b/,/\bQuoteStyle\.(?:Csv|None)\b/,/\bJoinKind\.(?:FullOuter|Inner|LeftAnti|LeftOuter|RightAnti|RightOuter)\b/,/\bGroupKind\.(?:Global|Local)\b/,/\bExtraValues\.(?:Error|Ignore|List)\b/,/\bJoinAlgorithm\.(?:Dynamic|LeftHash|LeftIndex|PairwiseHash|RightHash|RightIndex|SortMerge)\b/,/\bJoinSide\.(?:Left|Right)\b/,/\bPrecision\.(?:Decimal|Double)\b/,/\bRelativePosition\.From(?:End|Start)\b/,/\bTextEncoding\.(?:Ascii|BigEndianUnicode|Unicode|Utf16|Utf8|Windows)\b/,/\b(?:Any|Binary|Date|DateTime|DateTimeZone|Duration|Function|Int16|Int32|Int64|Int8|List|Logical|None|Number|Record|Table|Text|Time)\.Type\b/,/\bnull\b/],boolean:/\b(?:false|true)\b/,keyword:/\b(?:and|as|each|else|error|if|in|is|let|meta|not|nullable|optional|or|otherwise|section|shared|then|try|type)\b|#(?:binary|date|datetime|datetimezone|duration|infinity|nan|sections|shared|table|time)\b/,function:{pattern:/(^|[^#\w.])[a-z_][\w.]*(?=\s*\()/i,lookbehind:!0},"data-type":{pattern:/\b(?:any|anynonnull|binary|date|datetime|datetimezone|duration|function|list|logical|none|number|record|table|text|time)\b/,alias:"class-name"},number:{pattern:/\b0x[\da-f]+\b|(?:[+-]?(?:\b\d+\.)?\b\d+|[+-]\.\d+|(^|[^.])\B\.\d+)(?:e[+-]?\d+)?\b/i,lookbehind:!0},operator:/[-+*\/&?@^]|<(?:=>?|>)?|>=?|=>?|\.\.\.?/,punctuation:/[,;\[\](){}]/},Prism.languages.pq=Prism.languages["powerquery"],Prism.languages.mscript=Prism.languages["powerquery"]},6862:function(){(function(e){var t=e.languages.powershell={comment:[{pattern:/(^|[^`])<#[\s\S]*?#>/,lookbehind:!0},{pattern:/(^|[^`])#.*/,lookbehind:!0}],string:[{pattern:/"(?:`[\s\S]|[^`"])*"/,greedy:!0,inside:null},{pattern:/'(?:[^']|'')*'/,greedy:!0}],namespace:/\[[a-z](?:\[(?:\[[^\]]*\]|[^\[\]])*\]|[^\[\]])*\]/i,boolean:/\$(?:false|true)\b/i,variable:/\$\w+\b/,function:[/\b(?:Add|Approve|Assert|Backup|Block|Checkpoint|Clear|Close|Compare|Complete|Compress|Confirm|Connect|Convert|ConvertFrom|ConvertTo|Copy|Debug|Deny|Disable|Disconnect|Dismount|Edit|Enable|Enter|Exit|Expand|Export|Find|ForEach|Format|Get|Grant|Group|Hide|Import|Initialize|Install|Invoke|Join|Limit|Lock|Measure|Merge|Move|New|Open|Optimize|Out|Ping|Pop|Protect|Publish|Push|Read|Receive|Redo|Register|Remove|Rename|Repair|Request|Reset|Resize|Resolve|Restart|Restore|Resume|Revoke|Save|Search|Select|Send|Set|Show|Skip|Sort|Split|Start|Step|Stop|Submit|Suspend|Switch|Sync|Tee|Test|Trace|Unblock|Undo|Uninstall|Unlock|Unprotect|Unpublish|Unregister|Update|Use|Wait|Watch|Where|Write)-[a-z]+\b/i,/\b(?:ac|cat|chdir|clc|cli|clp|clv|compare|copy|cp|cpi|cpp|cvpa|dbp|del|diff|dir|ebp|echo|epal|epcsv|epsn|erase|fc|fl|ft|fw|gal|gbp|gc|gci|gcs|gdr|gi|gl|gm|gp|gps|group|gsv|gu|gv|gwmi|iex|ii|ipal|ipcsv|ipsn|irm|iwmi|iwr|kill|lp|ls|measure|mi|mount|move|mp|mv|nal|ndr|ni|nv|ogv|popd|ps|pushd|pwd|rbp|rd|rdr|ren|ri|rm|rmdir|rni|rnp|rp|rv|rvpa|rwmi|sal|saps|sasv|sbp|sc|select|set|shcm|si|sl|sleep|sls|sort|sp|spps|spsv|start|sv|swmi|tee|trcm|type|write)\b/i],keyword:/\b(?:Begin|Break|Catch|Class|Continue|Data|Define|Do|DynamicParam|Else|ElseIf|End|Exit|Filter|Finally|For|ForEach|From|Function|If|InlineScript|Parallel|Param|Process|Return|Sequence|Switch|Throw|Trap|Try|Until|Using|Var|While|Workflow)\b/i,operator:{pattern:/(^|\W)(?:!|-(?:b?(?:and|x?or)|as|(?:Not)?(?:Contains|In|Like|Match)|eq|ge|gt|is(?:Not)?|Join|le|lt|ne|not|Replace|sh[lr])\b|-[-=]?|\+[+=]?|[*\/%]=?)/i,lookbehind:!0},punctuation:/[|{}[\];(),.]/};t.string[0].inside={function:{pattern:/(^|[^`])\$\((?:\$\([^\r\n()]*\)|(?!\$\()[^\r\n)])*\)/,lookbehind:!0,inside:t},boolean:t.boolean,variable:t.variable}})(Prism)},7353:function(){Prism.languages.processing=Prism.languages.extend("clike",{keyword:/\b(?:break|case|catch|class|continue|default|else|extends|final|for|if|implements|import|new|null|private|public|return|static|super|switch|this|try|void|while)\b/,function:/\b\w+(?=\s*\()/,operator:/<[<=]?|>[>=]?|&&?|\|\|?|[%?]|[!=+\-*\/]=?/}),Prism.languages.insertBefore("processing","number",{constant:/\b(?!XML\b)[A-Z][A-Z\d_]+\b/,type:{pattern:/\b(?:boolean|byte|char|color|double|float|int|[A-Z]\w*)\b/,alias:"class-name"}})},3932:function(){Prism.languages.prolog={comment:{pattern:/\/\*[\s\S]*?\*\/|%.*/,greedy:!0},string:{pattern:/(["'])(?:\1\1|\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1(?!\1)/,greedy:!0},builtin:/\b(?:fx|fy|xf[xy]?|yfx?)\b/,function:/\b[a-z]\w*(?:(?=\()|\/\d+)/,number:/\b\d+(?:\.\d*)?/,operator:/[:\\=><\-?*@\/;+^|!$.]+|\b(?:is|mod|not|xor)\b/,punctuation:/[(){}\[\],]/}},6638:function(){(function(e){var t=["sum","min","max","avg","group","stddev","stdvar","count","count_values","bottomk","topk","quantile"],n=["on","ignoring","group_right","group_left","by","without"],r=["offset"],i=t.concat(n,r);e.languages.promql={comment:{pattern:/(^[ \t]*)#.*/m,lookbehind:!0},"vector-match":{pattern:new RegExp("((?:"+n.join("|")+")\\s*)\\([^)]*\\)"),lookbehind:!0,inside:{"label-key":{pattern:/\b[^,]+\b/,alias:"attr-name"},punctuation:/[(),]/}},"context-labels":{pattern:/\{[^{}]*\}/,inside:{"label-key":{pattern:/\b[a-z_]\w*(?=\s*(?:=|![=~]))/,alias:"attr-name"},"label-value":{pattern:/(["'`])(?:\\[\s\S]|(?!\1)[^\\])*\1/,greedy:!0,alias:"attr-value"},punctuation:/\{|\}|=~?|![=~]|,/}},"context-range":[{pattern:/\[[\w\s:]+\]/,inside:{punctuation:/\[|\]|:/,"range-duration":{pattern:/\b(?:\d+(?:[smhdwy]|ms))+\b/i,alias:"number"}}},{pattern:/(\boffset\s+)\w+/,lookbehind:!0,inside:{"range-duration":{pattern:/\b(?:\d+(?:[smhdwy]|ms))+\b/i,alias:"number"}}}],keyword:new RegExp("\\b(?:"+i.join("|")+")\\b","i"),function:/\b[a-z_]\w*(?=\s*\()/i,number:/[-+]?(?:(?:\b\d+(?:\.\d+)?|\B\.\d+)(?:e[-+]?\d+)?\b|\b(?:0x[0-9a-f]+|nan|inf)\b)/i,operator:/[\^*/%+-]|==|!=|<=|<|>=|>|\b(?:and|or|unless)\b/i,punctuation:/[{};()`,.[\]]/}})(Prism)},5820:function(){Prism.languages.properties={comment:/^[ \t]*[#!].*$/m,value:{pattern:/(^[ \t]*(?:\\(?:\r\n|[\s\S])|[^\\\s:=])+(?: *[=:] *(?! )| ))(?:\\(?:\r\n|[\s\S])|[^\\\r\n])+/m,lookbehind:!0,alias:"attr-value"},key:{pattern:/^[ \t]*(?:\\(?:\r\n|[\s\S])|[^\\\s:=])+(?= *[=:]| )/m,alias:"attr-name"},punctuation:/[=:]/}},7345:function(){(function(e){var t=/\b(?:bool|bytes|double|s?fixed(?:32|64)|float|[su]?int(?:32|64)|string)\b/;e.languages.protobuf=e.languages.extend("clike",{"class-name":[{pattern:/(\b(?:enum|extend|message|service)\s+)[A-Za-z_]\w*(?=\s*\{)/,lookbehind:!0},{pattern:/(\b(?:rpc\s+\w+|returns)\s*\(\s*(?:stream\s+)?)\.?[A-Za-z_]\w*(?:\.[A-Za-z_]\w*)*(?=\s*\))/,lookbehind:!0}],keyword:/\b(?:enum|extend|extensions|import|message|oneof|option|optional|package|public|repeated|required|reserved|returns|rpc(?=\s+\w)|service|stream|syntax|to)\b(?!\s*=\s*\d)/,function:/\b[a-z_]\w*(?=\s*\()/i}),e.languages.insertBefore("protobuf","operator",{map:{pattern:/\bmap<\s*[\w.]+\s*,\s*[\w.]+\s*>(?=\s+[a-z_]\w*\s*[=;])/i,alias:"class-name",inside:{punctuation:/[<>.,]/,builtin:t}},builtin:t,"positional-class-name":{pattern:/(?:\b|\B\.)[a-z_]\w*(?:\.[a-z_]\w*)*(?=\s+[a-z_]\w*\s*[=;])/i,alias:"class-name",inside:{punctuation:/\./}},annotation:{pattern:/(\[\s*)[a-z_]\w*(?=\s*=)/i,lookbehind:!0}})})(Prism)},942:function(){Prism.languages.psl={comment:{pattern:/#.*/,greedy:!0},string:{pattern:/"(?:\\.|[^\\"])*"/,greedy:!0,inside:{symbol:/\\[ntrbA-Z"\\]/}},"heredoc-string":{pattern:/<<<([a-zA-Z_]\w*)[\r\n](?:.*[\r\n])*?\1\b/,alias:"string",greedy:!0},keyword:/\b(?:__multi|__single|case|default|do|else|elsif|exit|export|for|foreach|function|if|last|line|local|next|requires|return|switch|until|while|word)\b/,constant:/\b(?:ALARM|CHART_ADD_GRAPH|CHART_DELETE_GRAPH|CHART_DESTROY|CHART_LOAD|CHART_PRINT|EOF|OFFLINE|OK|PSL_PROF_LOG|R_CHECK_HORIZ|R_CHECK_VERT|R_CLICKER|R_COLUMN|R_FRAME|R_ICON|R_LABEL|R_LABEL_CENTER|R_LIST_MULTIPLE|R_LIST_MULTIPLE_ND|R_LIST_SINGLE|R_LIST_SINGLE_ND|R_MENU|R_POPUP|R_POPUP_SCROLLED|R_RADIO_HORIZ|R_RADIO_VERT|R_ROW|R_SCALE_HORIZ|R_SCALE_VERT|R_SEP_HORIZ|R_SEP_VERT|R_SPINNER|R_TEXT_FIELD|R_TEXT_FIELD_LABEL|R_TOGGLE|TRIM_LEADING|TRIM_LEADING_AND_TRAILING|TRIM_REDUNDANT|TRIM_TRAILING|VOID|WARN)\b/,boolean:/\b(?:FALSE|False|NO|No|TRUE|True|YES|Yes|false|no|true|yes)\b/,variable:/\b(?:PslDebug|errno|exit_status)\b/,builtin:{pattern:/\b(?:PslExecute|PslFunctionCall|PslFunctionExists|PslSetOptions|_snmp_debug|acos|add_diary|annotate|annotate_get|ascii_to_ebcdic|asctime|asin|atan|atexit|batch_set|blackout|cat|ceil|chan_exists|change_state|close|code_cvt|cond_signal|cond_wait|console_type|convert_base|convert_date|convert_locale_date|cos|cosh|create|date|dcget_text|destroy|destroy_lock|dget_text|difference|dump_hist|ebcdic_to_ascii|encrypt|event_archive|event_catalog_get|event_check|event_query|event_range_manage|event_range_query|event_report|event_schedule|event_trigger|event_trigger2|execute|exists|exp|fabs|file|floor|fmod|fopen|fseek|ftell|full_discovery|get|get_chan_info|get_ranges|get_text|get_vars|getenv|gethostinfo|getpid|getpname|grep|history|history_get_retention|in_transition|index|int|internal|intersection|is_var|isnumber|join|kill|length|lines|lock|lock_info|log|log10|loge|matchline|msg_check|msg_get_format|msg_get_severity|msg_printf|msg_sprintf|ntharg|nthargf|nthline|nthlinef|num_bytes|num_consoles|pconfig|popen|poplines|pow|print|printf|proc_exists|process|random|read|readln|refresh_parameters|remote_check|remote_close|remote_event_query|remote_event_trigger|remote_file_send|remote_open|remove|replace|rindex|sec_check_priv|sec_store_get|sec_store_set|set|set_alarm_ranges|set_locale|share|sin|sinh|sleep|snmp_agent_config|snmp_agent_start|snmp_agent_stop|snmp_close|snmp_config|snmp_get|snmp_get_next|snmp_h_get|snmp_h_get_next|snmp_h_set|snmp_open|snmp_set|snmp_trap_ignore|snmp_trap_listen|snmp_trap_raise_std_trap|snmp_trap_receive|snmp_trap_register_im|snmp_trap_send|snmp_walk|sopen|sort|splitline|sprintf|sqrt|srandom|str_repeat|strcasecmp|subset|substr|system|tail|tan|tanh|text_domain|time|tmpnam|tolower|toupper|trace_psl_process|trim|union|unique|unlock|unset|va_arg|va_start|write)\b/,alias:"builtin-function"},"foreach-variable":{pattern:/(\bforeach\s+(?:(?:\w+\b|"(?:\\.|[^\\"])*")\s+){0,2})[_a-zA-Z]\w*(?=\s*\()/,lookbehind:!0,greedy:!0},function:/\b[_a-z]\w*\b(?=\s*\()/i,number:/\b(?:0x[0-9a-f]+|\d+(?:\.\d+)?)\b/i,operator:/--|\+\+|&&=?|\|\|=?|<<=?|>>=?|[=!]~|[-+*/%&|^!=<>]=?|\.|[:?]/,punctuation:/[(){}\[\];,]/}},3381:function(){(function(e){e.languages.pug={comment:{pattern:/(^([\t ]*))\/\/.*(?:(?:\r?\n|\r)\2[\t ].+)*/m,lookbehind:!0},"multiline-script":{pattern:/(^([\t ]*)script\b.*\.[\t ]*)(?:(?:\r?\n|\r(?!\n))(?:\2[\t ].+|\s*?(?=\r?\n|\r)))+/m,lookbehind:!0,inside:e.languages.javascript},filter:{pattern:/(^([\t ]*)):.+(?:(?:\r?\n|\r(?!\n))(?:\2[\t ].+|\s*?(?=\r?\n|\r)))+/m,lookbehind:!0,inside:{"filter-name":{pattern:/^:[\w-]+/,alias:"variable"},text:/\S[\s\S]*/}},"multiline-plain-text":{pattern:/(^([\t ]*)[\w\-#.]+\.[\t ]*)(?:(?:\r?\n|\r(?!\n))(?:\2[\t ].+|\s*?(?=\r?\n|\r)))+/m,lookbehind:!0},markup:{pattern:/(^[\t ]*)<.+/m,lookbehind:!0,inside:e.languages.markup},doctype:{pattern:/((?:^|\n)[\t ]*)doctype(?: .+)?/,lookbehind:!0},"flow-control":{pattern:/(^[\t ]*)(?:case|default|each|else|if|unless|when|while)\b(?: .+)?/m,lookbehind:!0,inside:{each:{pattern:/^each .+? in\b/,inside:{keyword:/\b(?:each|in)\b/,punctuation:/,/}},branch:{pattern:/^(?:case|default|else|if|unless|when|while)\b/,alias:"keyword"},rest:e.languages.javascript}},keyword:{pattern:/(^[\t ]*)(?:append|block|extends|include|prepend)\b.+/m,lookbehind:!0},mixin:[{pattern:/(^[\t ]*)mixin .+/m,lookbehind:!0,inside:{keyword:/^mixin/,function:/\w+(?=\s*\(|\s*$)/,punctuation:/[(),.]/}},{pattern:/(^[\t ]*)\+.+/m,lookbehind:!0,inside:{name:{pattern:/^\+\w+/,alias:"function"},rest:e.languages.javascript}}],script:{pattern:/(^[\t ]*script(?:(?:&[^(]+)?\([^)]+\))*[\t ]).+/m,lookbehind:!0,inside:e.languages.javascript},"plain-text":{pattern:/(^[\t ]*(?!-)[\w\-#.]*[\w\-](?:(?:&[^(]+)?\([^)]+\))*\/?[\t ]).+/m,lookbehind:!0},tag:{pattern:/(^[\t ]*)(?!-)[\w\-#.]*[\w\-](?:(?:&[^(]+)?\([^)]+\))*\/?:?/m,lookbehind:!0,inside:{attributes:[{pattern:/&[^(]+\([^)]+\)/,inside:e.languages.javascript},{pattern:/\([^)]+\)/,inside:{"attr-value":{pattern:/(=\s*(?!\s))(?:\{[^}]*\}|[^,)\r\n]+)/,lookbehind:!0,inside:e.languages.javascript},"attr-name":/[\w-]+(?=\s*!?=|\s*[,)])/,punctuation:/[!=(),]+/}}],punctuation:/:/,"attr-id":/#[\w\-]+/,"attr-class":/\.[\w\-]+/}},code:[{pattern:/(^[\t ]*(?:-|!?=)).+/m,lookbehind:!0,inside:e.languages.javascript}],punctuation:/[.\-!=|]+/};for(var t=/(^([\t ]*)):(?:(?:\r?\n|\r(?!\n))(?:\2[\t ].+|\s*?(?=\r?\n|\r)))+/.source,n=[{filter:"atpl",language:"twig"},{filter:"coffee",language:"coffeescript"},"ejs","handlebars","less","livescript","markdown",{filter:"sass",language:"scss"},"stylus"],r={},i=0,s=n.length;i",(function(){return o.filter})),"m"),lookbehind:!0,inside:{"filter-name":{pattern:/^:[\w-]+/,alias:"variable"},text:{pattern:/\S[\s\S]*/,alias:[o.language,"language-"+o.language],inside:e.languages[o.language]}}})}e.languages.insertBefore("pug","filter",r)})(Prism)},4319:function(){(function(e){e.languages.puppet={heredoc:[{pattern:/(@\("([^"\r\n\/):]+)"(?:\/[nrts$uL]*)?\).*(?:\r?\n|\r))(?:.*(?:\r?\n|\r(?!\n)))*?[ \t]*(?:\|[ \t]*)?(?:-[ \t]*)?\2/,lookbehind:!0,alias:"string",inside:{punctuation:/(?=\S).*\S(?= *$)/}},{pattern:/(@\(([^"\r\n\/):]+)(?:\/[nrts$uL]*)?\).*(?:\r?\n|\r))(?:.*(?:\r?\n|\r(?!\n)))*?[ \t]*(?:\|[ \t]*)?(?:-[ \t]*)?\2/,lookbehind:!0,greedy:!0,alias:"string",inside:{punctuation:/(?=\S).*\S(?= *$)/}},{pattern:/@\("?(?:[^"\r\n\/):]+)"?(?:\/[nrts$uL]*)?\)/,alias:"string",inside:{punctuation:{pattern:/(\().+?(?=\))/,lookbehind:!0}}}],"multiline-comment":{pattern:/(^|[^\\])\/\*[\s\S]*?\*\//,lookbehind:!0,greedy:!0,alias:"comment"},regex:{pattern:/((?:\bnode\s+|[~=\(\[\{,]\s*|[=+]>\s*|^\s*))\/(?:[^\/\\]|\\[\s\S])+\/(?:[imx]+\b|\B)/,lookbehind:!0,greedy:!0,inside:{"extended-regex":{pattern:/^\/(?:[^\/\\]|\\[\s\S])+\/[im]*x[im]*$/,inside:{comment:/#.*/}}}},comment:{pattern:/(^|[^\\])#.*/,lookbehind:!0,greedy:!0},string:{pattern:/(["'])(?:\$\{(?:[^'"}]|(["'])(?:(?!\2)[^\\]|\\[\s\S])*\2)+\}|\$(?!\{)|(?!\1)[^\\$]|\\[\s\S])*\1/,greedy:!0,inside:{"double-quoted":{pattern:/^"[\s\S]*"$/,inside:{}}}},variable:{pattern:/\$(?:::)?\w+(?:::\w+)*/,inside:{punctuation:/::/}},"attr-name":/(?:\b\w+|\*)(?=\s*=>)/,function:[{pattern:/(\.)(?!\d)\w+/,lookbehind:!0},/\b(?:contain|debug|err|fail|include|info|notice|realize|require|tag|warning)\b|\b(?!\d)\w+(?=\()/],number:/\b(?:0x[a-f\d]+|\d+(?:\.\d+)?(?:e-?\d+)?)\b/i,boolean:/\b(?:false|true)\b/,keyword:/\b(?:application|attr|case|class|consumes|default|define|else|elsif|function|if|import|inherits|node|private|produces|type|undef|unless)\b/,datatype:{pattern:/\b(?:Any|Array|Boolean|Callable|Catalogentry|Class|Collection|Data|Default|Enum|Float|Hash|Integer|NotUndef|Numeric|Optional|Pattern|Regexp|Resource|Runtime|Scalar|String|Struct|Tuple|Type|Undef|Variant)\b/,alias:"symbol"},operator:/=[=~>]?|![=~]?|<(?:<\|?|[=~|-])?|>[>=]?|->?|~>|\|>?>?|[*\/%+?]|\b(?:and|in|or)\b/,punctuation:/[\[\]{}().,;]|:+/};var t=[{pattern:/(^|[^\\])\$\{(?:[^'"{}]|\{[^}]*\}|(["'])(?:(?!\2)[^\\]|\\[\s\S])*\2)+\}/,lookbehind:!0,inside:{"short-variable":{pattern:/(^\$\{)(?!\w+\()(?:::)?\w+(?:::\w+)*/,lookbehind:!0,alias:"variable",inside:{punctuation:/::/}},delimiter:{pattern:/^\$/,alias:"variable"},rest:e.languages.puppet}},{pattern:/(^|[^\\])\$(?:::)?\w+(?:::\w+)*/,lookbehind:!0,alias:"variable",inside:{punctuation:/::/}}];e.languages.puppet["heredoc"][0].inside.interpolation=t,e.languages.puppet["string"].inside["double-quoted"].inside.interpolation=t})(Prism)},9753:function(){(function(e){e.languages.pure={comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?\*\//,lookbehind:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0},/#!.+/],"inline-lang":{pattern:/%<[\s\S]+?%>/,greedy:!0,inside:{lang:{pattern:/(^%< *)-\*-.+?-\*-/,lookbehind:!0,alias:"comment"},delimiter:{pattern:/^%<.*|%>$/,alias:"punctuation"}}},string:{pattern:/"(?:\\.|[^"\\\r\n])*"/,greedy:!0},number:{pattern:/((?:\.\.)?)(?:\b(?:inf|nan)\b|\b0x[\da-f]+|(?:\b(?:0b)?\d+(?:\.\d+)?|\B\.\d+)(?:e[+-]?\d+)?L?)/i,lookbehind:!0},keyword:/\b(?:NULL|ans|break|bt|case|catch|cd|clear|const|def|del|dump|else|end|exit|extern|false|force|help|if|infix[lr]?|interface|let|ls|mem|namespace|nonfix|of|otherwise|outfix|override|postfix|prefix|private|public|pwd|quit|run|save|show|stats|then|throw|trace|true|type|underride|using|when|with)\b/,function:/\b(?:abs|add_(?:addr|constdef|(?:fundef|interface|macdef|typedef)(?:_at)?|vardef)|all|any|applp?|arity|bigintp?|blob(?:_crc|_size|p)?|boolp?|byte_c?string(?:_pointer)?|byte_(?:matrix|pointer)|calloc|cat|catmap|ceil|char[ps]?|check_ptrtag|chr|clear_sentry|clearsym|closurep?|cmatrixp?|cols?|colcat(?:map)?|colmap|colrev|colvector(?:p|seq)?|complex(?:_float_(?:matrix|pointer)|_matrix(?:_view)?|_pointer|p)?|conj|cookedp?|cst|cstring(?:_(?:dup|list|vector))?|curry3?|cyclen?|del_(?:constdef|fundef|interface|macdef|typedef|vardef)|delete|diag(?:mat)?|dim|dmatrixp?|do|double(?:_matrix(?:_view)?|_pointer|p)?|dowith3?|drop|dropwhile|eval(?:cmd)?|exactp|filter|fix|fixity|flip|float(?:_matrix|_pointer)|floor|fold[lr]1?|frac|free|funp?|functionp?|gcd|get(?:_(?:byte|constdef|double|float|fundef|int(?:64)?|interface(?:_typedef)?|long|macdef|pointer|ptrtag|sentry|short|string|typedef|vardef))?|globsym|hash|head|id|im|imatrixp?|index|inexactp|infp|init|insert|int(?:_matrix(?:_view)?|_pointer|p)?|int64_(?:matrix|pointer)|integerp?|iteraten?|iterwhile|join|keys?|lambdap?|last(?:err(?:pos)?)?|lcd|list[2p]?|listmap|make_ptrtag|malloc|map|matcat|matrixp?|max|member|min|nanp|nargs|nmatrixp?|null|numberp?|ord|pack(?:ed)?|pointer(?:_cast|_tag|_type|p)?|pow|pred|ptrtag|put(?:_(?:byte|double|float|int(?:64)?|long|pointer|short|string))?|rationalp?|re|realp?|realloc|recordp?|redim|reduce(?:_with)?|refp?|repeatn?|reverse|rlistp?|round|rows?|rowcat(?:map)?|rowmap|rowrev|rowvector(?:p|seq)?|same|scan[lr]1?|sentry|sgn|short_(?:matrix|pointer)|slice|smatrixp?|sort|split|str|strcat|stream|stride|string(?:_(?:dup|list|vector)|p)?|subdiag(?:mat)?|submat|subseq2?|substr|succ|supdiag(?:mat)?|symbolp?|tail|take|takewhile|thunkp?|transpose|trunc|tuplep?|typep|ubyte|uint(?:64)?|ulong|uncurry3?|unref|unzip3?|update|ushort|vals?|varp?|vector(?:p|seq)?|void|zip3?|zipwith3?)\b/,special:{pattern:/\b__[a-z]+__\b/i,alias:"builtin"},operator:/(?:[!"#$%&'*+,\-.\/:<=>?@\\^`|~\u00a1-\u00bf\u00d7-\u00f7\u20d0-\u2bff]|\b_+\b)+|\b(?:and|div|mod|not|or)\b/,punctuation:/[(){}\[\];,|]/};var t=["c",{lang:"c++",alias:"cpp"},"fortran"],n=/%< *-\*- *\d* *-\*-[\s\S]+?%>/.source;t.forEach((function(t){var r=t;if("string"!==typeof t&&(r=t.alias,t=t.lang),e.languages[r]){var i={};i["inline-lang-"+r]={pattern:RegExp(n.replace("",t.replace(/([.+*?\/\\(){}\[\]])/g,"\\$1")),"i"),inside:e.util.clone(e.languages.pure["inline-lang"].inside)},i["inline-lang-"+r].inside.rest=e.util.clone(e.languages[r]),e.languages.insertBefore("pure","inline-lang",i)}})),e.languages.c&&(e.languages.pure["inline-lang"].inside.rest=e.util.clone(e.languages.c))})(Prism)},2168:function(){Prism.languages.purebasic=Prism.languages.extend("clike",{comment:/;.*/,keyword:/\b(?:align|and|as|break|calldebugger|case|compilercase|compilerdefault|compilerelse|compilerelseif|compilerendif|compilerendselect|compilererror|compilerif|compilerselect|continue|data|datasection|debug|debuglevel|declare|declarec|declarecdll|declaredll|declaremodule|default|define|dim|disableasm|disabledebugger|disableexplicit|else|elseif|enableasm|enabledebugger|enableexplicit|end|enddatasection|enddeclaremodule|endenumeration|endif|endimport|endinterface|endmacro|endmodule|endprocedure|endselect|endstructure|endstructureunion|endwith|enumeration|extends|fakereturn|for|foreach|forever|global|gosub|goto|if|import|importc|includebinary|includefile|includepath|interface|macro|module|newlist|newmap|next|not|or|procedure|procedurec|procedurecdll|proceduredll|procedurereturn|protected|prototype|prototypec|read|redim|repeat|restore|return|runtime|select|shared|static|step|structure|structureunion|swap|threaded|to|until|wend|while|with|xincludefile|xor)\b/i,function:/\b\w+(?:\.\w+)?\s*(?=\()/,number:/(?:\$[\da-f]+|\b-?(?:\d+(?:\.\d+)?|\.\d+)(?:e[+-]?\d+)?)\b/i,operator:/(?:@\*?|\?|\*)\w+\$?|-[>-]?|\+\+?|!=?|<>?=?|==?|&&?|\|?\||[~^%?*/@]/}),Prism.languages.insertBefore("purebasic","keyword",{tag:/#\w+\$?/,asm:{pattern:/(^[\t ]*)!.*/m,lookbehind:!0,alias:"tag",inside:{comment:/;.*/,string:{pattern:/(["'`])(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0},"label-reference-anonymous":{pattern:/(!\s*j[a-z]+\s+)@[fb]/i,lookbehind:!0,alias:"fasm-label"},"label-reference-addressed":{pattern:/(!\s*j[a-z]+\s+)[A-Z._?$@][\w.?$@~#]*/i,lookbehind:!0,alias:"fasm-label"},keyword:[/\b(?:extern|global)\b[^;\r\n]*/i,/\b(?:CPU|DEFAULT|FLOAT)\b.*/],function:{pattern:/^([\t ]*!\s*)[\da-z]+(?=\s|$)/im,lookbehind:!0},"function-inline":{pattern:/(:\s*)[\da-z]+(?=\s)/i,lookbehind:!0,alias:"function"},label:{pattern:/^([\t ]*!\s*)[A-Za-z._?$@][\w.?$@~#]*(?=:)/m,lookbehind:!0,alias:"fasm-label"},register:/\b(?:st\d|[xyz]mm\d\d?|[cdt]r\d|r\d\d?[bwd]?|[er]?[abcd]x|[abcd][hl]|[er]?(?:bp|di|si|sp)|[cdefgs]s|mm\d+)\b/i,number:/(?:\b|-|(?=\$))(?:0[hx](?:[\da-f]*\.)?[\da-f]+(?:p[+-]?\d+)?|\d[\da-f]+[hx]|\$\d[\da-f]*|0[oq][0-7]+|[0-7]+[oq]|0[by][01]+|[01]+[by]|0[dt]\d+|(?:\d+(?:\.\d+)?|\.\d+)(?:\.?e[+-]?\d+)?[dt]?)\b/i,operator:/[\[\]*+\-/%<>=&|$!,.:]/}}}),delete Prism.languages.purebasic["class-name"],delete Prism.languages.purebasic["boolean"],Prism.languages.pbfasm=Prism.languages["purebasic"]},9485:function(){Prism.languages.purescript=Prism.languages.extend("haskell",{keyword:/\b(?:ado|case|class|data|derive|do|else|forall|if|in|infixl|infixr|instance|let|module|newtype|of|primitive|then|type|where)\b|∀/,"import-statement":{pattern:/(^[\t ]*)import\s+[A-Z][\w']*(?:\.[A-Z][\w']*)*(?:\s+as\s+[A-Z][\w']*(?:\.[A-Z][\w']*)*)?(?:\s+hiding\b)?/m,lookbehind:!0,inside:{keyword:/\b(?:as|hiding|import)\b/,punctuation:/\./}},builtin:/\b(?:absurd|add|ap|append|apply|between|bind|bottom|clamp|compare|comparing|compose|conj|const|degree|discard|disj|div|eq|flap|flip|gcd|identity|ifM|join|lcm|liftA1|liftM1|map|max|mempty|min|mod|mul|negate|not|notEq|one|otherwise|recip|show|sub|top|unit|unless|unlessM|void|when|whenM|zero)\b/,operator:[Prism.languages.haskell.operator[0],Prism.languages.haskell.operator[2],/[\xa2-\xa6\xa8\xa9\xac\xae-\xb1\xb4\xb8\xd7\xf7\u02c2-\u02c5\u02d2-\u02df\u02e5-\u02eb\u02ed\u02ef-\u02ff\u0375\u0384\u0385\u03f6\u0482\u058d-\u058f\u0606-\u0608\u060b\u060e\u060f\u06de\u06e9\u06fd\u06fe\u07f6\u07fe\u07ff\u09f2\u09f3\u09fa\u09fb\u0af1\u0b70\u0bf3-\u0bfa\u0c7f\u0d4f\u0d79\u0e3f\u0f01-\u0f03\u0f13\u0f15-\u0f17\u0f1a-\u0f1f\u0f34\u0f36\u0f38\u0fbe-\u0fc5\u0fc7-\u0fcc\u0fce\u0fcf\u0fd5-\u0fd8\u109e\u109f\u1390-\u1399\u166d\u17db\u1940\u19de-\u19ff\u1b61-\u1b6a\u1b74-\u1b7c\u1fbd\u1fbf-\u1fc1\u1fcd-\u1fcf\u1fdd-\u1fdf\u1fed-\u1fef\u1ffd\u1ffe\u2044\u2052\u207a-\u207c\u208a-\u208c\u20a0-\u20bf\u2100\u2101\u2103-\u2106\u2108\u2109\u2114\u2116-\u2118\u211e-\u2123\u2125\u2127\u2129\u212e\u213a\u213b\u2140-\u2144\u214a-\u214d\u214f\u218a\u218b\u2190-\u2307\u230c-\u2328\u232b-\u2426\u2440-\u244a\u249c-\u24e9\u2500-\u2767\u2794-\u27c4\u27c7-\u27e5\u27f0-\u2982\u2999-\u29d7\u29dc-\u29fb\u29fe-\u2b73\u2b76-\u2b95\u2b97-\u2bff\u2ce5-\u2cea\u2e50\u2e51\u2e80-\u2e99\u2e9b-\u2ef3\u2f00-\u2fd5\u2ff0-\u2ffb\u3004\u3012\u3013\u3020\u3036\u3037\u303e\u303f\u309b\u309c\u3190\u3191\u3196-\u319f\u31c0-\u31e3\u3200-\u321e\u322a-\u3247\u3250\u3260-\u327f\u328a-\u32b0\u32c0-\u33ff\u4dc0-\u4dff\ua490-\ua4c6\ua700-\ua716\ua720\ua721\ua789\ua78a\ua828-\ua82b\ua836-\ua839\uaa77-\uaa79\uab5b\uab6a\uab6b\ufb29\ufbb2-\ufbc1\ufdfc\ufdfd\ufe62\ufe64-\ufe66\ufe69\uff04\uff0b\uff1c-\uff1e\uff3e\uff40\uff5c\uff5e\uffe0-\uffe6\uffe8-\uffee\ufffc\ufffd]/]}),Prism.languages.purs=Prism.languages.purescript},366:function(){Prism.languages.python={comment:{pattern:/(^|[^\\])#.*/,lookbehind:!0,greedy:!0},"string-interpolation":{pattern:/(?:f|fr|rf)(?:("""|''')[\s\S]*?\1|("|')(?:\\.|(?!\2)[^\\\r\n])*\2)/i,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^{])(?:\{\{)*)\{(?!\{)(?:[^{}]|\{(?!\{)(?:[^{}]|\{(?!\{)(?:[^{}])+\})+\})+\}/,lookbehind:!0,inside:{"format-spec":{pattern:/(:)[^:(){}]+(?=\}$)/,lookbehind:!0},"conversion-option":{pattern:/![sra](?=[:}]$)/,alias:"punctuation"},rest:null}},string:/[\s\S]+/}},"triple-quoted-string":{pattern:/(?:[rub]|br|rb)?("""|''')[\s\S]*?\1/i,greedy:!0,alias:"string"},string:{pattern:/(?:[rub]|br|rb)?("|')(?:\\.|(?!\1)[^\\\r\n])*\1/i,greedy:!0},function:{pattern:/((?:^|\s)def[ \t]+)[a-zA-Z_]\w*(?=\s*\()/g,lookbehind:!0},"class-name":{pattern:/(\bclass\s+)\w+/i,lookbehind:!0},decorator:{pattern:/(^[\t ]*)@\w+(?:\.\w+)*/m,lookbehind:!0,alias:["annotation","punctuation"],inside:{punctuation:/\./}},keyword:/\b(?:_(?=\s*:)|and|as|assert|async|await|break|case|class|continue|def|del|elif|else|except|exec|finally|for|from|global|if|import|in|is|lambda|match|nonlocal|not|or|pass|print|raise|return|try|while|with|yield)\b/,builtin:/\b(?:__import__|abs|all|any|apply|ascii|basestring|bin|bool|buffer|bytearray|bytes|callable|chr|classmethod|cmp|coerce|compile|complex|delattr|dict|dir|divmod|enumerate|eval|execfile|file|filter|float|format|frozenset|getattr|globals|hasattr|hash|help|hex|id|input|int|intern|isinstance|issubclass|iter|len|list|locals|long|map|max|memoryview|min|next|object|oct|open|ord|pow|property|range|raw_input|reduce|reload|repr|reversed|round|set|setattr|slice|sorted|staticmethod|str|sum|super|tuple|type|unichr|unicode|vars|xrange|zip)\b/,boolean:/\b(?:False|None|True)\b/,number:/\b0(?:b(?:_?[01])+|o(?:_?[0-7])+|x(?:_?[a-f0-9])+)\b|(?:\b\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\B\.\d+(?:_\d+)*)(?:e[+-]?\d+(?:_\d+)*)?j?(?!\w)/i,operator:/[-+%=]=?|!=|:=|\*\*?=?|\/\/?=?|<[<=>]?|>[=>]?|[&|^~]/,punctuation:/[{}[\];(),.:]/},Prism.languages.python["string-interpolation"].inside["interpolation"].inside.rest=Prism.languages.python,Prism.languages.py=Prism.languages.python},2939:function(){Prism.languages.q={string:/"(?:\\.|[^"\\\r\n])*"/,comment:[{pattern:/([\t )\]}])\/.*/,lookbehind:!0,greedy:!0},{pattern:/(^|\r?\n|\r)\/[\t ]*(?:(?:\r?\n|\r)(?:.*(?:\r?\n|\r(?!\n)))*?(?:\\(?=[\t ]*(?:\r?\n|\r))|$)|\S.*)/,lookbehind:!0,greedy:!0},{pattern:/^\\[\t ]*(?:\r?\n|\r)[\s\S]+/m,greedy:!0},{pattern:/^#!.+/m,greedy:!0}],symbol:/`(?::\S+|[\w.]*)/,datetime:{pattern:/0N[mdzuvt]|0W[dtz]|\d{4}\.\d\d(?:m|\.\d\d(?:T(?:\d\d(?::\d\d(?::\d\d(?:[.:]\d\d\d)?)?)?)?)?[dz]?)|\d\d:\d\d(?::\d\d(?:[.:]\d\d\d)?)?[uvt]?/,alias:"number"},number:/\b(?![01]:)(?:0N[hje]?|0W[hj]?|0[wn]|0x[\da-fA-F]+|\d+(?:\.\d*)?(?:e[+-]?\d+)?[hjfeb]?)/,keyword:/\\\w+\b|\b(?:abs|acos|aj0?|all|and|any|asc|asin|asof|atan|attr|avgs?|binr?|by|ceiling|cols|cor|cos|count|cov|cross|csv|cut|delete|deltas|desc|dev|differ|distinct|div|do|dsave|ej|enlist|eval|except|exec|exit|exp|fby|fills|first|fkeys|flip|floor|from|get|getenv|group|gtime|hclose|hcount|hdel|hopen|hsym|iasc|identity|idesc|if|ij|in|insert|inter|inv|keys?|last|like|list|ljf?|load|log|lower|lsq|ltime|ltrim|mavg|maxs?|mcount|md5|mdev|med|meta|mins?|mmax|mmin|mmu|mod|msum|neg|next|not|null|or|over|parse|peach|pj|plist|prds?|prev|prior|rand|rank|ratios|raze|read0|read1|reciprocal|reval|reverse|rload|rotate|rsave|rtrim|save|scan|scov|sdev|select|set|setenv|show|signum|sin|sqrt|ssr?|string|sublist|sums?|sv|svar|system|tables|tan|til|trim|txf|type|uj|ungroup|union|update|upper|upsert|value|var|views?|vs|wavg|where|while|within|wj1?|wsum|ww|xasc|xbar|xcols?|xdesc|xexp|xgroup|xkey|xlog|xprev|xrank)\b/,adverb:{pattern:/['\/\\]:?|\beach\b/,alias:"function"},verb:{pattern:/(?:\B\.\B|\b[01]:|<[=>]?|>=?|[:+\-*%,!?~=|$&#@^]):?|\b_\b:?/,alias:"operator"},punctuation:/[(){}\[\];.]/}},4891:function(){(function(e){for(var t=/"(?:\\.|[^\\"\r\n])*"|'(?:\\.|[^\\'\r\n])*'/.source,n=/\/\/.*(?!.)|\/\*(?:[^*]|\*(?!\/))*\*\//.source,r=/(?:[^\\()[\]{}"'/]||\/(?![*/])||\(*\)|\[*\]|\{*\}|\\[\s\S])/.source.replace(//g,(function(){return t})).replace(//g,(function(){return n})),i=0;i<2;i++)r=r.replace(//g,(function(){return r}));r=r.replace(//g,"[^\\s\\S]"),e.languages.qml={comment:{pattern:/\/\/.*|\/\*[\s\S]*?\*\//,greedy:!0},"javascript-function":{pattern:RegExp(/((?:^|;)[ \t]*)function\s+(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*\s*\(*\)\s*\{*\}/.source.replace(//g,(function(){return r})),"m"),lookbehind:!0,greedy:!0,alias:"language-javascript",inside:e.languages.javascript},"class-name":{pattern:/((?:^|[:;])[ \t]*)(?!\d)\w+(?=[ \t]*\{|[ \t]+on\b)/m,lookbehind:!0},property:[{pattern:/((?:^|[;{])[ \t]*)(?!\d)\w+(?:\.\w+)*(?=[ \t]*:)/m,lookbehind:!0},{pattern:/((?:^|[;{])[ \t]*)property[ \t]+(?!\d)\w+(?:\.\w+)*[ \t]+(?!\d)\w+(?:\.\w+)*(?=[ \t]*:)/m,lookbehind:!0,inside:{keyword:/^property/,property:/\w+(?:\.\w+)*/}}],"javascript-expression":{pattern:RegExp(/(:[ \t]*)(?![\s;}[])(?:(?!$|[;}]))+/.source.replace(//g,(function(){return r})),"m"),lookbehind:!0,greedy:!0,alias:"language-javascript",inside:e.languages.javascript},string:{pattern:/"(?:\\.|[^\\"\r\n])*"/,greedy:!0},keyword:/\b(?:as|import|on)\b/,punctuation:/[{}[\]:;,]/}})(Prism)},4933:function(){Prism.languages.qore=Prism.languages.extend("clike",{comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|(?:\/\/|#).*)/,lookbehind:!0},string:{pattern:/("|')(?:\\[\s\S]|(?!\1)[^\\])*\1/,greedy:!0},keyword:/\b(?:abstract|any|assert|binary|bool|boolean|break|byte|case|catch|char|class|code|const|continue|data|default|do|double|else|enum|extends|final|finally|float|for|goto|hash|if|implements|import|inherits|instanceof|int|interface|long|my|native|new|nothing|null|object|our|own|private|reference|rethrow|return|short|soft(?:bool|date|float|int|list|number|string)|static|strictfp|string|sub|super|switch|synchronized|this|throw|throws|transient|try|void|volatile|while)\b/,boolean:/\b(?:false|true)\b/i,function:/\$?\b(?!\d)\w+(?=\()/,number:/\b(?:0b[01]+|0x(?:[\da-f]*\.)?[\da-fp\-]+|(?:\d+(?:\.\d+)?|\.\d+)(?:e\d+)?[df]|(?:\d+(?:\.\d+)?|\.\d+))\b/i,operator:{pattern:/(^|[^.])(?:\+[+=]?|-[-=]?|[!=](?:==?|~)?|>>?=?|<(?:=>?|<=?)?|&[&=]?|\|[|=]?|[*\/%^]=?|[~?])/,lookbehind:!0},variable:/\$(?!\d)\w+\b/})},6896:function(){(function(e){function t(e,t){return e.replace(/<<(\d+)>>/g,(function(e,n){return"(?:"+t[+n]+")"}))}function n(e,n,r){return RegExp(t(e,n),r||"")}function r(e,t){for(var n=0;n>/g,(function(){return"(?:"+e+")"}));return e.replace(/<>/g,"[^\\s\\S]")}var i={type:"Adj BigInt Bool Ctl Double false Int One Pauli PauliI PauliX PauliY PauliZ Qubit Range Result String true Unit Zero",other:"Adjoint adjoint apply as auto body borrow borrowing Controlled controlled distribute elif else fail fixup for function if in internal intrinsic invert is let mutable namespace new newtype open operation repeat return self set until use using while within"};function s(e){return"\\b(?:"+e.trim().replace(/ /g,"|")+")\\b"}var o=RegExp(s(i.type+" "+i.other)),a=/\b[A-Za-z_]\w*\b/.source,l=t(/<<0>>(?:\s*\.\s*<<0>>)*/.source,[a]),c={keyword:o,punctuation:/[<>()?,.:[\]]/},u=/"(?:\\.|[^\\"])*"/.source;e.languages.qsharp=e.languages.extend("clike",{comment:/\/\/.*/,string:[{pattern:n(/(^|[^$\\])<<0>>/.source,[u]),lookbehind:!0,greedy:!0}],"class-name":[{pattern:n(/(\b(?:as|open)\s+)<<0>>(?=\s*(?:;|as\b))/.source,[l]),lookbehind:!0,inside:c},{pattern:n(/(\bnamespace\s+)<<0>>(?=\s*\{)/.source,[l]),lookbehind:!0,inside:c}],keyword:o,number:/(?:\b0(?:x[\da-f]+|b[01]+|o[0-7]+)|(?:\B\.\d+|\b\d+(?:\.\d*)?)(?:e[-+]?\d+)?)l?\b/i,operator:/\band=|\bor=|\band\b|\bnot\b|\bor\b|<[-=]|[-=]>|>>>=?|<<<=?|\^\^\^=?|\|\|\|=?|&&&=?|w\/=?|~~~|[*\/+\-^=!%]=?/,punctuation:/::|[{}[\];(),.:]/}),e.languages.insertBefore("qsharp","number",{range:{pattern:/\.\./,alias:"operator"}});var d=r(t(/\{(?:[^"{}]|<<0>>|<>)*\}/.source,[u]),2);e.languages.insertBefore("qsharp","string",{"interpolation-string":{pattern:n(/\$"(?:\\.|<<0>>|[^\\"{])*"/.source,[d]),greedy:!0,inside:{interpolation:{pattern:n(/((?:^|[^\\])(?:\\\\)*)<<0>>/.source,[d]),lookbehind:!0,inside:{punctuation:/^\{|\}$/,expression:{pattern:/[\s\S]+/,alias:"language-qsharp",inside:e.languages.qsharp}}},string:/[\s\S]+/}}})})(Prism),Prism.languages.qs=Prism.languages.qsharp},4803:function(){Prism.languages.r={comment:/#.*/,string:{pattern:/(['"])(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0},"percent-operator":{pattern:/%[^%\s]*%/,alias:"operator"},boolean:/\b(?:FALSE|TRUE)\b/,ellipsis:/\.\.(?:\.|\d+)/,number:[/\b(?:Inf|NaN)\b/,/(?:\b0x[\dA-Fa-f]+(?:\.\d*)?|\b\d+(?:\.\d*)?|\B\.\d+)(?:[EePp][+-]?\d+)?[iL]?/],keyword:/\b(?:NA|NA_character_|NA_complex_|NA_integer_|NA_real_|NULL|break|else|for|function|if|in|next|repeat|while)\b/,operator:/->?>?|<(?:=|=!]=?|::?|&&?|\|\|?|[+*\/^$@~]/,punctuation:/[(){}\[\],;]/}},4540:function(){Prism.languages.racket=Prism.languages.extend("scheme",{"lambda-parameter":{pattern:/([(\[]lambda\s+[(\[])[^()\[\]'\s]+/,lookbehind:!0}}),Prism.languages.insertBefore("racket","string",{lang:{pattern:/^#lang.+/m,greedy:!0,alias:"keyword"}}),Prism.languages.rkt=Prism.languages.racket},8439:function(){Prism.languages.reason=Prism.languages.extend("clike",{string:{pattern:/"(?:\\(?:\r\n|[\s\S])|[^\\\r\n"])*"/,greedy:!0},"class-name":/\b[A-Z]\w*/,keyword:/\b(?:and|as|assert|begin|class|constraint|do|done|downto|else|end|exception|external|for|fun|function|functor|if|in|include|inherit|initializer|lazy|let|method|module|mutable|new|nonrec|object|of|open|or|private|rec|sig|struct|switch|then|to|try|type|val|virtual|when|while|with)\b/,operator:/\.{3}|:[:=]|\|>|->|=(?:==?|>)?|<=?|>=?|[|^?'#!~`]|[+\-*\/]\.?|\b(?:asr|land|lor|lsl|lsr|lxor|mod)\b/}),Prism.languages.insertBefore("reason","class-name",{char:{pattern:/'(?:\\x[\da-f]{2}|\\o[0-3][0-7][0-7]|\\\d{3}|\\.|[^'\\\r\n])'/,greedy:!0},constructor:/\b[A-Z]\w*\b(?!\s*\.)/,label:{pattern:/\b[a-z]\w*(?=::)/,alias:"symbol"}}),delete Prism.languages.reason.function},9299:function(){(function(e){var t={pattern:/\\[\\(){}[\]^$+*?|.]/,alias:"escape"},n=/\\(?:x[\da-fA-F]{2}|u[\da-fA-F]{4}|u\{[\da-fA-F]+\}|0[0-7]{0,2}|[123][0-7]{2}|c[a-zA-Z]|.)/,r={pattern:/\.|\\[wsd]|\\p\{[^{}]+\}/i,alias:"class-name"},i={pattern:/\\[wsd]|\\p\{[^{}]+\}/i,alias:"class-name"},s="(?:[^\\\\-]|"+n.source+")",o=RegExp(s+"-"+s),a={pattern:/(<|')[^<>']+(?=[>']$)/,lookbehind:!0,alias:"variable"};e.languages.regex={"char-class":{pattern:/((?:^|[^\\])(?:\\\\)*)\[(?:[^\\\]]|\\[\s\S])*\]/,lookbehind:!0,inside:{"char-class-negation":{pattern:/(^\[)\^/,lookbehind:!0,alias:"operator"},"char-class-punctuation":{pattern:/^\[|\]$/,alias:"punctuation"},range:{pattern:o,inside:{escape:n,"range-punctuation":{pattern:/-/,alias:"operator"}}},"special-escape":t,"char-set":i,escape:n}},"special-escape":t,"char-set":r,backreference:[{pattern:/\\(?![123][0-7]{2})[1-9]/,alias:"keyword"},{pattern:/\\k<[^<>']+>/,alias:"keyword",inside:{"group-name":a}}],anchor:{pattern:/[$^]|\\[ABbGZz]/,alias:"function"},escape:n,group:[{pattern:/\((?:\?(?:<[^<>']+>|'[^<>']+'|[>:]|:=]=?|!=|\b_\b/,punctuation:/[,;.\[\]{}()]/}},8512:function(){Prism.languages.renpy={comment:{pattern:/(^|[^\\])#.+/,lookbehind:!0},string:{pattern:/("""|''')[\s\S]+?\1|("|')(?:\\.|(?!\2)[^\\])*\2|(?:^#?(?:(?:[0-9a-fA-F]){3}|[0-9a-fA-F]{6})$)/m,greedy:!0},function:/\b[a-z_]\w*(?=\()/i,property:/\b(?:Update|UpdateVersion|action|activate_sound|adv_nvl_transition|after_load_transition|align|alpha|alt|anchor|antialias|area|auto|background|bar_invert|bar_resizing|bar_vertical|black_color|bold|bottom_bar|bottom_gutter|bottom_margin|bottom_padding|box_reverse|box_wrap|can_update|caret|child|color|crop|default_afm_enable|default_afm_time|default_fullscreen|default_text_cps|developer|directory_name|drag_handle|drag_joined|drag_name|drag_raise|draggable|dragged|drop_shadow|drop_shadow_color|droppable|dropped|easein|easeout|edgescroll|end_game_transition|end_splash_transition|enter_replay_transition|enter_sound|enter_transition|enter_yesno_transition|executable_name|exit_replay_transition|exit_sound|exit_transition|exit_yesno_transition|fadein|fadeout|first_indent|first_spacing|fit_first|focus|focus_mask|font|foreground|game_main_transition|get_installed_packages|google_play_key|google_play_salt|ground|has_music|has_sound|has_voice|height|help|hinting|hover|hover_background|hover_color|hover_sound|hovered|hyperlink_functions|idle|idle_color|image_style|include_update|insensitive|insensitive_background|insensitive_color|inside|intra_transition|italic|justify|kerning|keyboard_focus|language|layer_clipping|layers|layout|left_bar|left_gutter|left_margin|left_padding|length|line_leading|line_overlap_split|line_spacing|linear|main_game_transition|main_menu_music|maximum|min_width|minimum|minwidth|modal|mouse|mousewheel|name|narrator_menu|newline_indent|nvl_adv_transition|offset|order_reverse|outlines|overlay_functions|pos|position|prefix|radius|range|rest_indent|right_bar|right_gutter|right_margin|right_padding|rotate|rotate_pad|ruby_style|sample_sound|save_directory|say_attribute_transition|screen_height|screen_width|scrollbars|selected_hover|selected_hover_color|selected_idle|selected_idle_color|selected_insensitive|show_side_image|show_two_window|side_spacing|side_xpos|side_ypos|size|size_group|slow_cps|slow_cps_multiplier|spacing|strikethrough|subpixel|text_align|text_style|text_xpos|text_y_fudge|text_ypos|thumb|thumb_offset|thumb_shadow|thumbnail_height|thumbnail_width|time|top_bar|top_gutter|top_margin|top_padding|translations|underline|unscrollable|update|value|version|version_name|version_tuple|vertical|width|window_hide_transition|window_icon|window_left_padding|window_show_transition|window_title|windows_icon|xadjustment|xalign|xanchor|xanchoraround|xaround|xcenter|xfill|xinitial|xmargin|xmaximum|xminimum|xoffset|xofsset|xpadding|xpos|xsize|xzoom|yadjustment|yalign|yanchor|yanchoraround|yaround|ycenter|yfill|yinitial|ymargin|ymaximum|yminimum|yoffset|ypadding|ypos|ysize|ysizexysize|yzoom|zoom|zorder)\b/,tag:/\b(?:bar|block|button|buttoscreenn|drag|draggroup|fixed|frame|grid|[hv]box|hotbar|hotspot|image|imagebutton|imagemap|input|key|label|menu|mm_menu_frame|mousearea|nvl|parallel|screen|self|side|tag|text|textbutton|timer|vbar|viewport|window)\b|\$/,keyword:/\b(?:None|add|adjustment|alignaround|allow|angle|animation|around|as|assert|behind|box_layout|break|build|cache|call|center|changed|child_size|choice|circles|class|clear|clicked|clipping|clockwise|config|contains|continue|corner1|corner2|counterclockwise|def|default|define|del|delay|disabled|disabled_text|dissolve|elif|else|event|except|exclude|exec|expression|fade|finally|for|from|function|global|gm_root|has|hide|id|if|import|in|init|is|jump|knot|lambda|left|less_rounded|mm_root|movie|music|null|on|onlayer|pass|pause|persistent|play|print|python|queue|raise|random|renpy|repeat|return|right|rounded_window|scene|scope|set|show|slow|slow_abortable|slow_done|sound|stop|store|style|style_group|substitute|suffix|theme|transform|transform_anchor|transpose|try|ui|unhovered|updater|use|voice|while|widget|widget_hover|widget_selected|widget_text|yield)\b/,boolean:/\b(?:[Ff]alse|[Tt]rue)\b/,number:/(?:\b(?:0[bo])?(?:(?:\d|0x[\da-f])[\da-f]*(?:\.\d*)?)|\B\.\d+)(?:e[+-]?\d+)?j?/i,operator:/[-+%=]=?|!=|\*\*?=?|\/\/?=?|<[<=>]?|>[=>]?|[&|^~]|\b(?:and|at|not|or|with)\b/,punctuation:/[{}[\];(),.:]/},Prism.languages.rpy=Prism.languages.renpy},96:function(){Prism.languages.rescript={comment:{pattern:/\/\/.*|\/\*[\s\S]*?(?:\*\/|$)/,greedy:!0},char:{pattern:/'(?:[^\r\n\\]|\\(?:.|\w+))'/,greedy:!0},string:{pattern:/"(?:\\(?:\r\n|[\s\S])|[^\\\r\n"])*"/,greedy:!0},"class-name":/\b[A-Z]\w*|@[a-z.]*|#[A-Za-z]\w*|#\d/,function:{pattern:/[a-zA-Z]\w*(?=\()|(\.)[a-z]\w*/,lookbehind:!0},number:/(?:\b0x(?:[\da-f]+(?:\.[\da-f]*)?|\.[\da-f]+)(?:p[+-]?\d+)?|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?)[ful]{0,4}/i,boolean:/\b(?:false|true)\b/,"attr-value":/[A-Za-z]\w*(?==)/,constant:{pattern:/(\btype\s+)[a-z]\w*/,lookbehind:!0},tag:{pattern:/(<)[a-z]\w*|(?:<\/)[a-z]\w*/,lookbehind:!0,inside:{operator:/<|>|\//}},keyword:/\b(?:and|as|assert|begin|bool|class|constraint|do|done|downto|else|end|exception|external|float|for|fun|function|if|in|include|inherit|initializer|int|lazy|let|method|module|mutable|new|nonrec|object|of|open|or|private|rec|string|switch|then|to|try|type|when|while|with)\b/,operator:/\.{3}|:[:=]?|\|>|->|=(?:==?|>)?|<=?|>=?|[|^?'#!~`]|[+\-*\/]\.?|\b(?:asr|land|lor|lsl|lsr|lxor|mod)\b/,punctuation:/[(){}[\],;.]/},Prism.languages.insertBefore("rescript","string",{"template-string":{pattern:/`(?:\\[\s\S]|\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}|(?!\$\{)[^\\`])*`/,greedy:!0,inside:{"template-punctuation":{pattern:/^`|`$/,alias:"string"},interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}/,lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"tag"},rest:Prism.languages.rescript}},string:/[\s\S]+/}}}),Prism.languages.res=Prism.languages.rescript},6577:function(){Prism.languages.rest={table:[{pattern:/(^[\t ]*)(?:\+[=-]+)+\+(?:\r?\n|\r)(?:\1[+|].+[+|](?:\r?\n|\r))+\1(?:\+[=-]+)+\+/m,lookbehind:!0,inside:{punctuation:/\||(?:\+[=-]+)+\+/}},{pattern:/(^[\t ]*)=+ [ =]*=(?:(?:\r?\n|\r)\1.+)+(?:\r?\n|\r)\1=+ [ =]*=(?=(?:\r?\n|\r){2}|\s*$)/m,lookbehind:!0,inside:{punctuation:/[=-]+/}}],"substitution-def":{pattern:/(^[\t ]*\.\. )\|(?:[^|\s](?:[^|]*[^|\s])?)\| [^:]+::/m,lookbehind:!0,inside:{substitution:{pattern:/^\|(?:[^|\s]|[^|\s][^|]*[^|\s])\|/,alias:"attr-value",inside:{punctuation:/^\||\|$/}},directive:{pattern:/( )(?! )[^:]+::/,lookbehind:!0,alias:"function",inside:{punctuation:/::$/}}}},"link-target":[{pattern:/(^[\t ]*\.\. )\[[^\]]+\]/m,lookbehind:!0,alias:"string",inside:{punctuation:/^\[|\]$/}},{pattern:/(^[\t ]*\.\. )_(?:`[^`]+`|(?:[^:\\]|\\.)+):/m,lookbehind:!0,alias:"string",inside:{punctuation:/^_|:$/}}],directive:{pattern:/(^[\t ]*\.\. )[^:]+::/m,lookbehind:!0,alias:"function",inside:{punctuation:/::$/}},comment:{pattern:/(^[\t ]*\.\.)(?:(?: .+)?(?:(?:\r?\n|\r).+)+| .+)(?=(?:\r?\n|\r){2}|$)/m,lookbehind:!0},title:[{pattern:/^(([!"#$%&'()*+,\-.\/:;<=>?@\[\\\]^_`{|}~])\2+)(?:\r?\n|\r).+(?:\r?\n|\r)\1$/m,inside:{punctuation:/^[!"#$%&'()*+,\-.\/:;<=>?@\[\\\]^_`{|}~]+|[!"#$%&'()*+,\-.\/:;<=>?@\[\\\]^_`{|}~]+$/,important:/.+/}},{pattern:/(^|(?:\r?\n|\r){2}).+(?:\r?\n|\r)([!"#$%&'()*+,\-.\/:;<=>?@\[\\\]^_`{|}~])\2+(?=\r?\n|\r|$)/,lookbehind:!0,inside:{punctuation:/[!"#$%&'()*+,\-.\/:;<=>?@\[\\\]^_`{|}~]+$/,important:/.+/}}],hr:{pattern:/((?:\r?\n|\r){2})([!"#$%&'()*+,\-.\/:;<=>?@\[\\\]^_`{|}~])\2{3,}(?=(?:\r?\n|\r){2})/,lookbehind:!0,alias:"punctuation"},field:{pattern:/(^[\t ]*):[^:\r\n]+:(?= )/m,lookbehind:!0,alias:"attr-name"},"command-line-option":{pattern:/(^[\t ]*)(?:[+-][a-z\d]|(?:--|\/)[a-z\d-]+)(?:[ =](?:[a-z][\w-]*|<[^<>]+>))?(?:, (?:[+-][a-z\d]|(?:--|\/)[a-z\d-]+)(?:[ =](?:[a-z][\w-]*|<[^<>]+>))?)*(?=(?:\r?\n|\r)? {2,}\S)/im,lookbehind:!0,alias:"symbol"},"literal-block":{pattern:/::(?:\r?\n|\r){2}([ \t]+)(?![ \t]).+(?:(?:\r?\n|\r)\1.+)*/,inside:{"literal-block-punctuation":{pattern:/^::/,alias:"punctuation"}}},"quoted-literal-block":{pattern:/::(?:\r?\n|\r){2}([!"#$%&'()*+,\-.\/:;<=>?@\[\\\]^_`{|}~]).*(?:(?:\r?\n|\r)\1.*)*/,inside:{"literal-block-punctuation":{pattern:/^(?:::|([!"#$%&'()*+,\-.\/:;<=>?@\[\\\]^_`{|}~])\1*)/m,alias:"punctuation"}}},"list-bullet":{pattern:/(^[\t ]*)(?:[*+\-•‣⁃]|\(?(?:\d+|[a-z]|[ivxdclm]+)\)|(?:\d+|[a-z]|[ivxdclm]+)\.)(?= )/im,lookbehind:!0,alias:"punctuation"},"doctest-block":{pattern:/(^[\t ]*)>>> .+(?:(?:\r?\n|\r).+)*/m,lookbehind:!0,inside:{punctuation:/^>>>/}},inline:[{pattern:/(^|[\s\-:\/'"<(\[{])(?::[^:]+:`.*?`|`.*?`:[^:]+:|(\*\*?|``?|\|)(?!\s)(?:(?!\2).)*\S\2(?=[\s\-.,:;!?\\\/'")\]}]|$))/m,lookbehind:!0,inside:{bold:{pattern:/(^\*\*).+(?=\*\*$)/,lookbehind:!0},italic:{pattern:/(^\*).+(?=\*$)/,lookbehind:!0},"inline-literal":{pattern:/(^``).+(?=``$)/,lookbehind:!0,alias:"symbol"},role:{pattern:/^:[^:]+:|:[^:]+:$/,alias:"function",inside:{punctuation:/^:|:$/}},"interpreted-text":{pattern:/(^`).+(?=`$)/,lookbehind:!0,alias:"attr-value"},substitution:{pattern:/(^\|).+(?=\|$)/,lookbehind:!0,alias:"attr-value"},punctuation:/\*\*?|``?|\|/}}],link:[{pattern:/\[[^\[\]]+\]_(?=[\s\-.,:;!?\\\/'")\]}]|$)/,alias:"string",inside:{punctuation:/^\[|\]_$/}},{pattern:/(?:\b[a-z\d]+(?:[_.:+][a-z\d]+)*_?_|`[^`]+`_?_|_`[^`]+`)(?=[\s\-.,:;!?\\\/'")\]}]|$)/i,alias:"string",inside:{punctuation:/^_?`|`$|`?_?_$/}}],punctuation:{pattern:/(^[\t ]*)(?:\|(?= |$)|(?:---?|—|\.\.|__)(?= )|\.\.$)/m,lookbehind:!0}}},998:function(){Prism.languages.rip={comment:{pattern:/#.*/,greedy:!0},char:{pattern:/\B`[^\s`'",.:;#\/\\()<>\[\]{}]\b/,greedy:!0},string:{pattern:/("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0},regex:{pattern:/(^|[^/])\/(?!\/)(?:\[[^\n\r\]]*\]|\\.|[^/\\\r\n\[])+\/(?=\s*(?:$|[\r\n,.;})]))/,lookbehind:!0,greedy:!0},keyword:/(?:=>|->)|\b(?:case|catch|class|else|exit|finally|if|raise|return|switch|try)\b/,builtin:/@|\bSystem\b/,boolean:/\b(?:false|true)\b/,date:/\b\d{4}-\d{2}-\d{2}\b/,time:/\b\d{2}:\d{2}:\d{2}\b/,datetime:/\b\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\b/,symbol:/:[^\d\s`'",.:;#\/\\()<>\[\]{}][^\s`'",.:;#\/\\()<>\[\]{}]*/,number:/[+-]?\b(?:\d+\.\d+|\d+)\b/,punctuation:/(?:\.{2,3})|[`,.:;=\/\\()<>\[\]{}]/,reference:/[^\d\s`'",.:;#\/\\()<>\[\]{}][^\s`'",.:;#\/\\()<>\[\]{}]*/}},4840:function(){Prism.languages.roboconf={comment:/#.*/,keyword:{pattern:/(^|\s)(?:(?:external|import)\b|(?:facet|instance of)(?=[ \t]+[\w-]+[ \t]*\{))/,lookbehind:!0},component:{pattern:/[\w-]+(?=[ \t]*\{)/,alias:"variable"},property:/[\w.-]+(?=[ \t]*:)/,value:{pattern:/(=[ \t]*(?![ \t]))[^,;]+/,lookbehind:!0,alias:"attr-value"},optional:{pattern:/\(optional\)/,alias:"builtin"},wildcard:{pattern:/(\.)\*/,lookbehind:!0,alias:"operator"},punctuation:/[{},.;:=]/}},3449:function(){(function(e){var t={pattern:/(^[ \t]*| {2}|\t)#.*/m,lookbehind:!0,greedy:!0},n={pattern:/((?:^|[^\\])(?:\\{2})*)[$@&%]\{(?:[^{}\r\n]|\{[^{}\r\n]*\})*\}/,lookbehind:!0,inside:{punctuation:/^[$@&%]\{|\}$/}};function r(e,r){var i={"section-header":{pattern:/^ ?\*{3}.+?\*{3}/,alias:"keyword"}};for(var s in r)i[s]=r[s];return i["tag"]={pattern:/([\r\n](?: {2}|\t)[ \t]*)\[[-\w]+\]/,lookbehind:!0,inside:{punctuation:/\[|\]/}},i["variable"]=n,i["comment"]=t,{pattern:RegExp(/^ ?\*{3}[ \t]*[ \t]*\*{3}(?:.|[\r\n](?!\*{3}))*/.source.replace(//g,(function(){return e})),"im"),alias:"section",inside:i}}var i={pattern:/(\[Documentation\](?: {2}|\t)[ \t]*)(?![ \t]|#)(?:.|(?:\r\n?|\n)[ \t]*\.{3})+/,lookbehind:!0,alias:"string"},s={pattern:/([\r\n] ?)(?!#)(?:\S(?:[ \t]\S)*)+/,lookbehind:!0,alias:"function",inside:{variable:n}},o={pattern:/([\r\n](?: {2}|\t)[ \t]*)(?!\[|\.{3}|#)(?:\S(?:[ \t]\S)*)+/,lookbehind:!0,inside:{variable:n}};e.languages["robotframework"]={settings:r("Settings",{documentation:{pattern:/([\r\n] ?Documentation(?: {2}|\t)[ \t]*)(?![ \t]|#)(?:.|(?:\r\n?|\n)[ \t]*\.{3})+/,lookbehind:!0,alias:"string"},property:{pattern:/([\r\n] ?)(?!\.{3}|#)(?:\S(?:[ \t]\S)*)+/,lookbehind:!0}}),variables:r("Variables"),"test-cases":r("Test Cases",{"test-name":s,documentation:i,property:o}),keywords:r("Keywords",{"keyword-name":s,documentation:i,property:o}),tasks:r("Tasks",{"task-name":s,documentation:i,property:o}),comment:t},e.languages.robot=e.languages["robotframework"]})(Prism)},9385:function(){(function(e){e.languages.ruby=e.languages.extend("clike",{comment:{pattern:/#.*|^=begin\s[\s\S]*?^=end/m,greedy:!0},"class-name":{pattern:/(\b(?:class|module)\s+|\bcatch\s+\()[\w.\\]+|\b[A-Z_]\w*(?=\s*\.\s*new\b)/,lookbehind:!0,inside:{punctuation:/[.\\]/}},keyword:/\b(?:BEGIN|END|alias|and|begin|break|case|class|def|define_method|defined|do|each|else|elsif|end|ensure|extend|for|if|in|include|module|new|next|nil|not|or|prepend|private|protected|public|raise|redo|require|rescue|retry|return|self|super|then|throw|undef|unless|until|when|while|yield)\b/,operator:/\.{2,3}|&\.|===||[!=]?~|(?:&&|\|\||<<|>>|\*\*|[+\-*/%<>!^&|=])=?|[?:]/,punctuation:/[(){}[\].,;]/}),e.languages.insertBefore("ruby","operator",{"double-colon":{pattern:/::/,alias:"punctuation"}});var t={pattern:/((?:^|[^\\])(?:\\{2})*)#\{(?:[^{}]|\{[^{}]*\})*\}/,lookbehind:!0,inside:{content:{pattern:/^(#\{)[\s\S]+(?=\}$)/,lookbehind:!0,inside:e.languages.ruby},delimiter:{pattern:/^#\{|\}$/,alias:"punctuation"}}};delete e.languages.ruby.function;var n="(?:"+[/([^a-zA-Z0-9\s{(\[<=])(?:(?!\1)[^\\]|\\[\s\S])*\1/.source,/\((?:[^()\\]|\\[\s\S]|\((?:[^()\\]|\\[\s\S])*\))*\)/.source,/\{(?:[^{}\\]|\\[\s\S]|\{(?:[^{}\\]|\\[\s\S])*\})*\}/.source,/\[(?:[^\[\]\\]|\\[\s\S]|\[(?:[^\[\]\\]|\\[\s\S])*\])*\]/.source,/<(?:[^<>\\]|\\[\s\S]|<(?:[^<>\\]|\\[\s\S])*>)*>/.source].join("|")+")",r=/(?:"(?:\\.|[^"\\\r\n])*"|(?:\b[a-zA-Z_]\w*|[^\s\0-\x7F]+)[?!]?|\$.)/.source;e.languages.insertBefore("ruby","keyword",{"regex-literal":[{pattern:RegExp(/%r/.source+n+/[egimnosux]{0,6}/.source),greedy:!0,inside:{interpolation:t,regex:/[\s\S]+/}},{pattern:/(^|[^/])\/(?!\/)(?:\[[^\r\n\]]+\]|\\.|[^[/\\\r\n])+\/[egimnosux]{0,6}(?=\s*(?:$|[\r\n,.;})#]))/,lookbehind:!0,greedy:!0,inside:{interpolation:t,regex:/[\s\S]+/}}],variable:/[@$]+[a-zA-Z_]\w*(?:[?!]|\b)/,symbol:[{pattern:RegExp(/(^|[^:]):/.source+r),lookbehind:!0,greedy:!0},{pattern:RegExp(/([\r\n{(,][ \t]*)/.source+r+/(?=:(?!:))/.source),lookbehind:!0,greedy:!0}],"method-definition":{pattern:/(\bdef\s+)\w+(?:\s*\.\s*\w+)?/,lookbehind:!0,inside:{function:/\b\w+$/,keyword:/^self\b/,"class-name":/^\w+/,punctuation:/\./}}}),e.languages.insertBefore("ruby","string",{"string-literal":[{pattern:RegExp(/%[qQiIwWs]?/.source+n),greedy:!0,inside:{interpolation:t,string:/[\s\S]+/}},{pattern:/("|')(?:#\{[^}]+\}|#(?!\{)|\\(?:\r\n|[\s\S])|(?!\1)[^\\#\r\n])*\1/,greedy:!0,inside:{interpolation:t,string:/[\s\S]+/}},{pattern:/<<[-~]?([a-z_]\w*)[\r\n](?:.*[\r\n])*?[\t ]*\1/i,alias:"heredoc-string",greedy:!0,inside:{delimiter:{pattern:/^<<[-~]?[a-z_]\w*|\b[a-z_]\w*$/i,inside:{symbol:/\b\w+/,punctuation:/^<<[-~]?/}},interpolation:t,string:/[\s\S]+/}},{pattern:/<<[-~]?'([a-z_]\w*)'[\r\n](?:.*[\r\n])*?[\t ]*\1/i,alias:"heredoc-string",greedy:!0,inside:{delimiter:{pattern:/^<<[-~]?'[a-z_]\w*'|\b[a-z_]\w*$/i,inside:{symbol:/\b\w+/,punctuation:/^<<[-~]?'|'$/}},string:/[\s\S]+/}}],"command-literal":[{pattern:RegExp(/%x/.source+n),greedy:!0,inside:{interpolation:t,command:{pattern:/[\s\S]+/,alias:"string"}}},{pattern:/`(?:#\{[^}]+\}|#(?!\{)|\\(?:\r\n|[\s\S])|[^\\`#\r\n])*`/,greedy:!0,inside:{interpolation:t,command:{pattern:/[\s\S]+/,alias:"string"}}}]}),delete e.languages.ruby.string,e.languages.insertBefore("ruby","number",{builtin:/\b(?:Array|Bignum|Binding|Class|Continuation|Dir|Exception|FalseClass|File|Fixnum|Float|Hash|IO|Integer|MatchData|Method|Module|NilClass|Numeric|Object|Proc|Range|Regexp|Stat|String|Struct|Symbol|TMS|Thread|ThreadGroup|Time|TrueClass)\b/,constant:/\b[A-Z][A-Z0-9_]*(?:[?!]|\b)/}),e.languages.rb=e.languages.ruby})(Prism)},767:function(){(function(e){for(var t=/\/\*(?:[^*/]|\*(?!\/)|\/(?!\*)|)*\*\//.source,n=0;n<2;n++)t=t.replace(//g,(function(){return t}));t=t.replace(//g,(function(){return/[^\s\S]/.source})),e.languages.rust={comment:[{pattern:RegExp(/(^|[^\\])/.source+t),lookbehind:!0,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],string:{pattern:/b?"(?:\\[\s\S]|[^\\"])*"|b?r(#*)"(?:[^"]|"(?!\1))*"\1/,greedy:!0},char:{pattern:/b?'(?:\\(?:x[0-7][\da-fA-F]|u\{(?:[\da-fA-F]_*){1,6}\}|.)|[^\\\r\n\t'])'/,greedy:!0},attribute:{pattern:/#!?\[(?:[^\[\]"]|"(?:\\[\s\S]|[^\\"])*")*\]/,greedy:!0,alias:"attr-name",inside:{string:null}},"closure-params":{pattern:/([=(,:]\s*|\bmove\s*)\|[^|]*\||\|[^|]*\|(?=\s*(?:\{|->))/,lookbehind:!0,greedy:!0,inside:{"closure-punctuation":{pattern:/^\||\|$/,alias:"punctuation"},rest:null}},"lifetime-annotation":{pattern:/'\w+/,alias:"symbol"},"fragment-specifier":{pattern:/(\$\w+:)[a-z]+/,lookbehind:!0,alias:"punctuation"},variable:/\$\w+/,"function-definition":{pattern:/(\bfn\s+)\w+/,lookbehind:!0,alias:"function"},"type-definition":{pattern:/(\b(?:enum|struct|trait|type|union)\s+)\w+/,lookbehind:!0,alias:"class-name"},"module-declaration":[{pattern:/(\b(?:crate|mod)\s+)[a-z][a-z_\d]*/,lookbehind:!0,alias:"namespace"},{pattern:/(\b(?:crate|self|super)\s*)::\s*[a-z][a-z_\d]*\b(?:\s*::(?:\s*[a-z][a-z_\d]*\s*::)*)?/,lookbehind:!0,alias:"namespace",inside:{punctuation:/::/}}],keyword:[/\b(?:Self|abstract|as|async|await|become|box|break|const|continue|crate|do|dyn|else|enum|extern|final|fn|for|if|impl|in|let|loop|macro|match|mod|move|mut|override|priv|pub|ref|return|self|static|struct|super|trait|try|type|typeof|union|unsafe|unsized|use|virtual|where|while|yield)\b/,/\b(?:bool|char|f(?:32|64)|[ui](?:8|16|32|64|128|size)|str)\b/],function:/\b[a-z_]\w*(?=\s*(?:::\s*<|\())/,macro:{pattern:/\b\w+!/,alias:"property"},constant:/\b[A-Z_][A-Z_\d]+\b/,"class-name":/\b[A-Z]\w*\b/,namespace:{pattern:/(?:\b[a-z][a-z_\d]*\s*::\s*)*\b[a-z][a-z_\d]*\s*::(?!\s*<)/,inside:{punctuation:/::/}},number:/\b(?:0x[\dA-Fa-f](?:_?[\dA-Fa-f])*|0o[0-7](?:_?[0-7])*|0b[01](?:_?[01])*|(?:(?:\d(?:_?\d)*)?\.)?\d(?:_?\d)*(?:[Ee][+-]?\d+)?)(?:_?(?:f32|f64|[iu](?:8|16|32|64|size)?))?\b/,boolean:/\b(?:false|true)\b/,punctuation:/->|\.\.=|\.{1,3}|::|[{}[\];(),:]/,operator:/[-+*\/%!^]=?|=[=>]?|&[&=]?|\|[|=]?|<>?=?|[@?]/},e.languages.rust["closure-params"].inside.rest=e.languages.rust,e.languages.rust["attribute"].inside["string"]=e.languages.rust["string"]})(Prism)},1384:function(){(function(e){var t=/(?:"(?:""|[^"])*"(?!")|'(?:''|[^'])*'(?!'))/.source,n=/\b(?:\d[\da-f]*x|\d+(?:\.\d+)?(?:e[+-]?\d+)?)\b/i,r={pattern:RegExp(t+"[bx]"),alias:"number"},i={pattern:/&[a-z_]\w*/i},s={pattern:/((?:^|\s|=|\())%(?:ABORT|BY|CMS|COPY|DISPLAY|DO|ELSE|END|EVAL|GLOBAL|GO|GOTO|IF|INC|INCLUDE|INDEX|INPUT|KTRIM|LENGTH|LET|LIST|LOCAL|PUT|QKTRIM|QSCAN|QSUBSTR|QSYSFUNC|QUPCASE|RETURN|RUN|SCAN|SUBSTR|SUPERQ|SYMDEL|SYMEXIST|SYMGLOBL|SYMLOCAL|SYSCALL|SYSEVALF|SYSEXEC|SYSFUNC|SYSGET|SYSRPUT|THEN|TO|TSO|UNQUOTE|UNTIL|UPCASE|WHILE|WINDOW)\b/i,lookbehind:!0,alias:"keyword"},o={pattern:/(^|\s)(?:proc\s+\w+|data(?!=)|quit|run)\b/i,alias:"keyword",lookbehind:!0},a=[/\/\*[\s\S]*?\*\//,{pattern:/(^[ \t]*|;\s*)\*[^;]*;/m,lookbehind:!0}],l={pattern:RegExp(t),greedy:!0},c=/[$%@.(){}\[\];,\\]/,u={pattern:/%?\b\w+(?=\()/,alias:"keyword"},d={function:u,"arg-value":{pattern:/(=\s*)[A-Z\.]+/i,lookbehind:!0},operator:/=/,"macro-variable":i,arg:{pattern:/[A-Z]+/i,alias:"keyword"},number:n,"numeric-constant":r,punctuation:c,string:l},h={pattern:/\b(?:format|put)\b=?[\w'$.]+/i,inside:{keyword:/^(?:format|put)(?==)/i,equals:/=/,format:{pattern:/(?:\w|\$\d)+\.\d?/,alias:"number"}}},p={pattern:/\b(?:format|put)\s+[\w']+(?:\s+[$.\w]+)+(?=;)/i,inside:{keyword:/^(?:format|put)/i,format:{pattern:/[\w$]+\.\d?/,alias:"number"}}},f={pattern:/((?:^|\s)=?)(?:catname|checkpoint execute_always|dm|endsas|filename|footnote|%include|libname|%list|lock|missing|options|page|resetline|%run|sasfile|skip|sysecho|title\d?)\b/i,lookbehind:!0,alias:"keyword"},g={pattern:/(^|\s)(?:submit(?:\s+(?:load|norun|parseonly))?|endsubmit)\b/i,lookbehind:!0,alias:"keyword"},m=/aStore|accessControl|aggregation|audio|autotune|bayesianNetClassifier|bioMedImage|boolRule|builtins|cardinality|cdm|clustering|conditionalRandomFields|configuration|copula|countreg|dataDiscovery|dataPreprocess|dataSciencePilot|dataStep|decisionTree|deduplication|deepLearn|deepNeural|deepRnn|ds2|ecm|entityRes|espCluster|explainModel|factmac|fastKnn|fcmpact|fedSql|freqTab|gVarCluster|gam|gleam|graphSemiSupLearn|hiddenMarkovModel|hyperGroup|ica|image|iml|kernalPca|langModel|ldaTopic|loadStreams|mbc|mixed|mlTools|modelPublishing|network|neuralNet|nmf|nonParametricBayes|nonlinear|optNetwork|optimization|panel|pca|percentile|phreg|pls|qkb|qlim|quantreg|recommend|regression|reinforcementLearn|robustPca|ruleMining|sampling|sandwich|sccasl|search(?:Analytics)?|sentimentAnalysis|sequence|session(?:Prop)?|severity|simSystem|simple|smartData|sparkEmbeddedProcess|sparseML|spatialreg|spc|stabilityMonitoring|svDataDescription|svm|table|text(?:Filters|Frequency|Mining|Parse|Rule(?:Develop|Score)|Topic|Util)|timeData|transpose|tsInfo|tsReconcile|uniTimeSeries|varReduce/.source,b={pattern:RegExp(/(^|\s)(?:action\s+)?(?:)\.[a-z]+\b[^;]+/.source.replace(//g,(function(){return m})),"i"),lookbehind:!0,inside:{keyword:RegExp(/(?:)\.[a-z]+\b/.source.replace(//g,(function(){return m})),"i"),action:{pattern:/(?:action)/i,alias:"keyword"},comment:a,function:u,"arg-value":d["arg-value"],operator:d.operator,argument:d.arg,number:n,"numeric-constant":r,punctuation:c,string:l}},_={pattern:/((?:^|\s)=?)(?:after|analysis|and|array|barchart|barwidth|begingraph|by|call|cas|cbarline|cfill|class(?:lev)?|close|column|computed?|contains|continue|data(?==)|define|delete|describe|document|do\s+over|do|dol|drop|dul|else|end(?:comp|source)?|entryTitle|eval(?:uate)?|exec(?:ute)?|exit|file(?:name)?|fill(?:attrs)?|flist|fnc|function(?:list)?|global|goto|group(?:by)?|headline|headskip|histogram|if|infile|keep|keylabel|keyword|label|layout|leave|legendlabel|length|libname|loadactionset|merge|midpoints|_?null_|name|noobs|nowd|ods|options|or|otherwise|out(?:put)?|over(?:lay)?|plot|print|put|raise|ranexp|rannor|rbreak|retain|return|select|session|sessref|set|source|statgraph|sum|summarize|table|temp|terminate|then\s+do|then|title\d?|to|var|when|where|xaxisopts|y2axisopts|yaxisopts)\b/i,lookbehind:!0};e.languages.sas={datalines:{pattern:/^([ \t]*)(?:cards|(?:data)?lines);[\s\S]+?^[ \t]*;/im,lookbehind:!0,alias:"string",inside:{keyword:{pattern:/^(?:cards|(?:data)?lines)/i},punctuation:/;/}},"proc-sql":{pattern:/(^proc\s+(?:fed)?sql(?:\s+[\w|=]+)?;)[\s\S]+?(?=^(?:proc\s+\w+|data|quit|run);|(?![\s\S]))/im,lookbehind:!0,inside:{sql:{pattern:RegExp(/^[ \t]*(?:select|alter\s+table|(?:create|describe|drop)\s+(?:index|table(?:\s+constraints)?|view)|create\s+unique\s+index|insert\s+into|update)(?:|[^;"'])+;/.source.replace(//g,(function(){return t})),"im"),alias:"language-sql",inside:e.languages.sql},"global-statements":f,"sql-statements":{pattern:/(^|\s)(?:disconnect\s+from|begin|commit|exec(?:ute)?|reset|rollback|validate)\b/i,lookbehind:!0,alias:"keyword"},number:n,"numeric-constant":r,punctuation:c,string:l}},"proc-groovy":{pattern:/(^proc\s+groovy(?:\s+[\w|=]+)?;)[\s\S]+?(?=^(?:proc\s+\w+|data|quit|run);|(?![\s\S]))/im,lookbehind:!0,inside:{comment:a,groovy:{pattern:RegExp(/(^[ \t]*submit(?:\s+(?:load|norun|parseonly))?)(?:|[^"'])+?(?=endsubmit;)/.source.replace(//g,(function(){return t})),"im"),lookbehind:!0,alias:"language-groovy",inside:e.languages.groovy},keyword:_,"submit-statement":g,"global-statements":f,number:n,"numeric-constant":r,punctuation:c,string:l}},"proc-lua":{pattern:/(^proc\s+lua(?:\s+[\w|=]+)?;)[\s\S]+?(?=^(?:proc\s+\w+|data|quit|run);|(?![\s\S]))/im,lookbehind:!0,inside:{comment:a,lua:{pattern:RegExp(/(^[ \t]*submit(?:\s+(?:load|norun|parseonly))?)(?:|[^"'])+?(?=endsubmit;)/.source.replace(//g,(function(){return t})),"im"),lookbehind:!0,alias:"language-lua",inside:e.languages.lua},keyword:_,"submit-statement":g,"global-statements":f,number:n,"numeric-constant":r,punctuation:c,string:l}},"proc-cas":{pattern:/(^proc\s+cas(?:\s+[\w|=]+)?;)[\s\S]+?(?=^(?:proc\s+\w+|quit|data);|(?![\s\S]))/im,lookbehind:!0,inside:{comment:a,"statement-var":{pattern:/((?:^|\s)=?)saveresult\s[^;]+/im,lookbehind:!0,inside:{statement:{pattern:/^saveresult\s+\S+/i,inside:{keyword:/^(?:saveresult)/i}},rest:d}},"cas-actions":b,statement:{pattern:/((?:^|\s)=?)(?:default|(?:un)?set|on|output|upload)[^;]+/im,lookbehind:!0,inside:d},step:o,keyword:_,function:u,format:h,altformat:p,"global-statements":f,number:n,"numeric-constant":r,punctuation:c,string:l}},"proc-args":{pattern:RegExp(/(^proc\s+\w+\s+)(?!\s)(?:[^;"']|)+;/.source.replace(//g,(function(){return t})),"im"),lookbehind:!0,inside:d},"macro-keyword":s,"macro-variable":i,"macro-string-functions":{pattern:/((?:^|\s|=))%(?:BQUOTE|NRBQUOTE|NRQUOTE|NRSTR|QUOTE|STR)\(.*?(?:[^%]\))/i,lookbehind:!0,inside:{function:{pattern:/%(?:BQUOTE|NRBQUOTE|NRQUOTE|NRSTR|QUOTE|STR)/i,alias:"keyword"},"macro-keyword":s,"macro-variable":i,"escaped-char":{pattern:/%['"()<>=¬^~;,#]/},punctuation:c}},"macro-declaration":{pattern:/^%macro[^;]+(?=;)/im,inside:{keyword:/%macro/i}},"macro-end":{pattern:/^%mend[^;]+(?=;)/im,inside:{keyword:/%mend/i}},macro:{pattern:/%_\w+(?=\()/,alias:"keyword"},input:{pattern:/\binput\s[-\w\s/*.$&]+;/i,inside:{input:{alias:"keyword",pattern:/^input/i},comment:a,number:n,"numeric-constant":r}},"options-args":{pattern:/(^options)[-'"|/\\<>*+=:()\w\s]*(?=;)/im,lookbehind:!0,inside:d},"cas-actions":b,comment:a,function:u,format:h,altformat:p,"numeric-constant":r,datetime:{pattern:RegExp(t+"(?:dt?|t)"),alias:"number"},string:l,step:o,keyword:_,"operator-keyword":{pattern:/\b(?:eq|ge|gt|in|le|lt|ne|not)\b/i,alias:"operator"},number:n,operator:/\*\*?|\|\|?|!!?|¦¦?|<[>=]?|>[<=]?|[-+\/=&]|[~¬^]=?/,punctuation:c}})(Prism)},9865:function(){(function(e){e.languages.sass=e.languages.extend("css",{comment:{pattern:/^([ \t]*)\/[\/*].*(?:(?:\r?\n|\r)\1[ \t].+)*/m,lookbehind:!0,greedy:!0}}),e.languages.insertBefore("sass","atrule",{"atrule-line":{pattern:/^(?:[ \t]*)[@+=].+/m,greedy:!0,inside:{atrule:/(?:@[\w-]+|[+=])/}}}),delete e.languages.sass.atrule;var t=/\$[-\w]+|#\{\$[-\w]+\}/,n=[/[+*\/%]|[=!]=|<=?|>=?|\b(?:and|not|or)\b/,{pattern:/(\s)-(?=\s)/,lookbehind:!0}];e.languages.insertBefore("sass","property",{"variable-line":{pattern:/^[ \t]*\$.+/m,greedy:!0,inside:{punctuation:/:/,variable:t,operator:n}},"property-line":{pattern:/^[ \t]*(?:[^:\s]+ *:.*|:[^:\s].*)/m,greedy:!0,inside:{property:[/[^:\s]+(?=\s*:)/,{pattern:/(:)[^:\s]+/,lookbehind:!0}],punctuation:/:/,variable:t,operator:n,important:e.languages.sass.important}}}),delete e.languages.sass.property,delete e.languages.sass.important,e.languages.insertBefore("sass","punctuation",{selector:{pattern:/^([ \t]*)\S(?:,[^,\r\n]+|[^,\r\n]*)(?:,[^,\r\n]+)*(?:,(?:\r?\n|\r)\1[ \t]+\S(?:,[^,\r\n]+|[^,\r\n]*)(?:,[^,\r\n]+)*)*/m,lookbehind:!0,greedy:!0}})})(Prism)},2886:function(){Prism.languages.scala=Prism.languages.extend("java",{"triple-quoted-string":{pattern:/"""[\s\S]*?"""/,greedy:!0,alias:"string"},string:{pattern:/("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,greedy:!0},keyword:/<-|=>|\b(?:abstract|case|catch|class|def|derives|do|else|enum|extends|extension|final|finally|for|forSome|given|if|implicit|import|infix|inline|lazy|match|new|null|object|opaque|open|override|package|private|protected|return|sealed|self|super|this|throw|trait|transparent|try|type|using|val|var|while|with|yield)\b/,number:/\b0x(?:[\da-f]*\.)?[\da-f]+|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e\d+)?[dfl]?/i,builtin:/\b(?:Any|AnyRef|AnyVal|Boolean|Byte|Char|Double|Float|Int|Long|Nothing|Short|String|Unit)\b/,symbol:/'[^\d\s\\]\w*/}),Prism.languages.insertBefore("scala","triple-quoted-string",{"string-interpolation":{pattern:/\b[a-z]\w*(?:"""(?:[^$]|\$(?:[^{]|\{(?:[^{}]|\{[^{}]*\})*\}))*?"""|"(?:[^$"\r\n]|\$(?:[^{]|\{(?:[^{}]|\{[^{}]*\})*\}))*")/i,greedy:!0,inside:{id:{pattern:/^\w+/,greedy:!0,alias:"function"},escape:{pattern:/\\\$"|\$[$"]/,greedy:!0,alias:"symbol"},interpolation:{pattern:/\$(?:\w+|\{(?:[^{}]|\{[^{}]*\})*\})/,greedy:!0,inside:{punctuation:/^\$\{?|\}$/,expression:{pattern:/[\s\S]+/,inside:Prism.languages.scala}}},string:/[\s\S]+/}}}),delete Prism.languages.scala["class-name"],delete Prism.languages.scala["function"],delete Prism.languages.scala["constant"]},1412:function(){(function(e){function t(e){for(var t in e)e[t]=e[t].replace(/<[\w\s]+>/g,(function(t){return"(?:"+e[t].trim()+")"}));return e[t]}e.languages.scheme={comment:/;.*|#;\s*(?:\((?:[^()]|\([^()]*\))*\)|\[(?:[^\[\]]|\[[^\[\]]*\])*\])|#\|(?:[^#|]|#(?!\|)|\|(?!#)|#\|(?:[^#|]|#(?!\|)|\|(?!#))*\|#)*\|#/,string:{pattern:/"(?:[^"\\]|\\.)*"/,greedy:!0},symbol:{pattern:/'[^()\[\]#'\s]+/,greedy:!0},char:{pattern:/#\\(?:[ux][a-fA-F\d]+\b|[-a-zA-Z]+\b|[\uD800-\uDBFF][\uDC00-\uDFFF]|\S)/,greedy:!0},"lambda-parameter":[{pattern:/((?:^|[^'`#])[(\[]lambda\s+)(?:[^|()\[\]'\s]+|\|(?:[^\\|]|\\.)*\|)/,lookbehind:!0},{pattern:/((?:^|[^'`#])[(\[]lambda\s+[(\[])[^()\[\]']+/,lookbehind:!0}],keyword:{pattern:/((?:^|[^'`#])[(\[])(?:begin|case(?:-lambda)?|cond(?:-expand)?|define(?:-library|-macro|-record-type|-syntax|-values)?|defmacro|delay(?:-force)?|do|else|except|export|guard|if|import|include(?:-ci|-library-declarations)?|lambda|let(?:rec)?(?:-syntax|-values|\*)?|let\*-values|only|parameterize|prefix|(?:quasi-?)?quote|rename|set!|syntax-(?:case|rules)|unless|unquote(?:-splicing)?|when)(?=[()\[\]\s]|$)/,lookbehind:!0},builtin:{pattern:/((?:^|[^'`#])[(\[])(?:abs|and|append|apply|assoc|ass[qv]|binary-port\?|boolean=?\?|bytevector(?:-append|-copy|-copy!|-length|-u8-ref|-u8-set!|\?)?|caar|cadr|call-with-(?:current-continuation|port|values)|call\/cc|car|cdar|cddr|cdr|ceiling|char(?:->integer|-ready\?|\?|<\?|<=\?|=\?|>\?|>=\?)|close-(?:input-port|output-port|port)|complex\?|cons|current-(?:error|input|output)-port|denominator|dynamic-wind|eof-object\??|eq\?|equal\?|eqv\?|error|error-object(?:-irritants|-message|\?)|eval|even\?|exact(?:-integer-sqrt|-integer\?|\?)?|expt|features|file-error\?|floor(?:-quotient|-remainder|\/)?|flush-output-port|for-each|gcd|get-output-(?:bytevector|string)|inexact\??|input-port(?:-open\?|\?)|integer(?:->char|\?)|lcm|length|list(?:->string|->vector|-copy|-ref|-set!|-tail|\?)?|make-(?:bytevector|list|parameter|string|vector)|map|max|member|memq|memv|min|modulo|negative\?|newline|not|null\?|number(?:->string|\?)|numerator|odd\?|open-(?:input|output)-(?:bytevector|string)|or|output-port(?:-open\?|\?)|pair\?|peek-char|peek-u8|port\?|positive\?|procedure\?|quotient|raise|raise-continuable|rational\?|rationalize|read-(?:bytevector|bytevector!|char|error\?|line|string|u8)|real\?|remainder|reverse|round|set-c[ad]r!|square|string(?:->list|->number|->symbol|->utf8|->vector|-append|-copy|-copy!|-fill!|-for-each|-length|-map|-ref|-set!|\?|<\?|<=\?|=\?|>\?|>=\?)?|substring|symbol(?:->string|\?|=\?)|syntax-error|textual-port\?|truncate(?:-quotient|-remainder|\/)?|u8-ready\?|utf8->string|values|vector(?:->list|->string|-append|-copy|-copy!|-fill!|-for-each|-length|-map|-ref|-set!|\?)?|with-exception-handler|write-(?:bytevector|char|string|u8)|zero\?)(?=[()\[\]\s]|$)/,lookbehind:!0},operator:{pattern:/((?:^|[^'`#])[(\[])(?:[-+*%/]|[<>]=?|=>?)(?=[()\[\]\s]|$)/,lookbehind:!0},number:{pattern:RegExp(t({"":/\d+(?:\/\d+)|(?:\d+(?:\.\d*)?|\.\d+)(?:[esfdl][+-]?\d+)?/.source,"":/[+-]?|[+-](?:inf|nan)\.0/.source,"":/[+-](?:|(?:inf|nan)\.0)?i/.source,"":/(?:@|)?|/.source,"":/(?:#d(?:#[ei])?|#[ei](?:#d)?)?/.source,"":/[0-9a-f]+(?:\/[0-9a-f]+)?/.source,"":/[+-]?|[+-](?:inf|nan)\.0/.source,"":/[+-](?:|(?:inf|nan)\.0)?i/.source,"":/(?:@|)?|/.source,"":/#[box](?:#[ei])?|(?:#[ei])?#[box]/.source,"":/(^|[()\[\]\s])(?:|)(?=[()\[\]\s]|$)/.source}),"i"),lookbehind:!0},boolean:{pattern:/(^|[()\[\]\s])#(?:[ft]|false|true)(?=[()\[\]\s]|$)/,lookbehind:!0},function:{pattern:/((?:^|[^'`#])[(\[])(?:[^|()\[\]'\s]+|\|(?:[^\\|]|\\.)*\|)(?=[()\[\]\s]|$)/,lookbehind:!0},identifier:{pattern:/(^|[()\[\]\s])\|(?:[^\\|]|\\.)*\|(?=[()\[\]\s]|$)/,lookbehind:!0,greedy:!0},punctuation:/[()\[\]']/}})(Prism)},2447:function(){Prism.languages.scss=Prism.languages.extend("css",{comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|\/\/.*)/,lookbehind:!0},atrule:{pattern:/@[\w-](?:\([^()]+\)|[^()\s]|\s+(?!\s))*?(?=\s+[{;])/,inside:{rule:/@[\w-]+/}},url:/(?:[-a-z]+-)?url(?=\()/i,selector:{pattern:/(?=\S)[^@;{}()]?(?:[^@;{}()\s]|\s+(?!\s)|#\{\$[-\w]+\})+(?=\s*\{(?:\}|\s|[^}][^:{}]*[:{][^}]))/,inside:{parent:{pattern:/&/,alias:"important"},placeholder:/%[-\w]+/,variable:/\$[-\w]+|#\{\$[-\w]+\}/}},property:{pattern:/(?:[-\w]|\$[-\w]|#\{\$[-\w]+\})+(?=\s*:)/,inside:{variable:/\$[-\w]+|#\{\$[-\w]+\}/}}}),Prism.languages.insertBefore("scss","atrule",{keyword:[/@(?:content|debug|each|else(?: if)?|extend|for|forward|function|if|import|include|mixin|return|use|warn|while)\b/i,{pattern:/( )(?:from|through)(?= )/,lookbehind:!0}]}),Prism.languages.insertBefore("scss","important",{variable:/\$[-\w]+|#\{\$[-\w]+\}/}),Prism.languages.insertBefore("scss","function",{"module-modifier":{pattern:/\b(?:as|hide|show|with)\b/i,alias:"keyword"},placeholder:{pattern:/%[-\w]+/,alias:"selector"},statement:{pattern:/\B!(?:default|optional)\b/i,alias:"keyword"},boolean:/\b(?:false|true)\b/,null:{pattern:/\bnull\b/,alias:"keyword"},operator:{pattern:/(\s)(?:[-+*\/%]|[=!]=|<=?|>=?|and|not|or)(?=\s)/,lookbehind:!0}}),Prism.languages.scss["atrule"].inside.rest=Prism.languages.scss},2963:function(){(function(e){var t=[/"(?:\\[\s\S]|\$\([^)]+\)|\$(?!\()|`[^`]+`|[^"\\`$])*"/.source,/'[^']*'/.source,/\$'(?:[^'\\]|\\[\s\S])*'/.source,/<<-?\s*(["']?)(\w+)\1\s[\s\S]*?[\r\n]\2/.source].join("|");e.languages["shell-session"]={command:{pattern:RegExp(/^/.source+"(?:"+/[^\s@:$#%*!/\\]+@[^\r\n@:$#%*!/\\]+(?::[^\0-\x1F$#%*?"<>:;|]+)?/.source+"|"+/[/~.][^\0-\x1F$#%*?"<>@:;|]*/.source+")?"+/[$#%](?=\s)/.source+/(?:[^\\\r\n \t'"<$]|[ \t](?:(?!#)|#.*$)|\\(?:[^\r]|\r\n?)|\$(?!')|<(?!<)|<>)+/.source.replace(/<>/g,(function(){return t})),"m"),greedy:!0,inside:{info:{pattern:/^[^#$%]+/,alias:"punctuation",inside:{user:/^[^\s@:$#%*!/\\]+@[^\r\n@:$#%*!/\\]+/,punctuation:/:/,path:/[\s\S]+/}},bash:{pattern:/(^[$#%]\s*)\S[\s\S]*/,lookbehind:!0,alias:"language-bash",inside:e.languages.bash},"shell-symbol":{pattern:/^[$#%]/,alias:"important"}}},output:/.(?:.*(?:[\r\n]|.$))*/},e.languages["sh-session"]=e.languages["shellsession"]=e.languages["shell-session"]})(Prism)},509:function(){Prism.languages.smali={comment:/#.*/,string:{pattern:/"(?:[^\r\n\\"]|\\.)*"|'(?:[^\r\n\\']|\\(?:.|u[\da-fA-F]{4}))'/,greedy:!0},"class-name":{pattern:/(^|[^L])L(?:(?:\w+|`[^`\r\n]*`)\/)*(?:[\w$]+|`[^`\r\n]*`)(?=\s*;)/,lookbehind:!0,inside:{"class-name":{pattern:/(^L|\/)(?:[\w$]+|`[^`\r\n]*`)$/,lookbehind:!0},namespace:{pattern:/^(L)(?:(?:\w+|`[^`\r\n]*`)\/)+/,lookbehind:!0,inside:{punctuation:/\//}},builtin:/^L/}},builtin:[{pattern:/([();\[])[BCDFIJSVZ]+/,lookbehind:!0},{pattern:/([\w$>]:)[BCDFIJSVZ]/,lookbehind:!0}],keyword:[{pattern:/(\.end\s+)[\w-]+/,lookbehind:!0},{pattern:/(^|[^\w.-])\.(?!\d)[\w-]+/,lookbehind:!0},{pattern:/(^|[^\w.-])(?:abstract|annotation|bridge|constructor|enum|final|interface|private|protected|public|runtime|static|synthetic|system|transient)(?![\w.-])/,lookbehind:!0}],function:{pattern:/(^|[^\w.-])(?:\w+|<[\w$-]+>)(?=\()/,lookbehind:!0},field:{pattern:/[\w$]+(?=:)/,alias:"variable"},register:{pattern:/(^|[^\w.-])[vp]\d(?![\w.-])/,lookbehind:!0,alias:"variable"},boolean:{pattern:/(^|[^\w.-])(?:false|true)(?![\w.-])/,lookbehind:!0},number:{pattern:/(^|[^/\w.-])-?(?:NAN|INFINITY|0x(?:[\dA-F]+(?:\.[\dA-F]*)?|\.[\dA-F]+)(?:p[+-]?[\dA-F]+)?|(?:\d+(?:\.\d*)?|\.\d+)(?:e[+-]?\d+)?)[dflst]?(?![\w.-])/i,lookbehind:!0},label:{pattern:/(:)\w+/,lookbehind:!0,alias:"property"},operator:/->|\.\.|[\[=]/,punctuation:/[{}(),;:]/}},2738:function(){Prism.languages.smalltalk={comment:{pattern:/"(?:""|[^"])*"/,greedy:!0},char:{pattern:/\$./,greedy:!0},string:{pattern:/'(?:''|[^'])*'/,greedy:!0},symbol:/#[\da-z]+|#(?:-|([+\/\\*~<>=@%|&?!])\1?)|#(?=\()/i,"block-arguments":{pattern:/(\[\s*):[^\[|]*\|/,lookbehind:!0,inside:{variable:/:[\da-z]+/i,punctuation:/\|/}},"temporary-variables":{pattern:/\|[^|]+\|/,inside:{variable:/[\da-z]+/i,punctuation:/\|/}},keyword:/\b(?:new|nil|self|super)\b/,boolean:/\b(?:false|true)\b/,number:[/\d+r-?[\dA-Z]+(?:\.[\dA-Z]+)?(?:e-?\d+)?/,/\b\d+(?:\.\d+)?(?:e-?\d+)?/],operator:/[<=]=?|:=|~[~=]|\/\/?|\\\\|>[>=]?|[!^+\-*&|,@]/,punctuation:/[.;:?\[\](){}]/}},9281:function(){(function(e){e.languages.smarty={comment:{pattern:/^\{\*[\s\S]*?\*\}/,greedy:!0},"embedded-php":{pattern:/^\{php\}[\s\S]*?\{\/php\}/,greedy:!0,inside:{smarty:{pattern:/^\{php\}|\{\/php\}$/,inside:null},php:{pattern:/[\s\S]+/,alias:"language-php",inside:e.languages.php}}},string:[{pattern:/"(?:\\.|[^"\\\r\n])*"/,greedy:!0,inside:{interpolation:{pattern:/\{[^{}]*\}|`[^`]*`/,inside:{"interpolation-punctuation":{pattern:/^[{`]|[`}]$/,alias:"punctuation"},expression:{pattern:/[\s\S]+/,inside:null}}},variable:/\$\w+/}},{pattern:/'(?:\\.|[^'\\\r\n])*'/,greedy:!0}],keyword:{pattern:/(^\{\/?)[a-z_]\w*\b(?!\()/i,lookbehind:!0,greedy:!0},delimiter:{pattern:/^\{\/?|\}$/,greedy:!0,alias:"punctuation"},number:/\b0x[\dA-Fa-f]+|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[Ee][-+]?\d+)?/,variable:[/\$(?!\d)\w+/,/#(?!\d)\w+#/,{pattern:/(\.|->|\w\s*=)(?!\d)\w+\b(?!\()/,lookbehind:!0},{pattern:/(\[)(?!\d)\w+(?=\])/,lookbehind:!0}],function:{pattern:/(\|\s*)@?[a-z_]\w*|\b[a-z_]\w*(?=\()/i,lookbehind:!0},"attr-name":/\b[a-z_]\w*(?=\s*=)/i,boolean:/\b(?:false|no|off|on|true|yes)\b/,punctuation:/[\[\](){}.,:`]|->/,operator:[/[+\-*\/%]|==?=?|[!<>]=?|&&|\|\|?/,/\bis\s+(?:not\s+)?(?:div|even|odd)(?:\s+by)?\b/,/\b(?:and|eq|gt?e|gt|lt?e|lt|mod|neq?|not|or)\b/]},e.languages.smarty["embedded-php"].inside.smarty.inside=e.languages.smarty,e.languages.smarty.string[0].inside.interpolation.inside.expression.inside=e.languages.smarty;var t=/"(?:\\.|[^"\\\r\n])*"|'(?:\\.|[^'\\\r\n])*'/,n=RegExp(/\{\*[\s\S]*?\*\}/.source+"|"+/\{php\}[\s\S]*?\{\/php\}/.source+"|"+/\{(?:[^{}"']||\{(?:[^{}"']||\{(?:[^{}"']|)*\})*\})*\}/.source.replace(//g,(function(){return t.source})),"g");e.hooks.add("before-tokenize",(function(t){var r="{literal}",i="{/literal}",s=!1;e.languages["markup-templating"].buildPlaceholders(t,"smarty",n,(function(e){return e===i&&(s=!1),!s&&(e===r&&(s=!0),!0)}))})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"smarty")}))})(Prism)},9983:function(){(function(e){var t=/\b(?:abstype|and|andalso|as|case|datatype|do|else|end|eqtype|exception|fn|fun|functor|handle|if|in|include|infix|infixr|let|local|nonfix|of|op|open|orelse|raise|rec|sharing|sig|signature|struct|structure|then|type|val|where|while|with|withtype)\b/i;e.languages.sml={comment:/\(\*(?:[^*(]|\*(?!\))|\((?!\*)|\(\*(?:[^*(]|\*(?!\))|\((?!\*))*\*\))*\*\)/,string:{pattern:/#?"(?:[^"\\]|\\.)*"/,greedy:!0},"class-name":[{pattern:RegExp(/((?:^|[^:]):\s*)(?:\s*(?:(?:\*|->)\s*|,\s*(?:(?=)|(?!)\s+)))*/.source.replace(//g,(function(){return/\s*(?:[*,]|->)/.source})).replace(//g,(function(){return/(?:'[\w']*||\((?:[^()]|\([^()]*\))*\)|\{(?:[^{}]|\{[^{}]*\})*\})(?:\s+)*/.source})).replace(//g,(function(){return/(?!)[a-z\d_][\w'.]*/.source})).replace(//g,(function(){return t.source})),"i"),lookbehind:!0,greedy:!0,inside:null},{pattern:/((?:^|[^\w'])(?:datatype|exception|functor|signature|structure|type)\s+)[a-z_][\w'.]*/i,lookbehind:!0}],function:{pattern:/((?:^|[^\w'])fun\s+)[a-z_][\w'.]*/i,lookbehind:!0},keyword:t,variable:{pattern:/(^|[^\w'])'[\w']*/,lookbehind:!0},number:/~?\b(?:\d+(?:\.\d+)?(?:e~?\d+)?|0x[\da-f]+)\b/i,word:{pattern:/\b0w(?:\d+|x[\da-f]+)\b/i,alias:"constant"},boolean:/\b(?:false|true)\b/i,operator:/\.\.\.|:[>=:]|=>?|->|[<>]=?|[!+\-*/^#|@~]/,punctuation:/[(){}\[\].:,;]/},e.languages.sml["class-name"][0].inside=e.languages.sml,e.languages.smlnj=e.languages.sml})(Prism)},893:function(){Prism.languages.solidity=Prism.languages.extend("clike",{"class-name":{pattern:/(\b(?:contract|enum|interface|library|new|struct|using)\s+)(?!\d)[\w$]+/,lookbehind:!0},keyword:/\b(?:_|anonymous|as|assembly|assert|break|calldata|case|constant|constructor|continue|contract|default|delete|do|else|emit|enum|event|external|for|from|function|if|import|indexed|inherited|interface|internal|is|let|library|mapping|memory|modifier|new|payable|pragma|private|public|pure|require|returns?|revert|selfdestruct|solidity|storage|struct|suicide|switch|this|throw|using|var|view|while)\b/,operator:/=>|->|:=|=:|\*\*|\+\+|--|\|\||&&|<<=?|>>=?|[-+*/%^&|<>!=]=?|[~?]/}),Prism.languages.insertBefore("solidity","keyword",{builtin:/\b(?:address|bool|byte|u?int(?:8|16|24|32|40|48|56|64|72|80|88|96|104|112|120|128|136|144|152|160|168|176|184|192|200|208|216|224|232|240|248|256)?|string|bytes(?:[1-9]|[12]\d|3[0-2])?)\b/}),Prism.languages.insertBefore("solidity","number",{version:{pattern:/([<>]=?|\^)\d+\.\d+\.\d+\b/,lookbehind:!0,alias:"number"}}),Prism.languages.sol=Prism.languages.solidity},7485:function(){(function(e){var t={pattern:/\{[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}\}/i,alias:"constant",inside:{punctuation:/[{}]/}};e.languages["solution-file"]={comment:{pattern:/#.*/,greedy:!0},string:{pattern:/"[^"\r\n]*"|'[^'\r\n]*'/,greedy:!0,inside:{guid:t}},object:{pattern:/^([ \t]*)(?:([A-Z]\w*)\b(?=.*(?:\r\n?|\n)(?:\1[ \t].*(?:\r\n?|\n))*\1End\2(?=[ \t]*$))|End[A-Z]\w*(?=[ \t]*$))/m,lookbehind:!0,greedy:!0,alias:"keyword"},property:{pattern:/^([ \t]*)(?!\s)[^\r\n"#=()]*[^\s"#=()](?=\s*=)/m,lookbehind:!0,inside:{guid:t}},guid:t,number:/\b\d+(?:\.\d+)*\b/,boolean:/\b(?:FALSE|TRUE)\b/,operator:/=/,punctuation:/[(),]/},e.languages["sln"]=e.languages["solution-file"]})(Prism)},4435:function(){(function(e){var t=/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,n=/\b\d+(?:\.\d+)?(?:[eE][+-]?\d+)?\b|\b0x[\dA-F]+\b/;e.languages.soy={comment:[/\/\*[\s\S]*?\*\//,{pattern:/(\s)\/\/.*/,lookbehind:!0,greedy:!0}],"command-arg":{pattern:/(\{+\/?\s*(?:alias|call|delcall|delpackage|deltemplate|namespace|template)\s+)\.?[\w.]+/,lookbehind:!0,alias:"string",inside:{punctuation:/\./}},parameter:{pattern:/(\{+\/?\s*@?param\??\s+)\.?[\w.]+/,lookbehind:!0,alias:"variable"},keyword:[{pattern:/(\{+\/?[^\S\r\n]*)(?:\\[nrt]|alias|call|case|css|default|delcall|delpackage|deltemplate|else(?:if)?|fallbackmsg|for(?:each)?|if(?:empty)?|lb|let|literal|msg|namespace|nil|@?param\??|rb|sp|switch|template|xid)/,lookbehind:!0},/\b(?:any|as|attributes|bool|css|float|html|in|int|js|list|map|null|number|string|uri)\b/],delimiter:{pattern:/^\{+\/?|\/?\}+$/,alias:"punctuation"},property:/\w+(?==)/,variable:{pattern:/\$[^\W\d]\w*(?:\??(?:\.\w+|\[[^\]]+\]))*/,inside:{string:{pattern:t,greedy:!0},number:n,punctuation:/[\[\].?]/}},string:{pattern:t,greedy:!0},function:[/\w+(?=\()/,{pattern:/(\|[^\S\r\n]*)\w+/,lookbehind:!0}],boolean:/\b(?:false|true)\b/,number:n,operator:/\?:?|<=?|>=?|==?|!=|[+*/%-]|\b(?:and|not|or)\b/,punctuation:/[{}()\[\]|.,:]/},e.hooks.add("before-tokenize",(function(t){var n=/\{\{.+?\}\}|\{.+?\}|\s\/\/.*|\/\*[\s\S]*?\*\//g,r="{literal}",i="{/literal}",s=!1;e.languages["markup-templating"].buildPlaceholders(t,"soy",n,(function(e){return e===i&&(s=!1),!s&&(e===r&&(s=!0),!0)}))})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"soy")}))})(Prism)},1327:function(){Prism.languages.sparql=Prism.languages.extend("turtle",{boolean:/\b(?:false|true)\b/i,variable:{pattern:/[?$]\w+/,greedy:!0}}),Prism.languages.insertBefore("sparql","punctuation",{keyword:[/\b(?:A|ADD|ALL|AS|ASC|ASK|BNODE|BY|CLEAR|CONSTRUCT|COPY|CREATE|DATA|DEFAULT|DELETE|DESC|DESCRIBE|DISTINCT|DROP|EXISTS|FILTER|FROM|GROUP|HAVING|INSERT|INTO|LIMIT|LOAD|MINUS|MOVE|NAMED|NOT|NOW|OFFSET|OPTIONAL|ORDER|RAND|REDUCED|SELECT|SEPARATOR|SERVICE|SILENT|STRUUID|UNION|USING|UUID|VALUES|WHERE)\b/i,/\b(?:ABS|AVG|BIND|BOUND|CEIL|COALESCE|CONCAT|CONTAINS|COUNT|DATATYPE|DAY|ENCODE_FOR_URI|FLOOR|GROUP_CONCAT|HOURS|IF|IRI|isBLANK|isIRI|isLITERAL|isNUMERIC|isURI|LANG|LANGMATCHES|LCASE|MAX|MD5|MIN|MINUTES|MONTH|REGEX|REPLACE|ROUND|sameTerm|SAMPLE|SECONDS|SHA1|SHA256|SHA384|SHA512|STR|STRAFTER|STRBEFORE|STRDT|STRENDS|STRLANG|STRLEN|STRSTARTS|SUBSTR|SUM|TIMEZONE|TZ|UCASE|URI|YEAR)\b(?=\s*\()/i,/\b(?:BASE|GRAPH|PREFIX)\b/i]}),Prism.languages.rq=Prism.languages.sparql},612:function(){Prism.languages["splunk-spl"]={comment:/`comment\("(?:\\.|[^\\"])*"\)`/,string:{pattern:/"(?:\\.|[^\\"])*"/,greedy:!0},keyword:/\b(?:abstract|accum|addcoltotals|addinfo|addtotals|analyzefields|anomalies|anomalousvalue|anomalydetection|append|appendcols|appendcsv|appendlookup|appendpipe|arules|associate|audit|autoregress|bin|bucket|bucketdir|chart|cluster|cofilter|collect|concurrency|contingency|convert|correlate|datamodel|dbinspect|dedup|delete|delta|diff|erex|eval|eventcount|eventstats|extract|fieldformat|fields|fieldsummary|filldown|fillnull|findtypes|folderize|foreach|format|from|gauge|gentimes|geom|geomfilter|geostats|head|highlight|history|iconify|input|inputcsv|inputlookup|iplocation|join|kmeans|kv|kvform|loadjob|localize|localop|lookup|makecontinuous|makemv|makeresults|map|mcollect|metadata|metasearch|meventcollect|mstats|multikv|multisearch|mvcombine|mvexpand|nomv|outlier|outputcsv|outputlookup|outputtext|overlap|pivot|predict|rangemap|rare|regex|relevancy|reltime|rename|replace|rest|return|reverse|rex|rtorder|run|savedsearch|script|scrub|search|searchtxn|selfjoin|sendemail|set|setfields|sichart|sirare|sistats|sitimechart|sitop|sort|spath|stats|strcat|streamstats|table|tags|tail|timechart|timewrap|top|transaction|transpose|trendline|tscollect|tstats|typeahead|typelearner|typer|union|uniq|untable|where|x11|xmlkv|xmlunescape|xpath|xyseries)\b/i,"operator-word":{pattern:/\b(?:and|as|by|not|or|xor)\b/i,alias:"operator"},function:/\b\w+(?=\s*\()/,property:/\b\w+(?=\s*=(?!=))/,date:{pattern:/\b\d{1,2}\/\d{1,2}\/\d{1,4}(?:(?::\d{1,2}){3})?\b/,alias:"number"},number:/\b\d+(?:\.\d+)?\b/,boolean:/\b(?:f|false|t|true)\b/i,operator:/[<>=]=?|[-+*/%|]/,punctuation:/[()[\],]/}},3113:function(){Prism.languages.sqf=Prism.languages.extend("clike",{string:{pattern:/"(?:(?:"")?[^"])*"(?!")|'(?:[^'])*'/,greedy:!0},keyword:/\b(?:breakOut|breakTo|call|case|catch|default|do|echo|else|execFSM|execVM|exitWith|for|forEach|forEachMember|forEachMemberAgent|forEachMemberTeam|from|goto|if|nil|preprocessFile|preprocessFileLineNumbers|private|scopeName|spawn|step|switch|then|throw|to|try|while|with)\b/i,boolean:/\b(?:false|true)\b/i,function:/\b(?:abs|accTime|acos|action|actionIDs|actionKeys|actionKeysImages|actionKeysNames|actionKeysNamesArray|actionName|actionParams|activateAddons|activatedAddons|activateKey|add3DENConnection|add3DENEventHandler|add3DENLayer|addAction|addBackpack|addBackpackCargo|addBackpackCargoGlobal|addBackpackGlobal|addCamShake|addCuratorAddons|addCuratorCameraArea|addCuratorEditableObjects|addCuratorEditingArea|addCuratorPoints|addEditorObject|addEventHandler|addForce|addForceGeneratorRTD|addGoggles|addGroupIcon|addHandgunItem|addHeadgear|addItem|addItemCargo|addItemCargoGlobal|addItemPool|addItemToBackpack|addItemToUniform|addItemToVest|addLiveStats|addMagazine|addMagazineAmmoCargo|addMagazineCargo|addMagazineCargoGlobal|addMagazineGlobal|addMagazinePool|addMagazines|addMagazineTurret|addMenu|addMenuItem|addMissionEventHandler|addMPEventHandler|addMusicEventHandler|addOwnedMine|addPlayerScores|addPrimaryWeaponItem|addPublicVariableEventHandler|addRating|addResources|addScore|addScoreSide|addSecondaryWeaponItem|addSwitchableUnit|addTeamMember|addToRemainsCollector|addTorque|addUniform|addVehicle|addVest|addWaypoint|addWeapon|addWeaponCargo|addWeaponCargoGlobal|addWeaponGlobal|addWeaponItem|addWeaponPool|addWeaponTurret|admin|agent|agents|AGLToASL|aimedAtTarget|aimPos|airDensityCurveRTD|airDensityRTD|airplaneThrottle|airportSide|AISFinishHeal|alive|all3DENEntities|allAirports|allControls|allCurators|allCutLayers|allDead|allDeadMen|allDisplays|allGroups|allMapMarkers|allMines|allMissionObjects|allow3DMode|allowCrewInImmobile|allowCuratorLogicIgnoreAreas|allowDamage|allowDammage|allowFileOperations|allowFleeing|allowGetIn|allowSprint|allPlayers|allSimpleObjects|allSites|allTurrets|allUnits|allUnitsUAV|allVariables|ammo|ammoOnPylon|animate|animateBay|animateDoor|animatePylon|animateSource|animationNames|animationPhase|animationSourcePhase|animationState|append|apply|armoryPoints|arrayIntersect|asin|ASLToAGL|ASLToATL|assert|assignAsCargo|assignAsCargoIndex|assignAsCommander|assignAsDriver|assignAsGunner|assignAsTurret|assignCurator|assignedCargo|assignedCommander|assignedDriver|assignedGunner|assignedItems|assignedTarget|assignedTeam|assignedVehicle|assignedVehicleRole|assignItem|assignTeam|assignToAirport|atan|atan2|atg|ATLToASL|attachedObject|attachedObjects|attachedTo|attachObject|attachTo|attackEnabled|backpack|backpackCargo|backpackContainer|backpackItems|backpackMagazines|backpackSpaceFor|behaviour|benchmark|binocular|blufor|boundingBox|boundingBoxReal|boundingCenter|briefingName|buildingExit|buildingPos|buldozer_EnableRoadDiag|buldozer_IsEnabledRoadDiag|buldozer_LoadNewRoads|buldozer_reloadOperMap|buttonAction|buttonSetAction|cadetMode|callExtension|camCommand|camCommit|camCommitPrepared|camCommitted|camConstuctionSetParams|camCreate|camDestroy|cameraEffect|cameraEffectEnableHUD|cameraInterest|cameraOn|cameraView|campaignConfigFile|camPreload|camPreloaded|camPrepareBank|camPrepareDir|camPrepareDive|camPrepareFocus|camPrepareFov|camPrepareFovRange|camPreparePos|camPrepareRelPos|camPrepareTarget|camSetBank|camSetDir|camSetDive|camSetFocus|camSetFov|camSetFovRange|camSetPos|camSetRelPos|camSetTarget|camTarget|camUseNVG|canAdd|canAddItemToBackpack|canAddItemToUniform|canAddItemToVest|cancelSimpleTaskDestination|canFire|canMove|canSlingLoad|canStand|canSuspend|canTriggerDynamicSimulation|canUnloadInCombat|canVehicleCargo|captive|captiveNum|cbChecked|cbSetChecked|ceil|channelEnabled|cheatsEnabled|checkAIFeature|checkVisibility|civilian|className|clear3DENAttribute|clear3DENInventory|clearAllItemsFromBackpack|clearBackpackCargo|clearBackpackCargoGlobal|clearForcesRTD|clearGroupIcons|clearItemCargo|clearItemCargoGlobal|clearItemPool|clearMagazineCargo|clearMagazineCargoGlobal|clearMagazinePool|clearOverlay|clearRadio|clearVehicleInit|clearWeaponCargo|clearWeaponCargoGlobal|clearWeaponPool|clientOwner|closeDialog|closeDisplay|closeOverlay|collapseObjectTree|collect3DENHistory|collectiveRTD|combatMode|commandArtilleryFire|commandChat|commander|commandFire|commandFollow|commandFSM|commandGetOut|commandingMenu|commandMove|commandRadio|commandStop|commandSuppressiveFire|commandTarget|commandWatch|comment|commitOverlay|compile|compileFinal|completedFSM|composeText|configClasses|configFile|configHierarchy|configName|configNull|configProperties|configSourceAddonList|configSourceMod|configSourceModList|confirmSensorTarget|connectTerminalToUAV|controlNull|controlsGroupCtrl|copyFromClipboard|copyToClipboard|copyWaypoints|cos|count|countEnemy|countFriendly|countSide|countType|countUnknown|create3DENComposition|create3DENEntity|createAgent|createCenter|createDialog|createDiaryLink|createDiaryRecord|createDiarySubject|createDisplay|createGearDialog|createGroup|createGuardedPoint|createLocation|createMarker|createMarkerLocal|createMenu|createMine|createMissionDisplay|createMPCampaignDisplay|createSimpleObject|createSimpleTask|createSite|createSoundSource|createTask|createTeam|createTrigger|createUnit|createVehicle|createVehicleCrew|createVehicleLocal|crew|ctAddHeader|ctAddRow|ctClear|ctCurSel|ctData|ctFindHeaderRows|ctFindRowHeader|ctHeaderControls|ctHeaderCount|ctRemoveHeaders|ctRemoveRows|ctrlActivate|ctrlAddEventHandler|ctrlAngle|ctrlAutoScrollDelay|ctrlAutoScrollRewind|ctrlAutoScrollSpeed|ctrlChecked|ctrlClassName|ctrlCommit|ctrlCommitted|ctrlCreate|ctrlDelete|ctrlEnable|ctrlEnabled|ctrlFade|ctrlHTMLLoaded|ctrlIDC|ctrlIDD|ctrlMapAnimAdd|ctrlMapAnimClear|ctrlMapAnimCommit|ctrlMapAnimDone|ctrlMapCursor|ctrlMapMouseOver|ctrlMapScale|ctrlMapScreenToWorld|ctrlMapWorldToScreen|ctrlModel|ctrlModelDirAndUp|ctrlModelScale|ctrlParent|ctrlParentControlsGroup|ctrlPosition|ctrlRemoveAllEventHandlers|ctrlRemoveEventHandler|ctrlScale|ctrlSetActiveColor|ctrlSetAngle|ctrlSetAutoScrollDelay|ctrlSetAutoScrollRewind|ctrlSetAutoScrollSpeed|ctrlSetBackgroundColor|ctrlSetChecked|ctrlSetDisabledColor|ctrlSetEventHandler|ctrlSetFade|ctrlSetFocus|ctrlSetFont|ctrlSetFontH1|ctrlSetFontH1B|ctrlSetFontH2|ctrlSetFontH2B|ctrlSetFontH3|ctrlSetFontH3B|ctrlSetFontH4|ctrlSetFontH4B|ctrlSetFontH5|ctrlSetFontH5B|ctrlSetFontH6|ctrlSetFontH6B|ctrlSetFontHeight|ctrlSetFontHeightH1|ctrlSetFontHeightH2|ctrlSetFontHeightH3|ctrlSetFontHeightH4|ctrlSetFontHeightH5|ctrlSetFontHeightH6|ctrlSetFontHeightSecondary|ctrlSetFontP|ctrlSetFontPB|ctrlSetFontSecondary|ctrlSetForegroundColor|ctrlSetModel|ctrlSetModelDirAndUp|ctrlSetModelScale|ctrlSetPixelPrecision|ctrlSetPosition|ctrlSetScale|ctrlSetStructuredText|ctrlSetText|ctrlSetTextColor|ctrlSetTextColorSecondary|ctrlSetTextSecondary|ctrlSetTooltip|ctrlSetTooltipColorBox|ctrlSetTooltipColorShade|ctrlSetTooltipColorText|ctrlShow|ctrlShown|ctrlText|ctrlTextHeight|ctrlTextSecondary|ctrlTextWidth|ctrlType|ctrlVisible|ctRowControls|ctRowCount|ctSetCurSel|ctSetData|ctSetHeaderTemplate|ctSetRowTemplate|ctSetValue|ctValue|curatorAddons|curatorCamera|curatorCameraArea|curatorCameraAreaCeiling|curatorCoef|curatorEditableObjects|curatorEditingArea|curatorEditingAreaType|curatorMouseOver|curatorPoints|curatorRegisteredObjects|curatorSelected|curatorWaypointCost|current3DENOperation|currentChannel|currentCommand|currentMagazine|currentMagazineDetail|currentMagazineDetailTurret|currentMagazineTurret|currentMuzzle|currentNamespace|currentTask|currentTasks|currentThrowable|currentVisionMode|currentWaypoint|currentWeapon|currentWeaponMode|currentWeaponTurret|currentZeroing|cursorObject|cursorTarget|customChat|customRadio|cutFadeOut|cutObj|cutRsc|cutText|damage|date|dateToNumber|daytime|deActivateKey|debriefingText|debugFSM|debugLog|deg|delete3DENEntities|deleteAt|deleteCenter|deleteCollection|deleteEditorObject|deleteGroup|deleteGroupWhenEmpty|deleteIdentity|deleteLocation|deleteMarker|deleteMarkerLocal|deleteRange|deleteResources|deleteSite|deleteStatus|deleteTeam|deleteVehicle|deleteVehicleCrew|deleteWaypoint|detach|detectedMines|diag_activeMissionFSMs|diag_activeScripts|diag_activeSQFScripts|diag_activeSQSScripts|diag_captureFrame|diag_captureFrameToFile|diag_captureSlowFrame|diag_codePerformance|diag_drawMode|diag_dynamicSimulationEnd|diag_enable|diag_enabled|diag_fps|diag_fpsMin|diag_frameNo|diag_lightNewLoad|diag_list|diag_log|diag_logSlowFrame|diag_mergeConfigFile|diag_recordTurretLimits|diag_setLightNew|diag_tickTime|diag_toggle|dialog|diarySubjectExists|didJIP|didJIPOwner|difficulty|difficultyEnabled|difficultyEnabledRTD|difficultyOption|direction|directSay|disableAI|disableCollisionWith|disableConversation|disableDebriefingStats|disableMapIndicators|disableNVGEquipment|disableRemoteSensors|disableSerialization|disableTIEquipment|disableUAVConnectability|disableUserInput|displayAddEventHandler|displayCtrl|displayNull|displayParent|displayRemoveAllEventHandlers|displayRemoveEventHandler|displaySetEventHandler|dissolveTeam|distance|distance2D|distanceSqr|distributionRegion|do3DENAction|doArtilleryFire|doFire|doFollow|doFSM|doGetOut|doMove|doorPhase|doStop|doSuppressiveFire|doTarget|doWatch|drawArrow|drawEllipse|drawIcon|drawIcon3D|drawLine|drawLine3D|drawLink|drawLocation|drawPolygon|drawRectangle|drawTriangle|driver|drop|dynamicSimulationDistance|dynamicSimulationDistanceCoef|dynamicSimulationEnabled|dynamicSimulationSystemEnabled|east|edit3DENMissionAttributes|editObject|editorSetEventHandler|effectiveCommander|emptyPositions|enableAI|enableAIFeature|enableAimPrecision|enableAttack|enableAudioFeature|enableAutoStartUpRTD|enableAutoTrimRTD|enableCamShake|enableCaustics|enableChannel|enableCollisionWith|enableCopilot|enableDebriefingStats|enableDiagLegend|enableDynamicSimulation|enableDynamicSimulationSystem|enableEndDialog|enableEngineArtillery|enableEnvironment|enableFatigue|enableGunLights|enableInfoPanelComponent|enableIRLasers|enableMimics|enablePersonTurret|enableRadio|enableReload|enableRopeAttach|enableSatNormalOnDetail|enableSaving|enableSentences|enableSimulation|enableSimulationGlobal|enableStamina|enableStressDamage|enableTeamSwitch|enableTraffic|enableUAVConnectability|enableUAVWaypoints|enableVehicleCargo|enableVehicleSensor|enableWeaponDisassembly|endl|endLoadingScreen|endMission|engineOn|enginesIsOnRTD|enginesPowerRTD|enginesRpmRTD|enginesTorqueRTD|entities|environmentEnabled|estimatedEndServerTime|estimatedTimeLeft|evalObjectArgument|everyBackpack|everyContainer|exec|execEditorScript|exp|expectedDestination|exportJIPMessages|eyeDirection|eyePos|face|faction|fadeMusic|fadeRadio|fadeSound|fadeSpeech|failMission|fillWeaponsFromPool|find|findCover|findDisplay|findEditorObject|findEmptyPosition|findEmptyPositionReady|findIf|findNearestEnemy|finishMissionInit|finite|fire|fireAtTarget|firstBackpack|flag|flagAnimationPhase|flagOwner|flagSide|flagTexture|fleeing|floor|flyInHeight|flyInHeightASL|fog|fogForecast|fogParams|forceAddUniform|forceAtPositionRTD|forcedMap|forceEnd|forceFlagTexture|forceFollowRoad|forceGeneratorRTD|forceMap|forceRespawn|forceSpeed|forceWalk|forceWeaponFire|forceWeatherChange|forgetTarget|format|formation|formationDirection|formationLeader|formationMembers|formationPosition|formationTask|formatText|formLeader|freeLook|fromEditor|fuel|fullCrew|gearIDCAmmoCount|gearSlotAmmoCount|gearSlotData|get3DENActionState|get3DENAttribute|get3DENCamera|get3DENConnections|get3DENEntity|get3DENEntityID|get3DENGrid|get3DENIconsVisible|get3DENLayerEntities|get3DENLinesVisible|get3DENMissionAttribute|get3DENMouseOver|get3DENSelected|getAimingCoef|getAllEnvSoundControllers|getAllHitPointsDamage|getAllOwnedMines|getAllSoundControllers|getAmmoCargo|getAnimAimPrecision|getAnimSpeedCoef|getArray|getArtilleryAmmo|getArtilleryComputerSettings|getArtilleryETA|getAssignedCuratorLogic|getAssignedCuratorUnit|getBackpackCargo|getBleedingRemaining|getBurningValue|getCameraViewDirection|getCargoIndex|getCenterOfMass|getClientState|getClientStateNumber|getCompatiblePylonMagazines|getConnectedUAV|getContainerMaxLoad|getCursorObjectParams|getCustomAimCoef|getDammage|getDescription|getDir|getDirVisual|getDLCAssetsUsage|getDLCAssetsUsageByName|getDLCs|getDLCUsageTime|getEditorCamera|getEditorMode|getEditorObjectScope|getElevationOffset|getEngineTargetRpmRTD|getEnvSoundController|getFatigue|getFieldManualStartPage|getForcedFlagTexture|getFriend|getFSMVariable|getFuelCargo|getGroupIcon|getGroupIconParams|getGroupIcons|getHideFrom|getHit|getHitIndex|getHitPointDamage|getItemCargo|getMagazineCargo|getMarkerColor|getMarkerPos|getMarkerSize|getMarkerType|getMass|getMissionConfig|getMissionConfigValue|getMissionDLCs|getMissionLayerEntities|getMissionLayers|getModelInfo|getMousePosition|getMusicPlayedTime|getNumber|getObjectArgument|getObjectChildren|getObjectDLC|getObjectMaterials|getObjectProxy|getObjectTextures|getObjectType|getObjectViewDistance|getOxygenRemaining|getPersonUsedDLCs|getPilotCameraDirection|getPilotCameraPosition|getPilotCameraRotation|getPilotCameraTarget|getPlateNumber|getPlayerChannel|getPlayerScores|getPlayerUID|getPlayerUIDOld|getPos|getPosASL|getPosASLVisual|getPosASLW|getPosATL|getPosATLVisual|getPosVisual|getPosWorld|getPylonMagazines|getRelDir|getRelPos|getRemoteSensorsDisabled|getRepairCargo|getResolution|getRotorBrakeRTD|getShadowDistance|getShotParents|getSlingLoad|getSoundController|getSoundControllerResult|getSpeed|getStamina|getStatValue|getSuppression|getTerrainGrid|getTerrainHeightASL|getText|getTotalDLCUsageTime|getTrimOffsetRTD|getUnitLoadout|getUnitTrait|getUserMFDText|getUserMFDValue|getVariable|getVehicleCargo|getWeaponCargo|getWeaponSway|getWingsOrientationRTD|getWingsPositionRTD|getWPPos|glanceAt|globalChat|globalRadio|goggles|group|groupChat|groupFromNetId|groupIconSelectable|groupIconsVisible|groupId|groupOwner|groupRadio|groupSelectedUnits|groupSelectUnit|grpNull|gunner|gusts|halt|handgunItems|handgunMagazine|handgunWeapon|handsHit|hasInterface|hasPilotCamera|hasWeapon|hcAllGroups|hcGroupParams|hcLeader|hcRemoveAllGroups|hcRemoveGroup|hcSelected|hcSelectGroup|hcSetGroup|hcShowBar|hcShownBar|headgear|hideBody|hideObject|hideObjectGlobal|hideSelection|hint|hintC|hintCadet|hintSilent|hmd|hostMission|htmlLoad|HUDMovementLevels|humidity|image|importAllGroups|importance|in|inArea|inAreaArray|incapacitatedState|independent|inflame|inflamed|infoPanel|infoPanelComponentEnabled|infoPanelComponents|infoPanels|inGameUISetEventHandler|inheritsFrom|initAmbientLife|inPolygon|inputAction|inRangeOfArtillery|insertEditorObject|intersect|is3DEN|is3DENMultiplayer|isAbleToBreathe|isAgent|isAimPrecisionEnabled|isArray|isAutoHoverOn|isAutonomous|isAutoStartUpEnabledRTD|isAutotest|isAutoTrimOnRTD|isBleeding|isBurning|isClass|isCollisionLightOn|isCopilotEnabled|isDamageAllowed|isDedicated|isDLCAvailable|isEngineOn|isEqualTo|isEqualType|isEqualTypeAll|isEqualTypeAny|isEqualTypeArray|isEqualTypeParams|isFilePatchingEnabled|isFlashlightOn|isFlatEmpty|isForcedWalk|isFormationLeader|isGroupDeletedWhenEmpty|isHidden|isInRemainsCollector|isInstructorFigureEnabled|isIRLaserOn|isKeyActive|isKindOf|isLaserOn|isLightOn|isLocalized|isManualFire|isMarkedForCollection|isMultiplayer|isMultiplayerSolo|isNil|isNull|isNumber|isObjectHidden|isObjectRTD|isOnRoad|isPipEnabled|isPlayer|isRealTime|isRemoteExecuted|isRemoteExecutedJIP|isServer|isShowing3DIcons|isSimpleObject|isSprintAllowed|isStaminaEnabled|isSteamMission|isStreamFriendlyUIEnabled|isStressDamageEnabled|isText|isTouchingGround|isTurnedOut|isTutHintsEnabled|isUAVConnectable|isUAVConnected|isUIContext|isUniformAllowed|isVehicleCargo|isVehicleRadarOn|isVehicleSensorEnabled|isWalking|isWeaponDeployed|isWeaponRested|itemCargo|items|itemsWithMagazines|join|joinAs|joinAsSilent|joinSilent|joinString|kbAddDatabase|kbAddDatabaseTargets|kbAddTopic|kbHasTopic|kbReact|kbRemoveTopic|kbTell|kbWasSaid|keyImage|keyName|knowsAbout|land|landAt|landResult|language|laserTarget|lbAdd|lbClear|lbColor|lbColorRight|lbCurSel|lbData|lbDelete|lbIsSelected|lbPicture|lbPictureRight|lbSelection|lbSetColor|lbSetColorRight|lbSetCurSel|lbSetData|lbSetPicture|lbSetPictureColor|lbSetPictureColorDisabled|lbSetPictureColorSelected|lbSetPictureRight|lbSetPictureRightColor|lbSetPictureRightColorDisabled|lbSetPictureRightColorSelected|lbSetSelectColor|lbSetSelectColorRight|lbSetSelected|lbSetText|lbSetTextRight|lbSetTooltip|lbSetValue|lbSize|lbSort|lbSortByValue|lbText|lbTextRight|lbValue|leader|leaderboardDeInit|leaderboardGetRows|leaderboardInit|leaderboardRequestRowsFriends|leaderboardRequestRowsGlobal|leaderboardRequestRowsGlobalAroundUser|leaderboardsRequestUploadScore|leaderboardsRequestUploadScoreKeepBest|leaderboardState|leaveVehicle|libraryCredits|libraryDisclaimers|lifeState|lightAttachObject|lightDetachObject|lightIsOn|lightnings|limitSpeed|linearConversion|lineBreak|lineIntersects|lineIntersectsObjs|lineIntersectsSurfaces|lineIntersectsWith|linkItem|list|listObjects|listRemoteTargets|listVehicleSensors|ln|lnbAddArray|lnbAddColumn|lnbAddRow|lnbClear|lnbColor|lnbColorRight|lnbCurSelRow|lnbData|lnbDeleteColumn|lnbDeleteRow|lnbGetColumnsPosition|lnbPicture|lnbPictureRight|lnbSetColor|lnbSetColorRight|lnbSetColumnsPos|lnbSetCurSelRow|lnbSetData|lnbSetPicture|lnbSetPictureColor|lnbSetPictureColorRight|lnbSetPictureColorSelected|lnbSetPictureColorSelectedRight|lnbSetPictureRight|lnbSetText|lnbSetTextRight|lnbSetValue|lnbSize|lnbSort|lnbSortByValue|lnbText|lnbTextRight|lnbValue|load|loadAbs|loadBackpack|loadFile|loadGame|loadIdentity|loadMagazine|loadOverlay|loadStatus|loadUniform|loadVest|local|localize|locationNull|locationPosition|lock|lockCameraTo|lockCargo|lockDriver|locked|lockedCargo|lockedDriver|lockedTurret|lockIdentity|lockTurret|lockWP|log|logEntities|logNetwork|logNetworkTerminate|lookAt|lookAtPos|magazineCargo|magazines|magazinesAllTurrets|magazinesAmmo|magazinesAmmoCargo|magazinesAmmoFull|magazinesDetail|magazinesDetailBackpack|magazinesDetailUniform|magazinesDetailVest|magazinesTurret|magazineTurretAmmo|mapAnimAdd|mapAnimClear|mapAnimCommit|mapAnimDone|mapCenterOnCamera|mapGridPosition|markAsFinishedOnSteam|markerAlpha|markerBrush|markerColor|markerDir|markerPos|markerShape|markerSize|markerText|markerType|max|members|menuAction|menuAdd|menuChecked|menuClear|menuCollapse|menuData|menuDelete|menuEnable|menuEnabled|menuExpand|menuHover|menuPicture|menuSetAction|menuSetCheck|menuSetData|menuSetPicture|menuSetValue|menuShortcut|menuShortcutText|menuSize|menuSort|menuText|menuURL|menuValue|min|mineActive|mineDetectedBy|missionConfigFile|missionDifficulty|missionName|missionNamespace|missionStart|missionVersion|modelToWorld|modelToWorldVisual|modelToWorldVisualWorld|modelToWorldWorld|modParams|moonIntensity|moonPhase|morale|move|move3DENCamera|moveInAny|moveInCargo|moveInCommander|moveInDriver|moveInGunner|moveInTurret|moveObjectToEnd|moveOut|moveTime|moveTo|moveToCompleted|moveToFailed|musicVolume|name|nameSound|nearEntities|nearestBuilding|nearestLocation|nearestLocations|nearestLocationWithDubbing|nearestObject|nearestObjects|nearestTerrainObjects|nearObjects|nearObjectsReady|nearRoads|nearSupplies|nearTargets|needReload|netId|netObjNull|newOverlay|nextMenuItemIndex|nextWeatherChange|nMenuItems|numberOfEnginesRTD|numberToDate|objectCurators|objectFromNetId|objectParent|objNull|objStatus|onBriefingGear|onBriefingGroup|onBriefingNotes|onBriefingPlan|onBriefingTeamSwitch|onCommandModeChanged|onDoubleClick|onEachFrame|onGroupIconClick|onGroupIconOverEnter|onGroupIconOverLeave|onHCGroupSelectionChanged|onMapSingleClick|onPlayerConnected|onPlayerDisconnected|onPreloadFinished|onPreloadStarted|onShowNewObject|onTeamSwitch|openCuratorInterface|openDLCPage|openDSInterface|openMap|openSteamApp|openYoutubeVideo|opfor|orderGetIn|overcast|overcastForecast|owner|param|params|parseNumber|parseSimpleArray|parseText|parsingNamespace|particlesQuality|pi|pickWeaponPool|pitch|pixelGrid|pixelGridBase|pixelGridNoUIScale|pixelH|pixelW|playableSlotsNumber|playableUnits|playAction|playActionNow|player|playerRespawnTime|playerSide|playersNumber|playGesture|playMission|playMove|playMoveNow|playMusic|playScriptedMission|playSound|playSound3D|position|positionCameraToWorld|posScreenToWorld|posWorldToScreen|ppEffectAdjust|ppEffectCommit|ppEffectCommitted|ppEffectCreate|ppEffectDestroy|ppEffectEnable|ppEffectEnabled|ppEffectForceInNVG|precision|preloadCamera|preloadObject|preloadSound|preloadTitleObj|preloadTitleRsc|primaryWeapon|primaryWeaponItems|primaryWeaponMagazine|priority|processDiaryLink|processInitCommands|productVersion|profileName|profileNamespace|profileNameSteam|progressLoadingScreen|progressPosition|progressSetPosition|publicVariable|publicVariableClient|publicVariableServer|pushBack|pushBackUnique|putWeaponPool|queryItemsPool|queryMagazinePool|queryWeaponPool|rad|radioChannelAdd|radioChannelCreate|radioChannelRemove|radioChannelSetCallSign|radioChannelSetLabel|radioVolume|rain|rainbow|random|rank|rankId|rating|rectangular|registeredTasks|registerTask|reload|reloadEnabled|remoteControl|remoteExec|remoteExecCall|remoteExecutedOwner|remove3DENConnection|remove3DENEventHandler|remove3DENLayer|removeAction|removeAll3DENEventHandlers|removeAllActions|removeAllAssignedItems|removeAllContainers|removeAllCuratorAddons|removeAllCuratorCameraAreas|removeAllCuratorEditingAreas|removeAllEventHandlers|removeAllHandgunItems|removeAllItems|removeAllItemsWithMagazines|removeAllMissionEventHandlers|removeAllMPEventHandlers|removeAllMusicEventHandlers|removeAllOwnedMines|removeAllPrimaryWeaponItems|removeAllWeapons|removeBackpack|removeBackpackGlobal|removeCuratorAddons|removeCuratorCameraArea|removeCuratorEditableObjects|removeCuratorEditingArea|removeDrawIcon|removeDrawLinks|removeEventHandler|removeFromRemainsCollector|removeGoggles|removeGroupIcon|removeHandgunItem|removeHeadgear|removeItem|removeItemFromBackpack|removeItemFromUniform|removeItemFromVest|removeItems|removeMagazine|removeMagazineGlobal|removeMagazines|removeMagazinesTurret|removeMagazineTurret|removeMenuItem|removeMissionEventHandler|removeMPEventHandler|removeMusicEventHandler|removeOwnedMine|removePrimaryWeaponItem|removeSecondaryWeaponItem|removeSimpleTask|removeSwitchableUnit|removeTeamMember|removeUniform|removeVest|removeWeapon|removeWeaponAttachmentCargo|removeWeaponCargo|removeWeaponGlobal|removeWeaponTurret|reportRemoteTarget|requiredVersion|resetCamShake|resetSubgroupDirection|resistance|resize|resources|respawnVehicle|restartEditorCamera|reveal|revealMine|reverse|reversedMouseY|roadAt|roadsConnectedTo|roleDescription|ropeAttachedObjects|ropeAttachedTo|ropeAttachEnabled|ropeAttachTo|ropeCreate|ropeCut|ropeDestroy|ropeDetach|ropeEndPosition|ropeLength|ropes|ropeUnwind|ropeUnwound|rotorsForcesRTD|rotorsRpmRTD|round|runInitScript|safeZoneH|safeZoneW|safeZoneWAbs|safeZoneX|safeZoneXAbs|safeZoneY|save3DENInventory|saveGame|saveIdentity|saveJoysticks|saveOverlay|saveProfileNamespace|saveStatus|saveVar|savingEnabled|say|say2D|say3D|score|scoreSide|screenshot|screenToWorld|scriptDone|scriptName|scriptNull|scudState|secondaryWeapon|secondaryWeaponItems|secondaryWeaponMagazine|select|selectBestPlaces|selectDiarySubject|selectedEditorObjects|selectEditorObject|selectionNames|selectionPosition|selectLeader|selectMax|selectMin|selectNoPlayer|selectPlayer|selectRandom|selectRandomWeighted|selectWeapon|selectWeaponTurret|sendAUMessage|sendSimpleCommand|sendTask|sendTaskResult|sendUDPMessage|serverCommand|serverCommandAvailable|serverCommandExecutable|serverName|serverTime|set|set3DENAttribute|set3DENAttributes|set3DENGrid|set3DENIconsVisible|set3DENLayer|set3DENLinesVisible|set3DENLogicType|set3DENMissionAttribute|set3DENMissionAttributes|set3DENModelsVisible|set3DENObjectType|set3DENSelected|setAccTime|setActualCollectiveRTD|setAirplaneThrottle|setAirportSide|setAmmo|setAmmoCargo|setAmmoOnPylon|setAnimSpeedCoef|setAperture|setApertureNew|setArmoryPoints|setAttributes|setAutonomous|setBehaviour|setBleedingRemaining|setBrakesRTD|setCameraInterest|setCamShakeDefParams|setCamShakeParams|setCamUseTI|setCaptive|setCenterOfMass|setCollisionLight|setCombatMode|setCompassOscillation|setConvoySeparation|setCuratorCameraAreaCeiling|setCuratorCoef|setCuratorEditingAreaType|setCuratorWaypointCost|setCurrentChannel|setCurrentTask|setCurrentWaypoint|setCustomAimCoef|setCustomWeightRTD|setDamage|setDammage|setDate|setDebriefingText|setDefaultCamera|setDestination|setDetailMapBlendPars|setDir|setDirection|setDrawIcon|setDriveOnPath|setDropInterval|setDynamicSimulationDistance|setDynamicSimulationDistanceCoef|setEditorMode|setEditorObjectScope|setEffectCondition|setEngineRpmRTD|setFace|setFaceAnimation|setFatigue|setFeatureType|setFlagAnimationPhase|setFlagOwner|setFlagSide|setFlagTexture|setFog|setForceGeneratorRTD|setFormation|setFormationTask|setFormDir|setFriend|setFromEditor|setFSMVariable|setFuel|setFuelCargo|setGroupIcon|setGroupIconParams|setGroupIconsSelectable|setGroupIconsVisible|setGroupId|setGroupIdGlobal|setGroupOwner|setGusts|setHideBehind|setHit|setHitIndex|setHitPointDamage|setHorizonParallaxCoef|setHUDMovementLevels|setIdentity|setImportance|setInfoPanel|setLeader|setLightAmbient|setLightAttenuation|setLightBrightness|setLightColor|setLightDayLight|setLightFlareMaxDistance|setLightFlareSize|setLightIntensity|setLightnings|setLightUseFlare|setLocalWindParams|setMagazineTurretAmmo|setMarkerAlpha|setMarkerAlphaLocal|setMarkerBrush|setMarkerBrushLocal|setMarkerColor|setMarkerColorLocal|setMarkerDir|setMarkerDirLocal|setMarkerPos|setMarkerPosLocal|setMarkerShape|setMarkerShapeLocal|setMarkerSize|setMarkerSizeLocal|setMarkerText|setMarkerTextLocal|setMarkerType|setMarkerTypeLocal|setMass|setMimic|setMousePosition|setMusicEffect|setMusicEventHandler|setName|setNameSound|setObjectArguments|setObjectMaterial|setObjectMaterialGlobal|setObjectProxy|setObjectTexture|setObjectTextureGlobal|setObjectViewDistance|setOvercast|setOwner|setOxygenRemaining|setParticleCircle|setParticleClass|setParticleFire|setParticleParams|setParticleRandom|setPilotCameraDirection|setPilotCameraRotation|setPilotCameraTarget|setPilotLight|setPiPEffect|setPitch|setPlateNumber|setPlayable|setPlayerRespawnTime|setPos|setPosASL|setPosASL2|setPosASLW|setPosATL|setPosition|setPosWorld|setPylonLoadOut|setPylonsPriority|setRadioMsg|setRain|setRainbow|setRandomLip|setRank|setRectangular|setRepairCargo|setRotorBrakeRTD|setShadowDistance|setShotParents|setSide|setSimpleTaskAlwaysVisible|setSimpleTaskCustomData|setSimpleTaskDescription|setSimpleTaskDestination|setSimpleTaskTarget|setSimpleTaskType|setSimulWeatherLayers|setSize|setSkill|setSlingLoad|setSoundEffect|setSpeaker|setSpeech|setSpeedMode|setStamina|setStaminaScheme|setStatValue|setSuppression|setSystemOfUnits|setTargetAge|setTaskMarkerOffset|setTaskResult|setTaskState|setTerrainGrid|setText|setTimeMultiplier|setTitleEffect|setToneMapping|setToneMappingParams|setTrafficDensity|setTrafficDistance|setTrafficGap|setTrafficSpeed|setTriggerActivation|setTriggerArea|setTriggerStatements|setTriggerText|setTriggerTimeout|setTriggerType|setType|setUnconscious|setUnitAbility|setUnitLoadout|setUnitPos|setUnitPosWeak|setUnitRank|setUnitRecoilCoefficient|setUnitTrait|setUnloadInCombat|setUserActionText|setUserMFDText|setUserMFDValue|setVariable|setVectorDir|setVectorDirAndUp|setVectorUp|setVehicleAmmo|setVehicleAmmoDef|setVehicleArmor|setVehicleCargo|setVehicleId|setVehicleInit|setVehicleLock|setVehiclePosition|setVehicleRadar|setVehicleReceiveRemoteTargets|setVehicleReportOwnPosition|setVehicleReportRemoteTargets|setVehicleTIPars|setVehicleVarName|setVelocity|setVelocityModelSpace|setVelocityTransformation|setViewDistance|setVisibleIfTreeCollapsed|setWantedRpmRTD|setWaves|setWaypointBehaviour|setWaypointCombatMode|setWaypointCompletionRadius|setWaypointDescription|setWaypointForceBehaviour|setWaypointFormation|setWaypointHousePosition|setWaypointLoiterRadius|setWaypointLoiterType|setWaypointName|setWaypointPosition|setWaypointScript|setWaypointSpeed|setWaypointStatements|setWaypointTimeout|setWaypointType|setWaypointVisible|setWeaponReloadingTime|setWind|setWindDir|setWindForce|setWindStr|setWingForceScaleRTD|setWPPos|show3DIcons|showChat|showCinemaBorder|showCommandingMenu|showCompass|showCuratorCompass|showGPS|showHUD|showLegend|showMap|shownArtilleryComputer|shownChat|shownCompass|shownCuratorCompass|showNewEditorObject|shownGPS|shownHUD|shownMap|shownPad|shownRadio|shownScoretable|shownUAVFeed|shownWarrant|shownWatch|showPad|showRadio|showScoretable|showSubtitles|showUAVFeed|showWarrant|showWatch|showWaypoint|showWaypoints|side|sideAmbientLife|sideChat|sideEmpty|sideEnemy|sideFriendly|sideLogic|sideRadio|sideUnknown|simpleTasks|simulationEnabled|simulCloudDensity|simulCloudOcclusion|simulInClouds|simulWeatherSync|sin|size|sizeOf|skill|skillFinal|skipTime|sleep|sliderPosition|sliderRange|sliderSetPosition|sliderSetRange|sliderSetSpeed|sliderSpeed|slingLoadAssistantShown|soldierMagazines|someAmmo|sort|soundVolume|speaker|speed|speedMode|splitString|sqrt|squadParams|stance|startLoadingScreen|stop|stopEngineRTD|stopped|str|sunOrMoon|supportInfo|suppressFor|surfaceIsWater|surfaceNormal|surfaceType|swimInDepth|switchableUnits|switchAction|switchCamera|switchGesture|switchLight|switchMove|synchronizedObjects|synchronizedTriggers|synchronizedWaypoints|synchronizeObjectsAdd|synchronizeObjectsRemove|synchronizeTrigger|synchronizeWaypoint|systemChat|systemOfUnits|tan|targetKnowledge|targets|targetsAggregate|targetsQuery|taskAlwaysVisible|taskChildren|taskCompleted|taskCustomData|taskDescription|taskDestination|taskHint|taskMarkerOffset|taskNull|taskParent|taskResult|taskState|taskType|teamMember|teamMemberNull|teamName|teams|teamSwitch|teamSwitchEnabled|teamType|terminate|terrainIntersect|terrainIntersectASL|terrainIntersectAtASL|text|textLog|textLogFormat|tg|time|timeMultiplier|titleCut|titleFadeOut|titleObj|titleRsc|titleText|toArray|toFixed|toLower|toString|toUpper|triggerActivated|triggerActivation|triggerArea|triggerAttachedVehicle|triggerAttachObject|triggerAttachVehicle|triggerDynamicSimulation|triggerStatements|triggerText|triggerTimeout|triggerTimeoutCurrent|triggerType|turretLocal|turretOwner|turretUnit|tvAdd|tvClear|tvCollapse|tvCollapseAll|tvCount|tvCurSel|tvData|tvDelete|tvExpand|tvExpandAll|tvPicture|tvPictureRight|tvSetColor|tvSetCurSel|tvSetData|tvSetPicture|tvSetPictureColor|tvSetPictureColorDisabled|tvSetPictureColorSelected|tvSetPictureRight|tvSetPictureRightColor|tvSetPictureRightColorDisabled|tvSetPictureRightColorSelected|tvSetSelectColor|tvSetText|tvSetTooltip|tvSetValue|tvSort|tvSortByValue|tvText|tvTooltip|tvValue|type|typeName|typeOf|UAVControl|uiNamespace|uiSleep|unassignCurator|unassignItem|unassignTeam|unassignVehicle|underwater|uniform|uniformContainer|uniformItems|uniformMagazines|unitAddons|unitAimPosition|unitAimPositionVisual|unitBackpack|unitIsUAV|unitPos|unitReady|unitRecoilCoefficient|units|unitsBelowHeight|unlinkItem|unlockAchievement|unregisterTask|updateDrawIcon|updateMenuItem|updateObjectTree|useAIOperMapObstructionTest|useAISteeringComponent|useAudioTimeForMoves|userInputDisabled|vectorAdd|vectorCos|vectorCrossProduct|vectorDiff|vectorDir|vectorDirVisual|vectorDistance|vectorDistanceSqr|vectorDotProduct|vectorFromTo|vectorMagnitude|vectorMagnitudeSqr|vectorModelToWorld|vectorModelToWorldVisual|vectorMultiply|vectorNormalized|vectorUp|vectorUpVisual|vectorWorldToModel|vectorWorldToModelVisual|vehicle|vehicleCargoEnabled|vehicleChat|vehicleRadio|vehicleReceiveRemoteTargets|vehicleReportOwnPosition|vehicleReportRemoteTargets|vehicles|vehicleVarName|velocity|velocityModelSpace|verifySignature|vest|vestContainer|vestItems|vestMagazines|viewDistance|visibleCompass|visibleGPS|visibleMap|visiblePosition|visiblePositionASL|visibleScoretable|visibleWatch|waitUntil|waves|waypointAttachedObject|waypointAttachedVehicle|waypointAttachObject|waypointAttachVehicle|waypointBehaviour|waypointCombatMode|waypointCompletionRadius|waypointDescription|waypointForceBehaviour|waypointFormation|waypointHousePosition|waypointLoiterRadius|waypointLoiterType|waypointName|waypointPosition|waypoints|waypointScript|waypointsEnabledUAV|waypointShow|waypointSpeed|waypointStatements|waypointTimeout|waypointTimeoutCurrent|waypointType|waypointVisible|weaponAccessories|weaponAccessoriesCargo|weaponCargo|weaponDirection|weaponInertia|weaponLowered|weapons|weaponsItems|weaponsItemsCargo|weaponState|weaponsTurret|weightRTD|west|WFSideText|wind|windDir|windRTD|windStr|wingsForcesRTD|worldName|worldSize|worldToModel|worldToModelVisual|worldToScreen)\b/i,number:/(?:\$|\b0x)[\da-f]+\b|(?:\B\.\d+|\b\d+(?:\.\d+)?)(?:e[+-]?\d+)?\b/i,operator:/##|>>|&&|\|\||[!=<>]=?|[-+*/%#^]|\b(?:and|mod|not|or)\b/i,"magic-variable":{pattern:/\b(?:this|thisList|thisTrigger|_exception|_fnc_scriptName|_fnc_scriptNameParent|_forEachIndex|_this|_thisEventHandler|_thisFSM|_thisScript|_x)\b/i,alias:"keyword"},constant:/\bDIK(?:_[a-z\d]+)+\b/i}),Prism.languages.insertBefore("sqf","string",{macro:{pattern:/(^[ \t]*)#[a-z](?:[^\r\n\\]|\\(?:\r\n|[\s\S]))*/im,lookbehind:!0,greedy:!0,alias:"property",inside:{directive:{pattern:/#[a-z]+\b/i,alias:"keyword"},comment:Prism.languages.sqf.comment}}}),delete Prism.languages.sqf["class-name"]},5266:function(){Prism.languages.sql={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|(?:--|\/\/|#).*)/,lookbehind:!0},variable:[{pattern:/@(["'`])(?:\\[\s\S]|(?!\1)[^\\])+\1/,greedy:!0},/@[\w.$]+/],string:{pattern:/(^|[^@\\])("|')(?:\\[\s\S]|(?!\2)[^\\]|\2\2)*\2/,greedy:!0,lookbehind:!0},identifier:{pattern:/(^|[^@\\])`(?:\\[\s\S]|[^`\\]|``)*`/,greedy:!0,lookbehind:!0,inside:{punctuation:/^`|`$/}},function:/\b(?:AVG|COUNT|FIRST|FORMAT|LAST|LCASE|LEN|MAX|MID|MIN|MOD|NOW|ROUND|SUM|UCASE)(?=\s*\()/i,keyword:/\b(?:ACTION|ADD|AFTER|ALGORITHM|ALL|ALTER|ANALYZE|ANY|APPLY|AS|ASC|AUTHORIZATION|AUTO_INCREMENT|BACKUP|BDB|BEGIN|BERKELEYDB|BIGINT|BINARY|BIT|BLOB|BOOL|BOOLEAN|BREAK|BROWSE|BTREE|BULK|BY|CALL|CASCADED?|CASE|CHAIN|CHAR(?:ACTER|SET)?|CHECK(?:POINT)?|CLOSE|CLUSTERED|COALESCE|COLLATE|COLUMNS?|COMMENT|COMMIT(?:TED)?|COMPUTE|CONNECT|CONSISTENT|CONSTRAINT|CONTAINS(?:TABLE)?|CONTINUE|CONVERT|CREATE|CROSS|CURRENT(?:_DATE|_TIME|_TIMESTAMP|_USER)?|CURSOR|CYCLE|DATA(?:BASES?)?|DATE(?:TIME)?|DAY|DBCC|DEALLOCATE|DEC|DECIMAL|DECLARE|DEFAULT|DEFINER|DELAYED|DELETE|DELIMITERS?|DENY|DESC|DESCRIBE|DETERMINISTIC|DISABLE|DISCARD|DISK|DISTINCT|DISTINCTROW|DISTRIBUTED|DO|DOUBLE|DROP|DUMMY|DUMP(?:FILE)?|DUPLICATE|ELSE(?:IF)?|ENABLE|ENCLOSED|END|ENGINE|ENUM|ERRLVL|ERRORS|ESCAPED?|EXCEPT|EXEC(?:UTE)?|EXISTS|EXIT|EXPLAIN|EXTENDED|FETCH|FIELDS|FILE|FILLFACTOR|FIRST|FIXED|FLOAT|FOLLOWING|FOR(?: EACH ROW)?|FORCE|FOREIGN|FREETEXT(?:TABLE)?|FROM|FULL|FUNCTION|GEOMETRY(?:COLLECTION)?|GLOBAL|GOTO|GRANT|GROUP|HANDLER|HASH|HAVING|HOLDLOCK|HOUR|IDENTITY(?:COL|_INSERT)?|IF|IGNORE|IMPORT|INDEX|INFILE|INNER|INNODB|INOUT|INSERT|INT|INTEGER|INTERSECT|INTERVAL|INTO|INVOKER|ISOLATION|ITERATE|JOIN|KEYS?|KILL|LANGUAGE|LAST|LEAVE|LEFT|LEVEL|LIMIT|LINENO|LINES|LINESTRING|LOAD|LOCAL|LOCK|LONG(?:BLOB|TEXT)|LOOP|MATCH(?:ED)?|MEDIUM(?:BLOB|INT|TEXT)|MERGE|MIDDLEINT|MINUTE|MODE|MODIFIES|MODIFY|MONTH|MULTI(?:LINESTRING|POINT|POLYGON)|NATIONAL|NATURAL|NCHAR|NEXT|NO|NONCLUSTERED|NULLIF|NUMERIC|OFF?|OFFSETS?|ON|OPEN(?:DATASOURCE|QUERY|ROWSET)?|OPTIMIZE|OPTION(?:ALLY)?|ORDER|OUT(?:ER|FILE)?|OVER|PARTIAL|PARTITION|PERCENT|PIVOT|PLAN|POINT|POLYGON|PRECEDING|PRECISION|PREPARE|PREV|PRIMARY|PRINT|PRIVILEGES|PROC(?:EDURE)?|PUBLIC|PURGE|QUICK|RAISERROR|READS?|REAL|RECONFIGURE|REFERENCES|RELEASE|RENAME|REPEAT(?:ABLE)?|REPLACE|REPLICATION|REQUIRE|RESIGNAL|RESTORE|RESTRICT|RETURN(?:ING|S)?|REVOKE|RIGHT|ROLLBACK|ROUTINE|ROW(?:COUNT|GUIDCOL|S)?|RTREE|RULE|SAVE(?:POINT)?|SCHEMA|SECOND|SELECT|SERIAL(?:IZABLE)?|SESSION(?:_USER)?|SET(?:USER)?|SHARE|SHOW|SHUTDOWN|SIMPLE|SMALLINT|SNAPSHOT|SOME|SONAME|SQL|START(?:ING)?|STATISTICS|STATUS|STRIPED|SYSTEM_USER|TABLES?|TABLESPACE|TEMP(?:ORARY|TABLE)?|TERMINATED|TEXT(?:SIZE)?|THEN|TIME(?:STAMP)?|TINY(?:BLOB|INT|TEXT)|TOP?|TRAN(?:SACTIONS?)?|TRIGGER|TRUNCATE|TSEQUAL|TYPES?|UNBOUNDED|UNCOMMITTED|UNDEFINED|UNION|UNIQUE|UNLOCK|UNPIVOT|UNSIGNED|UPDATE(?:TEXT)?|USAGE|USE|USER|USING|VALUES?|VAR(?:BINARY|CHAR|CHARACTER|YING)|VIEW|WAITFOR|WARNINGS|WHEN|WHERE|WHILE|WITH(?: ROLLUP|IN)?|WORK|WRITE(?:TEXT)?|YEAR)\b/i,boolean:/\b(?:FALSE|NULL|TRUE)\b/i,number:/\b0x[\da-f]+\b|\b\d+(?:\.\d*)?|\B\.\d+\b/i,operator:/[-+*\/=%^~]|&&?|\|\|?|!=?|<(?:=>?|<|>)?|>[>=]?|\b(?:AND|BETWEEN|DIV|ILIKE|IN|IS|LIKE|NOT|OR|REGEXP|RLIKE|SOUNDS LIKE|XOR)\b/i,punctuation:/[;[\]()`,.]/}},4229:function(){Prism.languages.squirrel=Prism.languages.extend("clike",{comment:[Prism.languages.clike["comment"][0],{pattern:/(^|[^\\:])(?:\/\/|#).*/,lookbehind:!0,greedy:!0}],string:{pattern:/(^|[^\\"'@])(?:@"(?:[^"]|"")*"(?!")|"(?:[^\\\r\n"]|\\.)*")/,lookbehind:!0,greedy:!0},"class-name":{pattern:/(\b(?:class|enum|extends|instanceof)\s+)\w+(?:\.\w+)*/,lookbehind:!0,inside:{punctuation:/\./}},keyword:/\b(?:__FILE__|__LINE__|base|break|case|catch|class|clone|const|constructor|continue|default|delete|else|enum|extends|for|foreach|function|if|in|instanceof|local|null|resume|return|static|switch|this|throw|try|typeof|while|yield)\b/,number:/\b(?:0x[0-9a-fA-F]+|\d+(?:\.(?:\d+|[eE][+-]?\d+))?)\b/,operator:/\+\+|--|<=>|<[-<]|>>>?|&&?|\|\|?|[-+*/%!=<>]=?|[~^]|::?/,punctuation:/[(){}\[\],;.]/}),Prism.languages.insertBefore("squirrel","string",{char:{pattern:/(^|[^\\"'])'(?:[^\\']|\\(?:[xuU][0-9a-fA-F]{0,8}|[\s\S]))'/,lookbehind:!0,greedy:!0}}),Prism.languages.insertBefore("squirrel","operator",{"attribute-punctuation":{pattern:/<\/|\/>/,alias:"important"},lambda:{pattern:/@(?=\()/,alias:"operator"}})},5683:function(){(function(e){var t=/\b(?:algebra_solver|algebra_solver_newton|integrate_1d|integrate_ode|integrate_ode_bdf|integrate_ode_rk45|map_rect|ode_(?:adams|bdf|ckrk|rk45)(?:_tol)?|ode_adjoint_tol_ctl|reduce_sum|reduce_sum_static)\b/;e.languages.stan={comment:/\/\/.*|\/\*[\s\S]*?\*\/|#(?!include).*/,string:{pattern:/"[\x20\x21\x23-\x5B\x5D-\x7E]*"/,greedy:!0},directive:{pattern:/^([ \t]*)#include\b.*/m,lookbehind:!0,alias:"property"},"function-arg":{pattern:RegExp("("+t.source+/\s*\(\s*/.source+")"+/[a-zA-Z]\w*/.source),lookbehind:!0,alias:"function"},constraint:{pattern:/(\b(?:int|matrix|real|row_vector|vector)\s*)<[^<>]*>/,lookbehind:!0,inside:{expression:{pattern:/(=\s*)\S(?:\S|\s+(?!\s))*?(?=\s*(?:>$|,\s*\w+\s*=))/,lookbehind:!0,inside:null},property:/\b[a-z]\w*(?=\s*=)/i,operator:/=/,punctuation:/^<|>$|,/}},keyword:[{pattern:/\bdata(?=\s*\{)|\b(?:functions|generated|model|parameters|quantities|transformed)\b/,alias:"program-block"},/\b(?:array|break|cholesky_factor_corr|cholesky_factor_cov|complex|continue|corr_matrix|cov_matrix|data|else|for|if|in|increment_log_prob|int|matrix|ordered|positive_ordered|print|real|reject|return|row_vector|simplex|target|unit_vector|vector|void|while)\b/,t],function:/\b[a-z]\w*(?=\s*\()/i,number:/(?:\b\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\B\.\d+(?:_\d+)*)(?:E[+-]?\d+(?:_\d+)*)?i?(?!\w)/i,boolean:/\b(?:false|true)\b/,operator:/<-|\.[*/]=?|\|\|?|&&|[!=<>+\-*/]=?|['^%~?:]/,punctuation:/[()\[\]{},;]/},e.languages.stan.constraint.inside.expression.inside=e.languages.stan})(Prism)},9031:function(){Prism.languages.stata={comment:[{pattern:/(^[ \t]*)\*.*/m,lookbehind:!0,greedy:!0},{pattern:/(^|\s)\/\/.*|\/\*[\s\S]*?\*\//,lookbehind:!0,greedy:!0}],"string-literal":{pattern:/"[^"\r\n]*"|[‘`']".*?"[’`']/,greedy:!0,inside:{interpolation:{pattern:/\$\{[^{}]*\}|[‘`']\w[^’`'\r\n]*[’`']/,inside:{punctuation:/^\$\{|\}$/,expression:{pattern:/[\s\S]+/,inside:null}}},string:/[\s\S]+/}},mata:{pattern:/(^[ \t]*mata[ \t]*:)[\s\S]+?(?=^end\b)/m,lookbehind:!0,greedy:!0,alias:"language-mata",inside:Prism.languages.mata},java:{pattern:/(^[ \t]*java[ \t]*:)[\s\S]+?(?=^end\b)/m,lookbehind:!0,greedy:!0,alias:"language-java",inside:Prism.languages.java},python:{pattern:/(^[ \t]*python[ \t]*:)[\s\S]+?(?=^end\b)/m,lookbehind:!0,greedy:!0,alias:"language-python",inside:Prism.languages.python},command:{pattern:/(^[ \t]*(?:\.[ \t]+)?(?:(?:bayes|bootstrap|by|bysort|capture|collect|fmm|fp|frame|jackknife|mfp|mi|nestreg|noisily|permute|quietly|rolling|simulate|statsby|stepwise|svy|version|xi)\b[^:\r\n]*:[ \t]*|(?:capture|noisily|quietly|version)[ \t]+)?)[a-zA-Z]\w*/m,lookbehind:!0,greedy:!0,alias:"keyword"},variable:/\$\w+|[‘`']\w[^’`'\r\n]*[’`']/,keyword:/\b(?:bayes|bootstrap|by|bysort|capture|clear|collect|fmm|fp|frame|if|in|jackknife|mi[ \t]+estimate|mfp|nestreg|noisily|of|permute|quietly|rolling|simulate|sort|statsby|stepwise|svy|varlist|version|xi)\b/,boolean:/\b(?:off|on)\b/,number:/\b\d+(?:\.\d+)?\b|\B\.\d+/,function:/\b[a-z_]\w*(?=\()/i,operator:/\+\+|--|##?|[<>!=~]=?|[+\-*^&|/]/,punctuation:/[(){}[\],:]/},Prism.languages.stata["string-literal"].inside.interpolation.inside.expression.inside=Prism.languages.stata},4906:function(){(function(e){var t={pattern:/(\b\d+)(?:%|[a-z]+)/,lookbehind:!0},n={pattern:/(^|[^\w.-])-?(?:\d+(?:\.\d+)?|\.\d+)/,lookbehind:!0},r={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|\/\/.*)/,lookbehind:!0},url:{pattern:/\burl\((["']?).*?\1\)/i,greedy:!0},string:{pattern:/("|')(?:(?!\1)[^\\\r\n]|\\(?:\r\n|[\s\S]))*\1/,greedy:!0},interpolation:null,func:null,important:/\B!(?:important|optional)\b/i,keyword:{pattern:/(^|\s+)(?:(?:else|for|if|return|unless)(?=\s|$)|@[\w-]+)/,lookbehind:!0},hexcode:/#[\da-f]{3,6}/i,color:[/\b(?:AliceBlue|AntiqueWhite|Aqua|Aquamarine|Azure|Beige|Bisque|Black|BlanchedAlmond|Blue|BlueViolet|Brown|BurlyWood|CadetBlue|Chartreuse|Chocolate|Coral|CornflowerBlue|Cornsilk|Crimson|Cyan|DarkBlue|DarkCyan|DarkGoldenRod|DarkGr[ae]y|DarkGreen|DarkKhaki|DarkMagenta|DarkOliveGreen|DarkOrange|DarkOrchid|DarkRed|DarkSalmon|DarkSeaGreen|DarkSlateBlue|DarkSlateGr[ae]y|DarkTurquoise|DarkViolet|DeepPink|DeepSkyBlue|DimGr[ae]y|DodgerBlue|FireBrick|FloralWhite|ForestGreen|Fuchsia|Gainsboro|GhostWhite|Gold|GoldenRod|Gr[ae]y|Green|GreenYellow|HoneyDew|HotPink|IndianRed|Indigo|Ivory|Khaki|Lavender|LavenderBlush|LawnGreen|LemonChiffon|LightBlue|LightCoral|LightCyan|LightGoldenRodYellow|LightGr[ae]y|LightGreen|LightPink|LightSalmon|LightSeaGreen|LightSkyBlue|LightSlateGr[ae]y|LightSteelBlue|LightYellow|Lime|LimeGreen|Linen|Magenta|Maroon|MediumAquaMarine|MediumBlue|MediumOrchid|MediumPurple|MediumSeaGreen|MediumSlateBlue|MediumSpringGreen|MediumTurquoise|MediumVioletRed|MidnightBlue|MintCream|MistyRose|Moccasin|NavajoWhite|Navy|OldLace|Olive|OliveDrab|Orange|OrangeRed|Orchid|PaleGoldenRod|PaleGreen|PaleTurquoise|PaleVioletRed|PapayaWhip|PeachPuff|Peru|Pink|Plum|PowderBlue|Purple|Red|RosyBrown|RoyalBlue|SaddleBrown|Salmon|SandyBrown|SeaGreen|SeaShell|Sienna|Silver|SkyBlue|SlateBlue|SlateGr[ae]y|Snow|SpringGreen|SteelBlue|Tan|Teal|Thistle|Tomato|Transparent|Turquoise|Violet|Wheat|White|WhiteSmoke|Yellow|YellowGreen)\b/i,{pattern:/\b(?:hsl|rgb)\(\s*\d{1,3}\s*,\s*\d{1,3}%?\s*,\s*\d{1,3}%?\s*\)\B|\b(?:hsl|rgb)a\(\s*\d{1,3}\s*,\s*\d{1,3}%?\s*,\s*\d{1,3}%?\s*,\s*(?:0|0?\.\d+|1)\s*\)\B/i,inside:{unit:t,number:n,function:/[\w-]+(?=\()/,punctuation:/[(),]/}}],entity:/\\[\da-f]{1,8}/i,unit:t,boolean:/\b(?:false|true)\b/,operator:[/~|[+!\/%<>?=]=?|[-:]=|\*[*=]?|\.{2,3}|&&|\|\||\B-\B|\b(?:and|in|is(?: a| defined| not|nt)?|not|or)\b/],number:n,punctuation:/[{}()\[\];:,]/};r["interpolation"]={pattern:/\{[^\r\n}:]+\}/,alias:"variable",inside:{delimiter:{pattern:/^\{|\}$/,alias:"punctuation"},rest:r}},r["func"]={pattern:/[\w-]+\([^)]*\).*/,inside:{function:/^[^(]+/,rest:r}},e.languages.stylus={"atrule-declaration":{pattern:/(^[ \t]*)@.+/m,lookbehind:!0,inside:{atrule:/^@[\w-]+/,rest:r}},"variable-declaration":{pattern:/(^[ \t]*)[\w$-]+\s*.?=[ \t]*(?:\{[^{}]*\}|\S.*|$)/m,lookbehind:!0,inside:{variable:/^\S+/,rest:r}},statement:{pattern:/(^[ \t]*)(?:else|for|if|return|unless)[ \t].+/m,lookbehind:!0,inside:{keyword:/^\S+/,rest:r}},"property-declaration":{pattern:/((?:^|\{)([ \t]*))(?:[\w-]|\{[^}\r\n]+\})+(?:\s*:\s*|[ \t]+)(?!\s)[^{\r\n]*(?:;|[^{\r\n,]$(?!(?:\r?\n|\r)(?:\{|\2[ \t])))/m,lookbehind:!0,inside:{property:{pattern:/^[^\s:]+/,inside:{interpolation:r.interpolation}},rest:r}},selector:{pattern:/(^[ \t]*)(?:(?=\S)(?:[^{}\r\n:()]|::?[\w-]+(?:\([^)\r\n]*\)|(?![\w-]))|\{[^}\r\n]+\})+)(?:(?:\r?\n|\r)(?:\1(?:(?=\S)(?:[^{}\r\n:()]|::?[\w-]+(?:\([^)\r\n]*\)|(?![\w-]))|\{[^}\r\n]+\})+)))*(?:,$|\{|(?=(?:\r?\n|\r)(?:\{|\1[ \t])))/m,lookbehind:!0,inside:{interpolation:r.interpolation,comment:r.comment,punctuation:/[{},]/}},func:r.func,string:r.string,comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|\/\/.*)/,lookbehind:!0,greedy:!0},interpolation:r.interpolation,punctuation:/[{}()\[\];:.]/}})(Prism)},8571:function(){Prism.languages.supercollider={comment:{pattern:/\/\/.*|\/\*(?:[^*/]|\*(?!\/)|\/(?!\*)|\/\*(?:[^*]|\*(?!\/))*\*\/)*\*\//,greedy:!0},string:{pattern:/(^|[^\\])"(?:[^"\\]|\\[\s\S])*"/,lookbehind:!0,greedy:!0},char:{pattern:/\$(?:[^\\\r\n]|\\.)/,greedy:!0},symbol:{pattern:/(^|[^\\])'(?:[^'\\]|\\[\s\S])*'|\\\w+/,lookbehind:!0,greedy:!0},keyword:/\b(?:_|arg|classvar|const|nil|var|while)\b/,boolean:/\b(?:false|true)\b/,label:{pattern:/\b[a-z_]\w*(?=\s*:)/,alias:"property"},number:/\b(?:inf|pi|0x[0-9a-fA-F]+|\d+(?:\.\d+)?(?:[eE][+-]?\d+)?(?:pi)?|\d+r[0-9a-zA-Z]+(?:\.[0-9a-zA-Z]+)?|\d+[sb]{1,4}\d*)\b/,"class-name":/\b[A-Z]\w*\b/,operator:/\.{2,3}|#(?![[{])|&&|[!=]==?|\+>>|\+{1,3}|-[->]|=>|>>|\?\?|@\|?@|\|(?:@|[!=]=)?\||!\?|<[!=>]|\*{1,2}|<{2,3}\*?|[-!%&/<>?@|=`]/,punctuation:/[{}()[\].:,;]|#[[{]/},Prism.languages.sclang=Prism.languages.supercollider},874:function(){Prism.languages.swift={comment:{pattern:/(^|[^\\:])(?:\/\/.*|\/\*(?:[^/*]|\/(?!\*)|\*(?!\/)|\/\*(?:[^*]|\*(?!\/))*\*\/)*\*\/)/,lookbehind:!0,greedy:!0},"string-literal":[{pattern:RegExp(/(^|[^"#])/.source+"(?:"+/"(?:\\(?:\((?:[^()]|\([^()]*\))*\)|\r\n|[^(])|[^\\\r\n"])*"/.source+"|"+/"""(?:\\(?:\((?:[^()]|\([^()]*\))*\)|[^(])|[^\\"]|"(?!""))*"""/.source+")"+/(?!["#])/.source),lookbehind:!0,greedy:!0,inside:{interpolation:{pattern:/(\\\()(?:[^()]|\([^()]*\))*(?=\))/,lookbehind:!0,inside:null},"interpolation-punctuation":{pattern:/^\)|\\\($/,alias:"punctuation"},punctuation:/\\(?=[\r\n])/,string:/[\s\S]+/}},{pattern:RegExp(/(^|[^"#])(#+)/.source+"(?:"+/"(?:\\(?:#+\((?:[^()]|\([^()]*\))*\)|\r\n|[^#])|[^\\\r\n])*?"/.source+"|"+/"""(?:\\(?:#+\((?:[^()]|\([^()]*\))*\)|[^#])|[^\\])*?"""/.source+")\\2"),lookbehind:!0,greedy:!0,inside:{interpolation:{pattern:/(\\#+\()(?:[^()]|\([^()]*\))*(?=\))/,lookbehind:!0,inside:null},"interpolation-punctuation":{pattern:/^\)|\\#+\($/,alias:"punctuation"},string:/[\s\S]+/}}],directive:{pattern:RegExp(/#/.source+"(?:"+/(?:elseif|if)\b/.source+"(?:[ \t]*"+/(?:![ \t]*)?(?:\b\w+\b(?:[ \t]*\((?:[^()]|\([^()]*\))*\))?|\((?:[^()]|\([^()]*\))*\))(?:[ \t]*(?:&&|\|\|))?/.source+")+|"+/(?:else|endif)\b/.source+")"),alias:"property",inside:{"directive-name":/^#\w+/,boolean:/\b(?:false|true)\b/,number:/\b\d+(?:\.\d+)*\b/,operator:/!|&&|\|\||[<>]=?/,punctuation:/[(),]/}},literal:{pattern:/#(?:colorLiteral|column|dsohandle|file(?:ID|Literal|Path)?|function|imageLiteral|line)\b/,alias:"constant"},"other-directive":{pattern:/#\w+\b/,alias:"property"},attribute:{pattern:/@\w+/,alias:"atrule"},"function-definition":{pattern:/(\bfunc\s+)\w+/,lookbehind:!0,alias:"function"},label:{pattern:/\b(break|continue)\s+\w+|\b[a-zA-Z_]\w*(?=\s*:\s*(?:for|repeat|while)\b)/,lookbehind:!0,alias:"important"},keyword:/\b(?:Any|Protocol|Self|Type|actor|as|assignment|associatedtype|associativity|async|await|break|case|catch|class|continue|convenience|default|defer|deinit|didSet|do|dynamic|else|enum|extension|fallthrough|fileprivate|final|for|func|get|guard|higherThan|if|import|in|indirect|infix|init|inout|internal|is|isolated|lazy|left|let|lowerThan|mutating|none|nonisolated|nonmutating|open|operator|optional|override|postfix|precedencegroup|prefix|private|protocol|public|repeat|required|rethrows|return|right|safe|self|set|some|static|struct|subscript|super|switch|throw|throws|try|typealias|unowned|unsafe|var|weak|where|while|willSet)\b/,boolean:/\b(?:false|true)\b/,nil:{pattern:/\bnil\b/,alias:"constant"},"short-argument":/\$\d+\b/,omit:{pattern:/\b_\b/,alias:"keyword"},number:/\b(?:[\d_]+(?:\.[\de_]+)?|0x[a-f0-9_]+(?:\.[a-f0-9p_]+)?|0b[01_]+|0o[0-7_]+)\b/i,"class-name":/\b[A-Z](?:[A-Z_\d]*[a-z]\w*)?\b/,function:/\b[a-z_]\w*(?=\s*\()/i,constant:/\b(?:[A-Z_]{2,}|k[A-Z][A-Za-z_]+)\b/,operator:/[-+*/%=!<>&|^~?]+|\.[.\-+*/%=!<>&|^~?]+/,punctuation:/[{}[\]();,.:\\]/},Prism.languages.swift["string-literal"].forEach((function(e){e.inside["interpolation"].inside=Prism.languages.swift}))},8598:function(){(function(e){var t={pattern:/^[;#].*/m,greedy:!0},n=/"(?:[^\r\n"\\]|\\(?:[^\r]|\r\n?))*"(?!\S)/.source;e.languages.systemd={comment:t,section:{pattern:/^\[[^\n\r\[\]]*\](?=[ \t]*$)/m,greedy:!0,inside:{punctuation:/^\[|\]$/,"section-name":{pattern:/[\s\S]+/,alias:"selector"}}},key:{pattern:/^[^\s=]+(?=[ \t]*=)/m,greedy:!0,alias:"attr-name"},value:{pattern:RegExp(/(=[ \t]*(?!\s))/.source+"(?:"+n+'|(?=[^"\r\n]))(?:'+/[^\s\\]/.source+'|[ \t]+(?:(?![ \t"])|'+n+")|"+/\\[\r\n]+(?:[#;].*[\r\n]+)*(?![#;])/.source+")*"),lookbehind:!0,greedy:!0,alias:"attr-value",inside:{comment:t,quoted:{pattern:RegExp(/(^|\s)/.source+n),lookbehind:!0,greedy:!0},punctuation:/\\$/m,boolean:{pattern:/^(?:false|no|off|on|true|yes)$/,greedy:!0}}},punctuation:/=/}})(Prism)},3401:function(){Prism.languages.t4=Prism.languages["t4-cs"]=Prism.languages["t4-templating"].createT4("csharp")},9239:function(){(function(e){function t(e,t,n){return{pattern:RegExp("<#"+e+"[\\s\\S]*?#>"),alias:"block",inside:{delimiter:{pattern:RegExp("^<#"+e+"|#>$"),alias:"important"},content:{pattern:/[\s\S]+/,inside:t,alias:n}}}}function n(n){var r=e.languages[n],i="language-"+n;return{block:{pattern:/<#[\s\S]+?#>/,inside:{directive:t("@",{"attr-value":{pattern:/=(?:("|')(?:\\[\s\S]|(?!\1)[^\\])*\1|[^\s'">=]+)/,inside:{punctuation:/^=|^["']|["']$/}},keyword:/\b\w+(?=\s)/,"attr-name":/\b\w+/}),expression:t("=",r,i),"class-feature":t("\\+",r,i),standard:t("",r,i)}}}}e.languages["t4-templating"]=Object.defineProperty({},"createT4",{value:n})})(Prism)},6241:function(){Prism.languages["t4-vb"]=Prism.languages["t4-templating"].createT4("vbnet")},6193:function(){Prism.languages.tap={fail:/not ok[^#{\n\r]*/,pass:/ok[^#{\n\r]*/,pragma:/pragma [+-][a-z]+/,bailout:/bail out!.*/i,version:/TAP version \d+/i,plan:/\b\d+\.\.\d+(?: +#.*)?/,subtest:{pattern:/# Subtest(?:: .*)?/,greedy:!0},punctuation:/[{}]/,directive:/#.*/,yamlish:{pattern:/(^[ \t]*)---[\s\S]*?[\r\n][ \t]*\.\.\.$/m,lookbehind:!0,inside:Prism.languages.yaml,alias:"language-yaml"}}},1607:function(){Prism.languages.tcl={comment:{pattern:/(^|[^\\])#.*/,lookbehind:!0},string:{pattern:/"(?:[^"\\\r\n]|\\(?:\r\n|[\s\S]))*"/,greedy:!0},variable:[{pattern:/(\$)(?:::)?(?:[a-zA-Z0-9]+::)*\w+/,lookbehind:!0},{pattern:/(\$)\{[^}]+\}/,lookbehind:!0},{pattern:/(^[\t ]*set[ \t]+)(?:::)?(?:[a-zA-Z0-9]+::)*\w+/m,lookbehind:!0}],function:{pattern:/(^[\t ]*proc[ \t]+)\S+/m,lookbehind:!0},builtin:[{pattern:/(^[\t ]*)(?:break|class|continue|error|eval|exit|for|foreach|if|proc|return|switch|while)\b/m,lookbehind:!0},/\b(?:else|elseif)\b/],scope:{pattern:/(^[\t ]*)(?:global|upvar|variable)\b/m,lookbehind:!0,alias:"constant"},keyword:{pattern:/(^[\t ]*|\[)(?:Safe_Base|Tcl|after|append|apply|array|auto_(?:execok|import|load|mkindex|qualify|reset)|automkindex_old|bgerror|binary|catch|cd|chan|clock|close|concat|dde|dict|encoding|eof|exec|expr|fblocked|fconfigure|fcopy|file(?:event|name)?|flush|gets|glob|history|http|incr|info|interp|join|lappend|lassign|lindex|linsert|list|llength|load|lrange|lrepeat|lreplace|lreverse|lsearch|lset|lsort|math(?:func|op)|memory|msgcat|namespace|open|package|parray|pid|pkg_mkIndex|platform|puts|pwd|re_syntax|read|refchan|regexp|registry|regsub|rename|scan|seek|set|socket|source|split|string|subst|tcl(?:_endOfWord|_findLibrary|startOf(?:Next|Previous)Word|test|vars|wordBreak(?:After|Before))|tell|time|tm|trace|unknown|unload|unset|update|uplevel|vwait)\b/m,lookbehind:!0},operator:/!=?|\*\*?|==|&&?|\|\|?|<[=<]?|>[=>]?|[-+~\/%?^]|\b(?:eq|in|ne|ni)\b/,punctuation:/[{}()\[\]]/}},75:function(){(function(e){var t=/\([^|()\n]+\)|\[[^\]\n]+\]|\{[^}\n]+\}/.source,n=/\)|\((?![^|()\n]+\))/.source;function r(e,r){return RegExp(e.replace(//g,(function(){return"(?:"+t+")"})).replace(//g,(function(){return"(?:"+n+")"})),r||"")}var i={css:{pattern:/\{[^{}]+\}/,inside:{rest:e.languages.css}},"class-id":{pattern:/(\()[^()]+(?=\))/,lookbehind:!0,alias:"attr-value"},lang:{pattern:/(\[)[^\[\]]+(?=\])/,lookbehind:!0,alias:"attr-value"},punctuation:/[\\\/]\d+|\S/},s=e.languages.textile=e.languages.extend("markup",{phrase:{pattern:/(^|\r|\n)\S[\s\S]*?(?=$|\r?\n\r?\n|\r\r)/,lookbehind:!0,inside:{"block-tag":{pattern:r(/^[a-z]\w*(?:||[<>=])*\./.source),inside:{modifier:{pattern:r(/(^[a-z]\w*)(?:||[<>=])+(?=\.)/.source),lookbehind:!0,inside:i},tag:/^[a-z]\w*/,punctuation:/\.$/}},list:{pattern:r(/^[*#]+*\s+\S.*/.source,"m"),inside:{modifier:{pattern:r(/(^[*#]+)+/.source),lookbehind:!0,inside:i},punctuation:/^[*#]+/}},table:{pattern:r(/^(?:(?:||[<>=^~])+\.\s*)?(?:\|(?:(?:||[<>=^~_]|[\\/]\d+)+\.|(?!(?:||[<>=^~_]|[\\/]\d+)+\.))[^|]*)+\|/.source,"m"),inside:{modifier:{pattern:r(/(^|\|(?:\r?\n|\r)?)(?:||[<>=^~_]|[\\/]\d+)+(?=\.)/.source),lookbehind:!0,inside:i},punctuation:/\||^\./}},inline:{pattern:r(/(^|[^a-zA-Z\d])(\*\*|__|\?\?|[*_%@+\-^~])*.+?\2(?![a-zA-Z\d])/.source),lookbehind:!0,inside:{bold:{pattern:r(/(^(\*\*?)*).+?(?=\2)/.source),lookbehind:!0},italic:{pattern:r(/(^(__?)*).+?(?=\2)/.source),lookbehind:!0},cite:{pattern:r(/(^\?\?*).+?(?=\?\?)/.source),lookbehind:!0,alias:"string"},code:{pattern:r(/(^@*).+?(?=@)/.source),lookbehind:!0,alias:"keyword"},inserted:{pattern:r(/(^\+*).+?(?=\+)/.source),lookbehind:!0},deleted:{pattern:r(/(^-*).+?(?=-)/.source),lookbehind:!0},span:{pattern:r(/(^%*).+?(?=%)/.source),lookbehind:!0},modifier:{pattern:r(/(^\*\*|__|\?\?|[*_%@+\-^~])+/.source),lookbehind:!0,inside:i},punctuation:/[*_%?@+\-^~]+/}},"link-ref":{pattern:/^\[[^\]]+\]\S+$/m,inside:{string:{pattern:/(^\[)[^\]]+(?=\])/,lookbehind:!0},url:{pattern:/(^\])\S+$/,lookbehind:!0},punctuation:/[\[\]]/}},link:{pattern:r(/"*[^"]+":.+?(?=[^\w/]?(?:\s|$))/.source),inside:{text:{pattern:r(/(^"*)[^"]+(?=")/.source),lookbehind:!0},modifier:{pattern:r(/(^")+/.source),lookbehind:!0,inside:i},url:{pattern:/(:).+/,lookbehind:!0},punctuation:/[":]/}},image:{pattern:r(/!(?:||[<>=])*(?![<>=])[^!\s()]+(?:\([^)]+\))?!(?::.+?(?=[^\w/]?(?:\s|$)))?/.source),inside:{source:{pattern:r(/(^!(?:||[<>=])*)(?![<>=])[^!\s()]+(?:\([^)]+\))?(?=!)/.source),lookbehind:!0,alias:"url"},modifier:{pattern:r(/(^!)(?:||[<>=])+/.source),lookbehind:!0,inside:i},url:{pattern:/(:).+/,lookbehind:!0},punctuation:/[!:]/}},footnote:{pattern:/\b\[\d+\]/,alias:"comment",inside:{punctuation:/\[|\]/}},acronym:{pattern:/\b[A-Z\d]+\([^)]+\)/,inside:{comment:{pattern:/(\()[^()]+(?=\))/,lookbehind:!0},punctuation:/[()]/}},mark:{pattern:/\b\((?:C|R|TM)\)/,alias:"comment",inside:{punctuation:/[()]/}}}}}),o=s["phrase"].inside,a={inline:o["inline"],link:o["link"],image:o["image"],footnote:o["footnote"],acronym:o["acronym"],mark:o["mark"]};s.tag.pattern=/<\/?(?!\d)[a-z0-9]+(?:\s+[^\s>\/=]+(?:=(?:("|')(?:\\[\s\S]|(?!\1)[^\\])*\1|[^\s'">=]+))?)*\s*\/?>/i;var l=o["inline"].inside;l["bold"].inside=a,l["italic"].inside=a,l["inserted"].inside=a,l["deleted"].inside=a,l["span"].inside=a;var c=o["table"].inside;c["inline"]=a["inline"],c["link"]=a["link"],c["image"]=a["image"],c["footnote"]=a["footnote"],c["acronym"]=a["acronym"],c["mark"]=a["mark"]})(Prism)},9930:function(){(function(e){var t=/(?:[\w-]+|'[^'\n\r]*'|"(?:\\.|[^\\"\r\n])*")/.source;function n(e){return e.replace(/__/g,(function(){return t}))}e.languages.toml={comment:{pattern:/#.*/,greedy:!0},table:{pattern:RegExp(n(/(^[\t ]*\[\s*(?:\[\s*)?)__(?:\s*\.\s*__)*(?=\s*\])/.source),"m"),lookbehind:!0,greedy:!0,alias:"class-name"},key:{pattern:RegExp(n(/(^[\t ]*|[{,]\s*)__(?:\s*\.\s*__)*(?=\s*=)/.source),"m"),lookbehind:!0,greedy:!0,alias:"property"},string:{pattern:/"""(?:\\[\s\S]|[^\\])*?"""|'''[\s\S]*?'''|'[^'\n\r]*'|"(?:\\.|[^\\"\r\n])*"/,greedy:!0},date:[{pattern:/\b\d{4}-\d{2}-\d{2}(?:[T\s]\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:\d{2})?)?\b/i,alias:"number"},{pattern:/\b\d{2}:\d{2}:\d{2}(?:\.\d+)?\b/,alias:"number"}],number:/(?:\b0(?:x[\da-zA-Z]+(?:_[\da-zA-Z]+)*|o[0-7]+(?:_[0-7]+)*|b[10]+(?:_[10]+)*))\b|[-+]?\b\d+(?:_\d+)*(?:\.\d+(?:_\d+)*)?(?:[eE][+-]?\d+(?:_\d+)*)?\b|[-+]?\b(?:inf|nan)\b/,boolean:/\b(?:false|true)\b/,punctuation:/[.,=[\]{}]/}})(Prism)},4315:function(){(function(e){e.languages.tremor={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|(?:--|\/\/|#).*)/,lookbehind:!0},"interpolated-string":null,extractor:{pattern:/\b[a-z_]\w*\|(?:[^\r\n\\|]|\\(?:\r\n|[\s\S]))*\|/i,greedy:!0,inside:{regex:{pattern:/(^re)\|[\s\S]+/,lookbehind:!0},function:/^\w+/,value:/\|[\s\S]+/}},identifier:{pattern:/`[^`]*`/,greedy:!0},function:/\b[a-z_]\w*(?=\s*(?:::\s*<|\())\b/,keyword:/\b(?:args|as|by|case|config|connect|connector|const|copy|create|default|define|deploy|drop|each|emit|end|erase|event|flow|fn|for|from|group|having|insert|into|intrinsic|let|links|match|merge|mod|move|of|operator|patch|pipeline|recur|script|select|set|sliding|state|stream|to|tumbling|update|use|when|where|window|with)\b/,boolean:/\b(?:false|null|true)\b/i,number:/\b(?:0b[01_]*|0x[0-9a-fA-F_]*|\d[\d_]*(?:\.\d[\d_]*)?(?:[Ee][+-]?[\d_]+)?)\b/,"pattern-punctuation":{pattern:/%(?=[({[])/,alias:"punctuation"},operator:/[-+*\/%~!^]=?|=[=>]?|&[&=]?|\|[|=]?|<>?>?=?|(?:absent|and|not|or|present|xor)\b/,punctuation:/::|[;\[\]()\{\},.:]/};var t=/#\{(?:[^"{}]|\{[^{}]*\}|"(?:[^"\\\r\n]|\\(?:\r\n|[\s\S]))*")*\}/.source;e.languages.tremor["interpolated-string"]={pattern:RegExp(/(^|[^\\])/.source+'(?:"""(?:'+/[^"\\#]|\\[\s\S]|"(?!"")|#(?!\{)/.source+"|"+t+')*"""|"(?:'+/[^"\\\r\n#]|\\(?:\r\n|[\s\S])|#(?!\{)/.source+"|"+t+')*")'),lookbehind:!0,greedy:!0,inside:{interpolation:{pattern:RegExp(t),inside:{punctuation:/^#\{|\}$/,expression:{pattern:/[\s\S]+/,inside:e.languages.tremor}}},string:/[\s\S]+/}},e.languages.troy=e.languages["tremor"],e.languages.trickle=e.languages["tremor"]})(Prism)},1029:function(){(function(e){var t=e.util.clone(e.languages.typescript);e.languages.tsx=e.languages.extend("jsx",t),delete e.languages.tsx["parameter"],delete e.languages.tsx["literal-property"];var n=e.languages.tsx.tag;n.pattern=RegExp(/(^|[^\w$]|(?=<\/))/.source+"(?:"+n.pattern.source+")",n.pattern.flags),n.lookbehind=!0})(Prism)},7838:function(){(function(e){e.languages.tt2=e.languages.extend("clike",{comment:/#.*|\[%#[\s\S]*?%\]/,keyword:/\b(?:BLOCK|CALL|CASE|CATCH|CLEAR|DEBUG|DEFAULT|ELSE|ELSIF|END|FILTER|FINAL|FOREACH|GET|IF|IN|INCLUDE|INSERT|LAST|MACRO|META|NEXT|PERL|PROCESS|RAWPERL|RETURN|SET|STOP|SWITCH|TAGS|THROW|TRY|UNLESS|USE|WHILE|WRAPPER)\b/,punctuation:/[[\]{},()]/}),e.languages.insertBefore("tt2","number",{operator:/=[>=]?|!=?|<=?|>=?|&&|\|\|?|\b(?:and|not|or)\b/,variable:{pattern:/\b[a-z]\w*(?:\s*\.\s*(?:\d+|\$?[a-z]\w*))*\b/i}}),e.languages.insertBefore("tt2","keyword",{delimiter:{pattern:/^(?:\[%|%%)-?|-?%\]$/,alias:"punctuation"}}),e.languages.insertBefore("tt2","string",{"single-quoted-string":{pattern:/'[^\\']*(?:\\[\s\S][^\\']*)*'/,greedy:!0,alias:"string"},"double-quoted-string":{pattern:/"[^\\"]*(?:\\[\s\S][^\\"]*)*"/,greedy:!0,alias:"string",inside:{variable:{pattern:/\$(?:[a-z]\w*(?:\.(?:\d+|\$?[a-z]\w*))*)/i}}}}),delete e.languages.tt2.string,e.hooks.add("before-tokenize",(function(t){var n=/\[%[\s\S]+?%\]/g;e.languages["markup-templating"].buildPlaceholders(t,"tt2",n)})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"tt2")}))})(Prism)},8092:function(){Prism.languages.turtle={comment:{pattern:/#.*/,greedy:!0},"multiline-string":{pattern:/"""(?:(?:""?)?(?:[^"\\]|\\.))*"""|'''(?:(?:''?)?(?:[^'\\]|\\.))*'''/,greedy:!0,alias:"string",inside:{comment:/#.*/}},string:{pattern:/"(?:[^\\"\r\n]|\\.)*"|'(?:[^\\'\r\n]|\\.)*'/,greedy:!0},url:{pattern:/<(?:[^\x00-\x20<>"{}|^`\\]|\\(?:u[\da-fA-F]{4}|U[\da-fA-F]{8}))*>/,greedy:!0,inside:{punctuation:/[<>]/}},function:{pattern:/(?:(?![-.\d\xB7])[-.\w\xB7\xC0-\uFFFD]+)?:(?:(?![-.])(?:[-.:\w\xC0-\uFFFD]|%[\da-f]{2}|\\.)+)?/i,inside:{"local-name":{pattern:/([^:]*:)[\s\S]+/,lookbehind:!0},prefix:{pattern:/[\s\S]+/,inside:{punctuation:/:/}}}},number:/[+-]?\b\d+(?:\.\d*)?(?:e[+-]?\d+)?/i,punctuation:/[{}.,;()[\]]|\^\^/,boolean:/\b(?:false|true)\b/,keyword:[/(?:\ba|@prefix|@base)\b|=/,/\b(?:base|graph|prefix)\b/i],tag:{pattern:/@[a-z]+(?:-[a-z\d]+)*/i,inside:{punctuation:/@/}}},Prism.languages.trig=Prism.languages["turtle"]},1429:function(){Prism.languages.twig={comment:/^\{#[\s\S]*?#\}$/,"tag-name":{pattern:/(^\{%-?\s*)\w+/,lookbehind:!0,alias:"keyword"},delimiter:{pattern:/^\{[{%]-?|-?[%}]\}$/,alias:"punctuation"},string:{pattern:/("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,inside:{punctuation:/^['"]|['"]$/}},keyword:/\b(?:even|if|odd)\b/,boolean:/\b(?:false|null|true)\b/,number:/\b0x[\dA-Fa-f]+|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[Ee][-+]?\d+)?/,operator:[{pattern:/(\s)(?:and|b-and|b-or|b-xor|ends with|in|is|matches|not|or|same as|starts with)(?=\s)/,lookbehind:!0},/[=<>]=?|!=|\*\*?|\/\/?|\?:?|[-+~%|]/],punctuation:/[()\[\]{}:.,]/},Prism.hooks.add("before-tokenize",(function(e){if("twig"===e.language){var t=/\{(?:#[\s\S]*?#|%[\s\S]*?%|\{[\s\S]*?\})\}/g;Prism.languages["markup-templating"].buildPlaceholders(e,"twig",t)}})),Prism.hooks.add("after-tokenize",(function(e){Prism.languages["markup-templating"].tokenizePlaceholders(e,"twig")}))},6836:function(){(function(e){e.languages.typescript=e.languages.extend("javascript",{"class-name":{pattern:/(\b(?:class|extends|implements|instanceof|interface|new|type)\s+)(?!keyof\b)(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?:\s*<(?:[^<>]|<(?:[^<>]|<[^<>]*>)*>)*>)?/,lookbehind:!0,greedy:!0,inside:null},builtin:/\b(?:Array|Function|Promise|any|boolean|console|never|number|string|symbol|unknown)\b/}),e.languages.typescript.keyword.push(/\b(?:abstract|declare|is|keyof|readonly|require)\b/,/\b(?:asserts|infer|interface|module|namespace|type)\b(?=\s*(?:[{_$a-zA-Z\xA0-\uFFFF]|$))/,/\btype\b(?=\s*(?:[\{*]|$))/),delete e.languages.typescript["parameter"],delete e.languages.typescript["literal-property"];var t=e.languages.extend("typescript",{});delete t["class-name"],e.languages.typescript["class-name"].inside=t,e.languages.insertBefore("typescript","function",{decorator:{pattern:/@[$\w\xA0-\uFFFF]+/,inside:{at:{pattern:/^@/,alias:"operator"},function:/^[\s\S]+/}},"generic-function":{pattern:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*\s*<(?:[^<>]|<(?:[^<>]|<[^<>]*>)*>)*>(?=\s*\()/,greedy:!0,inside:{function:/^#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*/,generic:{pattern:/<[\s\S]+/,alias:"class-name",inside:t}}}}),e.languages.ts=e.languages.typescript})(Prism)},4032:function(){(function(e){var t=/\b(?:ACT|ACTIFSUB|CARRAY|CASE|CLEARGIF|COA|COA_INT|CONSTANTS|CONTENT|CUR|EDITPANEL|EFFECT|EXT|FILE|FLUIDTEMPLATE|FORM|FRAME|FRAMESET|GIFBUILDER|GMENU|GMENU_FOLDOUT|GMENU_LAYERS|GP|HMENU|HRULER|HTML|IENV|IFSUB|IMAGE|IMGMENU|IMGMENUITEM|IMGTEXT|IMG_RESOURCE|INCLUDE_TYPOSCRIPT|JSMENU|JSMENUITEM|LLL|LOAD_REGISTER|NO|PAGE|RECORDS|RESTORE_REGISTER|TEMPLATE|TEXT|TMENU|TMENUITEM|TMENU_LAYERS|USER|USER_INT|_GIFBUILDER|global|globalString|globalVar)\b/;e.languages.typoscript={comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?(?:\*\/|$)/,lookbehind:!0},{pattern:/(^|[^\\:= \t]|(?:^|[^= \t])[ \t]+)\/\/.*/,lookbehind:!0,greedy:!0},{pattern:/(^|[^"'])#.*/,lookbehind:!0,greedy:!0}],function:[{pattern://,inside:{string:{pattern:/"[^"\r\n]*"|'[^'\r\n]*'/,inside:{keyword:t}},keyword:{pattern:/INCLUDE_TYPOSCRIPT/}}},{pattern:/@import\s*(?:"[^"\r\n]*"|'[^'\r\n]*')/,inside:{string:/"[^"\r\n]*"|'[^'\r\n]*'/}}],string:{pattern:/^([^=]*=[< ]?)(?:(?!\]\n).)*/,lookbehind:!0,inside:{function:/\{\$.*\}/,keyword:t,number:/^\d+$/,punctuation:/[,|:]/}},keyword:t,number:{pattern:/\b\d+\s*[.{=]/,inside:{operator:/[.{=]/}},tag:{pattern:/\.?[-\w\\]+\.?/,inside:{punctuation:/\./}},punctuation:/[{}[\];(),.:|]/,operator:/[<>]=?|[!=]=?=?|--?|\+\+?|&&?|\|\|?|[?*/~^%]/},e.languages.tsconfig=e.languages.typoscript})(Prism)},196:function(){Prism.languages.unrealscript={comment:/\/\/.*|\/\*[\s\S]*?\*\//,string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},category:{pattern:/(\b(?:(?:autoexpand|hide|show)categories|var)\s*\()[^()]+(?=\))/,lookbehind:!0,greedy:!0,alias:"property"},metadata:{pattern:/(\w\s*)<\s*\w+\s*=[^<>|=\r\n]+(?:\|\s*\w+\s*=[^<>|=\r\n]+)*>/,lookbehind:!0,greedy:!0,inside:{property:/\b\w+(?=\s*=)/,operator:/=/,punctuation:/[<>|]/}},macro:{pattern:/`\w+/,alias:"property"},"class-name":{pattern:/(\b(?:class|enum|extends|interface|state(?:\(\))?|struct|within)\s+)\w+/,lookbehind:!0},keyword:/\b(?:abstract|actor|array|auto|autoexpandcategories|bool|break|byte|case|class|classgroup|client|coerce|collapsecategories|config|const|continue|default|defaultproperties|delegate|dependson|deprecated|do|dontcollapsecategories|editconst|editinlinenew|else|enum|event|exec|export|extends|final|float|for|forcescriptorder|foreach|function|goto|guid|hidecategories|hidedropdown|if|ignores|implements|inherits|input|int|interface|iterator|latent|local|material|name|native|nativereplication|noexport|nontransient|noteditinlinenew|notplaceable|operator|optional|out|pawn|perobjectconfig|perobjectlocalized|placeable|postoperator|preoperator|private|protected|reliable|replication|return|server|showcategories|simulated|singular|state|static|string|struct|structdefault|structdefaultproperties|switch|texture|transient|travel|unreliable|until|var|vector|while|within)\b/,function:/\b[a-z_]\w*(?=\s*\()/i,boolean:/\b(?:false|true)\b/,number:/\b0x[\da-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?/i,operator:/>>|<<|--|\+\+|\*\*|[-+*/~!=<>$@]=?|&&?|\|\|?|\^\^?|[?:%]|\b(?:ClockwiseFrom|Cross|Dot)\b/,punctuation:/[()[\]{};,.]/},Prism.languages.uc=Prism.languages.uscript=Prism.languages.unrealscript},2467:function(){Prism.languages.uorazor={"comment-hash":{pattern:/#.*/,alias:"comment",greedy:!0},"comment-slash":{pattern:/\/\/.*/,alias:"comment",greedy:!0},string:{pattern:/("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,inside:{punctuation:/^['"]|['"]$/},greedy:!0},"source-layers":{pattern:/\b(?:arms|backpack|blue|bracelet|cancel|clear|cloak|criminal|earrings|enemy|facialhair|friend|friendly|gloves|gray|grey|ground|hair|head|innerlegs|innertorso|innocent|lefthand|middletorso|murderer|neck|nonfriendly|onehandedsecondary|outerlegs|outertorso|pants|red|righthand|ring|self|shirt|shoes|talisman|waist)\b/i,alias:"function"},"source-commands":{pattern:/\b(?:alliance|attack|cast|clearall|clearignore|clearjournal|clearlist|clearsysmsg|createlist|createtimer|dclick|dclicktype|dclickvar|dress|dressconfig|drop|droprelloc|emote|getlabel|guild|gumpclose|gumpresponse|hotkey|ignore|lasttarget|lift|lifttype|menu|menuresponse|msg|org|organize|organizer|overhead|pause|poplist|potion|promptresponse|pushlist|removelist|removetimer|rename|restock|say|scav|scavenger|script|setability|setlasttarget|setskill|settimer|setvar|sysmsg|target|targetloc|targetrelloc|targettype|undress|unignore|unsetvar|useobject|useonce|useskill|usetype|virtue|wait|waitforgump|waitformenu|waitforprompt|waitforstat|waitforsysmsg|waitfortarget|walk|wfsysmsg|wft|whisper|yell)\b/,alias:"function"},"tag-name":{pattern:/(^\{%-?\s*)\w+/,lookbehind:!0,alias:"keyword"},delimiter:{pattern:/^\{[{%]-?|-?[%}]\}$/,alias:"punctuation"},function:/\b(?:atlist|close|closest|count|counter|counttype|dead|dex|diffhits|diffmana|diffstam|diffweight|find|findbuff|finddebuff|findlayer|findtype|findtypelist|followers|gumpexists|hidden|hits|hp|hue|human|humanoid|ingump|inlist|insysmessage|insysmsg|int|invul|lhandempty|list|listexists|mana|maxhits|maxhp|maxmana|maxstam|maxweight|monster|mounted|name|next|noto|paralyzed|poisoned|position|prev|previous|queued|rand|random|rhandempty|skill|stam|str|targetexists|timer|timerexists|varexist|warmode|weight)\b/,keyword:/\b(?:and|as|break|continue|else|elseif|endfor|endif|endwhile|for|if|loop|not|or|replay|stop|while)\b/,boolean:/\b(?:false|null|true)\b/,number:/\b0x[\dA-Fa-f]+|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[Ee][-+]?\d+)?/,operator:[{pattern:/(\s)(?:and|b-and|b-or|b-xor|ends with|in|is|matches|not|or|same as|starts with)(?=\s)/,lookbehind:!0},/[=<>]=?|!=|\*\*?|\/\/?|\?:?|[-+~%|]/],punctuation:/[()\[\]{}:.,]/}},5503:function(){Prism.languages.uri={scheme:{pattern:/^[a-z][a-z0-9+.-]*:/im,greedy:!0,inside:{"scheme-delimiter":/:$/}},fragment:{pattern:/#[\w\-.~!$&'()*+,;=%:@/?]*/,inside:{"fragment-delimiter":/^#/}},query:{pattern:/\?[\w\-.~!$&'()*+,;=%:@/?]*/,inside:{"query-delimiter":{pattern:/^\?/,greedy:!0},"pair-delimiter":/[&;]/,pair:{pattern:/^[^=][\s\S]*/,inside:{key:/^[^=]+/,value:{pattern:/(^=)[\s\S]+/,lookbehind:!0}}}}},authority:{pattern:RegExp(/^\/\//.source+/(?:[\w\-.~!$&'()*+,;=%:]*@)?/.source+"(?:"+/\[(?:[0-9a-fA-F:.]{2,48}|v[0-9a-fA-F]+\.[\w\-.~!$&'()*+,;=]+)\]/.source+"|"+/[\w\-.~!$&'()*+,;=%]*/.source+")"+/(?::\d*)?/.source,"m"),inside:{"authority-delimiter":/^\/\//,"user-info-segment":{pattern:/^[\w\-.~!$&'()*+,;=%:]*@/,inside:{"user-info-delimiter":/@$/,"user-info":/^[\w\-.~!$&'()*+,;=%:]+/}},"port-segment":{pattern:/:\d*$/,inside:{"port-delimiter":/^:/,port:/^\d+/}},host:{pattern:/[\s\S]+/,inside:{"ip-literal":{pattern:/^\[[\s\S]+\]$/,inside:{"ip-literal-delimiter":/^\[|\]$/,"ipv-future":/^v[\s\S]+/,"ipv6-address":/^[\s\S]+/}},"ipv4-address":/^(?:(?:[03-9]\d?|[12]\d{0,2})\.){3}(?:[03-9]\d?|[12]\d{0,2})$/}}}},path:{pattern:/^[\w\-.~!$&'()*+,;=%:@/]+/m,inside:{"path-separator":/\//}}},Prism.languages.url=Prism.languages.uri},4641:function(){(function(e){var t={pattern:/[\s\S]+/,inside:null};e.languages.v=e.languages.extend("clike",{string:{pattern:/r?(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,alias:"quoted-string",greedy:!0,inside:{interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$(?:\{[^{}]*\}|\w+(?:\.\w+(?:\([^\(\)]*\))?|\[[^\[\]]+\])*)/,lookbehind:!0,inside:{"interpolation-variable":{pattern:/^\$\w[\s\S]*$/,alias:"variable"},"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"punctuation"},"interpolation-expression":t}}}},"class-name":{pattern:/(\b(?:enum|interface|struct|type)\s+)(?:C\.)?\w+/,lookbehind:!0},keyword:/(?:\b(?:__global|as|asm|assert|atomic|break|chan|const|continue|defer|else|embed|enum|fn|for|go(?:to)?|if|import|in|interface|is|lock|match|module|mut|none|or|pub|return|rlock|select|shared|sizeof|static|struct|type(?:of)?|union|unsafe)|\$(?:else|for|if)|#(?:flag|include))\b/,number:/\b(?:0x[a-f\d]+(?:_[a-f\d]+)*|0b[01]+(?:_[01]+)*|0o[0-7]+(?:_[0-7]+)*|\d+(?:_\d+)*(?:\.\d+(?:_\d+)*)?)\b/i,operator:/~|\?|[*\/%^!=]=?|\+[=+]?|-[=-]?|\|[=|]?|&(?:=|&|\^=?)?|>(?:>=?|=)?|<(?:<=?|=|-)?|:=|\.\.\.?/,builtin:/\b(?:any(?:_float|_int)?|bool|byte(?:ptr)?|charptr|f(?:32|64)|i(?:8|16|64|128|nt)|rune|size_t|string|u(?:16|32|64|128)|voidptr)\b/}),t.inside=e.languages.v,e.languages.insertBefore("v","string",{char:{pattern:/`(?:\\`|\\?[^`]{1,2})`/,alias:"rune"}}),e.languages.insertBefore("v","operator",{attribute:{pattern:/(^[\t ]*)\[(?:deprecated|direct_array_access|flag|inline|live|ref_only|typedef|unsafe_fn|windows_stdcall)\]/m,lookbehind:!0,alias:"annotation",inside:{punctuation:/[\[\]]/,keyword:/\w+/}},generic:{pattern:/<\w+>(?=\s*[\)\{])/,inside:{punctuation:/[<>]/,"class-name":/\w+/}}}),e.languages.insertBefore("v","function",{"generic-function":{pattern:/\b\w+\s*<\w+>(?=\()/,inside:{function:/^\w+/,generic:{pattern:/<\w+>/,inside:e.languages.v.generic.inside}}}})})(Prism)},35:function(){Prism.languages.vala=Prism.languages.extend("clike",{"class-name":[{pattern:/\b[A-Z]\w*(?:\.\w+)*\b(?=(?:\?\s+|\*?\s+\*?)\w)/,inside:{punctuation:/\./}},{pattern:/(\[)[A-Z]\w*(?:\.\w+)*\b/,lookbehind:!0,inside:{punctuation:/\./}},{pattern:/(\b(?:class|interface)\s+[A-Z]\w*(?:\.\w+)*\s*:\s*)[A-Z]\w*(?:\.\w+)*\b/,lookbehind:!0,inside:{punctuation:/\./}},{pattern:/((?:\b(?:class|enum|interface|new|struct)\s+)|(?:catch\s+\())[A-Z]\w*(?:\.\w+)*\b/,lookbehind:!0,inside:{punctuation:/\./}}],keyword:/\b(?:abstract|as|assert|async|base|bool|break|case|catch|char|class|const|construct|continue|default|delegate|delete|do|double|dynamic|else|ensures|enum|errordomain|extern|finally|float|for|foreach|get|if|in|inline|int|int16|int32|int64|int8|interface|internal|is|lock|long|namespace|new|null|out|override|owned|params|private|protected|public|ref|requires|return|set|short|signal|sizeof|size_t|ssize_t|static|string|struct|switch|this|throw|throws|try|typeof|uchar|uint|uint16|uint32|uint64|uint8|ulong|unichar|unowned|ushort|using|value|var|virtual|void|volatile|weak|while|yield)\b/i,function:/\b\w+(?=\s*\()/,number:/(?:\b0x[\da-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?)(?:f|u?l?)?/i,operator:/\+\+|--|&&|\|\||<<=?|>>=?|=>|->|~|[+\-*\/%&^|=!<>]=?|\?\??|\.\.\./,punctuation:/[{}[\];(),.:]/,constant:/\b[A-Z0-9_]+\b/}),Prism.languages.insertBefore("vala","string",{"raw-string":{pattern:/"""[\s\S]*?"""/,greedy:!0,alias:"string"},"template-string":{pattern:/@"[\s\S]*?"/,greedy:!0,inside:{interpolation:{pattern:/\$(?:\([^)]*\)|[a-zA-Z]\w*)/,inside:{delimiter:{pattern:/^\$\(?|\)$/,alias:"punctuation"},rest:Prism.languages.vala}},string:/[\s\S]+/}}}),Prism.languages.insertBefore("vala","keyword",{regex:{pattern:/\/(?:\[(?:[^\]\\\r\n]|\\.)*\]|\\.|[^/\\\[\r\n])+\/[imsx]{0,4}(?=\s*(?:$|[\r\n,.;})\]]))/,greedy:!0,inside:{"regex-source":{pattern:/^(\/)[\s\S]+(?=\/[a-z]*$)/,lookbehind:!0,alias:"language-regex",inside:Prism.languages.regex},"regex-delimiter":/^\//,"regex-flags":/^[a-z]+$/}}})},5398:function(){Prism.languages.vbnet=Prism.languages.extend("basic",{comment:[{pattern:/(?:!|REM\b).+/i,inside:{keyword:/^REM/i}},{pattern:/(^|[^\\:])'.*/,lookbehind:!0,greedy:!0}],string:{pattern:/(^|[^"])"(?:""|[^"])*"(?!")/,lookbehind:!0,greedy:!0},keyword:/(?:\b(?:ADDHANDLER|ADDRESSOF|ALIAS|AND|ANDALSO|AS|BEEP|BLOAD|BOOLEAN|BSAVE|BYREF|BYTE|BYVAL|CALL(?: ABSOLUTE)?|CASE|CATCH|CBOOL|CBYTE|CCHAR|CDATE|CDBL|CDEC|CHAIN|CHAR|CHDIR|CINT|CLASS|CLEAR|CLNG|CLOSE|CLS|COBJ|COM|COMMON|CONST|CONTINUE|CSBYTE|CSHORT|CSNG|CSTR|CTYPE|CUINT|CULNG|CUSHORT|DATA|DATE|DECIMAL|DECLARE|DEF(?: FN| SEG|DBL|INT|LNG|SNG|STR)|DEFAULT|DELEGATE|DIM|DIRECTCAST|DO|DOUBLE|ELSE|ELSEIF|END|ENUM|ENVIRON|ERASE|ERROR|EVENT|EXIT|FALSE|FIELD|FILES|FINALLY|FOR(?: EACH)?|FRIEND|FUNCTION|GET|GETTYPE|GETXMLNAMESPACE|GLOBAL|GOSUB|GOTO|HANDLES|IF|IMPLEMENTS|IMPORTS|IN|INHERITS|INPUT|INTEGER|INTERFACE|IOCTL|IS|ISNOT|KEY|KILL|LET|LIB|LIKE|LINE INPUT|LOCATE|LOCK|LONG|LOOP|LSET|ME|MKDIR|MOD|MODULE|MUSTINHERIT|MUSTOVERRIDE|MYBASE|MYCLASS|NAME|NAMESPACE|NARROWING|NEW|NEXT|NOT|NOTHING|NOTINHERITABLE|NOTOVERRIDABLE|OBJECT|OF|OFF|ON(?: COM| ERROR| KEY| TIMER)?|OPEN|OPERATOR|OPTION(?: BASE)?|OPTIONAL|OR|ORELSE|OUT|OVERLOADS|OVERRIDABLE|OVERRIDES|PARAMARRAY|PARTIAL|POKE|PRIVATE|PROPERTY|PROTECTED|PUBLIC|PUT|RAISEEVENT|READ|READONLY|REDIM|REM|REMOVEHANDLER|RESTORE|RESUME|RETURN|RMDIR|RSET|RUN|SBYTE|SELECT(?: CASE)?|SET|SHADOWS|SHARED|SHELL|SHORT|SINGLE|SLEEP|STATIC|STEP|STOP|STRING|STRUCTURE|SUB|SWAP|SYNCLOCK|SYSTEM|THEN|THROW|TIMER|TO|TROFF|TRON|TRUE|TRY|TRYCAST|TYPE|TYPEOF|UINTEGER|ULONG|UNLOCK|UNTIL|USHORT|USING|VIEW PRINT|WAIT|WEND|WHEN|WHILE|WIDENING|WITH|WITHEVENTS|WRITE|WRITEONLY|XOR)|\B(?:#CONST|#ELSE|#ELSEIF|#END|#IF))(?:\$|\b)/i,punctuation:/[,;:(){}]/})},981:function(){(function(e){e.languages.velocity=e.languages.extend("markup",{});var t={variable:{pattern:/(^|[^\\](?:\\\\)*)\$!?(?:[a-z][\w-]*(?:\([^)]*\))?(?:\.[a-z][\w-]*(?:\([^)]*\))?|\[[^\]]+\])*|\{[^}]+\})/i,lookbehind:!0,inside:{}},string:{pattern:/"[^"]*"|'[^']*'/,greedy:!0},number:/\b\d+\b/,boolean:/\b(?:false|true)\b/,operator:/[=!<>]=?|[+*/%-]|&&|\|\||\.\.|\b(?:eq|g[et]|l[et]|n(?:e|ot))\b/,punctuation:/[(){}[\]:,.]/};t.variable.inside={string:t["string"],function:{pattern:/([^\w-])[a-z][\w-]*(?=\()/,lookbehind:!0},number:t["number"],boolean:t["boolean"],punctuation:t["punctuation"]},e.languages.insertBefore("velocity","comment",{unparsed:{pattern:/(^|[^\\])#\[\[[\s\S]*?\]\]#/,lookbehind:!0,greedy:!0,inside:{punctuation:/^#\[\[|\]\]#$/}},"velocity-comment":[{pattern:/(^|[^\\])#\*[\s\S]*?\*#/,lookbehind:!0,greedy:!0,alias:"comment"},{pattern:/(^|[^\\])##.*/,lookbehind:!0,greedy:!0,alias:"comment"}],directive:{pattern:/(^|[^\\](?:\\\\)*)#@?(?:[a-z][\w-]*|\{[a-z][\w-]*\})(?:\s*\((?:[^()]|\([^()]*\))*\))?/i,lookbehind:!0,inside:{keyword:{pattern:/^#@?(?:[a-z][\w-]*|\{[a-z][\w-]*\})|\bin\b/,inside:{punctuation:/[{}]/}},rest:t}},variable:t["variable"]}),e.languages.velocity["tag"].inside["attr-value"].inside.rest=e.languages.velocity})(Prism)},7251:function(){Prism.languages.verilog={comment:{pattern:/\/\/.*|\/\*[\s\S]*?\*\//,greedy:!0},string:{pattern:/"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"/,greedy:!0},"kernel-function":{pattern:/\B\$\w+\b/,alias:"property"},constant:/\B`\w+\b/,function:/\b\w+(?=\()/,keyword:/\b(?:alias|and|assert|assign|assume|automatic|before|begin|bind|bins|binsof|bit|break|buf|bufif0|bufif1|byte|case|casex|casez|cell|chandle|class|clocking|cmos|config|const|constraint|context|continue|cover|covergroup|coverpoint|cross|deassign|default|defparam|design|disable|dist|do|edge|else|end|endcase|endclass|endclocking|endconfig|endfunction|endgenerate|endgroup|endinterface|endmodule|endpackage|endprimitive|endprogram|endproperty|endsequence|endspecify|endtable|endtask|enum|event|expect|export|extends|extern|final|first_match|for|force|foreach|forever|fork|forkjoin|function|generate|genvar|highz0|highz1|if|iff|ifnone|ignore_bins|illegal_bins|import|incdir|include|initial|inout|input|inside|instance|int|integer|interface|intersect|join|join_any|join_none|large|liblist|library|local|localparam|logic|longint|macromodule|matches|medium|modport|module|nand|negedge|new|nmos|nor|noshowcancelled|not|notif0|notif1|null|or|output|package|packed|parameter|pmos|posedge|primitive|priority|program|property|protected|pull0|pull1|pulldown|pullup|pulsestyle_ondetect|pulsestyle_onevent|pure|rand|randc|randcase|randsequence|rcmos|real|realtime|ref|reg|release|repeat|return|rnmos|rpmos|rtran|rtranif0|rtranif1|scalared|sequence|shortint|shortreal|showcancelled|signed|small|solve|specify|specparam|static|string|strong0|strong1|struct|super|supply0|supply1|table|tagged|task|this|throughout|time|timeprecision|timeunit|tran|tranif0|tranif1|tri|tri0|tri1|triand|trior|trireg|type|typedef|union|unique|unsigned|use|uwire|var|vectored|virtual|void|wait|wait_order|wand|weak0|weak1|while|wildcard|wire|with|within|wor|xnor|xor)\b/,important:/\b(?:always|always_comb|always_ff|always_latch)\b(?: *@)?/,number:/\B##?\d+|(?:\b\d+)?'[odbh] ?[\da-fzx_?]+|\b(?:\d*[._])?\d+(?:e[-+]?\d+)?/i,operator:/[-+{}^~%*\/?=!<>&|]+/,punctuation:/[[\];(),.:]/}},8564:function(){Prism.languages.vhdl={comment:/--.+/,"vhdl-vectors":{pattern:/\b[oxb]"[\da-f_]+"|"[01uxzwlh-]+"/i,alias:"number"},"quoted-function":{pattern:/"\S+?"(?=\()/,alias:"function"},string:/"(?:[^\\"\r\n]|\\(?:\r\n|[\s\S]))*"/,attribute:{pattern:/\b'\w+/,alias:"attr-name"},keyword:/\b(?:access|after|alias|all|architecture|array|assert|attribute|begin|block|body|buffer|bus|case|component|configuration|constant|disconnect|downto|else|elsif|end|entity|exit|file|for|function|generate|generic|group|guarded|if|impure|in|inertial|inout|is|label|library|linkage|literal|loop|map|new|next|null|of|on|open|others|out|package|port|postponed|private|procedure|process|pure|range|record|register|reject|report|return|select|severity|shared|signal|subtype|then|to|transport|type|unaffected|units|until|use|variable|view|wait|when|while|with)\b/i,boolean:/\b(?:false|true)\b/i,function:/\w+(?=\()/,number:/'[01uxzwlh-]'|\b(?:\d+#[\da-f_.]+#|\d[\d_.]*)(?:e[-+]?\d+)?/i,operator:/[<>]=?|:=|[-+*/&=]|\b(?:abs|and|mod|nand|nor|not|or|rem|rol|ror|sla|sll|sra|srl|xnor|xor)\b/i,punctuation:/[{}[\];(),.:]/}},4438:function(){Prism.languages.vim={string:/"(?:[^"\\\r\n]|\\.)*"|'(?:[^'\r\n]|'')*'/,comment:/".*/,function:/\b\w+(?=\()/,keyword:/\b(?:N|Next|P|Print|X|XMLent|XMLns|ab|abbreviate|abc|abclear|abo|aboveleft|al|all|ar|arga|argadd|argd|argdelete|argdo|arge|argedit|argg|argglobal|argl|arglocal|args|argu|argument|as|ascii|b|bN|bNext|ba|bad|badd|ball|bd|bdelete|be|bel|belowright|bf|bfirst|bl|blast|bm|bmodified|bn|bnext|bo|botright|bp|bprevious|br|brea|break|breaka|breakadd|breakd|breakdel|breakl|breaklist|brewind|bro|browse|bufdo|buffer|buffers|bun|bunload|bw|bwipeout|c|cN|cNext|cNfcNfile|ca|cabbrev|cabc|cabclear|cad|caddb|caddbuffer|caddexpr|caddf|caddfile|cal|call|cat|catch|cb|cbuffer|cc|ccl|cclose|cd|ce|center|cex|cexpr|cf|cfile|cfir|cfirst|cg|cgetb|cgetbuffer|cgete|cgetexpr|cgetfile|change|changes|chd|chdir|che|checkpath|checkt|checktime|cl|cla|clast|clist|clo|close|cmapc|cmapclear|cn|cnew|cnewer|cnext|cnf|cnfile|cnorea|cnoreabbrev|co|col|colder|colo|colorscheme|comc|comclear|comp|compiler|con|conf|confirm|continue|cope|copen|copy|cp|cpf|cpfile|cprevious|cq|cquit|cr|crewind|cu|cuna|cunabbrev|cunmap|cw|cwindow|d|debugg|debuggreedy|delc|delcommand|delete|delf|delfunction|delm|delmarks|di|diffg|diffget|diffoff|diffpatch|diffpu|diffput|diffsplit|diffthis|diffu|diffupdate|dig|digraphs|display|dj|djump|dl|dlist|dr|drop|ds|dsearch|dsp|dsplit|e|earlier|echoe|echoerr|echom|echomsg|echon|edit|el|else|elsei|elseif|em|emenu|en|endf|endfo|endfor|endfun|endfunction|endif|endt|endtry|endw|endwhile|ene|enew|ex|exi|exit|exu|exusage|f|file|files|filetype|fin|fina|finally|find|fini|finish|fir|first|fix|fixdel|fo|fold|foldc|foldclose|foldd|folddoc|folddoclosed|folddoopen|foldo|foldopen|for|fu|fun|function|go|goto|gr|grep|grepa|grepadd|h|ha|hardcopy|help|helpf|helpfind|helpg|helpgrep|helpt|helptags|hid|hide|his|history|ia|iabbrev|iabc|iabclear|if|ij|ijump|il|ilist|imapc|imapclear|in|inorea|inoreabbrev|isearch|isp|isplit|iu|iuna|iunabbrev|iunmap|j|join|ju|jumps|k|kee|keepalt|keepj|keepjumps|keepmarks|l|lN|lNext|lNf|lNfile|la|lad|laddb|laddbuffer|laddexpr|laddf|laddfile|lan|language|last|later|lb|lbuffer|lc|lcd|lch|lchdir|lcl|lclose|left|lefta|leftabove|let|lex|lexpr|lf|lfile|lfir|lfirst|lg|lgetb|lgetbuffer|lgete|lgetexpr|lgetfile|lgr|lgrep|lgrepa|lgrepadd|lh|lhelpgrep|list|ll|lla|llast|lli|llist|lm|lmak|lmake|lmap|lmapc|lmapclear|ln|lne|lnew|lnewer|lnext|lnf|lnfile|lnoremap|lo|loadview|loc|lockmarks|lockv|lockvar|lol|lolder|lop|lopen|lp|lpf|lpfile|lprevious|lr|lrewind|ls|lt|ltag|lu|lunmap|lv|lvimgrep|lvimgrepa|lvimgrepadd|lw|lwindow|m|ma|mak|make|mark|marks|mat|match|menut|menutranslate|mk|mkexrc|mks|mksession|mksp|mkspell|mkv|mkvie|mkview|mkvimrc|mod|mode|move|mz|mzf|mzfile|mzscheme|n|nbkey|new|next|nmapc|nmapclear|noh|nohlsearch|norea|noreabbrev|nu|number|nun|nunmap|o|omapc|omapclear|on|only|open|opt|options|ou|ounmap|p|pc|pclose|pe|ped|pedit|perl|perld|perldo|po|pop|popu|popup|pp|ppop|pre|preserve|prev|previous|print|prof|profd|profdel|profile|promptf|promptfind|promptr|promptrepl|ps|psearch|ptN|ptNext|pta|ptag|ptf|ptfirst|ptj|ptjump|ptl|ptlast|ptn|ptnext|ptp|ptprevious|ptr|ptrewind|pts|ptselect|pu|put|pw|pwd|py|pyf|pyfile|python|q|qa|qall|quit|quita|quitall|r|read|rec|recover|red|redi|redir|redo|redr|redraw|redraws|redrawstatus|reg|registers|res|resize|ret|retab|retu|return|rew|rewind|ri|right|rightb|rightbelow|ru|rub|ruby|rubyd|rubydo|rubyf|rubyfile|runtime|rv|rviminfo|sN|sNext|sa|sal|sall|san|sandbox|sargument|sav|saveas|sb|sbN|sbNext|sba|sball|sbf|sbfirst|sbl|sblast|sbm|sbmodified|sbn|sbnext|sbp|sbprevious|sbr|sbrewind|sbuffer|scrip|scripte|scriptencoding|scriptnames|se|set|setf|setfiletype|setg|setglobal|setl|setlocal|sf|sfind|sfir|sfirst|sh|shell|sign|sil|silent|sim|simalt|sl|sla|slast|sleep|sm|smagic|smap|smapc|smapclear|sme|smenu|sn|snext|sni|sniff|sno|snomagic|snor|snoremap|snoreme|snoremenu|so|sor|sort|source|sp|spe|spelld|spelldump|spellgood|spelli|spellinfo|spellr|spellrepall|spellu|spellundo|spellw|spellwrong|split|spr|sprevious|sre|srewind|st|sta|stag|star|startg|startgreplace|startinsert|startr|startreplace|stj|stjump|stop|stopi|stopinsert|sts|stselect|sun|sunhide|sunm|sunmap|sus|suspend|sv|sview|syncbind|t|tN|tNext|ta|tab|tabN|tabNext|tabc|tabclose|tabd|tabdo|tabe|tabedit|tabf|tabfind|tabfir|tabfirst|tabl|tablast|tabm|tabmove|tabn|tabnew|tabnext|tabo|tabonly|tabp|tabprevious|tabr|tabrewind|tabs|tag|tags|tc|tcl|tcld|tcldo|tclf|tclfile|te|tearoff|tf|tfirst|th|throw|tj|tjump|tl|tlast|tm|tmenu|tn|tnext|to|topleft|tp|tprevious|tr|trewind|try|ts|tselect|tu|tunmenu|u|una|unabbreviate|undo|undoj|undojoin|undol|undolist|unh|unhide|unlet|unlo|unlockvar|unm|unmap|up|update|ve|verb|verbose|version|vert|vertical|vi|vie|view|vim|vimgrep|vimgrepa|vimgrepadd|visual|viu|viusage|vmapc|vmapclear|vne|vnew|vs|vsplit|vu|vunmap|w|wN|wNext|wa|wall|wh|while|win|winc|wincmd|windo|winp|winpos|winsize|wn|wnext|wp|wprevious|wq|wqa|wqall|write|ws|wsverb|wv|wviminfo|x|xa|xall|xit|xm|xmap|xmapc|xmapclear|xme|xmenu|xn|xnoremap|xnoreme|xnoremenu|xu|xunmap|y|yank)\b/,builtin:/\b(?:acd|ai|akm|aleph|allowrevins|altkeymap|ambiwidth|ambw|anti|antialias|arab|arabic|arabicshape|ari|arshape|autochdir|autocmd|autoindent|autoread|autowrite|autowriteall|aw|awa|background|backspace|backup|backupcopy|backupdir|backupext|backupskip|balloondelay|ballooneval|balloonexpr|bdir|bdlay|beval|bex|bexpr|bg|bh|bin|binary|biosk|bioskey|bk|bkc|bomb|breakat|brk|browsedir|bs|bsdir|bsk|bt|bufhidden|buflisted|buftype|casemap|ccv|cdpath|cedit|cfu|ch|charconvert|ci|cin|cindent|cink|cinkeys|cino|cinoptions|cinw|cinwords|clipboard|cmdheight|cmdwinheight|cmp|cms|columns|com|comments|commentstring|compatible|complete|completefunc|completeopt|consk|conskey|copyindent|cot|cpo|cpoptions|cpt|cscopepathcomp|cscopeprg|cscopequickfix|cscopetag|cscopetagorder|cscopeverbose|cspc|csprg|csqf|cst|csto|csverb|cuc|cul|cursorcolumn|cursorline|cwh|debug|deco|def|define|delcombine|dex|dg|dict|dictionary|diff|diffexpr|diffopt|digraph|dip|dir|directory|dy|ea|ead|eadirection|eb|ed|edcompatible|ef|efm|ei|ek|enc|encoding|endofline|eol|ep|equalalways|equalprg|errorbells|errorfile|errorformat|esckeys|et|eventignore|expandtab|exrc|fcl|fcs|fdc|fde|fdi|fdl|fdls|fdm|fdn|fdo|fdt|fen|fenc|fencs|fex|ff|ffs|fileencoding|fileencodings|fileformat|fileformats|fillchars|fk|fkmap|flp|fml|fmr|foldcolumn|foldenable|foldexpr|foldignore|foldlevel|foldlevelstart|foldmarker|foldmethod|foldminlines|foldnestmax|foldtext|formatexpr|formatlistpat|formatoptions|formatprg|fp|fs|fsync|ft|gcr|gd|gdefault|gfm|gfn|gfs|gfw|ghr|gp|grepformat|grepprg|gtl|gtt|guicursor|guifont|guifontset|guifontwide|guiheadroom|guioptions|guipty|guitablabel|guitabtooltip|helpfile|helpheight|helplang|hf|hh|hi|hidden|highlight|hk|hkmap|hkmapp|hkp|hl|hlg|hls|hlsearch|ic|icon|iconstring|ignorecase|im|imactivatekey|imak|imc|imcmdline|imd|imdisable|imi|iminsert|ims|imsearch|inc|include|includeexpr|incsearch|inde|indentexpr|indentkeys|indk|inex|inf|infercase|insertmode|invacd|invai|invakm|invallowrevins|invaltkeymap|invanti|invantialias|invar|invarab|invarabic|invarabicshape|invari|invarshape|invautochdir|invautoindent|invautoread|invautowrite|invautowriteall|invaw|invawa|invbackup|invballooneval|invbeval|invbin|invbinary|invbiosk|invbioskey|invbk|invbl|invbomb|invbuflisted|invcf|invci|invcin|invcindent|invcompatible|invconfirm|invconsk|invconskey|invcopyindent|invcp|invcscopetag|invcscopeverbose|invcst|invcsverb|invcuc|invcul|invcursorcolumn|invcursorline|invdeco|invdelcombine|invdg|invdiff|invdigraph|invdisable|invea|inveb|inved|invedcompatible|invek|invendofline|inveol|invequalalways|inverrorbells|invesckeys|invet|invex|invexpandtab|invexrc|invfen|invfk|invfkmap|invfoldenable|invgd|invgdefault|invguipty|invhid|invhidden|invhk|invhkmap|invhkmapp|invhkp|invhls|invhlsearch|invic|invicon|invignorecase|invim|invimc|invimcmdline|invimd|invincsearch|invinf|invinfercase|invinsertmode|invis|invjoinspaces|invjs|invlazyredraw|invlbr|invlinebreak|invlisp|invlist|invloadplugins|invlpl|invlz|invma|invmacatsui|invmagic|invmh|invml|invmod|invmodeline|invmodifiable|invmodified|invmore|invmousef|invmousefocus|invmousehide|invnu|invnumber|invodev|invopendevice|invpaste|invpi|invpreserveindent|invpreviewwindow|invprompt|invpvw|invreadonly|invremap|invrestorescreen|invrevins|invri|invrightleft|invrightleftcmd|invrl|invrlc|invro|invrs|invru|invruler|invsb|invsc|invscb|invscrollbind|invscs|invsecure|invsft|invshellslash|invshelltemp|invshiftround|invshortname|invshowcmd|invshowfulltag|invshowmatch|invshowmode|invsi|invsm|invsmartcase|invsmartindent|invsmarttab|invsmd|invsn|invsol|invspell|invsplitbelow|invsplitright|invspr|invsr|invssl|invsta|invstartofline|invstmp|invswapfile|invswf|invta|invtagbsearch|invtagrelative|invtagstack|invtbi|invtbidi|invtbs|invtermbidi|invterse|invtextauto|invtextmode|invtf|invtgst|invtildeop|invtimeout|invtitle|invto|invtop|invtr|invttimeout|invttybuiltin|invttyfast|invtx|invvb|invvisualbell|invwa|invwarn|invwb|invweirdinvert|invwfh|invwfw|invwildmenu|invwinfixheight|invwinfixwidth|invwiv|invwmnu|invwrap|invwrapscan|invwrite|invwriteany|invwritebackup|invws|isf|isfname|isi|isident|isk|iskeyword|isprint|joinspaces|js|key|keymap|keymodel|keywordprg|km|kmp|kp|langmap|langmenu|laststatus|lazyredraw|lbr|lcs|linebreak|lines|linespace|lisp|lispwords|listchars|loadplugins|lpl|lsp|lz|macatsui|magic|makeef|makeprg|matchpairs|matchtime|maxcombine|maxfuncdepth|maxmapdepth|maxmem|maxmempattern|maxmemtot|mco|mef|menuitems|mfd|mh|mis|mkspellmem|ml|mls|mm|mmd|mmp|mmt|modeline|modelines|modifiable|modified|more|mouse|mousef|mousefocus|mousehide|mousem|mousemodel|mouses|mouseshape|mouset|mousetime|mp|mps|msm|mzq|mzquantum|nf|noacd|noai|noakm|noallowrevins|noaltkeymap|noanti|noantialias|noar|noarab|noarabic|noarabicshape|noari|noarshape|noautochdir|noautoindent|noautoread|noautowrite|noautowriteall|noaw|noawa|nobackup|noballooneval|nobeval|nobin|nobinary|nobiosk|nobioskey|nobk|nobl|nobomb|nobuflisted|nocf|noci|nocin|nocindent|nocompatible|noconfirm|noconsk|noconskey|nocopyindent|nocp|nocscopetag|nocscopeverbose|nocst|nocsverb|nocuc|nocul|nocursorcolumn|nocursorline|nodeco|nodelcombine|nodg|nodiff|nodigraph|nodisable|noea|noeb|noed|noedcompatible|noek|noendofline|noeol|noequalalways|noerrorbells|noesckeys|noet|noex|noexpandtab|noexrc|nofen|nofk|nofkmap|nofoldenable|nogd|nogdefault|noguipty|nohid|nohidden|nohk|nohkmap|nohkmapp|nohkp|nohls|noic|noicon|noignorecase|noim|noimc|noimcmdline|noimd|noincsearch|noinf|noinfercase|noinsertmode|nois|nojoinspaces|nojs|nolazyredraw|nolbr|nolinebreak|nolisp|nolist|noloadplugins|nolpl|nolz|noma|nomacatsui|nomagic|nomh|noml|nomod|nomodeline|nomodifiable|nomodified|nomore|nomousef|nomousefocus|nomousehide|nonu|nonumber|noodev|noopendevice|nopaste|nopi|nopreserveindent|nopreviewwindow|noprompt|nopvw|noreadonly|noremap|norestorescreen|norevins|nori|norightleft|norightleftcmd|norl|norlc|noro|nors|noru|noruler|nosb|nosc|noscb|noscrollbind|noscs|nosecure|nosft|noshellslash|noshelltemp|noshiftround|noshortname|noshowcmd|noshowfulltag|noshowmatch|noshowmode|nosi|nosm|nosmartcase|nosmartindent|nosmarttab|nosmd|nosn|nosol|nospell|nosplitbelow|nosplitright|nospr|nosr|nossl|nosta|nostartofline|nostmp|noswapfile|noswf|nota|notagbsearch|notagrelative|notagstack|notbi|notbidi|notbs|notermbidi|noterse|notextauto|notextmode|notf|notgst|notildeop|notimeout|notitle|noto|notop|notr|nottimeout|nottybuiltin|nottyfast|notx|novb|novisualbell|nowa|nowarn|nowb|noweirdinvert|nowfh|nowfw|nowildmenu|nowinfixheight|nowinfixwidth|nowiv|nowmnu|nowrap|nowrapscan|nowrite|nowriteany|nowritebackup|nows|nrformats|numberwidth|nuw|odev|oft|ofu|omnifunc|opendevice|operatorfunc|opfunc|osfiletype|pa|para|paragraphs|paste|pastetoggle|patchexpr|patchmode|path|pdev|penc|pex|pexpr|pfn|ph|pheader|pi|pm|pmbcs|pmbfn|popt|preserveindent|previewheight|previewwindow|printdevice|printencoding|printexpr|printfont|printheader|printmbcharset|printmbfont|printoptions|prompt|pt|pumheight|pvh|pvw|qe|quoteescape|readonly|remap|report|restorescreen|revins|rightleft|rightleftcmd|rl|rlc|ro|rs|rtp|ruf|ruler|rulerformat|runtimepath|sbo|sc|scb|scr|scroll|scrollbind|scrolljump|scrolloff|scrollopt|scs|sect|sections|secure|sel|selection|selectmode|sessionoptions|sft|shcf|shellcmdflag|shellpipe|shellquote|shellredir|shellslash|shelltemp|shelltype|shellxquote|shiftround|shiftwidth|shm|shortmess|shortname|showbreak|showcmd|showfulltag|showmatch|showmode|showtabline|shq|si|sidescroll|sidescrolloff|siso|sj|slm|smartcase|smartindent|smarttab|smc|smd|softtabstop|sol|spc|spell|spellcapcheck|spellfile|spelllang|spellsuggest|spf|spl|splitbelow|splitright|sps|sr|srr|ss|ssl|ssop|stal|startofline|statusline|stl|stmp|su|sua|suffixes|suffixesadd|sw|swapfile|swapsync|swb|swf|switchbuf|sws|sxq|syn|synmaxcol|syntax|t_AB|t_AF|t_AL|t_CS|t_CV|t_Ce|t_Co|t_Cs|t_DL|t_EI|t_F1|t_F2|t_F3|t_F4|t_F5|t_F6|t_F7|t_F8|t_F9|t_IE|t_IS|t_K1|t_K3|t_K4|t_K5|t_K6|t_K7|t_K8|t_K9|t_KA|t_KB|t_KC|t_KD|t_KE|t_KF|t_KG|t_KH|t_KI|t_KJ|t_KK|t_KL|t_RI|t_RV|t_SI|t_Sb|t_Sf|t_WP|t_WS|t_ZH|t_ZR|t_al|t_bc|t_cd|t_ce|t_cl|t_cm|t_cs|t_da|t_db|t_dl|t_fs|t_k1|t_k2|t_k3|t_k4|t_k5|t_k6|t_k7|t_k8|t_k9|t_kB|t_kD|t_kI|t_kN|t_kP|t_kb|t_kd|t_ke|t_kh|t_kl|t_kr|t_ks|t_ku|t_le|t_mb|t_md|t_me|t_mr|t_ms|t_nd|t_op|t_se|t_so|t_sr|t_te|t_ti|t_ts|t_ue|t_us|t_ut|t_vb|t_ve|t_vi|t_vs|t_xs|tabline|tabpagemax|tabstop|tagbsearch|taglength|tagrelative|tagstack|tal|tb|tbi|tbidi|tbis|tbs|tenc|term|termbidi|termencoding|terse|textauto|textmode|textwidth|tgst|thesaurus|tildeop|timeout|timeoutlen|title|titlelen|titleold|titlestring|toolbar|toolbariconsize|top|tpm|tsl|tsr|ttimeout|ttimeoutlen|ttm|tty|ttybuiltin|ttyfast|ttym|ttymouse|ttyscroll|ttytype|tw|tx|uc|ul|undolevels|updatecount|updatetime|ut|vb|vbs|vdir|verbosefile|vfile|viewdir|viewoptions|viminfo|virtualedit|visualbell|vop|wak|warn|wb|wc|wcm|wd|weirdinvert|wfh|wfw|whichwrap|wi|wig|wildchar|wildcharm|wildignore|wildmenu|wildmode|wildoptions|wim|winaltkeys|window|winfixheight|winfixwidth|winheight|winminheight|winminwidth|winwidth|wiv|wiw|wm|wmh|wmnu|wmw|wop|wrap|wrapmargin|wrapscan|writeany|writebackup|writedelay|ww)\b/,number:/\b(?:0x[\da-f]+|\d+(?:\.\d+)?)\b/i,operator:/\|\||&&|[-+.]=?|[=!](?:[=~][#?]?)?|[<>]=?[#?]?|[*\/%?]|\b(?:is(?:not)?)\b/,punctuation:/[{}[\](),;:]/}},3082:function(){Prism.languages["visual-basic"]={comment:{pattern:/(?:['‘’]|REM\b)(?:[^\r\n_]|_(?:\r\n?|\n)?)*/i,inside:{keyword:/^REM/i}},directive:{pattern:/#(?:Const|Else|ElseIf|End|ExternalChecksum|ExternalSource|If|Region)(?:\b_[ \t]*(?:\r\n?|\n)|.)+/i,alias:"property",greedy:!0},string:{pattern:/\$?["“”](?:["“”]{2}|[^"“”])*["“”]C?/i,greedy:!0},date:{pattern:/#[ \t]*(?:\d+([/-])\d+\1\d+(?:[ \t]+(?:\d+[ \t]*(?:AM|PM)|\d+:\d+(?::\d+)?(?:[ \t]*(?:AM|PM))?))?|\d+[ \t]*(?:AM|PM)|\d+:\d+(?::\d+)?(?:[ \t]*(?:AM|PM))?)[ \t]*#/i,alias:"number"},number:/(?:(?:\b\d+(?:\.\d+)?|\.\d+)(?:E[+-]?\d+)?|&[HO][\dA-F]+)(?:[FRD]|U?[ILS])?/i,boolean:/\b(?:False|Nothing|True)\b/i,keyword:/\b(?:AddHandler|AddressOf|Alias|And(?:Also)?|As|Boolean|ByRef|Byte|ByVal|Call|Case|Catch|C(?:Bool|Byte|Char|Date|Dbl|Dec|Int|Lng|Obj|SByte|Short|Sng|Str|Type|UInt|ULng|UShort)|Char|Class|Const|Continue|Currency|Date|Decimal|Declare|Default|Delegate|Dim|DirectCast|Do|Double|Each|Else(?:If)?|End(?:If)?|Enum|Erase|Error|Event|Exit|Finally|For|Friend|Function|Get(?:Type|XMLNamespace)?|Global|GoSub|GoTo|Handles|If|Implements|Imports|In|Inherits|Integer|Interface|Is|IsNot|Let|Lib|Like|Long|Loop|Me|Mod|Module|Must(?:Inherit|Override)|My(?:Base|Class)|Namespace|Narrowing|New|Next|Not(?:Inheritable|Overridable)?|Object|Of|On|Operator|Option(?:al)?|Or(?:Else)?|Out|Overloads|Overridable|Overrides|ParamArray|Partial|Private|Property|Protected|Public|RaiseEvent|ReadOnly|ReDim|RemoveHandler|Resume|Return|SByte|Select|Set|Shadows|Shared|short|Single|Static|Step|Stop|String|Structure|Sub|SyncLock|Then|Throw|To|Try|TryCast|Type|TypeOf|U(?:Integer|Long|Short)|Until|Using|Variant|Wend|When|While|Widening|With(?:Events)?|WriteOnly|Xor)\b/i,operator:/[+\-*/\\^<=>&#@$%!]|\b_(?=[ \t]*[\r\n])/,punctuation:/[{}().,:?]/},Prism.languages.vb=Prism.languages["visual-basic"],Prism.languages.vba=Prism.languages["visual-basic"]},8:function(){Prism.languages.warpscript={comment:/#.*|\/\/.*|\/\*[\s\S]*?\*\//,string:{pattern:/"(?:[^"\\\r\n]|\\.)*"|'(?:[^'\\\r\n]|\\.)*'|<'(?:[^\\']|'(?!>)|\\.)*'>/,greedy:!0},variable:/\$\S+/,macro:{pattern:/@\S+/,alias:"property"},keyword:/\b(?:BREAK|CHECKMACRO|CONTINUE|CUDF|DEFINED|DEFINEDMACRO|EVAL|FAIL|FOR|FOREACH|FORSTEP|IFT|IFTE|MSGFAIL|NRETURN|RETHROW|RETURN|SWITCH|TRY|UDF|UNTIL|WHILE)\b/,number:/[+-]?\b(?:NaN|Infinity|\d+(?:\.\d*)?(?:[Ee][+-]?\d+)?|0x[\da-fA-F]+|0b[01]+)\b/,boolean:/\b(?:F|T|false|true)\b/,punctuation:/<%|%>|[{}[\]()]/,operator:/==|&&?|\|\|?|\*\*?|>>>?|<<|[<>!~]=?|[-/%^]|\+!?|\b(?:AND|NOT|OR)\b/}},5774:function(){Prism.languages.wasm={comment:[/\(;[\s\S]*?;\)/,{pattern:/;;.*/,greedy:!0}],string:{pattern:/"(?:\\[\s\S]|[^"\\])*"/,greedy:!0},keyword:[{pattern:/\b(?:align|offset)=/,inside:{operator:/=/}},{pattern:/\b(?:(?:f32|f64|i32|i64)(?:\.(?:abs|add|and|ceil|clz|const|convert_[su]\/i(?:32|64)|copysign|ctz|demote\/f64|div(?:_[su])?|eqz?|extend_[su]\/i32|floor|ge(?:_[su])?|gt(?:_[su])?|le(?:_[su])?|load(?:(?:8|16|32)_[su])?|lt(?:_[su])?|max|min|mul|neg?|nearest|or|popcnt|promote\/f32|reinterpret\/[fi](?:32|64)|rem_[su]|rot[lr]|shl|shr_[su]|sqrt|store(?:8|16|32)?|sub|trunc(?:_[su]\/f(?:32|64))?|wrap\/i64|xor))?|memory\.(?:grow|size))\b/,inside:{punctuation:/\./}},/\b(?:anyfunc|block|br(?:_if|_table)?|call(?:_indirect)?|data|drop|elem|else|end|export|func|get_(?:global|local)|global|if|import|local|loop|memory|module|mut|nop|offset|param|result|return|select|set_(?:global|local)|start|table|tee_local|then|type|unreachable)\b/],variable:/\$[\w!#$%&'*+\-./:<=>?@\\^`|~]+/,number:/[+-]?\b(?:\d(?:_?\d)*(?:\.\d(?:_?\d)*)?(?:[eE][+-]?\d(?:_?\d)*)?|0x[\da-fA-F](?:_?[\da-fA-F])*(?:\.[\da-fA-F](?:_?[\da-fA-D])*)?(?:[pP][+-]?\d(?:_?\d)*)?)\b|\binf\b|\bnan(?::0x[\da-fA-F](?:_?[\da-fA-D])*)?\b/,punctuation:/[()]/}},4040:function(){(function(e){var t=/(?:\B-|\b_|\b)[A-Za-z][\w-]*(?![\w-])/.source,n="(?:"+/\b(?:unsigned\s+)?long\s+long(?![\w-])/.source+"|"+/\b(?:unrestricted|unsigned)\s+[a-z]+(?![\w-])/.source+"|"+/(?!(?:unrestricted|unsigned)\b)/.source+t+/(?:\s*<(?:[^<>]|<[^<>]*>)*>)?/.source+")"+/(?:\s*\?)?/.source,r={};for(var i in e.languages["web-idl"]={comment:{pattern:/\/\/.*|\/\*[\s\S]*?\*\//,greedy:!0},string:{pattern:/"[^"]*"/,greedy:!0},namespace:{pattern:RegExp(/(\bnamespace\s+)/.source+t),lookbehind:!0},"class-name":[{pattern:/(^|[^\w-])(?:iterable|maplike|setlike)\s*<(?:[^<>]|<[^<>]*>)*>/,lookbehind:!0,inside:r},{pattern:RegExp(/(\b(?:attribute|const|deleter|getter|optional|setter)\s+)/.source+n),lookbehind:!0,inside:r},{pattern:RegExp("("+/\bcallback\s+/.source+t+/\s*=\s*/.source+")"+n),lookbehind:!0,inside:r},{pattern:RegExp(/(\btypedef\b\s*)/.source+n),lookbehind:!0,inside:r},{pattern:RegExp(/(\b(?:callback|dictionary|enum|interface(?:\s+mixin)?)\s+)(?!(?:interface|mixin)\b)/.source+t),lookbehind:!0},{pattern:RegExp(/(:\s*)/.source+t),lookbehind:!0},RegExp(t+/(?=\s+(?:implements|includes)\b)/.source),{pattern:RegExp(/(\b(?:implements|includes)\s+)/.source+t),lookbehind:!0},{pattern:RegExp(n+"(?="+/\s*(?:\.{3}\s*)?/.source+t+/\s*[(),;=]/.source+")"),inside:r}],builtin:/\b(?:ArrayBuffer|BigInt64Array|BigUint64Array|ByteString|DOMString|DataView|Float32Array|Float64Array|FrozenArray|Int16Array|Int32Array|Int8Array|ObservableArray|Promise|USVString|Uint16Array|Uint32Array|Uint8Array|Uint8ClampedArray)\b/,keyword:[/\b(?:async|attribute|callback|const|constructor|deleter|dictionary|enum|getter|implements|includes|inherit|interface|mixin|namespace|null|optional|or|partial|readonly|required|setter|static|stringifier|typedef|unrestricted)\b/,/\b(?:any|bigint|boolean|byte|double|float|iterable|long|maplike|object|octet|record|sequence|setlike|short|symbol|undefined|unsigned|void)\b/],boolean:/\b(?:false|true)\b/,number:{pattern:/(^|[^\w-])-?(?:0x[0-9a-f]+|(?:\d+(?:\.\d*)?|\.\d+)(?:e[+-]?\d+)?|NaN|Infinity)(?![\w-])/i,lookbehind:!0},operator:/\.{3}|[=:?<>-]/,punctuation:/[(){}[\].,;]/},e.languages["web-idl"])"class-name"!==i&&(r[i]=e.languages["web-idl"][i]);e.languages["webidl"]=e.languages["web-idl"]})(Prism)},230:function(){Prism.languages.wgsl={comment:{pattern:/\/\/.*|\/\*[\s\S]*?(?:\*\/|$)/,greedy:!0},"builtin-attribute":{pattern:/(@)builtin\(.*?\)/,lookbehind:!0,inside:{attribute:{pattern:/^builtin/,alias:"attr-name"},punctuation:/[(),]/,"built-in-values":{pattern:/\b(?:frag_depth|front_facing|global_invocation_id|instance_index|local_invocation_id|local_invocation_index|num_workgroups|position|sample_index|sample_mask|vertex_index|workgroup_id)\b/,alias:"attr-value"}}},attributes:{pattern:/(@)(?:align|binding|compute|const|fragment|group|id|interpolate|invariant|location|size|vertex|workgroup_size)/i,lookbehind:!0,alias:"attr-name"},functions:{pattern:/\b(fn\s+)[_a-zA-Z]\w*(?=[(<])/,lookbehind:!0,alias:"function"},keyword:/\b(?:bitcast|break|case|const|continue|continuing|default|discard|else|enable|fallthrough|fn|for|function|if|let|loop|private|return|storage|struct|switch|type|uniform|var|while|workgroup)\b/,builtin:/\b(?:abs|acos|acosh|all|any|array|asin|asinh|atan|atan2|atanh|atomic|atomicAdd|atomicAnd|atomicCompareExchangeWeak|atomicExchange|atomicLoad|atomicMax|atomicMin|atomicOr|atomicStore|atomicSub|atomicXor|bool|ceil|clamp|cos|cosh|countLeadingZeros|countOneBits|countTrailingZeros|cross|degrees|determinant|distance|dot|dpdx|dpdxCoarse|dpdxFine|dpdy|dpdyCoarse|dpdyFine|exp|exp2|extractBits|f32|f64|faceForward|firstLeadingBit|floor|fma|fract|frexp|fwidth|fwidthCoarse|fwidthFine|i32|i64|insertBits|inverseSqrt|ldexp|length|log|log2|mat[2-4]x[2-4]|max|min|mix|modf|normalize|override|pack2x16float|pack2x16snorm|pack2x16unorm|pack4x8snorm|pack4x8unorm|pow|ptr|quantizeToF16|radians|reflect|refract|reverseBits|round|sampler|sampler_comparison|select|shiftLeft|shiftRight|sign|sin|sinh|smoothstep|sqrt|staticAssert|step|storageBarrier|tan|tanh|textureDimensions|textureGather|textureGatherCompare|textureLoad|textureNumLayers|textureNumLevels|textureNumSamples|textureSample|textureSampleBias|textureSampleCompare|textureSampleCompareLevel|textureSampleGrad|textureSampleLevel|textureStore|texture_1d|texture_2d|texture_2d_array|texture_3d|texture_cube|texture_cube_array|texture_depth_2d|texture_depth_2d_array|texture_depth_cube|texture_depth_cube_array|texture_depth_multisampled_2d|texture_multisampled_2d|texture_storage_1d|texture_storage_2d|texture_storage_2d_array|texture_storage_3d|transpose|trunc|u32|u64|unpack2x16float|unpack2x16snorm|unpack2x16unorm|unpack4x8snorm|unpack4x8unorm|vec[2-4]|workgroupBarrier)\b/,"function-calls":{pattern:/\b[_a-z]\w*(?=\()/i,alias:"function"},"class-name":/\b(?:[A-Z][A-Za-z0-9]*)\b/,"bool-literal":{pattern:/\b(?:false|true)\b/,alias:"boolean"},"hex-int-literal":{pattern:/\b0[xX][0-9a-fA-F]+[iu]?\b(?![.pP])/,alias:"number"},"hex-float-literal":{pattern:/\b0[xX][0-9a-fA-F]*(?:\.[0-9a-fA-F]*)?(?:[pP][+-]?\d+[fh]?)?/,alias:"number"},"decimal-float-literal":[{pattern:/\d*\.\d+(?:[eE](?:\+|-)?\d+)?[fh]?/,alias:"number"},{pattern:/\d+\.\d*(?:[eE](?:\+|-)?\d+)?[fh]?/,alias:"number"},{pattern:/\d+[eE](?:\+|-)?\d+[fh]?/,alias:"number"},{pattern:/\b\d+[fh]\b/,alias:"number"}],"int-literal":{pattern:/\b\d+[iu]?\b/,alias:"number"},operator:[{pattern:/(?:\^|~|\|(?!\|)|\|\||&&|<<|>>|!)(?!=)/},{pattern:/&(?![&=])/},{pattern:/(?:\+=|-=|\*=|\/=|%=|\^=|&=|\|=|<<=|>>=)/},{pattern:/(^|[^<>=!])=(?![=>])/,lookbehind:!0},{pattern:/(?:==|!=|<=|\+\+|--|(^|[^=])>=)/,lookbehind:!0},{pattern:/(?:(?:[+%]|(?:\*(?!\w)))(?!=))|(?:-(?!>))|(?:\/(?!\/))/},{pattern:/->/}],punctuation:/[@(){}[\],;<>:.]/}},1693:function(){Prism.languages.wiki=Prism.languages.extend("markup",{"block-comment":{pattern:/(^|[^\\])\/\*[\s\S]*?\*\//,lookbehind:!0,alias:"comment"},heading:{pattern:/^(=+)[^=\r\n].*?\1/m,inside:{punctuation:/^=+|=+$/,important:/.+/}},emphasis:{pattern:/('{2,5}).+?\1/,inside:{"bold-italic":{pattern:/(''''').+?(?=\1)/,lookbehind:!0,alias:["bold","italic"]},bold:{pattern:/(''')[^'](?:.*?[^'])?(?=\1)/,lookbehind:!0},italic:{pattern:/('')[^'](?:.*?[^'])?(?=\1)/,lookbehind:!0},punctuation:/^''+|''+$/}},hr:{pattern:/^-{4,}/m,alias:"punctuation"},url:[/ISBN +(?:97[89][ -]?)?(?:\d[ -]?){9}[\dx]\b|(?:PMID|RFC) +\d+/i,/\[\[.+?\]\]|\[.+?\]/],variable:[/__[A-Z]+__/,/\{{3}.+?\}{3}/,/\{\{.+?\}\}/],symbol:[/^#redirect/im,/~{3,5}/],"table-tag":{pattern:/((?:^|[|!])[|!])[^|\r\n]+\|(?!\|)/m,lookbehind:!0,inside:{"table-bar":{pattern:/\|$/,alias:"punctuation"},rest:Prism.languages.markup["tag"].inside}},punctuation:/^(?:\{\||\|\}|\|-|[*#:;!|])|\|\||!!/m}),Prism.languages.insertBefore("wiki","tag",{nowiki:{pattern:/<(nowiki|pre|source)\b[^>]*>[\s\S]*?<\/\1>/i,inside:{tag:{pattern:/<(?:nowiki|pre|source)\b[^>]*>|<\/(?:nowiki|pre|source)>/i,inside:Prism.languages.markup["tag"].inside}}}})},9729:function(){Prism.languages.wolfram={comment:/\(\*(?:\(\*(?:[^*]|\*(?!\)))*\*\)|(?!\(\*)[\s\S])*?\*\)/,string:{pattern:/"(?:\\.|[^"\\\r\n])*"/,greedy:!0},keyword:/\b(?:Abs|AbsArg|Accuracy|Block|Do|For|Function|If|Manipulate|Module|Nest|NestList|None|Return|Switch|Table|Which|While)\b/,context:{pattern:/\b\w+`+\w*/,alias:"class-name"},blank:{pattern:/\b\w+_\b/,alias:"regex"},"global-variable":{pattern:/\$\w+/,alias:"variable"},boolean:/\b(?:False|True)\b/,number:/(?:\b(?=\d)|\B(?=\.))(?:0[bo])?(?:(?:\d|0x[\da-f])[\da-f]*(?:\.\d*)?|\.\d+)(?:e[+-]?\d+)?j?\b/i,operator:/\/\.|;|=\.|\^=|\^:=|:=|<<|>>|<\||\|>|:>|\|->|->|<-|@@@|@@|@|\/@|=!=|===|==|=|\+|-|\[\/-+%=\]=?|!=|\*\*?=?|\/\/?=?|<[<=>]?|>[=>]?|[&|^~]/,punctuation:/[{}[\];(),.:]/},Prism.languages.mathematica=Prism.languages.wolfram,Prism.languages.wl=Prism.languages.wolfram,Prism.languages.nb=Prism.languages.wolfram},5682:function(){Prism.languages.wren={comment:[{pattern:/\/\*(?:[^*/]|\*(?!\/)|\/(?!\*)|\/\*(?:[^*/]|\*(?!\/)|\/(?!\*)|\/\*(?:[^*/]|\*(?!\/)|\/(?!\*))*\*\/)*\*\/)*\*\//,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],"triple-quoted-string":{pattern:/"""[\s\S]*?"""/,greedy:!0,alias:"string"},"string-literal":null,hashbang:{pattern:/^#!\/.+/,greedy:!0,alias:"comment"},attribute:{pattern:/#!?[ \t\u3000]*\w+/,alias:"keyword"},"class-name":[{pattern:/(\bclass\s+)\w+/,lookbehind:!0},/\b[A-Z][a-z\d_]*\b/],constant:/\b[A-Z][A-Z\d_]*\b/,null:{pattern:/\bnull\b/,alias:"keyword"},keyword:/\b(?:as|break|class|construct|continue|else|for|foreign|if|import|in|is|return|static|super|this|var|while)\b/,boolean:/\b(?:false|true)\b/,number:/\b(?:0x[\da-f]+|\d+(?:\.\d+)?(?:e[+-]?\d+)?)\b/i,function:/\b[a-z_]\w*(?=\s*[({])/i,operator:/<<|>>|[=!<>]=?|&&|\|\||[-+*/%~^&|?:]|\.{2,3}/,punctuation:/[\[\](){}.,;]/},Prism.languages.wren["string-literal"]={pattern:/(^|[^\\"])"(?:[^\\"%]|\\[\s\S]|%(?!\()|%\((?:[^()]|\((?:[^()]|\([^)]*\))*\))*\))*"/,lookbehind:!0,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)%\((?:[^()]|\((?:[^()]|\([^)]*\))*\))*\)/,lookbehind:!0,inside:{expression:{pattern:/^(%\()[\s\S]+(?=\)$)/,lookbehind:!0,inside:Prism.languages.wren},"interpolation-punctuation":{pattern:/^%\(|\)$/,alias:"punctuation"}}},string:/[\s\S]+/}}},504:function(){(function(e){e.languages.xeora=e.languages.extend("markup",{constant:{pattern:/\$(?:DomainContents|PageRenderDuration)\$/,inside:{punctuation:{pattern:/\$/}}},variable:{pattern:/\$@?(?:#+|[-+*~=^])?[\w.]+\$/,inside:{punctuation:{pattern:/[$.]/},operator:{pattern:/#+|[-+*~=^@]/}}},"function-inline":{pattern:/\$F:[-\w.]+\?[-\w.]+(?:,(?:(?:@[-#]*\w+\.[\w+.]\.*)*\|)*(?:(?:[\w+]|[-#*.~^]+[\w+]|=\S)(?:[^$=]|=+[^=])*=*|(?:@[-#]*\w+\.[\w+.]\.*)+(?:(?:[\w+]|[-#*~^][-#*.~^]*[\w+]|=\S)(?:[^$=]|=+[^=])*=*)?)?)?\$/,inside:{variable:{pattern:/(?:[,|])@?(?:#+|[-+*~=^])?[\w.]+/,inside:{punctuation:{pattern:/[,.|]/},operator:{pattern:/#+|[-+*~=^@]/}}},punctuation:{pattern:/\$\w:|[$:?.,|]/}},alias:"function"},"function-block":{pattern:/\$XF:\{[-\w.]+\?[-\w.]+(?:,(?:(?:@[-#]*\w+\.[\w+.]\.*)*\|)*(?:(?:[\w+]|[-#*.~^]+[\w+]|=\S)(?:[^$=]|=+[^=])*=*|(?:@[-#]*\w+\.[\w+.]\.*)+(?:(?:[\w+]|[-#*~^][-#*.~^]*[\w+]|=\S)(?:[^$=]|=+[^=])*=*)?)?)?\}:XF\$/,inside:{punctuation:{pattern:/[$:{}?.,|]/}},alias:"function"},"directive-inline":{pattern:/\$\w(?:#\d+\+?)?(?:\[[-\w.]+\])?:[-\/\w.]+\$/,inside:{punctuation:{pattern:/\$(?:\w:|C(?:\[|#\d))?|[:{[\]]/,inside:{tag:{pattern:/#\d/}}}},alias:"function"},"directive-block-open":{pattern:/\$\w+:\{|\$\w(?:#\d+\+?)?(?:\[[-\w.]+\])?:[-\w.]+:\{(?:![A-Z]+)?/,inside:{punctuation:{pattern:/\$(?:\w:|C(?:\[|#\d))?|[:{[\]]/,inside:{tag:{pattern:/#\d/}}},attribute:{pattern:/![A-Z]+$/,inside:{punctuation:{pattern:/!/}},alias:"keyword"}},alias:"function"},"directive-block-separator":{pattern:/\}:[-\w.]+:\{/,inside:{punctuation:{pattern:/[:{}]/}},alias:"function"},"directive-block-close":{pattern:/\}:[-\w.]+\$/,inside:{punctuation:{pattern:/[:{}$]/}},alias:"function"}}),e.languages.insertBefore("inside","punctuation",{variable:e.languages.xeora["function-inline"].inside["variable"]},e.languages.xeora["function-block"]),e.languages.xeoracube=e.languages.xeora})(Prism)},2349:function(){(function(e){function t(t,n){e.languages[t]&&e.languages.insertBefore(t,"comment",{"doc-comment":n})}var n=e.languages.markup.tag,r={pattern:/\/\/\/.*/,greedy:!0,alias:"comment",inside:{tag:n}},i={pattern:/'''.*/,greedy:!0,alias:"comment",inside:{tag:n}};t("csharp",r),t("fsharp",r),t("vbnet",i)})(Prism)},2449:function(){Prism.languages.xojo={comment:{pattern:/(?:'|\/\/|Rem\b).+/i,greedy:!0},string:{pattern:/"(?:""|[^"])*"/,greedy:!0},number:[/(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:E[+-]?\d+)?/i,/&[bchou][a-z\d]+/i],directive:{pattern:/#(?:Else|ElseIf|Endif|If|Pragma)\b/i,alias:"property"},keyword:/\b(?:AddHandler|App|Array|As(?:signs)?|Auto|Boolean|Break|By(?:Ref|Val)|Byte|Call|Case|Catch|CFStringRef|CGFloat|Class|Color|Const|Continue|CString|Currency|CurrentMethodName|Declare|Delegate|Dim|Do(?:uble|wnTo)?|Each|Else(?:If)?|End|Enumeration|Event|Exception|Exit|Extends|False|Finally|For|Function|Get|GetTypeInfo|Global|GOTO|If|Implements|In|Inherits|Int(?:8|16|32|64|eger|erface)?|Lib|Loop|Me|Module|Next|Nil|Object|Optional|OSType|ParamArray|Private|Property|Protected|PString|Ptr|Raise(?:Event)?|ReDim|RemoveHandler|Return|Select(?:or)?|Self|Set|Shared|Short|Single|Soft|Static|Step|String|Sub|Super|Text|Then|To|True|Try|Ubound|UInt(?:8|16|32|64|eger)?|Until|Using|Var(?:iant)?|Wend|While|WindowPtr|WString)\b/i,operator:/<[=>]?|>=?|[+\-*\/\\^=]|\b(?:AddressOf|And|Ctype|IsA?|Mod|New|Not|Or|WeakAddressOf|Xor)\b/i,punctuation:/[.,;:()]/}},9938:function(){(function(e){e.languages.xquery=e.languages.extend("markup",{"xquery-comment":{pattern:/\(:[\s\S]*?:\)/,greedy:!0,alias:"comment"},string:{pattern:/(["'])(?:\1\1|(?!\1)[\s\S])*\1/,greedy:!0},extension:{pattern:/\(#.+?#\)/,alias:"symbol"},variable:/\$[-\w:]+/,axis:{pattern:/(^|[^-])(?:ancestor(?:-or-self)?|attribute|child|descendant(?:-or-self)?|following(?:-sibling)?|parent|preceding(?:-sibling)?|self)(?=::)/,lookbehind:!0,alias:"operator"},"keyword-operator":{pattern:/(^|[^:-])\b(?:and|castable as|div|eq|except|ge|gt|idiv|instance of|intersect|is|le|lt|mod|ne|or|union)\b(?=$|[^:-])/,lookbehind:!0,alias:"operator"},keyword:{pattern:/(^|[^:-])\b(?:as|ascending|at|base-uri|boundary-space|case|cast as|collation|construction|copy-namespaces|declare|default|descending|else|empty (?:greatest|least)|encoding|every|external|for|function|if|import|in|inherit|lax|let|map|module|namespace|no-inherit|no-preserve|option|order(?: by|ed|ing)?|preserve|return|satisfies|schema|some|stable|strict|strip|then|to|treat as|typeswitch|unordered|validate|variable|version|where|xquery)\b(?=$|[^:-])/,lookbehind:!0},function:/[\w-]+(?::[\w-]+)*(?=\s*\()/,"xquery-element":{pattern:/(element\s+)[\w-]+(?::[\w-]+)*/,lookbehind:!0,alias:"tag"},"xquery-attribute":{pattern:/(attribute\s+)[\w-]+(?::[\w-]+)*/,lookbehind:!0,alias:"attr-name"},builtin:{pattern:/(^|[^:-])\b(?:attribute|comment|document|element|processing-instruction|text|xs:(?:ENTITIES|ENTITY|ID|IDREFS?|NCName|NMTOKENS?|NOTATION|Name|QName|anyAtomicType|anyType|anyURI|base64Binary|boolean|byte|date|dateTime|dayTimeDuration|decimal|double|duration|float|gDay|gMonth|gMonthDay|gYear|gYearMonth|hexBinary|int|integer|language|long|negativeInteger|nonNegativeInteger|nonPositiveInteger|normalizedString|positiveInteger|short|string|time|token|unsigned(?:Byte|Int|Long|Short)|untyped(?:Atomic)?|yearMonthDuration))\b(?=$|[^:-])/,lookbehind:!0},number:/\b\d+(?:\.\d+)?(?:E[+-]?\d+)?/,operator:[/[+*=?|@]|\.\.?|:=|!=|<[=<]?|>[=>]?/,{pattern:/(\s)-(?=\s)/,lookbehind:!0}],punctuation:/[[\](){},;:/]/}),e.languages.xquery.tag.pattern=/<\/?(?!\d)[^\s>\/=$<%]+(?:\s+[^\s>\/=]+(?:=(?:("|')(?:\\[\s\S]|\{(?!\{)(?:\{(?:\{[^{}]*\}|[^{}])*\}|[^{}])+\}|(?!\1)[^\\])*\1|[^\s'">=]+))?)*\s*\/?>/,e.languages.xquery["tag"].inside["attr-value"].pattern=/=(?:("|')(?:\\[\s\S]|\{(?!\{)(?:\{(?:\{[^{}]*\}|[^{}])*\}|[^{}])+\}|(?!\1)[^\\])*\1|[^\s'">=]+)/,e.languages.xquery["tag"].inside["attr-value"].inside["punctuation"]=/^="|"$/,e.languages.xquery["tag"].inside["attr-value"].inside["expression"]={pattern:/\{(?!\{)(?:\{(?:\{[^{}]*\}|[^{}])*\}|[^{}])+\}/,inside:e.languages.xquery,alias:"language-xquery"};var t=function(e){return"string"===typeof e?e:"string"===typeof e.content?e.content:e.content.map(t).join("")},n=function(r){for(var i=[],s=0;s0&&i[i.length-1].tagName===t(o.content[0].content[1])&&i.pop():"/>"===o.content[o.content.length-1].content||i.push({tagName:t(o.content[0].content[1]),openedBraces:0}):!(i.length>0&&"punctuation"===o.type&&"{"===o.content)||r[s+1]&&"punctuation"===r[s+1].type&&"{"===r[s+1].content||r[s-1]&&"plain-text"===r[s-1].type&&"{"===r[s-1].content?i.length>0&&i[i.length-1].openedBraces>0&&"punctuation"===o.type&&"}"===o.content?i[i.length-1].openedBraces--:"comment"!==o.type&&(a=!0):i[i.length-1].openedBraces++),(a||"string"===typeof o)&&i.length>0&&0===i[i.length-1].openedBraces){var l=t(o);s0&&("string"===typeof r[s-1]||"plain-text"===r[s-1].type)&&(l=t(r[s-1])+l,r.splice(s-1,1),s--),/^\s+$/.test(l)?r[s]=l:r[s]=new e.Token("plain-text",l,null,l)}o.content&&"string"!==typeof o.content&&n(o.content)}};e.hooks.add("after-tokenize",(function(e){"xquery"===e.language&&n(e.tokens)}))})(Prism)},3358:function(){(function(e){var t=/[*&][^\s[\]{},]+/,n=/!(?:<[\w\-%#;/?:@&=+$,.!~*'()[\]]+>|(?:[a-zA-Z\d-]*!)?[\w\-%#;/?:@&=+$.~*'()]+)?/,r="(?:"+n.source+"(?:[ \t]+"+t.source+")?|"+t.source+"(?:[ \t]+"+n.source+")?)",i=/(?:[^\s\x00-\x08\x0e-\x1f!"#%&'*,\-:>?@[\]`{|}\x7f-\x84\x86-\x9f\ud800-\udfff\ufffe\uffff]|[?:-])(?:[ \t]*(?:(?![#:])|:))*/.source.replace(//g,(function(){return/[^\s\x00-\x08\x0e-\x1f,[\]{}\x7f-\x84\x86-\x9f\ud800-\udfff\ufffe\uffff]/.source})),s=/"(?:[^"\\\r\n]|\\.)*"|'(?:[^'\\\r\n]|\\.)*'/.source;function o(e,t){t=(t||"").replace(/m/g,"")+"m";var n=/([:\-,[{]\s*(?:\s<>[ \t]+)?)(?:<>)(?=[ \t]*(?:$|,|\]|\}|(?:[\r\n]\s*)?#))/.source.replace(/<>/g,(function(){return r})).replace(/<>/g,(function(){return e}));return RegExp(n,t)}e.languages.yaml={scalar:{pattern:RegExp(/([\-:]\s*(?:\s<>[ \t]+)?[|>])[ \t]*(?:((?:\r?\n|\r)[ \t]+)\S[^\r\n]*(?:\2[^\r\n]+)*)/.source.replace(/<>/g,(function(){return r}))),lookbehind:!0,alias:"string"},comment:/#.*/,key:{pattern:RegExp(/((?:^|[:\-,[{\r\n?])[ \t]*(?:<>[ \t]+)?)<>(?=\s*:\s)/.source.replace(/<>/g,(function(){return r})).replace(/<>/g,(function(){return"(?:"+i+"|"+s+")"}))),lookbehind:!0,greedy:!0,alias:"atrule"},directive:{pattern:/(^[ \t]*)%.+/m,lookbehind:!0,alias:"important"},datetime:{pattern:o(/\d{4}-\d\d?-\d\d?(?:[tT]|[ \t]+)\d\d?:\d{2}:\d{2}(?:\.\d*)?(?:[ \t]*(?:Z|[-+]\d\d?(?::\d{2})?))?|\d{4}-\d{2}-\d{2}|\d\d?:\d{2}(?::\d{2}(?:\.\d*)?)?/.source),lookbehind:!0,alias:"number"},boolean:{pattern:o(/false|true/.source,"i"),lookbehind:!0,alias:"important"},null:{pattern:o(/null|~/.source,"i"),lookbehind:!0,alias:"important"},string:{pattern:o(s),lookbehind:!0,greedy:!0},number:{pattern:o(/[+-]?(?:0x[\da-f]+|0o[0-7]+|(?:\d+(?:\.\d*)?|\.\d+)(?:e[+-]?\d+)?|\.inf|\.nan)/.source,"i"),lookbehind:!0},tag:n,important:t,punctuation:/---|[:[\]{}\-,|>?]|\.\.\./},e.languages.yml=e.languages.yaml})(Prism)},2982:function(){Prism.languages.yang={comment:/\/\*[\s\S]*?\*\/|\/\/.*/,string:{pattern:/"(?:[^\\"]|\\.)*"|'[^']*'/,greedy:!0},keyword:{pattern:/(^|[{};\r\n][ \t]*)[a-z_][\w.-]*/i,lookbehind:!0},namespace:{pattern:/(\s)[a-z_][\w.-]*(?=:)/i,lookbehind:!0},boolean:/\b(?:false|true)\b/,operator:/\+/,punctuation:/[{};:]/}},857:function(){(function(e){function t(e){return function(){return e}}var n=/\b(?:align|allowzero|and|anyframe|anytype|asm|async|await|break|cancel|catch|comptime|const|continue|defer|else|enum|errdefer|error|export|extern|fn|for|if|inline|linksection|nakedcc|noalias|nosuspend|null|or|orelse|packed|promise|pub|resume|return|stdcallcc|struct|suspend|switch|test|threadlocal|try|undefined|union|unreachable|usingnamespace|var|volatile|while)\b/,r="\\b(?!"+n.source+")(?!\\d)\\w+\\b",i=/align\s*\((?:[^()]|\([^()]*\))*\)/.source,s=/(?:\?|\bpromise->|(?:\[[^[\]]*\]|\*(?!\*)|\*\*)(?:\s*|\s*const\b|\s*volatile\b|\s*allowzero\b)*)/.source.replace(//g,t(i)),o=/(?:\bpromise\b|(?:\berror\.)?(?:\.)*(?!\s+))/.source.replace(//g,t(r)),a="(?!\\s)(?:!?\\s*(?:"+s+"\\s*)*"+o+")+";e.languages.zig={comment:[{pattern:/\/\/[/!].*/,alias:"doc-comment"},/\/{2}.*/],string:[{pattern:/(^|[^\\@])c?"(?:[^"\\\r\n]|\\.)*"/,lookbehind:!0,greedy:!0},{pattern:/([\r\n])([ \t]+c?\\{2}).*(?:(?:\r\n?|\n)\2.*)*/,lookbehind:!0,greedy:!0}],char:{pattern:/(^|[^\\])'(?:[^'\\\r\n]|[\uD800-\uDFFF]{2}|\\(?:.|x[a-fA-F\d]{2}|u\{[a-fA-F\d]{1,6}\}))'/,lookbehind:!0,greedy:!0},builtin:/\B@(?!\d)\w+(?=\s*\()/,label:{pattern:/(\b(?:break|continue)\s*:\s*)\w+\b|\b(?!\d)\w+\b(?=\s*:\s*(?:\{|while\b))/,lookbehind:!0},"class-name":[/\b(?!\d)\w+(?=\s*=\s*(?:(?:extern|packed)\s+)?(?:enum|struct|union)\s*[({])/,{pattern:RegExp(/(:\s*)(?=\s*(?:\s*)?[=;,)])|(?=\s*(?:\s*)?\{)/.source.replace(//g,t(a)).replace(//g,t(i))),lookbehind:!0,inside:null},{pattern:RegExp(/(\)\s*)(?=\s*(?:\s*)?;)/.source.replace(//g,t(a)).replace(//g,t(i))),lookbehind:!0,inside:null}],"builtin-type":{pattern:/\b(?:anyerror|bool|c_u?(?:int|long|longlong|short)|c_longdouble|c_void|comptime_(?:float|int)|f(?:16|32|64|128)|[iu](?:8|16|32|64|128|size)|noreturn|type|void)\b/,alias:"keyword"},keyword:n,function:/\b(?!\d)\w+(?=\s*\()/,number:/\b(?:0b[01]+|0o[0-7]+|0x[a-fA-F\d]+(?:\.[a-fA-F\d]*)?(?:[pP][+-]?[a-fA-F\d]+)?|\d+(?:\.\d*)?(?:[eE][+-]?\d+)?)\b/,boolean:/\b(?:false|true)\b/,operator:/\.[*?]|\.{2,3}|[-=]>|\*\*|\+\+|\|\||(?:<<|>>|[-+*]%|[-+*/%^&|<>!=])=?|[?~]/,punctuation:/[.:,;(){}[\]]/},e.languages.zig["class-name"].forEach((function(t){null===t.inside&&(t.inside=e.languages.zig)}))})(Prism)},2587:function(e){"use strict";function t(e,t){return Object.prototype.hasOwnProperty.call(e,t)}e.exports=function(e,n,r,i){n=n||"&",r=r||"=";var s={};if("string"!==typeof e||0===e.length)return s;var o=/\+/g;e=e.split(n);var a=1e3;i&&"number"===typeof i.maxKeys&&(a=i.maxKeys);var l=e.length;a>0&&l>a&&(l=a);for(var c=0;c=0?(u=f.substr(0,g),d=f.substr(g+1)):(u=f,d=""),h=decodeURIComponent(u),p=decodeURIComponent(d),t(s,h)?Array.isArray(s[h])?s[h].push(p):s[h]=[s[h],p]:s[h]=p}return s}},2361:function(e){"use strict";var t=function(e){switch(typeof e){case"string":return e;case"boolean":return e?"true":"false";case"number":return isFinite(e)?e:"";default:return""}};e.exports=function(e,n,r,i){return n=n||"&",r=r||"=",null===e&&(e=void 0),"object"===typeof e?Object.keys(e).map((function(i){var s=encodeURIComponent(t(i))+r;return Array.isArray(e[i])?e[i].map((function(e){return s+encodeURIComponent(t(e))})).join(n):s+encodeURIComponent(t(e[i]))})).join(n):i?encodeURIComponent(t(i))+r+encodeURIComponent(t(e)):""}},7673:function(e,t,n){"use strict";t.decode=t.parse=n(2587),t.encode=t.stringify=n(2361)},1742:function(e){e.exports=function(){var e=document.getSelection();if(!e.rangeCount)return function(){};for(var t=document.activeElement,n=[],r=0;r= 0x80 (not a basic code point)","invalid-input":"Invalid input"},v=l-c,E=Math.floor,x=String.fromCharCode;function S(e){throw RangeError(y[e])}function w(e,t){var n=e.length,r=[];while(n--)r[n]=t(e[n]);return r}function T(e,t){var n=e.split("@"),r="";n.length>1&&(r=n[0]+"@",e=n[1]),e=e.replace(_,".");var i=e.split("."),s=w(i,t).join(".");return r+s}function A(e){var t,n,r=[],i=0,s=e.length;while(i=55296&&t<=56319&&i65535&&(e-=65536,t+=x(e>>>10&1023|55296),e=56320|1023&e),t+=x(e),t})).join("")}function I(e){return e-48<10?e-22:e-65<26?e-65:e-97<26?e-97:l}function R(e,t){return e+22+75*(e<26)-((0!=t)<<5)}function k(e,t,n){var r=0;for(e=n?E(e/h):e>>1,e+=E(e/t);e>v*u>>1;r+=l)e=E(e/v);return E(r+(v+1)*e/(e+d))}function P(e){var t,n,r,i,s,o,d,h,m,b,_=[],y=e.length,v=0,x=f,w=p;for(n=e.lastIndexOf(g),n<0&&(n=0),r=0;r=128&&S("not-basic"),_.push(e.charCodeAt(r));for(i=n>0?n+1:0;i=y&&S("invalid-input"),h=I(e.charCodeAt(i++)),(h>=l||h>E((a-v)/o))&&S("overflow"),v+=h*o,m=d<=w?c:d>=w+u?u:d-w,hE(a/b)&&S("overflow"),o*=b}t=_.length+1,w=k(v-s,t,0==s),E(v/t)>a-x&&S("overflow"),x+=E(v/t),v%=t,_.splice(v++,0,x)}return C(_)}function O(e){var t,n,r,i,s,o,d,h,m,b,_,y,v,w,T,C=[];for(e=A(e),y=e.length,t=f,n=0,s=p,o=0;o=t&&_E((a-n)/v)&&S("overflow"),n+=(d-t)*v,t=d,o=0;oa&&S("overflow"),_==t){for(h=n,m=l;;m+=l){if(b=m<=s?c:m>=s+u?u:m-s,h",'"',"`"," ","\r","\n","\t"],u=["{","}","|","\\","^","`"].concat(c),d=["'"].concat(u),h=["%","/","?",";","#"].concat(d),p=["/","?","#"],f=255,g=/^[+a-z0-9A-Z_-]{0,63}$/,m=/^([+a-z0-9A-Z_-]{0,63})(.*)$/,b={javascript:!0,"javascript:":!0},_={javascript:!0,"javascript:":!0},y={http:!0,https:!0,ftp:!0,gopher:!0,file:!0,"http:":!0,"https:":!0,"ftp:":!0,"gopher:":!0,"file:":!0},v=n(7673);function E(e,t,n){if(e&&i.isObject(e)&&e instanceof s)return e;var r=new s;return r.parse(e,t,n),r}function x(e){return i.isString(e)&&(e=E(e)),e instanceof s?e.format():s.prototype.format.call(e)}function S(e,t){return E(e,!1,!0).resolve(t)}function w(e,t){return e?E(e,!1,!0).resolveObject(t):t}s.prototype.parse=function(e,t,n){if(!i.isString(e))throw new TypeError("Parameter 'url' must be a string, not "+typeof e);var s=e.indexOf("?"),a=-1!==s&&s127?D+="x":D+=M[L];if(!D.match(g)){var B=O.slice(0,R),U=O.slice(R+1),G=M.match(m);G&&(B.push(G[1]),U.unshift(G[2])),U.length&&(E="/"+U.join(".")+E),this.hostname=B.join(".");break}}}this.hostname.length>f?this.hostname="":this.hostname=this.hostname.toLowerCase(),P||(this.hostname=r.toASCII(this.hostname));var $=this.port?":"+this.port:"",z=this.hostname||"";this.host=z+$,this.href+=this.host,P&&(this.hostname=this.hostname.substr(1,this.hostname.length-2),"/"!==E[0]&&(E="/"+E))}if(!b[w])for(R=0,N=d.length;R0)&&n.host.split("@");T&&(n.auth=T.shift(),n.host=n.hostname=T.shift())}return n.search=e.search,n.query=e.query,i.isNull(n.pathname)&&i.isNull(n.search)||(n.path=(n.pathname?n.pathname:"")+(n.search?n.search:"")),n.href=n.format(),n}if(!S.length)return n.pathname=null,n.search?n.path="/"+n.search:n.path=null,n.href=n.format(),n;for(var A=S.slice(-1)[0],C=(n.host||e.host||S.length>1)&&("."===A||".."===A)||""===A,I=0,R=S.length;R>=0;R--)A=S[R],"."===A?S.splice(R,1):".."===A?(S.splice(R,1),I++):I&&(S.splice(R,1),I--);if(!E&&!x)for(;I--;I)S.unshift("..");!E||""===S[0]||S[0]&&"/"===S[0].charAt(0)||S.unshift(""),C&&"/"!==S.join("/").substr(-1)&&S.push("");var k=""===S[0]||S[0]&&"/"===S[0].charAt(0);if(w){n.hostname=n.host=k?"":S.length?S.shift():"";T=!!(n.host&&n.host.indexOf("@")>0)&&n.host.split("@");T&&(n.auth=T.shift(),n.host=n.hostname=T.shift())}return E=E||n.host&&S.length,E&&!k&&S.unshift(""),S.length?n.pathname=S.join("/"):(n.pathname=null,n.path=null),i.isNull(n.pathname)&&i.isNull(n.search)||(n.path=(n.pathname?n.pathname:"")+(n.search?n.search:"")),n.auth=e.auth||n.auth,n.slashes=n.slashes||e.slashes,n.href=n.format(),n},s.prototype.parseHost=function(){var e=this.host,t=a.exec(e);t&&(t=t[0],":"!==t&&(this.port=t.substr(1)),e=e.substr(0,e.length-t.length)),e&&(this.hostname=e)}},2502:function(e){"use strict";e.exports={isString:function(e){return"string"===typeof e},isObject:function(e){return"object"===typeof e&&null!==e},isNull:function(e){return null===e},isNullOrUndefined:function(e){return null==e}}},3744:function(e,t){"use strict";t.Z=(e,t)=>{const n=e.__vccOpts||e;for(const[r,i]of t)n[r]=i;return n}},821:function(e,t,n){"use strict";n.r(t),n.d(t,{BaseTransition:function(){return jr},BaseTransitionPropsValidators:function(){return Hr},Comment:function(){return po},EffectScope:function(){return _e},Fragment:function(){return uo},KeepAlive:function(){return ii},ReactiveEffect:function(){return De},Static:function(){return fo},Suspense:function(){return yr},Teleport:function(){return lo},Text:function(){return ho},Transition:function(){return ul},TransitionGroup:function(){return kl},VueElement:function(){return rl},assertNumber:function(){return Tn},callWithAsyncErrorHandling:function(){return Cn},callWithErrorHandling:function(){return An},camelize:function(){return D},capitalize:function(){return B},cloneVNode:function(){return Do},compatUtils:function(){return Aa},compile:function(){return jp},computed:function(){return ga},createApp:function(){return dc},createBlock:function(){return wo},createCommentVNode:function(){return Bo},createElementBlock:function(){return So},createElementVNode:function(){return Po},createHydrationRenderer:function(){return Ks},createPropsRestProxy:function(){return ns},createRenderer:function(){return Ys},createSSRApp:function(){return hc},createSlots:function(){return Ni},createStaticVNode:function(){return Fo},createTextVNode:function(){return Lo},createVNode:function(){return Oo},customRef:function(){return mn},defineAsyncComponent:function(){return ei},defineComponent:function(){return Qr},defineCustomElement:function(){return el},defineEmits:function(){return Hi},defineExpose:function(){return Vi},defineModel:function(){return qi},defineOptions:function(){return ji},defineProps:function(){return zi},defineSSRCustomElement:function(){return tl},defineSlots:function(){return Wi},devtools:function(){return Kn},effect:function(){return Fe},effectScope:function(){return ye},getCurrentInstance:function(){return Xo},getCurrentScope:function(){return Ee},getTransitionRawChildren:function(){return Zr},guardReactiveProps:function(){return Mo},h:function(){return ma},handleError:function(){return In},hasInjectionContext:function(){return Ts},hydrate:function(){return uc},initCustomFormatter:function(){return ya},initDirectivesForSSR:function(){return gc},inject:function(){return ws},isMemoSame:function(){return Ea},isProxy:function(){return Zt},isReactive:function(){return Xt},isReadonly:function(){return Yt},isRef:function(){return sn},isRuntimeOnly:function(){return la},isShallow:function(){return Kt},isVNode:function(){return To},markRaw:function(){return Jt},mergeDefaults:function(){return es},mergeModels:function(){return ts},mergeProps:function(){return zo},nextTick:function(){return Un},normalizeClass:function(){return te},normalizeProps:function(){return ne},normalizeStyle:function(){return K},onActivated:function(){return oi},onBeforeMount:function(){return fi},onBeforeUnmount:function(){return _i},onBeforeUpdate:function(){return mi},onDeactivated:function(){return ai},onErrorCaptured:function(){return Si},onMounted:function(){return gi},onRenderTracked:function(){return xi},onRenderTriggered:function(){return Ei},onScopeDispose:function(){return xe},onServerPrefetch:function(){return vi},onUnmounted:function(){return yi},onUpdated:function(){return bi},openBlock:function(){return bo},popScopeId:function(){return ar},provide:function(){return Ss},proxyRefs:function(){return fn},pushScopeId:function(){return or},queuePostFlushCb:function(){return Vn},reactive:function(){return Ht},readonly:function(){return jt},ref:function(){return on},registerRuntimeCompiler:function(){return aa},render:function(){return cc},renderList:function(){return Oi},renderSlot:function(){return Mi},resolveComponent:function(){return Ai},resolveDirective:function(){return Ri},resolveDynamicComponent:function(){return Ii},resolveFilter:function(){return Ta},resolveTransitionHooks:function(){return qr},setBlockTracking:function(){return Eo},setDevtoolsHook:function(){return Jn},setTransitionHooks:function(){return Kr},shallowReactive:function(){return Vt},shallowReadonly:function(){return Wt},shallowRef:function(){return an},ssrContextKey:function(){return ba},ssrUtils:function(){return wa},stop:function(){return Be},toDisplayString:function(){return ge},toHandlerKey:function(){return U},toHandlers:function(){return Li},toRaw:function(){return Qt},toRef:function(){return vn},toRefs:function(){return bn},toValue:function(){return hn},transformVNodeArgs:function(){return Co},triggerRef:function(){return un},unref:function(){return dn},useAttrs:function(){return Ki},useCssModule:function(){return il},useCssVars:function(){return sl},useModel:function(){return Zi},useSSRContext:function(){return _a},useSlots:function(){return Yi},useTransitionState:function(){return $r},vModelCheckbox:function(){return Ul},vModelDynamic:function(){return Wl},vModelRadio:function(){return $l},vModelSelect:function(){return zl},vModelText:function(){return Bl},vShow:function(){return tc},version:function(){return xa},warn:function(){return wn},watch:function(){return Mr},watchEffect:function(){return kr},watchPostEffect:function(){return Pr},watchSyncEffect:function(){return Or},withAsyncContext:function(){return rs},withCtx:function(){return cr},withDefaults:function(){return Xi},withDirectives:function(){return Ur},withKeys:function(){return ec},withMemo:function(){return va},withModifiers:function(){return Ql},withScopeId:function(){return lr}});var r={};function i(e,t){const n=Object.create(null),r=e.split(",");for(let i=0;i!!n[e.toLowerCase()]:e=>!!n[e]}n.r(r),n.d(r,{BaseTransition:function(){return jr},BaseTransitionPropsValidators:function(){return Hr},Comment:function(){return po},EffectScope:function(){return _e},Fragment:function(){return uo},KeepAlive:function(){return ii},ReactiveEffect:function(){return De},Static:function(){return fo},Suspense:function(){return yr},Teleport:function(){return lo},Text:function(){return ho},Transition:function(){return ul},TransitionGroup:function(){return kl},VueElement:function(){return rl},assertNumber:function(){return Tn},callWithAsyncErrorHandling:function(){return Cn},callWithErrorHandling:function(){return An},camelize:function(){return D},capitalize:function(){return B},cloneVNode:function(){return Do},compatUtils:function(){return Aa},computed:function(){return ga},createApp:function(){return dc},createBlock:function(){return wo},createCommentVNode:function(){return Bo},createElementBlock:function(){return So},createElementVNode:function(){return Po},createHydrationRenderer:function(){return Ks},createPropsRestProxy:function(){return ns},createRenderer:function(){return Ys},createSSRApp:function(){return hc},createSlots:function(){return Ni},createStaticVNode:function(){return Fo},createTextVNode:function(){return Lo},createVNode:function(){return Oo},customRef:function(){return mn},defineAsyncComponent:function(){return ei},defineComponent:function(){return Qr},defineCustomElement:function(){return el},defineEmits:function(){return Hi},defineExpose:function(){return Vi},defineModel:function(){return qi},defineOptions:function(){return ji},defineProps:function(){return zi},defineSSRCustomElement:function(){return tl},defineSlots:function(){return Wi},devtools:function(){return Kn},effect:function(){return Fe},effectScope:function(){return ye},getCurrentInstance:function(){return Xo},getCurrentScope:function(){return Ee},getTransitionRawChildren:function(){return Zr},guardReactiveProps:function(){return Mo},h:function(){return ma},handleError:function(){return In},hasInjectionContext:function(){return Ts},hydrate:function(){return uc},initCustomFormatter:function(){return ya},initDirectivesForSSR:function(){return gc},inject:function(){return ws},isMemoSame:function(){return Ea},isProxy:function(){return Zt},isReactive:function(){return Xt},isReadonly:function(){return Yt},isRef:function(){return sn},isRuntimeOnly:function(){return la},isShallow:function(){return Kt},isVNode:function(){return To},markRaw:function(){return Jt},mergeDefaults:function(){return es},mergeModels:function(){return ts},mergeProps:function(){return zo},nextTick:function(){return Un},normalizeClass:function(){return te},normalizeProps:function(){return ne},normalizeStyle:function(){return K},onActivated:function(){return oi},onBeforeMount:function(){return fi},onBeforeUnmount:function(){return _i},onBeforeUpdate:function(){return mi},onDeactivated:function(){return ai},onErrorCaptured:function(){return Si},onMounted:function(){return gi},onRenderTracked:function(){return xi},onRenderTriggered:function(){return Ei},onScopeDispose:function(){return xe},onServerPrefetch:function(){return vi},onUnmounted:function(){return yi},onUpdated:function(){return bi},openBlock:function(){return bo},popScopeId:function(){return ar},provide:function(){return Ss},proxyRefs:function(){return fn},pushScopeId:function(){return or},queuePostFlushCb:function(){return Vn},reactive:function(){return Ht},readonly:function(){return jt},ref:function(){return on},registerRuntimeCompiler:function(){return aa},render:function(){return cc},renderList:function(){return Oi},renderSlot:function(){return Mi},resolveComponent:function(){return Ai},resolveDirective:function(){return Ri},resolveDynamicComponent:function(){return Ii},resolveFilter:function(){return Ta},resolveTransitionHooks:function(){return qr},setBlockTracking:function(){return Eo},setDevtoolsHook:function(){return Jn},setTransitionHooks:function(){return Kr},shallowReactive:function(){return Vt},shallowReadonly:function(){return Wt},shallowRef:function(){return an},ssrContextKey:function(){return ba},ssrUtils:function(){return wa},stop:function(){return Be},toDisplayString:function(){return ge},toHandlerKey:function(){return U},toHandlers:function(){return Li},toRaw:function(){return Qt},toRef:function(){return vn},toRefs:function(){return bn},toValue:function(){return hn},transformVNodeArgs:function(){return Co},triggerRef:function(){return un},unref:function(){return dn},useAttrs:function(){return Ki},useCssModule:function(){return il},useCssVars:function(){return sl},useModel:function(){return Zi},useSSRContext:function(){return _a},useSlots:function(){return Yi},useTransitionState:function(){return $r},vModelCheckbox:function(){return Ul},vModelDynamic:function(){return Wl},vModelRadio:function(){return $l},vModelSelect:function(){return zl},vModelText:function(){return Bl},vShow:function(){return tc},version:function(){return xa},warn:function(){return wn},watch:function(){return Mr},watchEffect:function(){return kr},watchPostEffect:function(){return Pr},watchSyncEffect:function(){return Or},withAsyncContext:function(){return rs},withCtx:function(){return cr},withDefaults:function(){return Xi},withDirectives:function(){return Ur},withKeys:function(){return ec},withMemo:function(){return va},withModifiers:function(){return Ql},withScopeId:function(){return lr}});const s={},o=[],a=()=>{},l=()=>!1,c=/^on[^a-z]/,u=e=>c.test(e),d=e=>e.startsWith("onUpdate:"),h=Object.assign,p=(e,t)=>{const n=e.indexOf(t);n>-1&&e.splice(n,1)},f=Object.prototype.hasOwnProperty,g=(e,t)=>f.call(e,t),m=Array.isArray,b=e=>"[object Map]"===C(e),_=e=>"[object Set]"===C(e),y=e=>"[object Date]"===C(e),v=e=>"[object RegExp]"===C(e),E=e=>"function"===typeof e,x=e=>"string"===typeof e,S=e=>"symbol"===typeof e,w=e=>null!==e&&"object"===typeof e,T=e=>w(e)&&E(e.then)&&E(e.catch),A=Object.prototype.toString,C=e=>A.call(e),I=e=>C(e).slice(8,-1),R=e=>"[object Object]"===C(e),k=e=>x(e)&&"NaN"!==e&&"-"!==e[0]&&""+parseInt(e,10)===e,P=i(",key,ref,ref_for,ref_key,onVnodeBeforeMount,onVnodeMounted,onVnodeBeforeUpdate,onVnodeUpdated,onVnodeBeforeUnmount,onVnodeUnmounted"),O=i("bind,cloak,else-if,else,for,html,if,model,on,once,pre,show,slot,text,memo"),N=e=>{const t=Object.create(null);return n=>{const r=t[n];return r||(t[n]=e(n))}},M=/-(\w)/g,D=N((e=>e.replace(M,((e,t)=>t?t.toUpperCase():"")))),L=/\B([A-Z])/g,F=N((e=>e.replace(L,"-$1").toLowerCase())),B=N((e=>e.charAt(0).toUpperCase()+e.slice(1))),U=N((e=>e?`on${B(e)}`:"")),G=(e,t)=>!Object.is(e,t),$=(e,t)=>{for(let n=0;n{Object.defineProperty(e,t,{configurable:!0,enumerable:!1,value:n})},H=e=>{const t=parseFloat(e);return isNaN(t)?e:t},V=e=>{const t=x(e)?Number(e):NaN;return isNaN(t)?e:t};let j;const W=()=>j||(j="undefined"!==typeof globalThis?globalThis:"undefined"!==typeof self?self:"undefined"!==typeof window?window:"undefined"!==typeof n.g?n.g:{});const q={[1]:"TEXT",[2]:"CLASS",[4]:"STYLE",[8]:"PROPS",[16]:"FULL_PROPS",[32]:"HYDRATE_EVENTS",[64]:"STABLE_FRAGMENT",[128]:"KEYED_FRAGMENT",[256]:"UNKEYED_FRAGMENT",[512]:"NEED_PATCH",[1024]:"DYNAMIC_SLOTS",[2048]:"DEV_ROOT_FRAGMENT",[-1]:"HOISTED",[-2]:"BAIL"},X="Infinity,undefined,NaN,isFinite,isNaN,parseFloat,parseInt,decodeURI,decodeURIComponent,encodeURI,encodeURIComponent,Math,Number,Date,Array,Object,Boolean,String,RegExp,Map,Set,JSON,Intl,BigInt,console",Y=i(X);function K(e){if(m(e)){const t={};for(let n=0;n{if(e){const n=e.split(Q);n.length>1&&(t[n[0].trim()]=n[1].trim())}})),t}function te(e){let t="";if(x(e))t=e;else if(m(e))for(let n=0;npe(e,t)))}const ge=e=>x(e)?e:null==e?"":m(e)||w(e)&&(e.toString===A||!E(e.toString))?JSON.stringify(e,me,2):String(e),me=(e,t)=>t&&t.__v_isRef?me(e,t.value):b(t)?{[`Map(${t.size})`]:[...t.entries()].reduce(((e,[t,n])=>(e[`${t} =>`]=n,e)),{})}:_(t)?{[`Set(${t.size})`]:[...t.values()]}:!w(t)||m(t)||R(t)?t:String(t);let be;class _e{constructor(e=!1){this.detached=e,this._active=!0,this.effects=[],this.cleanups=[],this.parent=be,!e&&be&&(this.index=(be.scopes||(be.scopes=[])).push(this)-1)}get active(){return this._active}run(e){if(this._active){const t=be;try{return be=this,e()}finally{be=t}}else 0}on(){be=this}off(){be=this.parent}stop(e){if(this._active){let t,n;for(t=0,n=this.effects.length;t{const t=new Set(e);return t.w=0,t.n=0,t},we=e=>(e.w&ke)>0,Te=e=>(e.n&ke)>0,Ae=({deps:e})=>{if(e.length)for(let t=0;t{const{deps:t}=e;if(t.length){let n=0;for(let r=0;r{("length"===n||n>=e)&&a.push(t)}))}else switch(void 0!==n&&a.push(o.get(n)),t){case"add":m(e)?k(n)&&a.push(o.get("length")):(a.push(o.get(Ne)),b(e)&&a.push(o.get(Me)));break;case"delete":m(e)||(a.push(o.get(Ne)),b(e)&&a.push(o.get(Me)));break;case"set":b(e)&&a.push(o.get(Ne));break}if(1===a.length)a[0]&&We(a[0]);else{const e=[];for(const t of a)t&&e.push(...t);We(Se(e))}}function We(e,t){const n=m(e)?e:[...e];for(const r of n)r.computed&&qe(r,t);for(const r of n)r.computed||qe(r,t)}function qe(e,t){(e!==Oe||e.allowRecurse)&&(e.scheduler?e.scheduler():e.run())}function Xe(e,t){var n;return null==(n=Ie.get(e))?void 0:n.get(t)}const Ye=i("__proto__,__v_isRef,__isVue"),Ke=new Set(Object.getOwnPropertyNames(Symbol).filter((e=>"arguments"!==e&&"caller"!==e)).map((e=>Symbol[e])).filter(S)),Ze=it(),Qe=it(!1,!0),Je=it(!0),et=it(!0,!0),tt=nt();function nt(){const e={};return["includes","indexOf","lastIndexOf"].forEach((t=>{e[t]=function(...e){const n=Qt(this);for(let t=0,i=this.length;t{e[t]=function(...e){$e();const n=Qt(this)[t].apply(this,e);return ze(),n}})),e}function rt(e){const t=Qt(this);return He(t,"has",e),t.hasOwnProperty(e)}function it(e=!1,t=!1){return function(n,r,i){if("__v_isReactive"===r)return!e;if("__v_isReadonly"===r)return e;if("__v_isShallow"===r)return t;if("__v_raw"===r&&i===(e?t?Gt:Ut:t?Bt:Ft).get(n))return n;const s=m(n);if(!e){if(s&&g(tt,r))return Reflect.get(tt,r,i);if("hasOwnProperty"===r)return rt}const o=Reflect.get(n,r,i);return(S(r)?Ke.has(r):Ye(r))?o:(e||He(n,"get",r),t?o:sn(o)?s&&k(r)?o:o.value:w(o)?e?jt(o):Ht(o):o)}}const st=at(),ot=at(!0);function at(e=!1){return function(t,n,r,i){let s=t[n];if(Yt(s)&&sn(s)&&!sn(r))return!1;if(!e&&(Kt(r)||Yt(r)||(s=Qt(s),r=Qt(r)),!m(t)&&sn(s)&&!sn(r)))return s.value=r,!0;const o=m(t)&&k(n)?Number(n)e,mt=e=>Reflect.getPrototypeOf(e);function bt(e,t,n=!1,r=!1){e=e["__v_raw"];const i=Qt(e),s=Qt(t);n||(t!==s&&He(i,"get",t),He(i,"get",s));const{has:o}=mt(i),a=r?gt:n?tn:en;return o.call(i,t)?a(e.get(t)):o.call(i,s)?a(e.get(s)):void(e!==i&&e.get(t))}function _t(e,t=!1){const n=this["__v_raw"],r=Qt(n),i=Qt(e);return t||(e!==i&&He(r,"has",e),He(r,"has",i)),e===i?n.has(e):n.has(e)||n.has(i)}function yt(e,t=!1){return e=e["__v_raw"],!t&&He(Qt(e),"iterate",Ne),Reflect.get(e,"size",e)}function vt(e){e=Qt(e);const t=Qt(this),n=mt(t),r=n.has.call(t,e);return r||(t.add(e),je(t,"add",e,e)),this}function Et(e,t){t=Qt(t);const n=Qt(this),{has:r,get:i}=mt(n);let s=r.call(n,e);s||(e=Qt(e),s=r.call(n,e));const o=i.call(n,e);return n.set(e,t),s?G(t,o)&&je(n,"set",e,t,o):je(n,"add",e,t),this}function xt(e){const t=Qt(this),{has:n,get:r}=mt(t);let i=n.call(t,e);i||(e=Qt(e),i=n.call(t,e));const s=r?r.call(t,e):void 0,o=t.delete(e);return i&&je(t,"delete",e,void 0,s),o}function St(){const e=Qt(this),t=0!==e.size,n=void 0,r=e.clear();return t&&je(e,"clear",void 0,void 0,n),r}function wt(e,t){return function(n,r){const i=this,s=i["__v_raw"],o=Qt(s),a=t?gt:e?tn:en;return!e&&He(o,"iterate",Ne),s.forEach(((e,t)=>n.call(r,a(e),a(t),i)))}}function Tt(e,t,n){return function(...r){const i=this["__v_raw"],s=Qt(i),o=b(s),a="entries"===e||e===Symbol.iterator&&o,l="keys"===e&&o,c=i[e](...r),u=n?gt:t?tn:en;return!t&&He(s,"iterate",l?Me:Ne),{next(){const{value:e,done:t}=c.next();return t?{value:e,done:t}:{value:a?[u(e[0]),u(e[1])]:u(e),done:t}},[Symbol.iterator](){return this}}}}function At(e){return function(...t){return"delete"!==e&&this}}function Ct(){const e={get(e){return bt(this,e)},get size(){return yt(this)},has:_t,add:vt,set:Et,delete:xt,clear:St,forEach:wt(!1,!1)},t={get(e){return bt(this,e,!1,!0)},get size(){return yt(this)},has:_t,add:vt,set:Et,delete:xt,clear:St,forEach:wt(!1,!0)},n={get(e){return bt(this,e,!0)},get size(){return yt(this,!0)},has(e){return _t.call(this,e,!0)},add:At("add"),set:At("set"),delete:At("delete"),clear:At("clear"),forEach:wt(!0,!1)},r={get(e){return bt(this,e,!0,!0)},get size(){return yt(this,!0)},has(e){return _t.call(this,e,!0)},add:At("add"),set:At("set"),delete:At("delete"),clear:At("clear"),forEach:wt(!0,!0)},i=["keys","values","entries",Symbol.iterator];return i.forEach((i=>{e[i]=Tt(i,!1,!1),n[i]=Tt(i,!0,!1),t[i]=Tt(i,!1,!0),r[i]=Tt(i,!0,!0)})),[e,n,t,r]}const[It,Rt,kt,Pt]=Ct();function Ot(e,t){const n=t?e?Pt:kt:e?Rt:It;return(t,r,i)=>"__v_isReactive"===r?!e:"__v_isReadonly"===r?e:"__v_raw"===r?t:Reflect.get(g(n,r)&&r in t?n:t,r,i)}const Nt={get:Ot(!1,!1)},Mt={get:Ot(!1,!0)},Dt={get:Ot(!0,!1)},Lt={get:Ot(!0,!0)};const Ft=new WeakMap,Bt=new WeakMap,Ut=new WeakMap,Gt=new WeakMap;function $t(e){switch(e){case"Object":case"Array":return 1;case"Map":case"Set":case"WeakMap":case"WeakSet":return 2;default:return 0}}function zt(e){return e["__v_skip"]||!Object.isExtensible(e)?0:$t(I(e))}function Ht(e){return Yt(e)?e:qt(e,!1,dt,Nt,Ft)}function Vt(e){return qt(e,!1,pt,Mt,Bt)}function jt(e){return qt(e,!0,ht,Dt,Ut)}function Wt(e){return qt(e,!0,ft,Lt,Gt)}function qt(e,t,n,r,i){if(!w(e))return e;if(e["__v_raw"]&&(!t||!e["__v_isReactive"]))return e;const s=i.get(e);if(s)return s;const o=zt(e);if(0===o)return e;const a=new Proxy(e,2===o?r:n);return i.set(e,a),a}function Xt(e){return Yt(e)?Xt(e["__v_raw"]):!(!e||!e["__v_isReactive"])}function Yt(e){return!(!e||!e["__v_isReadonly"])}function Kt(e){return!(!e||!e["__v_isShallow"])}function Zt(e){return Xt(e)||Yt(e)}function Qt(e){const t=e&&e["__v_raw"];return t?Qt(t):e}function Jt(e){return z(e,"__v_skip",!0),e}const en=e=>w(e)?Ht(e):e,tn=e=>w(e)?jt(e):e;function nn(e){Ue&&Oe&&(e=Qt(e),Ve(e.dep||(e.dep=Se())))}function rn(e,t){e=Qt(e);const n=e.dep;n&&We(n)}function sn(e){return!(!e||!0!==e.__v_isRef)}function on(e){return ln(e,!1)}function an(e){return ln(e,!0)}function ln(e,t){return sn(e)?e:new cn(e,t)}class cn{constructor(e,t){this.__v_isShallow=t,this.dep=void 0,this.__v_isRef=!0,this._rawValue=t?e:Qt(e),this._value=t?e:en(e)}get value(){return nn(this),this._value}set value(e){const t=this.__v_isShallow||Kt(e)||Yt(e);e=t?e:Qt(e),G(e,this._rawValue)&&(this._rawValue=e,this._value=t?e:en(e),rn(this,e))}}function un(e){rn(e,void 0)}function dn(e){return sn(e)?e.value:e}function hn(e){return E(e)?e():dn(e)}const pn={get:(e,t,n)=>dn(Reflect.get(e,t,n)),set:(e,t,n,r)=>{const i=e[t];return sn(i)&&!sn(n)?(i.value=n,!0):Reflect.set(e,t,n,r)}};function fn(e){return Xt(e)?e:new Proxy(e,pn)}class gn{constructor(e){this.dep=void 0,this.__v_isRef=!0;const{get:t,set:n}=e((()=>nn(this)),(()=>rn(this)));this._get=t,this._set=n}get value(){return this._get()}set value(e){this._set(e)}}function mn(e){return new gn(e)}function bn(e){const t=m(e)?new Array(e.length):{};for(const n in e)t[n]=En(e,n);return t}class _n{constructor(e,t,n){this._object=e,this._key=t,this._defaultValue=n,this.__v_isRef=!0}get value(){const e=this._object[this._key];return void 0===e?this._defaultValue:e}set value(e){this._object[this._key]=e}get dep(){return Xe(Qt(this._object),this._key)}}class yn{constructor(e){this._getter=e,this.__v_isRef=!0,this.__v_isReadonly=!0}get value(){return this._getter()}}function vn(e,t,n){return sn(e)?e:E(e)?new yn(e):w(e)&&arguments.length>1?En(e,t,n):on(e)}function En(e,t,n){const r=e[t];return sn(r)?r:new _n(e,t,n)}class xn{constructor(e,t,n,r){this._setter=t,this.dep=void 0,this.__v_isRef=!0,this["__v_isReadonly"]=!1,this._dirty=!0,this.effect=new De(e,(()=>{this._dirty||(this._dirty=!0,rn(this))})),this.effect.computed=this,this.effect.active=this._cacheable=!r,this["__v_isReadonly"]=n}get value(){const e=Qt(this);return nn(e),!e._dirty&&e._cacheable||(e._dirty=!1,e._value=e.effect.run()),e._value}set value(e){this._setter(e)}}function Sn(e,t,n=!1){let r,i;const s=E(e);s?(r=e,i=a):(r=e.get,i=e.set);const o=new xn(r,i,s||!i,n);return o}function wn(e,...t){}function Tn(e,t){}function An(e,t,n,r){let i;try{i=r?e(...r):e()}catch(s){In(s,t,n)}return i}function Cn(e,t,n,r){if(E(e)){const i=An(e,t,n,r);return i&&T(i)&&i.catch((e=>{In(e,t,n)})),i}const i=[];for(let s=0;s>>1,i=qn(On[r]);iNn&&On.splice(t,1)}function Vn(e){m(e)?Mn.push(...e):Dn&&Dn.includes(e,e.allowRecurse?Ln+1:Ln)||Mn.push(e),zn()}function jn(e,t=(kn?Nn+1:0)){for(0;tqn(e)-qn(t))),Ln=0;Lnnull==e.id?1/0:e.id,Xn=(e,t)=>{const n=qn(e)-qn(t);if(0===n){if(e.pre&&!t.pre)return-1;if(t.pre&&!e.pre)return 1}return n};function Yn(e){Pn=!1,kn=!0,On.sort(Xn);try{for(Nn=0;NnKn.emit(e,...t))),Zn=[];else if("undefined"!==typeof window&&window.HTMLElement&&!(null==(r=null==(n=window.navigator)?void 0:n.userAgent)?void 0:r.includes("jsdom"))){const e=t.__VUE_DEVTOOLS_HOOK_REPLAY__=t.__VUE_DEVTOOLS_HOOK_REPLAY__||[];e.push((e=>{Jn(e,t)})),setTimeout((()=>{Kn||(t.__VUE_DEVTOOLS_HOOK_REPLAY__=null,Qn=!0,Zn=[])}),3e3)}else Qn=!0,Zn=[]}function er(e,t,...n){if(e.isUnmounted)return;const r=e.vnode.props||s;let i=n;const o=t.startsWith("update:"),a=o&&t.slice(7);if(a&&a in r){const e=`${"modelValue"===a?"model":a}Modifiers`,{number:t,trim:o}=r[e]||s;o&&(i=n.map((e=>x(e)?e.trim():e))),t&&(i=n.map(H))}let l;let c=r[l=U(t)]||r[l=U(D(t))];!c&&o&&(c=r[l=U(F(t))]),c&&Cn(c,e,6,i);const u=r[l+"Once"];if(u){if(e.emitted){if(e.emitted[l])return}else e.emitted={};e.emitted[l]=!0,Cn(u,e,6,i)}}function tr(e,t,n=!1){const r=t.emitsCache,i=r.get(e);if(void 0!==i)return i;const s=e.emits;let o={},a=!1;if(!E(e)){const r=e=>{const n=tr(e,t,!0);n&&(a=!0,h(o,n))};!n&&t.mixins.length&&t.mixins.forEach(r),e.extends&&r(e.extends),e.mixins&&e.mixins.forEach(r)}return s||a?(m(s)?s.forEach((e=>o[e]=null)):h(o,s),w(e)&&r.set(e,o),o):(w(e)&&r.set(e,null),null)}function nr(e,t){return!(!e||!u(t))&&(t=t.slice(2).replace(/Once$/,""),g(e,t[0].toLowerCase()+t.slice(1))||g(e,F(t))||g(e,t))}let rr=null,ir=null;function sr(e){const t=rr;return rr=e,ir=e&&e.type.__scopeId||null,t}function or(e){ir=e}function ar(){ir=null}const lr=e=>cr;function cr(e,t=rr,n){if(!t)return e;if(e._n)return e;const r=(...n)=>{r._d&&Eo(-1);const i=sr(t);let s;try{s=e(...n)}finally{sr(i),r._d&&Eo(1)}return s};return r._n=!0,r._c=!0,r._d=!0,r}function ur(e){const{type:t,vnode:n,proxy:r,withProxy:i,props:s,propsOptions:[o],slots:a,attrs:l,emit:c,render:u,renderCache:h,data:p,setupState:f,ctx:g,inheritAttrs:m}=e;let b,_;const y=sr(e);try{if(4&n.shapeFlag){const e=i||r;b=Uo(u.call(e,e,h,s,f,p,g)),_=l}else{const e=t;0,b=Uo(e.length>1?e(s,{attrs:l,slots:a,emit:c}):e(s,null)),_=t.props?l:hr(l)}}catch(E){go.length=0,In(E,e,1),b=Oo(po)}let v=b;if(_&&!1!==m){const e=Object.keys(_),{shapeFlag:t}=v;e.length&&7&t&&(o&&e.some(d)&&(_=pr(_,o)),v=Do(v,_))}return n.dirs&&(v=Do(v),v.dirs=v.dirs?v.dirs.concat(n.dirs):n.dirs),n.transition&&(v.transition=n.transition),b=v,sr(y),b}function dr(e){let t;for(let n=0;n{let t;for(const n in e)("class"===n||"style"===n||u(n))&&((t||(t={}))[n]=e[n]);return t},pr=(e,t)=>{const n={};for(const r in e)d(r)&&r.slice(9)in t||(n[r]=e[r]);return n};function fr(e,t,n){const{props:r,children:i,component:s}=e,{props:o,children:a,patchFlag:l}=t,c=s.emitsOptions;if(t.dirs||t.transition)return!0;if(!(n&&l>=0))return!(!i&&!a||a&&a.$stable)||r!==o&&(r?!o||gr(r,o,c):!!o);if(1024&l)return!0;if(16&l)return r?gr(r,o,c):!!o;if(8&l){const e=t.dynamicProps;for(let t=0;te.__isSuspense,_r={name:"Suspense",__isSuspense:!0,process(e,t,n,r,i,s,o,a,l,c){null==e?Er(t,n,r,i,s,o,a,l,c):xr(e,t,n,r,i,o,a,l,c)},hydrate:wr,create:Sr,normalize:Tr},yr=_r;function vr(e,t){const n=e.props&&e.props[t];E(n)&&n()}function Er(e,t,n,r,i,s,o,a,l){const{p:c,o:{createElement:u}}=l,d=u("div"),h=e.suspense=Sr(e,i,r,t,d,n,s,o,a,l);c(null,h.pendingBranch=e.ssContent,d,null,r,h,s,o),h.deps>0?(vr(e,"onPending"),vr(e,"onFallback"),c(null,e.ssFallback,t,n,r,null,s,o),Ir(h,e.ssFallback)):h.resolve(!1,!0)}function xr(e,t,n,r,i,s,o,a,{p:l,um:c,o:{createElement:u}}){const d=t.suspense=e.suspense;d.vnode=t,t.el=e.el;const h=t.ssContent,p=t.ssFallback,{activeBranch:f,pendingBranch:g,isInFallback:m,isHydrating:b}=d;if(g)d.pendingBranch=h,Ao(h,g)?(l(g,h,d.hiddenContainer,null,i,d,s,o,a),d.deps<=0?d.resolve():m&&(l(f,p,n,r,i,null,s,o,a),Ir(d,p))):(d.pendingId++,b?(d.isHydrating=!1,d.activeBranch=g):c(g,i,d),d.deps=0,d.effects.length=0,d.hiddenContainer=u("div"),m?(l(null,h,d.hiddenContainer,null,i,d,s,o,a),d.deps<=0?d.resolve():(l(f,p,n,r,i,null,s,o,a),Ir(d,p))):f&&Ao(h,f)?(l(f,h,n,r,i,d,s,o,a),d.resolve(!0)):(l(null,h,d.hiddenContainer,null,i,d,s,o,a),d.deps<=0&&d.resolve()));else if(f&&Ao(h,f))l(f,h,n,r,i,d,s,o,a),Ir(d,h);else if(vr(t,"onPending"),d.pendingBranch=h,d.pendingId++,l(null,h,d.hiddenContainer,null,i,d,s,o,a),d.deps<=0)d.resolve();else{const{timeout:e,pendingId:t}=d;e>0?setTimeout((()=>{d.pendingId===t&&d.fallback(p)}),e):0===e&&d.fallback(p)}}function Sr(e,t,n,r,i,s,o,a,l,c,u=!1){const{p:d,m:h,um:p,n:f,o:{parentNode:g,remove:m}}=c;let b;const _=Rr(e);_&&(null==t?void 0:t.pendingBranch)&&(b=t.pendingId,t.deps++);const y=e.props?V(e.props.timeout):void 0;const v={vnode:e,parent:t,parentComponent:n,isSVG:o,container:r,hiddenContainer:i,anchor:s,deps:0,pendingId:0,timeout:"number"===typeof y?y:-1,activeBranch:null,pendingBranch:null,isInFallback:!0,isHydrating:u,isUnmounted:!1,effects:[],resolve(e=!1,n=!1){const{vnode:r,activeBranch:i,pendingBranch:s,pendingId:o,effects:a,parentComponent:l,container:c}=v;if(v.isHydrating)v.isHydrating=!1;else if(!e){const e=i&&s.transition&&"out-in"===s.transition.mode;e&&(i.transition.afterLeave=()=>{o===v.pendingId&&h(s,c,t,0)});let{anchor:t}=v;i&&(t=f(i),p(i,l,v,!0)),e||h(s,c,t,0)}Ir(v,s),v.pendingBranch=null,v.isInFallback=!1;let u=v.parent,d=!1;while(u){if(u.pendingBranch){u.effects.push(...a),d=!0;break}u=u.parent}d||Vn(a),v.effects=[],_&&t&&t.pendingBranch&&b===t.pendingId&&(t.deps--,0!==t.deps||n||t.resolve()),vr(r,"onResolve")},fallback(e){if(!v.pendingBranch)return;const{vnode:t,activeBranch:n,parentComponent:r,container:i,isSVG:s}=v;vr(t,"onFallback");const o=f(n),c=()=>{v.isInFallback&&(d(null,e,i,o,r,null,s,a,l),Ir(v,e))},u=e.transition&&"out-in"===e.transition.mode;u&&(n.transition.afterLeave=c),v.isInFallback=!0,p(n,r,null,!0),u||c()},move(e,t,n){v.activeBranch&&h(v.activeBranch,e,t,n),v.container=e},next(){return v.activeBranch&&f(v.activeBranch)},registerDep(e,t){const n=!!v.pendingBranch;n&&v.deps++;const r=e.vnode.el;e.asyncDep.catch((t=>{In(t,e,0)})).then((i=>{if(e.isUnmounted||v.isUnmounted||v.pendingId!==e.suspenseId)return;e.asyncResolved=!0;const{vnode:s}=e;oa(e,i,!1),r&&(s.el=r);const a=!r&&e.subTree.el;t(e,s,g(r||e.subTree.el),r?null:f(e.subTree),v,o,l),a&&m(a),mr(e,s.el),n&&0===--v.deps&&v.resolve()}))},unmount(e,t){v.isUnmounted=!0,v.activeBranch&&p(v.activeBranch,n,e,t),v.pendingBranch&&p(v.pendingBranch,n,e,t)}};return v}function wr(e,t,n,r,i,s,o,a,l){const c=t.suspense=Sr(t,r,n,e.parentNode,document.createElement("div"),null,i,s,o,a,!0),u=l(e,c.pendingBranch=t.ssContent,n,c,s,o);return 0===c.deps&&c.resolve(!1,!0),u}function Tr(e){const{shapeFlag:t,children:n}=e,r=32&t;e.ssContent=Ar(r?n.default:n),e.ssFallback=r?Ar(n.fallback):Oo(po)}function Ar(e){let t;if(E(e)){const n=vo&&e._c;n&&(e._d=!1,bo()),e=e(),n&&(e._d=!0,t=mo,_o())}if(m(e)){const t=dr(e);0,e=t}return e=Uo(e),t&&!e.dynamicChildren&&(e.dynamicChildren=t.filter((t=>t!==e))),e}function Cr(e,t){t&&t.pendingBranch?m(e)?t.effects.push(...e):t.effects.push(e):Vn(e)}function Ir(e,t){e.activeBranch=t;const{vnode:n,parentComponent:r}=e,i=n.el=t.el;r&&r.subTree===n&&(r.vnode.el=i,mr(r,i))}function Rr(e){var t;return null!=(null==(t=e.props)?void 0:t.suspensible)&&!1!==e.props.suspensible}function kr(e,t){return Dr(e,null,t)}function Pr(e,t){return Dr(e,null,{flush:"post"})}function Or(e,t){return Dr(e,null,{flush:"sync"})}const Nr={};function Mr(e,t,n){return Dr(e,t,n)}function Dr(e,t,{immediate:n,deep:r,flush:i,onTrack:o,onTrigger:l}=s){var c;const u=Ee()===(null==(c=qo)?void 0:c.scope)?qo:null;let d,h,f=!1,g=!1;if(sn(e)?(d=()=>e.value,f=Kt(e)):Xt(e)?(d=()=>e,r=!0):m(e)?(g=!0,f=e.some((e=>Xt(e)||Kt(e))),d=()=>e.map((e=>sn(e)?e.value:Xt(e)?Br(e):E(e)?An(e,u,2):void 0))):d=E(e)?t?()=>An(e,u,2):()=>{if(!u||!u.isUnmounted)return h&&h(),Cn(e,u,3,[_])}:a,t&&r){const e=d;d=()=>Br(e())}let b,_=e=>{h=S.onStop=()=>{An(e,u,4)}};if(ra){if(_=a,t?n&&Cn(t,u,3,[d(),g?[]:void 0,_]):d(),"sync"!==i)return a;{const e=_a();b=e.__watcherHandles||(e.__watcherHandles=[])}}let y=g?new Array(e.length).fill(Nr):Nr;const v=()=>{if(S.active)if(t){const e=S.run();(r||f||(g?e.some(((e,t)=>G(e,y[t]))):G(e,y)))&&(h&&h(),Cn(t,u,3,[e,y===Nr?void 0:g&&y[0]===Nr?[]:y,_]),y=e)}else S.run()};let x;v.allowRecurse=!!t,"sync"===i?x=v:"post"===i?x=()=>Xs(v,u&&u.suspense):(v.pre=!0,u&&(v.id=u.uid),x=()=>$n(v));const S=new De(d,x);t?n?v():y=S.run():"post"===i?Xs(S.run.bind(S),u&&u.suspense):S.run();const w=()=>{S.stop(),u&&u.scope&&p(u.scope.effects,S)};return b&&b.push(w),w}function Lr(e,t,n){const r=this.proxy,i=x(e)?e.includes(".")?Fr(r,e):()=>r[e]:e.bind(r,r);let s;E(t)?s=t:(s=t.handler,n=t);const o=qo;Qo(this);const a=Dr(i,s.bind(r),n);return o?Qo(o):Jo(),a}function Fr(e,t){const n=t.split(".");return()=>{let t=e;for(let e=0;e{Br(e,t)}));else if(R(e))for(const n in e)Br(e[n],t);return e}function Ur(e,t){const n=rr;if(null===n)return e;const r=ha(n)||n.proxy,i=e.dirs||(e.dirs=[]);for(let o=0;o{e.isMounted=!0})),_i((()=>{e.isUnmounting=!0})),e}const zr=[Function,Array],Hr={mode:String,appear:Boolean,persisted:Boolean,onBeforeEnter:zr,onEnter:zr,onAfterEnter:zr,onEnterCancelled:zr,onBeforeLeave:zr,onLeave:zr,onAfterLeave:zr,onLeaveCancelled:zr,onBeforeAppear:zr,onAppear:zr,onAfterAppear:zr,onAppearCancelled:zr},Vr={name:"BaseTransition",props:Hr,setup(e,{slots:t}){const n=Xo(),r=$r();let i;return()=>{const s=t.default&&Zr(t.default(),!0);if(!s||!s.length)return;let o=s[0];if(s.length>1){let e=!1;for(const t of s)if(t.type!==po){0,o=t,e=!0;break}}const a=Qt(e),{mode:l}=a;if(r.isLeaving)return Xr(o);const c=Yr(o);if(!c)return Xr(o);const u=qr(c,a,r,n);Kr(c,u);const d=n.subTree,h=d&&Yr(d);let p=!1;const{getTransitionKey:f}=c.type;if(f){const e=f();void 0===i?i=e:e!==i&&(i=e,p=!0)}if(h&&h.type!==po&&(!Ao(c,h)||p)){const e=qr(h,a,r,n);if(Kr(h,e),"out-in"===l)return r.isLeaving=!0,e.afterLeave=()=>{r.isLeaving=!1,!1!==n.update.active&&n.update()},Xr(o);"in-out"===l&&c.type!==po&&(e.delayLeave=(e,t,n)=>{const i=Wr(r,h);i[String(h.key)]=h,e._leaveCb=()=>{t(),e._leaveCb=void 0,delete u.delayedLeave},u.delayedLeave=n})}return o}}},jr=Vr;function Wr(e,t){const{leavingVNodes:n}=e;let r=n.get(t.type);return r||(r=Object.create(null),n.set(t.type,r)),r}function qr(e,t,n,r){const{appear:i,mode:s,persisted:o=!1,onBeforeEnter:a,onEnter:l,onAfterEnter:c,onEnterCancelled:u,onBeforeLeave:d,onLeave:h,onAfterLeave:p,onLeaveCancelled:f,onBeforeAppear:g,onAppear:b,onAfterAppear:_,onAppearCancelled:y}=t,v=String(e.key),E=Wr(n,e),x=(e,t)=>{e&&Cn(e,r,9,t)},S=(e,t)=>{const n=t[1];x(e,t),m(e)?e.every((e=>e.length<=1))&&n():e.length<=1&&n()},w={mode:s,persisted:o,beforeEnter(t){let r=a;if(!n.isMounted){if(!i)return;r=g||a}t._leaveCb&&t._leaveCb(!0);const s=E[v];s&&Ao(e,s)&&s.el._leaveCb&&s.el._leaveCb(),x(r,[t])},enter(e){let t=l,r=c,s=u;if(!n.isMounted){if(!i)return;t=b||l,r=_||c,s=y||u}let o=!1;const a=e._enterCb=t=>{o||(o=!0,x(t?s:r,[e]),w.delayedLeave&&w.delayedLeave(),e._enterCb=void 0)};t?S(t,[e,a]):a()},leave(t,r){const i=String(e.key);if(t._enterCb&&t._enterCb(!0),n.isUnmounting)return r();x(d,[t]);let s=!1;const o=t._leaveCb=n=>{s||(s=!0,r(),x(n?f:p,[t]),t._leaveCb=void 0,E[i]===e&&delete E[i])};E[i]=e,h?S(h,[t,o]):o()},clone(e){return qr(e,t,n,r)}};return w}function Xr(e){if(ni(e))return e=Do(e),e.children=null,e}function Yr(e){return ni(e)?e.children?e.children[0]:void 0:e}function Kr(e,t){6&e.shapeFlag&&e.component?Kr(e.component.subTree,t):128&e.shapeFlag?(e.ssContent.transition=t.clone(e.ssContent),e.ssFallback.transition=t.clone(e.ssFallback)):e.transition=t}function Zr(e,t=!1,n){let r=[],i=0;for(let s=0;s1)for(let s=0;sh({name:e.name},t,{setup:e}))():e}const Jr=e=>!!e.type.__asyncLoader;function ei(e){E(e)&&(e={loader:e});const{loader:t,loadingComponent:n,errorComponent:r,delay:i=200,timeout:s,suspensible:o=!0,onError:a}=e;let l,c=null,u=0;const d=()=>(u++,c=null,h()),h=()=>{let e;return c||(e=c=t().catch((e=>{if(e=e instanceof Error?e:new Error(String(e)),a)return new Promise(((t,n)=>{const r=()=>t(d()),i=()=>n(e);a(e,r,i,u+1)}));throw e})).then((t=>e!==c&&c?c:(t&&(t.__esModule||"Module"===t[Symbol.toStringTag])&&(t=t.default),l=t,t))))};return Qr({name:"AsyncComponentWrapper",__asyncLoader:h,get __asyncResolved(){return l},setup(){const e=qo;if(l)return()=>ti(l,e);const t=t=>{c=null,In(t,e,13,!r)};if(o&&e.suspense||ra)return h().then((t=>()=>ti(t,e))).catch((e=>(t(e),()=>r?Oo(r,{error:e}):null)));const a=on(!1),u=on(),d=on(!!i);return i&&setTimeout((()=>{d.value=!1}),i),null!=s&&setTimeout((()=>{if(!a.value&&!u.value){const e=new Error(`Async component timed out after ${s}ms.`);t(e),u.value=e}}),s),h().then((()=>{a.value=!0,e.parent&&ni(e.parent.vnode)&&$n(e.parent.update)})).catch((e=>{t(e),u.value=e})),()=>a.value&&l?ti(l,e):u.value&&r?Oo(r,{error:u.value}):n&&!d.value?Oo(n):void 0}})}function ti(e,t){const{ref:n,props:r,children:i,ce:s}=t.vnode,o=Oo(e,r,i);return o.ref=n,o.ce=s,delete t.vnode.ce,o}const ni=e=>e.type.__isKeepAlive,ri={name:"KeepAlive",__isKeepAlive:!0,props:{include:[String,RegExp,Array],exclude:[String,RegExp,Array],max:[String,Number]},setup(e,{slots:t}){const n=Xo(),r=n.ctx;if(!r.renderer)return()=>{const e=t.default&&t.default();return e&&1===e.length?e[0]:e};const i=new Map,s=new Set;let o=null;const a=n.suspense,{renderer:{p:l,m:c,um:u,o:{createElement:d}}}=r,h=d("div");function p(e){ui(e),u(e,n,a,!0)}function f(e){i.forEach(((t,n)=>{const r=pa(t.type);!r||e&&e(r)||g(n)}))}function g(e){const t=i.get(e);o&&Ao(t,o)?o&&ui(o):p(t),i.delete(e),s.delete(e)}r.activate=(e,t,n,r,i)=>{const s=e.component;c(e,t,n,0,a),l(s.vnode,e,t,n,s,a,r,e.slotScopeIds,i),Xs((()=>{s.isDeactivated=!1,s.a&&$(s.a);const t=e.props&&e.props.onVnodeMounted;t&&Ho(t,s.parent,e)}),a)},r.deactivate=e=>{const t=e.component;c(e,h,null,1,a),Xs((()=>{t.da&&$(t.da);const n=e.props&&e.props.onVnodeUnmounted;n&&Ho(n,t.parent,e),t.isDeactivated=!0}),a)},Mr((()=>[e.include,e.exclude]),(([e,t])=>{e&&f((t=>si(e,t))),t&&f((e=>!si(t,e)))}),{flush:"post",deep:!0});let m=null;const b=()=>{null!=m&&i.set(m,di(n.subTree))};return gi(b),bi(b),_i((()=>{i.forEach((e=>{const{subTree:t,suspense:r}=n,i=di(t);if(e.type!==i.type||e.key!==i.key)p(e);else{ui(i);const e=i.component.da;e&&Xs(e,r)}}))})),()=>{if(m=null,!t.default)return null;const n=t.default(),r=n[0];if(n.length>1)return o=null,n;if(!To(r)||!(4&r.shapeFlag)&&!(128&r.shapeFlag))return o=null,r;let a=di(r);const l=a.type,c=pa(Jr(a)?a.type.__asyncResolved||{}:l),{include:u,exclude:d,max:h}=e;if(u&&(!c||!si(u,c))||d&&c&&si(d,c))return o=a,r;const p=null==a.key?l:a.key,f=i.get(p);return a.el&&(a=Do(a),128&r.shapeFlag&&(r.ssContent=a)),m=p,f?(a.el=f.el,a.component=f.component,a.transition&&Kr(a,a.transition),a.shapeFlag|=512,s.delete(p),s.add(p)):(s.add(p),h&&s.size>parseInt(h,10)&&g(s.values().next().value)),a.shapeFlag|=256,o=a,br(r.type)?r:a}}},ii=ri;function si(e,t){return m(e)?e.some((e=>si(e,t))):x(e)?e.split(",").includes(t):!!v(e)&&e.test(t)}function oi(e,t){li(e,"a",t)}function ai(e,t){li(e,"da",t)}function li(e,t,n=qo){const r=e.__wdc||(e.__wdc=()=>{let t=n;while(t){if(t.isDeactivated)return;t=t.parent}return e()});if(hi(t,r,n),n){let e=n.parent;while(e&&e.parent)ni(e.parent.vnode)&&ci(r,t,n,e),e=e.parent}}function ci(e,t,n,r){const i=hi(t,e,r,!0);yi((()=>{p(r[t],i)}),n)}function ui(e){e.shapeFlag&=-257,e.shapeFlag&=-513}function di(e){return 128&e.shapeFlag?e.ssContent:e}function hi(e,t,n=qo,r=!1){if(n){const i=n[e]||(n[e]=[]),s=t.__weh||(t.__weh=(...r)=>{if(n.isUnmounted)return;$e(),Qo(n);const i=Cn(t,n,e,r);return Jo(),ze(),i});return r?i.unshift(s):i.push(s),s}}const pi=e=>(t,n=qo)=>(!ra||"sp"===e)&&hi(e,((...e)=>t(...e)),n),fi=pi("bm"),gi=pi("m"),mi=pi("bu"),bi=pi("u"),_i=pi("bum"),yi=pi("um"),vi=pi("sp"),Ei=pi("rtg"),xi=pi("rtc");function Si(e,t=qo){hi("ec",e,t)}const wi="components",Ti="directives";function Ai(e,t){return ki(wi,e,!0,t)||e}const Ci=Symbol.for("v-ndc");function Ii(e){return x(e)?ki(wi,e,!1)||e:e||Ci}function Ri(e){return ki(Ti,e)}function ki(e,t,n=!0,r=!1){const i=rr||qo;if(i){const n=i.type;if(e===wi){const e=pa(n,!1);if(e&&(e===t||e===D(t)||e===B(D(t))))return n}const s=Pi(i[e]||n[e],t)||Pi(i.appContext[e],t);return!s&&r?n:s}}function Pi(e,t){return e&&(e[t]||e[D(t)]||e[B(D(t))])}function Oi(e,t,n,r){let i;const s=n&&n[r];if(m(e)||x(e)){i=new Array(e.length);for(let n=0,r=e.length;nt(e,n,void 0,s&&s[n])));else{const n=Object.keys(e);i=new Array(n.length);for(let r=0,o=n.length;r{const t=r.fn(...e);return t&&(t.key=r.key),t}:r.fn)}return e}function Mi(e,t,n={},r,i){if(rr.isCE||rr.parent&&Jr(rr.parent)&&rr.parent.isCE)return"default"!==t&&(n.name=t),Oo("slot",n,r&&r());let s=e[t];s&&s._c&&(s._d=!1),bo();const o=s&&Di(s(n)),a=wo(uo,{key:n.key||o&&o.key||`_${t}`},o||(r?r():[]),o&&1===e._?64:-2);return!i&&a.scopeId&&(a.slotScopeIds=[a.scopeId+"-s"]),s&&s._c&&(s._d=!0),a}function Di(e){return e.some((e=>!To(e)||e.type!==po&&!(e.type===uo&&!Di(e.children))))?e:null}function Li(e,t){const n={};for(const r in e)n[t&&/[A-Z]/.test(r)?`on:${r}`:U(r)]=e[r];return n}const Fi=e=>e?ea(e)?ha(e)||e.proxy:Fi(e.parent):null,Bi=h(Object.create(null),{$:e=>e,$el:e=>e.vnode.el,$data:e=>e.data,$props:e=>e.props,$attrs:e=>e.attrs,$slots:e=>e.slots,$refs:e=>e.refs,$parent:e=>Fi(e.parent),$root:e=>Fi(e.root),$emit:e=>e.emit,$options:e=>cs(e),$forceUpdate:e=>e.f||(e.f=()=>$n(e.update)),$nextTick:e=>e.n||(e.n=Un.bind(e.proxy)),$watch:e=>Lr.bind(e)}),Ui=(e,t)=>e!==s&&!e.__isScriptSetup&&g(e,t),Gi={get({_:e},t){const{ctx:n,setupState:r,data:i,props:o,accessCache:a,type:l,appContext:c}=e;let u;if("$"!==t[0]){const l=a[t];if(void 0!==l)switch(l){case 1:return r[t];case 2:return i[t];case 4:return n[t];case 3:return o[t]}else{if(Ui(r,t))return a[t]=1,r[t];if(i!==s&&g(i,t))return a[t]=2,i[t];if((u=e.propsOptions[0])&&g(u,t))return a[t]=3,o[t];if(n!==s&&g(n,t))return a[t]=4,n[t];is&&(a[t]=0)}}const d=Bi[t];let h,p;return d?("$attrs"===t&&He(e,"get",t),d(e)):(h=l.__cssModules)&&(h=h[t])?h:n!==s&&g(n,t)?(a[t]=4,n[t]):(p=c.config.globalProperties,g(p,t)?p[t]:void 0)},set({_:e},t,n){const{data:r,setupState:i,ctx:o}=e;return Ui(i,t)?(i[t]=n,!0):r!==s&&g(r,t)?(r[t]=n,!0):!g(e.props,t)&&(("$"!==t[0]||!(t.slice(1)in e))&&(o[t]=n,!0))},has({_:{data:e,setupState:t,accessCache:n,ctx:r,appContext:i,propsOptions:o}},a){let l;return!!n[a]||e!==s&&g(e,a)||Ui(t,a)||(l=o[0])&&g(l,a)||g(r,a)||g(Bi,a)||g(i.config.globalProperties,a)},defineProperty(e,t,n){return null!=n.get?e._.accessCache[t]=0:g(n,"value")&&this.set(e,t,n.value,null),Reflect.defineProperty(e,t,n)}};const $i=h({},Gi,{get(e,t){if(t!==Symbol.unscopables)return Gi.get(e,t,e)},has(e,t){const n="_"!==t[0]&&!Y(t);return n}});function zi(){return null}function Hi(){return null}function Vi(e){0}function ji(e){0}function Wi(){return null}function qi(){0}function Xi(e,t){return null}function Yi(){return Qi().slots}function Ki(){return Qi().attrs}function Zi(e,t,n){const r=Xo();if(n&&n.local){const n=on(e[t]);return Mr((()=>e[t]),(e=>n.value=e)),Mr(n,(n=>{n!==e[t]&&r.emit(`update:${t}`,n)})),n}return{__v_isRef:!0,get value(){return e[t]},set value(e){r.emit(`update:${t}`,e)}}}function Qi(){const e=Xo();return e.setupContext||(e.setupContext=da(e))}function Ji(e){return m(e)?e.reduce(((e,t)=>(e[t]=null,e)),{}):e}function es(e,t){const n=Ji(e);for(const r in t){if(r.startsWith("__skip"))continue;let e=n[r];e?m(e)||E(e)?e=n[r]={type:e,default:t[r]}:e.default=t[r]:null===e&&(e=n[r]={default:t[r]}),e&&t[`__skip_${r}`]&&(e.skipFactory=!0)}return n}function ts(e,t){return e&&t?m(e)&&m(t)?e.concat(t):h({},Ji(e),Ji(t)):e||t}function ns(e,t){const n={};for(const r in e)t.includes(r)||Object.defineProperty(n,r,{enumerable:!0,get:()=>e[r]});return n}function rs(e){const t=Xo();let n=e();return Jo(),T(n)&&(n=n.catch((e=>{throw Qo(t),e}))),[n,()=>Qo(t)]}let is=!0;function ss(e){const t=cs(e),n=e.proxy,r=e.ctx;is=!1,t.beforeCreate&&as(t.beforeCreate,e,"bc");const{data:i,computed:s,methods:o,watch:l,provide:c,inject:u,created:d,beforeMount:h,mounted:p,beforeUpdate:f,updated:g,activated:b,deactivated:_,beforeDestroy:y,beforeUnmount:v,destroyed:x,unmounted:S,render:T,renderTracked:A,renderTriggered:C,errorCaptured:I,serverPrefetch:R,expose:k,inheritAttrs:P,components:O,directives:N,filters:M}=t,D=null;if(u&&os(u,r,D),o)for(const a in o){const e=o[a];E(e)&&(r[a]=e.bind(n))}if(i){0;const t=i.call(n,n);0,w(t)&&(e.data=Ht(t))}if(is=!0,s)for(const m in s){const e=s[m],t=E(e)?e.bind(n,n):E(e.get)?e.get.bind(n,n):a;0;const i=!E(e)&&E(e.set)?e.set.bind(n):a,o=ga({get:t,set:i});Object.defineProperty(r,m,{enumerable:!0,configurable:!0,get:()=>o.value,set:e=>o.value=e})}if(l)for(const a in l)ls(l[a],r,n,a);if(c){const e=E(c)?c.call(n):c;Reflect.ownKeys(e).forEach((t=>{Ss(t,e[t])}))}function L(e,t){m(t)?t.forEach((t=>e(t.bind(n)))):t&&e(t.bind(n))}if(d&&as(d,e,"c"),L(fi,h),L(gi,p),L(mi,f),L(bi,g),L(oi,b),L(ai,_),L(Si,I),L(xi,A),L(Ei,C),L(_i,v),L(yi,S),L(vi,R),m(k))if(k.length){const t=e.exposed||(e.exposed={});k.forEach((e=>{Object.defineProperty(t,e,{get:()=>n[e],set:t=>n[e]=t})}))}else e.exposed||(e.exposed={});T&&e.render===a&&(e.render=T),null!=P&&(e.inheritAttrs=P),O&&(e.components=O),N&&(e.directives=N)}function os(e,t,n=a){m(e)&&(e=fs(e));for(const r in e){const n=e[r];let i;i=w(n)?"default"in n?ws(n.from||r,n.default,!0):ws(n.from||r):ws(n),sn(i)?Object.defineProperty(t,r,{enumerable:!0,configurable:!0,get:()=>i.value,set:e=>i.value=e}):t[r]=i}}function as(e,t,n){Cn(m(e)?e.map((e=>e.bind(t.proxy))):e.bind(t.proxy),t,n)}function ls(e,t,n,r){const i=r.includes(".")?Fr(n,r):()=>n[r];if(x(e)){const n=t[e];E(n)&&Mr(i,n)}else if(E(e))Mr(i,e.bind(n));else if(w(e))if(m(e))e.forEach((e=>ls(e,t,n,r)));else{const r=E(e.handler)?e.handler.bind(n):t[e.handler];E(r)&&Mr(i,r,e)}else 0}function cs(e){const t=e.type,{mixins:n,extends:r}=t,{mixins:i,optionsCache:s,config:{optionMergeStrategies:o}}=e.appContext,a=s.get(t);let l;return a?l=a:i.length||n||r?(l={},i.length&&i.forEach((e=>us(l,e,o,!0))),us(l,t,o)):l=t,w(t)&&s.set(t,l),l}function us(e,t,n,r=!1){const{mixins:i,extends:s}=t;s&&us(e,s,n,!0),i&&i.forEach((t=>us(e,t,n,!0)));for(const o in t)if(r&&"expose"===o);else{const r=ds[o]||n&&n[o];e[o]=r?r(e[o],t[o]):t[o]}return e}const ds={data:hs,props:bs,emits:bs,methods:ms,computed:ms,beforeCreate:gs,created:gs,beforeMount:gs,mounted:gs,beforeUpdate:gs,updated:gs,beforeDestroy:gs,beforeUnmount:gs,destroyed:gs,unmounted:gs,activated:gs,deactivated:gs,errorCaptured:gs,serverPrefetch:gs,components:ms,directives:ms,watch:_s,provide:hs,inject:ps};function hs(e,t){return t?e?function(){return h(E(e)?e.call(this,this):e,E(t)?t.call(this,this):t)}:t:e}function ps(e,t){return ms(fs(e),fs(t))}function fs(e){if(m(e)){const t={};for(let n=0;n1)return n&&E(t)?t.call(r&&r.proxy):t}else 0}function Ts(){return!!(qo||rr||xs)}function As(e,t,n,r=!1){const i={},s={};z(s,Io,1),e.propsDefaults=Object.create(null),Is(e,t,i,s);for(const o in e.propsOptions[0])o in i||(i[o]=void 0);n?e.props=r?i:Vt(i):e.type.props?e.props=i:e.props=s,e.attrs=s}function Cs(e,t,n,r){const{props:i,attrs:s,vnode:{patchFlag:o}}=e,a=Qt(i),[l]=e.propsOptions;let c=!1;if(!(r||o>0)||16&o){let r;Is(e,t,i,s)&&(c=!0);for(const s in a)t&&(g(t,s)||(r=F(s))!==s&&g(t,r))||(l?!n||void 0===n[s]&&void 0===n[r]||(i[s]=Rs(l,a,s,void 0,e,!0)):delete i[s]);if(s!==a)for(const e in s)t&&g(t,e)||(delete s[e],c=!0)}else if(8&o){const n=e.vnode.dynamicProps;for(let r=0;r{u=!0;const[n,r]=ks(e,t,!0);h(l,n),r&&c.push(...r)};!n&&t.mixins.length&&t.mixins.forEach(r),e.extends&&r(e.extends),e.mixins&&e.mixins.forEach(r)}if(!a&&!u)return w(e)&&r.set(e,o),o;if(m(a))for(let o=0;o-1,r[1]=n<0||e-1||g(r,"default"))&&c.push(t)}}}}const d=[l,c];return w(e)&&r.set(e,d),d}function Ps(e){return"$"!==e[0]}function Os(e){const t=e&&e.toString().match(/^\s*(function|class) (\w+)/);return t?t[2]:null===e?"null":""}function Ns(e,t){return Os(e)===Os(t)}function Ms(e,t){return m(t)?t.findIndex((t=>Ns(t,e))):E(t)&&Ns(t,e)?0:-1}const Ds=e=>"_"===e[0]||"$stable"===e,Ls=e=>m(e)?e.map(Uo):[Uo(e)],Fs=(e,t,n)=>{if(t._n)return t;const r=cr(((...e)=>Ls(t(...e))),n);return r._c=!1,r},Bs=(e,t,n)=>{const r=e._ctx;for(const i in e){if(Ds(i))continue;const n=e[i];if(E(n))t[i]=Fs(i,n,r);else if(null!=n){0;const e=Ls(n);t[i]=()=>e}}},Us=(e,t)=>{const n=Ls(t);e.slots.default=()=>n},Gs=(e,t)=>{if(32&e.vnode.shapeFlag){const n=t._;n?(e.slots=Qt(t),z(t,"_",n)):Bs(t,e.slots={})}else e.slots={},t&&Us(e,t);z(e.slots,Io,1)},$s=(e,t,n)=>{const{vnode:r,slots:i}=e;let o=!0,a=s;if(32&r.shapeFlag){const e=t._;e?n&&1===e?o=!1:(h(i,t),n||1!==e||delete i._):(o=!t.$stable,Bs(t,i)),a=t}else t&&(Us(e,t),a={default:1});if(o)for(const s in i)Ds(s)||s in a||delete i[s]};function zs(e,t,n,r,i=!1){if(m(e))return void e.forEach(((e,s)=>zs(e,t&&(m(t)?t[s]:t),n,r,i)));if(Jr(r)&&!i)return;const o=4&r.shapeFlag?ha(r.component)||r.component.proxy:r.el,a=i?null:o,{i:l,r:c}=e;const u=t&&t.r,d=l.refs===s?l.refs={}:l.refs,h=l.setupState;if(null!=u&&u!==c&&(x(u)?(d[u]=null,g(h,u)&&(h[u]=null)):sn(u)&&(u.value=null)),E(c))An(c,l,12,[a,d]);else{const t=x(c),r=sn(c);if(t||r){const s=()=>{if(e.f){const n=t?g(h,c)?h[c]:d[c]:c.value;i?m(n)&&p(n,o):m(n)?n.includes(o)||n.push(o):t?(d[c]=[o],g(h,c)&&(h[c]=d[c])):(c.value=[o],e.k&&(d[e.k]=c.value))}else t?(d[c]=a,g(h,c)&&(h[c]=a)):r&&(c.value=a,e.k&&(d[e.k]=a))};a?(s.id=-1,Xs(s,n)):s()}else 0}}let Hs=!1;const Vs=e=>/svg/.test(e.namespaceURI)&&"foreignObject"!==e.tagName,js=e=>8===e.nodeType;function Ws(e){const{mt:t,p:n,o:{patchProp:r,createText:i,nextSibling:s,parentNode:o,remove:a,insert:l,createComment:c}}=e,d=(e,t)=>{if(!t.hasChildNodes())return n(null,e,t),Wn(),void(t._vnode=e);Hs=!1,h(t.firstChild,e,null,null,null),Wn(),t._vnode=e,Hs&&console.error("Hydration completed but contains mismatches.")},h=(n,r,a,c,u,d=!1)=>{const _=js(n)&&"["===n.data,y=()=>m(n,r,a,c,u,_),{type:v,ref:E,shapeFlag:x,patchFlag:S}=r;let w=n.nodeType;r.el=n,-2===S&&(d=!1,r.dynamicChildren=null);let T=null;switch(v){case ho:3!==w?""===r.children?(l(r.el=i(""),o(n),n),T=n):T=y():(n.data!==r.children&&(Hs=!0,n.data=r.children),T=s(n));break;case po:T=8!==w||_?y():s(n);break;case fo:if(_&&(n=s(n),w=n.nodeType),1===w||3===w){T=n;const e=!r.children.length;for(let t=0;t{o=o||!!t.dynamicChildren;const{type:l,props:c,patchFlag:d,shapeFlag:h,dirs:p}=t,g="input"===l&&p||"option"===l;if(g||-1!==d){if(p&&Gr(t,null,n,"created"),c)if(g||!o||48&d)for(const t in c)(g&&t.endsWith("value")||u(t)&&!P(t))&&r(e,t,null,c[t],!1,void 0,n);else c.onClick&&r(e,"onClick",null,c.onClick,!1,void 0,n);let l;if((l=c&&c.onVnodeBeforeMount)&&Ho(l,n,t),p&&Gr(t,null,n,"beforeMount"),((l=c&&c.onVnodeMounted)||p)&&Cr((()=>{l&&Ho(l,n,t),p&&Gr(t,null,n,"mounted")}),i),16&h&&(!c||!c.innerHTML&&!c.textContent)){let r=f(e.firstChild,t,e,n,i,s,o);while(r){Hs=!0;const e=r;r=r.nextSibling,a(e)}}else 8&h&&e.textContent!==t.children&&(Hs=!0,e.textContent=t.children)}return e.nextSibling},f=(e,t,r,i,s,o,a)=>{a=a||!!t.dynamicChildren;const l=t.children,c=l.length;for(let u=0;u{const{slotScopeIds:u}=t;u&&(i=i?i.concat(u):u);const d=o(e),h=f(s(e),t,d,n,r,i,a);return h&&js(h)&&"]"===h.data?s(t.anchor=h):(Hs=!0,l(t.anchor=c("]"),d,h),h)},m=(e,t,r,i,l,c)=>{if(Hs=!0,t.el=null,c){const t=b(e);while(1){const n=s(e);if(!n||n===t)break;a(n)}}const u=s(e),d=o(e);return a(e),n(null,t,d,u,r,i,Vs(d),l),u},b=e=>{let t=0;while(e)if(e=s(e),e&&js(e)&&("["===e.data&&t++,"]"===e.data)){if(0===t)return s(e);t--}return e};return[d,h]}function qs(){}const Xs=Cr;function Ys(e){return Zs(e)}function Ks(e){return Zs(e,Ws)}function Zs(e,t){qs();const n=W();n.__VUE__=!0;const{insert:r,remove:i,patchProp:l,createElement:c,createText:u,createComment:d,setText:h,setElementText:p,parentNode:f,nextSibling:g,setScopeId:m=a,insertStaticContent:b}=e,_=(e,t,n,r=null,i=null,s=null,o=!1,a=null,l=!!t.dynamicChildren)=>{if(e===t)return;e&&!Ao(e,t)&&(r=Y(e),H(e,i,s,!0),e=null),-2===t.patchFlag&&(l=!1,t.dynamicChildren=null);const{type:c,ref:u,shapeFlag:d}=t;switch(c){case ho:y(e,t,n,r);break;case po:v(e,t,n,r);break;case fo:null==e&&E(t,n,r,o);break;case uo:O(e,t,n,r,i,s,o,a,l);break;default:1&d?w(e,t,n,r,i,s,o,a,l):6&d?N(e,t,n,r,i,s,o,a,l):(64&d||128&d)&&c.process(e,t,n,r,i,s,o,a,l,Z)}null!=u&&i&&zs(u,e&&e.ref,s,t||e,!t)},y=(e,t,n,i)=>{if(null==e)r(t.el=u(t.children),n,i);else{const n=t.el=e.el;t.children!==e.children&&h(n,t.children)}},v=(e,t,n,i)=>{null==e?r(t.el=d(t.children||""),n,i):t.el=e.el},E=(e,t,n,r)=>{[e.el,e.anchor]=b(e.children,t,n,r,e.el,e.anchor)},x=({el:e,anchor:t},n,i)=>{let s;while(e&&e!==t)s=g(e),r(e,n,i),e=s;r(t,n,i)},S=({el:e,anchor:t})=>{let n;while(e&&e!==t)n=g(e),i(e),e=n;i(t)},w=(e,t,n,r,i,s,o,a,l)=>{o=o||"svg"===t.type,null==e?T(t,n,r,i,s,o,a,l):I(e,t,i,s,o,a,l)},T=(e,t,n,i,s,o,a,u)=>{let d,h;const{type:f,props:g,shapeFlag:m,transition:b,dirs:_}=e;if(d=e.el=c(e.type,o,g&&g.is,g),8&m?p(d,e.children):16&m&&C(e.children,d,null,i,s,o&&"foreignObject"!==f,a,u),_&&Gr(e,null,i,"created"),A(d,e,e.scopeId,a,i),g){for(const t in g)"value"===t||P(t)||l(d,t,null,g[t],o,e.children,i,s,X);"value"in g&&l(d,"value",null,g.value),(h=g.onVnodeBeforeMount)&&Ho(h,i,e)}_&&Gr(e,null,i,"beforeMount");const y=(!s||s&&!s.pendingBranch)&&b&&!b.persisted;y&&b.beforeEnter(d),r(d,t,n),((h=g&&g.onVnodeMounted)||y||_)&&Xs((()=>{h&&Ho(h,i,e),y&&b.enter(d),_&&Gr(e,null,i,"mounted")}),s)},A=(e,t,n,r,i)=>{if(n&&m(e,n),r)for(let s=0;s{for(let c=l;c{const c=t.el=e.el;let{patchFlag:u,dynamicChildren:d,dirs:h}=t;u|=16&e.patchFlag;const f=e.props||s,g=t.props||s;let m;n&&Qs(n,!1),(m=g.onVnodeBeforeUpdate)&&Ho(m,n,t,e),h&&Gr(t,e,n,"beforeUpdate"),n&&Qs(n,!0);const b=i&&"foreignObject"!==t.type;if(d?R(e.dynamicChildren,d,c,n,r,b,o):a||B(e,t,c,null,n,r,b,o,!1),u>0){if(16&u)k(c,t,f,g,n,r,i);else if(2&u&&f.class!==g.class&&l(c,"class",null,g.class,i),4&u&&l(c,"style",f.style,g.style,i),8&u){const s=t.dynamicProps;for(let t=0;t{m&&Ho(m,n,t,e),h&&Gr(t,e,n,"updated")}),r)},R=(e,t,n,r,i,s,o)=>{for(let a=0;a{if(n!==r){if(n!==s)for(const s in n)P(s)||s in r||l(e,s,n[s],null,a,t.children,i,o,X);for(const s in r){if(P(s))continue;const c=r[s],u=n[s];c!==u&&"value"!==s&&l(e,s,u,c,a,t.children,i,o,X)}"value"in r&&l(e,"value",n.value,r.value)}},O=(e,t,n,i,s,o,a,l,c)=>{const d=t.el=e?e.el:u(""),h=t.anchor=e?e.anchor:u("");let{patchFlag:p,dynamicChildren:f,slotScopeIds:g}=t;g&&(l=l?l.concat(g):g),null==e?(r(d,n,i),r(h,n,i),C(t.children,n,h,s,o,a,l,c)):p>0&&64&p&&f&&e.dynamicChildren?(R(e.dynamicChildren,f,n,s,o,a,l),(null!=t.key||s&&t===s.subTree)&&Js(e,t,!0)):B(e,t,n,h,s,o,a,l,c)},N=(e,t,n,r,i,s,o,a,l)=>{t.slotScopeIds=a,null==e?512&t.shapeFlag?i.ctx.activate(t,n,r,o,l):M(t,n,r,i,s,o,l):D(e,t,l)},M=(e,t,n,r,i,s,o)=>{const a=e.component=Wo(e,r,i);if(ni(e)&&(a.ctx.renderer=Z),ia(a),a.asyncDep){if(i&&i.registerDep(a,L),!e.el){const e=a.subTree=Oo(po);v(null,e,t,n)}}else L(a,e,t,n,i,s,o)},D=(e,t,n)=>{const r=t.component=e.component;if(fr(e,t,n)){if(r.asyncDep&&!r.asyncResolved)return void F(r,t,n);r.next=t,Hn(r.update),r.update()}else t.el=e.el,r.vnode=t},L=(e,t,n,r,i,s,o)=>{const a=()=>{if(e.isMounted){let t,{next:n,bu:r,u:a,parent:l,vnode:c}=e,u=n;0,Qs(e,!1),n?(n.el=c.el,F(e,n,o)):n=c,r&&$(r),(t=n.props&&n.props.onVnodeBeforeUpdate)&&Ho(t,l,n,c),Qs(e,!0);const d=ur(e);0;const h=e.subTree;e.subTree=d,_(h,d,f(h.el),Y(h),e,i,s),n.el=d.el,null===u&&mr(e,d.el),a&&Xs(a,i),(t=n.props&&n.props.onVnodeUpdated)&&Xs((()=>Ho(t,l,n,c)),i)}else{let o;const{el:a,props:l}=t,{bm:c,m:u,parent:d}=e,h=Jr(t);if(Qs(e,!1),c&&$(c),!h&&(o=l&&l.onVnodeBeforeMount)&&Ho(o,d,t),Qs(e,!0),a&&J){const n=()=>{e.subTree=ur(e),J(a,e.subTree,e,i,null)};h?t.type.__asyncLoader().then((()=>!e.isUnmounted&&n())):n()}else{0;const o=e.subTree=ur(e);0,_(null,o,n,r,e,i,s),t.el=o.el}if(u&&Xs(u,i),!h&&(o=l&&l.onVnodeMounted)){const e=t;Xs((()=>Ho(o,d,e)),i)}(256&t.shapeFlag||d&&Jr(d.vnode)&&256&d.vnode.shapeFlag)&&e.a&&Xs(e.a,i),e.isMounted=!0,t=n=r=null}},l=e.effect=new De(a,(()=>$n(c)),e.scope),c=e.update=()=>l.run();c.id=e.uid,Qs(e,!0),c()},F=(e,t,n)=>{t.component=e;const r=e.vnode.props;e.vnode=t,e.next=null,Cs(e,t.props,r,n),$s(e,t.children,n),$e(),jn(),ze()},B=(e,t,n,r,i,s,o,a,l=!1)=>{const c=e&&e.children,u=e?e.shapeFlag:0,d=t.children,{patchFlag:h,shapeFlag:f}=t;if(h>0){if(128&h)return void G(c,d,n,r,i,s,o,a,l);if(256&h)return void U(c,d,n,r,i,s,o,a,l)}8&f?(16&u&&X(c,i,s),d!==c&&p(n,d)):16&u?16&f?G(c,d,n,r,i,s,o,a,l):X(c,i,s,!0):(8&u&&p(n,""),16&f&&C(d,n,r,i,s,o,a,l))},U=(e,t,n,r,i,s,a,l,c)=>{e=e||o,t=t||o;const u=e.length,d=t.length,h=Math.min(u,d);let p;for(p=0;pd?X(e,i,s,!0,!1,h):C(t,n,r,i,s,a,l,c,h)},G=(e,t,n,r,i,s,a,l,c)=>{let u=0;const d=t.length;let h=e.length-1,p=d-1;while(u<=h&&u<=p){const r=e[u],o=t[u]=c?Go(t[u]):Uo(t[u]);if(!Ao(r,o))break;_(r,o,n,null,i,s,a,l,c),u++}while(u<=h&&u<=p){const r=e[h],o=t[p]=c?Go(t[p]):Uo(t[p]);if(!Ao(r,o))break;_(r,o,n,null,i,s,a,l,c),h--,p--}if(u>h){if(u<=p){const e=p+1,o=ep)while(u<=h)H(e[u],i,s,!0),u++;else{const f=u,g=u,m=new Map;for(u=g;u<=p;u++){const e=t[u]=c?Go(t[u]):Uo(t[u]);null!=e.key&&m.set(e.key,u)}let b,y=0;const v=p-g+1;let E=!1,x=0;const S=new Array(v);for(u=0;u=v){H(r,i,s,!0);continue}let o;if(null!=r.key)o=m.get(r.key);else for(b=g;b<=p;b++)if(0===S[b-g]&&Ao(r,t[b])){o=b;break}void 0===o?H(r,i,s,!0):(S[o-g]=u+1,o>=x?x=o:E=!0,_(r,t[o],n,null,i,s,a,l,c),y++)}const w=E?eo(S):o;for(b=w.length-1,u=v-1;u>=0;u--){const e=g+u,o=t[e],h=e+1{const{el:o,type:a,transition:l,children:c,shapeFlag:u}=e;if(6&u)return void z(e.component.subTree,t,n,i);if(128&u)return void e.suspense.move(t,n,i);if(64&u)return void a.move(e,t,n,Z);if(a===uo){r(o,t,n);for(let e=0;el.enter(o)),s);else{const{leave:e,delayLeave:i,afterLeave:s}=l,a=()=>r(o,t,n),c=()=>{e(o,(()=>{a(),s&&s()}))};i?i(o,a,c):c()}else r(o,t,n)},H=(e,t,n,r=!1,i=!1)=>{const{type:s,props:o,ref:a,children:l,dynamicChildren:c,shapeFlag:u,patchFlag:d,dirs:h}=e;if(null!=a&&zs(a,null,n,e,!0),256&u)return void t.ctx.deactivate(e);const p=1&u&&h,f=!Jr(e);let g;if(f&&(g=o&&o.onVnodeBeforeUnmount)&&Ho(g,t,e),6&u)q(e.component,n,r);else{if(128&u)return void e.suspense.unmount(n,r);p&&Gr(e,null,t,"beforeUnmount"),64&u?e.type.remove(e,t,n,i,Z,r):c&&(s!==uo||d>0&&64&d)?X(c,t,n,!1,!0):(s===uo&&384&d||!i&&16&u)&&X(l,t,n),r&&V(e)}(f&&(g=o&&o.onVnodeUnmounted)||p)&&Xs((()=>{g&&Ho(g,t,e),p&&Gr(e,null,t,"unmounted")}),n)},V=e=>{const{type:t,el:n,anchor:r,transition:s}=e;if(t===uo)return void j(n,r);if(t===fo)return void S(e);const o=()=>{i(n),s&&!s.persisted&&s.afterLeave&&s.afterLeave()};if(1&e.shapeFlag&&s&&!s.persisted){const{leave:t,delayLeave:r}=s,i=()=>t(n,o);r?r(e.el,o,i):i()}else o()},j=(e,t)=>{let n;while(e!==t)n=g(e),i(e),e=n;i(t)},q=(e,t,n)=>{const{bum:r,scope:i,update:s,subTree:o,um:a}=e;r&&$(r),i.stop(),s&&(s.active=!1,H(o,e,t,n)),a&&Xs(a,t),Xs((()=>{e.isUnmounted=!0}),t),t&&t.pendingBranch&&!t.isUnmounted&&e.asyncDep&&!e.asyncResolved&&e.suspenseId===t.pendingId&&(t.deps--,0===t.deps&&t.resolve())},X=(e,t,n,r=!1,i=!1,s=0)=>{for(let o=s;o6&e.shapeFlag?Y(e.component.subTree):128&e.shapeFlag?e.suspense.next():g(e.anchor||e.el),K=(e,t,n)=>{null==e?t._vnode&&H(t._vnode,null,null,!0):_(t._vnode||null,e,t,null,null,null,n),jn(),Wn(),t._vnode=e},Z={p:_,um:H,m:z,r:V,mt:M,mc:C,pc:B,pbc:R,n:Y,o:e};let Q,J;return t&&([Q,J]=t(Z)),{render:K,hydrate:Q,createApp:Es(K,Q)}}function Qs({effect:e,update:t},n){e.allowRecurse=t.allowRecurse=n}function Js(e,t,n=!1){const r=e.children,i=t.children;if(m(r)&&m(i))for(let s=0;s>1,e[n[a]]0&&(t[r]=n[s-1]),n[s]=r)}}s=n.length,o=n[s-1];while(s-- >0)n[s]=o,o=t[o];return n}const to=e=>e.__isTeleport,no=e=>e&&(e.disabled||""===e.disabled),ro=e=>"undefined"!==typeof SVGElement&&e instanceof SVGElement,io=(e,t)=>{const n=e&&e.to;if(x(n)){if(t){const e=t(n);return e}return null}return n},so={__isTeleport:!0,process(e,t,n,r,i,s,o,a,l,c){const{mc:u,pc:d,pbc:h,o:{insert:p,querySelector:f,createText:g,createComment:m}}=c,b=no(t.props);let{shapeFlag:_,children:y,dynamicChildren:v}=t;if(null==e){const e=t.el=g(""),c=t.anchor=g("");p(e,n,r),p(c,n,r);const d=t.target=io(t.props,f),h=t.targetAnchor=g("");d&&(p(h,d),o=o||ro(d));const m=(e,t)=>{16&_&&u(y,e,t,i,s,o,a,l)};b?m(n,c):d&&m(d,h)}else{t.el=e.el;const r=t.anchor=e.anchor,u=t.target=e.target,p=t.targetAnchor=e.targetAnchor,g=no(e.props),m=g?n:u,_=g?r:p;if(o=o||ro(u),v?(h(e.dynamicChildren,v,m,i,s,o,a),Js(e,t,!0)):l||d(e,t,m,_,i,s,o,a,!1),b)g||oo(t,n,r,c,1);else if((t.props&&t.props.to)!==(e.props&&e.props.to)){const e=t.target=io(t.props,f);e&&oo(t,e,null,c,0)}else g&&oo(t,u,p,c,1)}co(t)},remove(e,t,n,r,{um:i,o:{remove:s}},o){const{shapeFlag:a,children:l,anchor:c,targetAnchor:u,target:d,props:h}=e;if(d&&s(u),(o||!no(h))&&(s(c),16&a))for(let p=0;p0?mo||o:null,_o(),vo>0&&mo&&mo.push(e),e}function So(e,t,n,r,i,s){return xo(Po(e,t,n,r,i,s,!0))}function wo(e,t,n,r,i){return xo(Oo(e,t,n,r,i,!0))}function To(e){return!!e&&!0===e.__v_isVNode}function Ao(e,t){return e.type===t.type&&e.key===t.key}function Co(e){yo=e}const Io="__vInternal",Ro=({key:e})=>null!=e?e:null,ko=({ref:e,ref_key:t,ref_for:n})=>("number"===typeof e&&(e=""+e),null!=e?x(e)||sn(e)||E(e)?{i:rr,r:e,k:t,f:!!n}:e:null);function Po(e,t=null,n=null,r=0,i=null,s=(e===uo?0:1),o=!1,a=!1){const l={__v_isVNode:!0,__v_skip:!0,type:e,props:t,key:t&&Ro(t),ref:t&&ko(t),scopeId:ir,slotScopeIds:null,children:n,component:null,suspense:null,ssContent:null,ssFallback:null,dirs:null,transition:null,el:null,anchor:null,target:null,targetAnchor:null,staticCount:0,shapeFlag:s,patchFlag:r,dynamicProps:i,dynamicChildren:null,appContext:null,ctx:rr};return a?($o(l,n),128&s&&e.normalize(l)):n&&(l.shapeFlag|=x(n)?8:16),vo>0&&!o&&mo&&(l.patchFlag>0||6&s)&&32!==l.patchFlag&&mo.push(l),l}const Oo=No;function No(e,t=null,n=null,r=0,i=null,s=!1){if(e&&e!==Ci||(e=po),To(e)){const r=Do(e,t,!0);return n&&$o(r,n),vo>0&&!s&&mo&&(6&r.shapeFlag?mo[mo.indexOf(e)]=r:mo.push(r)),r.patchFlag|=-2,r}if(fa(e)&&(e=e.__vccOpts),t){t=Mo(t);let{class:e,style:n}=t;e&&!x(e)&&(t.class=te(e)),w(n)&&(Zt(n)&&!m(n)&&(n=h({},n)),t.style=K(n))}const o=x(e)?1:br(e)?128:to(e)?64:w(e)?4:E(e)?2:0;return Po(e,t,n,r,i,o,s,!0)}function Mo(e){return e?Zt(e)||Io in e?h({},e):e:null}function Do(e,t,n=!1){const{props:r,ref:i,patchFlag:s,children:o}=e,a=t?zo(r||{},t):r,l={__v_isVNode:!0,__v_skip:!0,type:e.type,props:a,key:a&&Ro(a),ref:t&&t.ref?n&&i?m(i)?i.concat(ko(t)):[i,ko(t)]:ko(t):i,scopeId:e.scopeId,slotScopeIds:e.slotScopeIds,children:o,target:e.target,targetAnchor:e.targetAnchor,staticCount:e.staticCount,shapeFlag:e.shapeFlag,patchFlag:t&&e.type!==uo?-1===s?16:16|s:s,dynamicProps:e.dynamicProps,dynamicChildren:e.dynamicChildren,appContext:e.appContext,dirs:e.dirs,transition:e.transition,component:e.component,suspense:e.suspense,ssContent:e.ssContent&&Do(e.ssContent),ssFallback:e.ssFallback&&Do(e.ssFallback),el:e.el,anchor:e.anchor,ctx:e.ctx,ce:e.ce};return l}function Lo(e=" ",t=0){return Oo(ho,null,e,t)}function Fo(e,t){const n=Oo(fo,null,e);return n.staticCount=t,n}function Bo(e="",t=!1){return t?(bo(),wo(po,null,e)):Oo(po,null,e)}function Uo(e){return null==e||"boolean"===typeof e?Oo(po):m(e)?Oo(uo,null,e.slice()):"object"===typeof e?Go(e):Oo(ho,null,String(e))}function Go(e){return null===e.el&&-1!==e.patchFlag||e.memo?e:Do(e)}function $o(e,t){let n=0;const{shapeFlag:r}=e;if(null==t)t=null;else if(m(t))n=16;else if("object"===typeof t){if(65&r){const n=t.default;return void(n&&(n._c&&(n._d=!1),$o(e,n()),n._c&&(n._d=!0)))}{n=32;const r=t._;r||Io in t?3===r&&rr&&(1===rr.slots._?t._=1:(t._=2,e.patchFlag|=1024)):t._ctx=rr}}else E(t)?(t={default:t,_ctx:rr},n=32):(t=String(t),64&r?(n=16,t=[Lo(t)]):n=8);e.children=t,e.shapeFlag|=n}function zo(...e){const t={};for(let n=0;nqo||rr;let Yo,Ko,Zo="__VUE_INSTANCE_SETTERS__";(Ko=W()[Zo])||(Ko=W()[Zo]=[]),Ko.push((e=>qo=e)),Yo=e=>{Ko.length>1?Ko.forEach((t=>t(e))):Ko[0](e)};const Qo=e=>{Yo(e),e.scope.on()},Jo=()=>{qo&&qo.scope.off(),Yo(null)};function ea(e){return 4&e.vnode.shapeFlag}let ta,na,ra=!1;function ia(e,t=!1){ra=t;const{props:n,children:r}=e.vnode,i=ea(e);As(e,n,i,t),Gs(e,r);const s=i?sa(e,t):void 0;return ra=!1,s}function sa(e,t){const n=e.type;e.accessCache=Object.create(null),e.proxy=Jt(new Proxy(e.ctx,Gi));const{setup:r}=n;if(r){const n=e.setupContext=r.length>1?da(e):null;Qo(e),$e();const i=An(r,e,0,[e.props,n]);if(ze(),Jo(),T(i)){if(i.then(Jo,Jo),t)return i.then((n=>{oa(e,n,t)})).catch((t=>{In(t,e,0)}));e.asyncDep=i}else oa(e,i,t)}else ca(e,t)}function oa(e,t,n){E(t)?e.type.__ssrInlineRender?e.ssrRender=t:e.render=t:w(t)&&(e.setupState=fn(t)),ca(e,n)}function aa(e){ta=e,na=e=>{e.render._rc&&(e.withProxy=new Proxy(e.ctx,$i))}}const la=()=>!ta;function ca(e,t,n){const r=e.type;if(!e.render){if(!t&&ta&&!r.render){const t=r.template||cs(e).template;if(t){0;const{isCustomElement:n,compilerOptions:i}=e.appContext.config,{delimiters:s,compilerOptions:o}=r,a=h(h({isCustomElement:n,delimiters:s},i),o);r.render=ta(t,a)}}e.render=r.render||a,na&&na(e)}Qo(e),$e(),ss(e),ze(),Jo()}function ua(e){return e.attrsProxy||(e.attrsProxy=new Proxy(e.attrs,{get(t,n){return He(e,"get","$attrs"),t[n]}}))}function da(e){const t=t=>{e.exposed=t||{}};return{get attrs(){return ua(e)},slots:e.slots,emit:e.emit,expose:t}}function ha(e){if(e.exposed)return e.exposeProxy||(e.exposeProxy=new Proxy(fn(Jt(e.exposed)),{get(t,n){return n in t?t[n]:n in Bi?Bi[n](e):void 0},has(e,t){return t in e||t in Bi}}))}function pa(e,t=!0){return E(e)?e.displayName||e.name:e.name||t&&e.__name}function fa(e){return E(e)&&"__vccOpts"in e}const ga=(e,t)=>Sn(e,t,ra);function ma(e,t,n){const r=arguments.length;return 2===r?w(t)&&!m(t)?To(t)?Oo(e,null,[t]):Oo(e,t):Oo(e,null,t):(r>3?n=Array.prototype.slice.call(arguments,2):3===r&&To(n)&&(n=[n]),Oo(e,t,n))}const ba=Symbol.for("v-scx"),_a=()=>{{const e=ws(ba);return e}};function ya(){return void 0}function va(e,t,n,r){const i=n[r];if(i&&Ea(i,e))return i;const s=t();return s.memo=e.slice(),n[r]=s}function Ea(e,t){const n=e.memo;if(n.length!=t.length)return!1;for(let r=0;r0&&mo&&mo.push(e),!0}const xa="3.3.4",Sa={createComponentInstance:Wo,setupComponent:ia,renderComponentRoot:ur,setCurrentRenderingInstance:sr,isVNode:To,normalizeVNode:Uo},wa=Sa,Ta=null,Aa=null,Ca="http://www.w3.org/2000/svg",Ia="undefined"!==typeof document?document:null,Ra=Ia&&Ia.createElement("template"),ka={insert:(e,t,n)=>{t.insertBefore(e,n||null)},remove:e=>{const t=e.parentNode;t&&t.removeChild(e)},createElement:(e,t,n,r)=>{const i=t?Ia.createElementNS(Ca,e):Ia.createElement(e,n?{is:n}:void 0);return"select"===e&&r&&null!=r.multiple&&i.setAttribute("multiple",r.multiple),i},createText:e=>Ia.createTextNode(e),createComment:e=>Ia.createComment(e),setText:(e,t)=>{e.nodeValue=t},setElementText:(e,t)=>{e.textContent=t},parentNode:e=>e.parentNode,nextSibling:e=>e.nextSibling,querySelector:e=>Ia.querySelector(e),setScopeId(e,t){e.setAttribute(t,"")},insertStaticContent(e,t,n,r,i,s){const o=n?n.previousSibling:t.lastChild;if(i&&(i===s||i.nextSibling)){while(1)if(t.insertBefore(i.cloneNode(!0),n),i===s||!(i=i.nextSibling))break}else{Ra.innerHTML=r?`${e}`:e;const i=Ra.content;if(r){const e=i.firstChild;while(e.firstChild)i.appendChild(e.firstChild);i.removeChild(e)}t.insertBefore(i,n)}return[o?o.nextSibling:t.firstChild,n?n.previousSibling:t.lastChild]}};function Pa(e,t,n){const r=e._vtc;r&&(t=(t?[t,...r]:[...r]).join(" ")),null==t?e.removeAttribute("class"):n?e.setAttribute("class",t):e.className=t}function Oa(e,t,n){const r=e.style,i=x(n);if(n&&!i){if(t&&!x(t))for(const e in t)null==n[e]&&Ma(r,e,"");for(const e in n)Ma(r,e,n[e])}else{const s=r.display;i?t!==n&&(r.cssText=n):t&&e.removeAttribute("style"),"_vod"in e&&(r.display=s)}}const Na=/\s*!important$/;function Ma(e,t,n){if(m(n))n.forEach((n=>Ma(e,t,n)));else if(null==n&&(n=""),t.startsWith("--"))e.setProperty(t,n);else{const r=Fa(e,t);Na.test(n)?e.setProperty(F(r),n.replace(Na,""),"important"):e[r]=n}}const Da=["Webkit","Moz","ms"],La={};function Fa(e,t){const n=La[t];if(n)return n;let r=D(t);if("filter"!==r&&r in e)return La[t]=r;r=B(r);for(let i=0;iWa||(qa.then((()=>Wa=0)),Wa=Date.now());function Ya(e,t){const n=e=>{if(e._vts){if(e._vts<=n.attached)return}else e._vts=Date.now();Cn(Ka(e,n.value),t,5,[e])};return n.value=e,n.attached=Xa(),n}function Ka(e,t){if(m(t)){const n=e.stopImmediatePropagation;return e.stopImmediatePropagation=()=>{n.call(e),e._stopped=!0},t.map((e=>t=>!t._stopped&&e&&e(t)))}return t}const Za=/^on[a-z]/,Qa=(e,t,n,r,i=!1,s,o,a,l)=>{"class"===t?Pa(e,r,i):"style"===t?Oa(e,n,r):u(t)?d(t)||Ha(e,t,n,r,o):("."===t[0]?(t=t.slice(1),1):"^"===t[0]?(t=t.slice(1),0):Ja(e,t,r,i))?Ga(e,t,r,s,o,a,l):("true-value"===t?e._trueValue=r:"false-value"===t&&(e._falseValue=r),Ua(e,t,r,i))};function Ja(e,t,n,r){return r?"innerHTML"===t||"textContent"===t||!!(t in e&&Za.test(t)&&E(n)):"spellcheck"!==t&&"draggable"!==t&&"translate"!==t&&("form"!==t&&(("list"!==t||"INPUT"!==e.tagName)&&(("type"!==t||"TEXTAREA"!==e.tagName)&&((!Za.test(t)||!x(n))&&t in e))))}function el(e,t){const n=Qr(e);class r extends rl{constructor(e){super(n,e,t)}}return r.def=n,r}const tl=e=>el(e,uc),nl="undefined"!==typeof HTMLElement?HTMLElement:class{};class rl extends nl{constructor(e,t={},n){super(),this._def=e,this._props=t,this._instance=null,this._connected=!1,this._resolved=!1,this._numberProps=null,this.shadowRoot&&n?n(this._createVNode(),this.shadowRoot):(this.attachShadow({mode:"open"}),this._def.__asyncLoader||this._resolveProps(this._def))}connectedCallback(){this._connected=!0,this._instance||(this._resolved?this._update():this._resolveDef())}disconnectedCallback(){this._connected=!1,Un((()=>{this._connected||(cc(null,this.shadowRoot),this._instance=null)}))}_resolveDef(){this._resolved=!0;for(let n=0;n{for(const t of e)this._setAttr(t.attributeName)})).observe(this,{attributes:!0});const e=(e,t=!1)=>{const{props:n,styles:r}=e;let i;if(n&&!m(n))for(const s in n){const e=n[s];(e===Number||e&&e.type===Number)&&(s in this._props&&(this._props[s]=V(this._props[s])),(i||(i=Object.create(null)))[D(s)]=!0)}this._numberProps=i,t&&this._resolveProps(e),this._applyStyles(r),this._update()},t=this._def.__asyncLoader;t?t().then((t=>e(t,!0))):e(this._def)}_resolveProps(e){const{props:t}=e,n=m(t)?t:Object.keys(t||{});for(const r of Object.keys(this))"_"!==r[0]&&n.includes(r)&&this._setProp(r,this[r],!0,!1);for(const r of n.map(D))Object.defineProperty(this,r,{get(){return this._getProp(r)},set(e){this._setProp(r,e)}})}_setAttr(e){let t=this.getAttribute(e);const n=D(e);this._numberProps&&this._numberProps[n]&&(t=V(t)),this._setProp(n,t,!1)}_getProp(e){return this._props[e]}_setProp(e,t,n=!0,r=!0){t!==this._props[e]&&(this._props[e]=t,r&&this._instance&&this._update(),n&&(!0===t?this.setAttribute(F(e),""):"string"===typeof t||"number"===typeof t?this.setAttribute(F(e),t+""):t||this.removeAttribute(F(e))))}_update(){cc(this._createVNode(),this.shadowRoot)}_createVNode(){const e=Oo(this._def,h({},this._props));return this._instance||(e.ce=e=>{this._instance=e,e.isCE=!0;const t=(e,t)=>{this.dispatchEvent(new CustomEvent(e,{detail:t}))};e.emit=(e,...n)=>{t(e,n),F(e)!==e&&t(F(e),n)};let n=this;while(n=n&&(n.parentNode||n.host))if(n instanceof rl){e.parent=n._instance,e.provides=n._instance.provides;break}}),e}_applyStyles(e){e&&e.forEach((e=>{const t=document.createElement("style");t.textContent=e,this.shadowRoot.appendChild(t)}))}}function il(e="$style"){{const t=Xo();if(!t)return s;const n=t.type.__cssModules;if(!n)return s;const r=n[e];return r||s}}function sl(e){const t=Xo();if(!t)return;const n=t.ut=(n=e(t.proxy))=>{Array.from(document.querySelectorAll(`[data-v-owner="${t.uid}"]`)).forEach((e=>al(e,n)))},r=()=>{const r=e(t.proxy);ol(t.subTree,r),n(r)};Pr(r),gi((()=>{const e=new MutationObserver(r);e.observe(t.subTree.el.parentNode,{childList:!0}),yi((()=>e.disconnect()))}))}function ol(e,t){if(128&e.shapeFlag){const n=e.suspense;e=n.activeBranch,n.pendingBranch&&!n.isHydrating&&n.effects.push((()=>{ol(n.activeBranch,t)}))}while(e.component)e=e.component.subTree;if(1&e.shapeFlag&&e.el)al(e.el,t);else if(e.type===uo)e.children.forEach((e=>ol(e,t)));else if(e.type===fo){let{el:n,anchor:r}=e;while(n){if(al(n,t),n===r)break;n=n.nextSibling}}}function al(e,t){if(1===e.nodeType){const n=e.style;for(const e in t)n.setProperty(`--${e}`,t[e])}}const ll="transition",cl="animation",ul=(e,{slots:t})=>ma(jr,gl(e),t);ul.displayName="Transition";const dl={name:String,type:String,css:{type:Boolean,default:!0},duration:[String,Number,Object],enterFromClass:String,enterActiveClass:String,enterToClass:String,appearFromClass:String,appearActiveClass:String,appearToClass:String,leaveFromClass:String,leaveActiveClass:String,leaveToClass:String},hl=ul.props=h({},Hr,dl),pl=(e,t=[])=>{m(e)?e.forEach((e=>e(...t))):e&&e(...t)},fl=e=>!!e&&(m(e)?e.some((e=>e.length>1)):e.length>1);function gl(e){const t={};for(const h in e)h in dl||(t[h]=e[h]);if(!1===e.css)return t;const{name:n="v",type:r,duration:i,enterFromClass:s=`${n}-enter-from`,enterActiveClass:o=`${n}-enter-active`,enterToClass:a=`${n}-enter-to`,appearFromClass:l=s,appearActiveClass:c=o,appearToClass:u=a,leaveFromClass:d=`${n}-leave-from`,leaveActiveClass:p=`${n}-leave-active`,leaveToClass:f=`${n}-leave-to`}=e,g=ml(i),m=g&&g[0],b=g&&g[1],{onBeforeEnter:_,onEnter:y,onEnterCancelled:v,onLeave:E,onLeaveCancelled:x,onBeforeAppear:S=_,onAppear:w=y,onAppearCancelled:T=v}=t,A=(e,t,n)=>{yl(e,t?u:a),yl(e,t?c:o),n&&n()},C=(e,t)=>{e._isLeaving=!1,yl(e,d),yl(e,f),yl(e,p),t&&t()},I=e=>(t,n)=>{const i=e?w:y,o=()=>A(t,e,n);pl(i,[t,o]),vl((()=>{yl(t,e?l:s),_l(t,e?u:a),fl(i)||xl(t,r,m,o)}))};return h(t,{onBeforeEnter(e){pl(_,[e]),_l(e,s),_l(e,o)},onBeforeAppear(e){pl(S,[e]),_l(e,l),_l(e,c)},onEnter:I(!1),onAppear:I(!0),onLeave(e,t){e._isLeaving=!0;const n=()=>C(e,t);_l(e,d),Al(),_l(e,p),vl((()=>{e._isLeaving&&(yl(e,d),_l(e,f),fl(E)||xl(e,r,b,n))})),pl(E,[e,n])},onEnterCancelled(e){A(e,!1),pl(v,[e])},onAppearCancelled(e){A(e,!0),pl(T,[e])},onLeaveCancelled(e){C(e),pl(x,[e])}})}function ml(e){if(null==e)return null;if(w(e))return[bl(e.enter),bl(e.leave)];{const t=bl(e);return[t,t]}}function bl(e){const t=V(e);return t}function _l(e,t){t.split(/\s+/).forEach((t=>t&&e.classList.add(t))),(e._vtc||(e._vtc=new Set)).add(t)}function yl(e,t){t.split(/\s+/).forEach((t=>t&&e.classList.remove(t)));const{_vtc:n}=e;n&&(n.delete(t),n.size||(e._vtc=void 0))}function vl(e){requestAnimationFrame((()=>{requestAnimationFrame(e)}))}let El=0;function xl(e,t,n,r){const i=e._endId=++El,s=()=>{i===e._endId&&r()};if(n)return setTimeout(s,n);const{type:o,timeout:a,propCount:l}=Sl(e,t);if(!o)return r();const c=o+"end";let u=0;const d=()=>{e.removeEventListener(c,h),s()},h=t=>{t.target===e&&++u>=l&&d()};setTimeout((()=>{u(n[e]||"").split(", "),i=r(`${ll}Delay`),s=r(`${ll}Duration`),o=wl(i,s),a=r(`${cl}Delay`),l=r(`${cl}Duration`),c=wl(a,l);let u=null,d=0,h=0;t===ll?o>0&&(u=ll,d=o,h=s.length):t===cl?c>0&&(u=cl,d=c,h=l.length):(d=Math.max(o,c),u=d>0?o>c?ll:cl:null,h=u?u===ll?s.length:l.length:0);const p=u===ll&&/\b(transform|all)(,|$)/.test(r(`${ll}Property`).toString());return{type:u,timeout:d,propCount:h,hasTransform:p}}function wl(e,t){while(e.lengthTl(t)+Tl(e[n]))))}function Tl(e){return 1e3*Number(e.slice(0,-1).replace(",","."))}function Al(){return document.body.offsetHeight}const Cl=new WeakMap,Il=new WeakMap,Rl={name:"TransitionGroup",props:h({},hl,{tag:String,moveClass:String}),setup(e,{slots:t}){const n=Xo(),r=$r();let i,s;return bi((()=>{if(!i.length)return;const t=e.moveClass||`${e.name||"v"}-move`;if(!Ml(i[0].el,n.vnode.el,t))return;i.forEach(Pl),i.forEach(Ol);const r=i.filter(Nl);Al(),r.forEach((e=>{const n=e.el,r=n.style;_l(n,t),r.transform=r.webkitTransform=r.transitionDuration="";const i=n._moveCb=e=>{e&&e.target!==n||e&&!/transform$/.test(e.propertyName)||(n.removeEventListener("transitionend",i),n._moveCb=null,yl(n,t))};n.addEventListener("transitionend",i)}))})),()=>{const o=Qt(e),a=gl(o);let l=o.tag||uo;i=s,s=t.default?Zr(t.default()):[];for(let e=0;e{e.split(/\s+/).forEach((e=>e&&r.classList.remove(e)))})),n.split(/\s+/).forEach((e=>e&&r.classList.add(e))),r.style.display="none";const i=1===t.nodeType?t:t.parentNode;i.appendChild(r);const{hasTransform:s}=Sl(r);return i.removeChild(r),s}const Dl=e=>{const t=e.props["onUpdate:modelValue"]||!1;return m(t)?e=>$(t,e):t};function Ll(e){e.target.composing=!0}function Fl(e){const t=e.target;t.composing&&(t.composing=!1,t.dispatchEvent(new Event("input")))}const Bl={created(e,{modifiers:{lazy:t,trim:n,number:r}},i){e._assign=Dl(i);const s=r||i.props&&"number"===i.props.type;$a(e,t?"change":"input",(t=>{if(t.target.composing)return;let r=e.value;n&&(r=r.trim()),s&&(r=H(r)),e._assign(r)})),n&&$a(e,"change",(()=>{e.value=e.value.trim()})),t||($a(e,"compositionstart",Ll),$a(e,"compositionend",Fl),$a(e,"change",Fl))},mounted(e,{value:t}){e.value=null==t?"":t},beforeUpdate(e,{value:t,modifiers:{lazy:n,trim:r,number:i}},s){if(e._assign=Dl(s),e.composing)return;if(document.activeElement===e&&"range"!==e.type){if(n)return;if(r&&e.value.trim()===t)return;if((i||"number"===e.type)&&H(e.value)===t)return}const o=null==t?"":t;e.value!==o&&(e.value=o)}},Ul={deep:!0,created(e,t,n){e._assign=Dl(n),$a(e,"change",(()=>{const t=e._modelValue,n=Vl(e),r=e.checked,i=e._assign;if(m(t)){const e=fe(t,n),s=-1!==e;if(r&&!s)i(t.concat(n));else if(!r&&s){const n=[...t];n.splice(e,1),i(n)}}else if(_(t)){const e=new Set(t);r?e.add(n):e.delete(n),i(e)}else i(jl(e,r))}))},mounted:Gl,beforeUpdate(e,t,n){e._assign=Dl(n),Gl(e,t,n)}};function Gl(e,{value:t,oldValue:n},r){e._modelValue=t,m(t)?e.checked=fe(t,r.props.value)>-1:_(t)?e.checked=t.has(r.props.value):t!==n&&(e.checked=pe(t,jl(e,!0)))}const $l={created(e,{value:t},n){e.checked=pe(t,n.props.value),e._assign=Dl(n),$a(e,"change",(()=>{e._assign(Vl(e))}))},beforeUpdate(e,{value:t,oldValue:n},r){e._assign=Dl(r),t!==n&&(e.checked=pe(t,r.props.value))}},zl={deep:!0,created(e,{value:t,modifiers:{number:n}},r){const i=_(t);$a(e,"change",(()=>{const t=Array.prototype.filter.call(e.options,(e=>e.selected)).map((e=>n?H(Vl(e)):Vl(e)));e._assign(e.multiple?i?new Set(t):t:t[0])})),e._assign=Dl(r)},mounted(e,{value:t}){Hl(e,t)},beforeUpdate(e,t,n){e._assign=Dl(n)},updated(e,{value:t}){Hl(e,t)}};function Hl(e,t){const n=e.multiple;if(!n||m(t)||_(t)){for(let r=0,i=e.options.length;r-1:i.selected=t.has(s);else if(pe(Vl(i),t))return void(e.selectedIndex!==r&&(e.selectedIndex=r))}n||-1===e.selectedIndex||(e.selectedIndex=-1)}}function Vl(e){return"_value"in e?e._value:e.value}function jl(e,t){const n=t?"_trueValue":"_falseValue";return n in e?e[n]:t}const Wl={created(e,t,n){Xl(e,t,n,null,"created")},mounted(e,t,n){Xl(e,t,n,null,"mounted")},beforeUpdate(e,t,n,r){Xl(e,t,n,r,"beforeUpdate")},updated(e,t,n,r){Xl(e,t,n,r,"updated")}};function ql(e,t){switch(e){case"SELECT":return zl;case"TEXTAREA":return Bl;default:switch(t){case"checkbox":return Ul;case"radio":return $l;default:return Bl}}}function Xl(e,t,n,r,i){const s=ql(e.tagName,n.props&&n.props.type),o=s[i];o&&o(e,t,n,r)}function Yl(){Bl.getSSRProps=({value:e})=>({value:e}),$l.getSSRProps=({value:e},t)=>{if(t.props&&pe(t.props.value,e))return{checked:!0}},Ul.getSSRProps=({value:e},t)=>{if(m(e)){if(t.props&&fe(e,t.props.value)>-1)return{checked:!0}}else if(_(e)){if(t.props&&e.has(t.props.value))return{checked:!0}}else if(e)return{checked:!0}},Wl.getSSRProps=(e,t)=>{if("string"!==typeof t.type)return;const n=ql(t.type.toUpperCase(),t.props&&t.props.type);return n.getSSRProps?n.getSSRProps(e,t):void 0}}const Kl=["ctrl","shift","alt","meta"],Zl={stop:e=>e.stopPropagation(),prevent:e=>e.preventDefault(),self:e=>e.target!==e.currentTarget,ctrl:e=>!e.ctrlKey,shift:e=>!e.shiftKey,alt:e=>!e.altKey,meta:e=>!e.metaKey,left:e=>"button"in e&&0!==e.button,middle:e=>"button"in e&&1!==e.button,right:e=>"button"in e&&2!==e.button,exact:(e,t)=>Kl.some((n=>e[`${n}Key`]&&!t.includes(n)))},Ql=(e,t)=>(n,...r)=>{for(let e=0;en=>{if(!("key"in n))return;const r=F(n.key);return t.some((e=>e===r||Jl[e]===r))?e(n):void 0},tc={beforeMount(e,{value:t},{transition:n}){e._vod="none"===e.style.display?"":e.style.display,n&&t?n.beforeEnter(e):nc(e,t)},mounted(e,{value:t},{transition:n}){n&&t&&n.enter(e)},updated(e,{value:t,oldValue:n},{transition:r}){!t!==!n&&(r?t?(r.beforeEnter(e),nc(e,!0),r.enter(e)):r.leave(e,(()=>{nc(e,!1)})):nc(e,t))},beforeUnmount(e,{value:t}){nc(e,t)}};function nc(e,t){e.style.display=t?e._vod:"none"}function rc(){tc.getSSRProps=({value:e})=>{if(!e)return{style:{display:"none"}}}}const ic=h({patchProp:Qa},ka);let sc,oc=!1;function ac(){return sc||(sc=Ys(ic))}function lc(){return sc=oc?sc:Ks(ic),oc=!0,sc}const cc=(...e)=>{ac().render(...e)},uc=(...e)=>{lc().hydrate(...e)},dc=(...e)=>{const t=ac().createApp(...e);const{mount:n}=t;return t.mount=e=>{const r=pc(e);if(!r)return;const i=t._component;E(i)||i.render||i.template||(i.template=r.innerHTML),r.innerHTML="";const s=n(r,!1,r instanceof SVGElement);return r instanceof Element&&(r.removeAttribute("v-cloak"),r.setAttribute("data-v-app","")),s},t},hc=(...e)=>{const t=lc().createApp(...e);const{mount:n}=t;return t.mount=e=>{const t=pc(e);if(t)return n(t,!0,t instanceof SVGElement)},t};function pc(e){if(x(e)){const t=document.querySelector(e);return t}return e}let fc=!1;const gc=()=>{fc||(fc=!0,Yl(),rc())};function mc(e){throw e}function bc(e){}function _c(e,t,n,r){const i=e,s=new SyntaxError(String(i));return s.code=e,s.loc=t,s}const yc=Symbol(""),vc=Symbol(""),Ec=Symbol(""),xc=Symbol(""),Sc=Symbol(""),wc=Symbol(""),Tc=Symbol(""),Ac=Symbol(""),Cc=Symbol(""),Ic=Symbol(""),Rc=Symbol(""),kc=Symbol(""),Pc=Symbol(""),Oc=Symbol(""),Nc=Symbol(""),Mc=Symbol(""),Dc=Symbol(""),Lc=Symbol(""),Fc=Symbol(""),Bc=Symbol(""),Uc=Symbol(""),Gc=Symbol(""),$c=Symbol(""),zc=Symbol(""),Hc=Symbol(""),Vc=Symbol(""),jc=Symbol(""),Wc=Symbol(""),qc=Symbol(""),Xc=Symbol(""),Yc=Symbol(""),Kc=Symbol(""),Zc=Symbol(""),Qc=Symbol(""),Jc=Symbol(""),eu=Symbol(""),tu=Symbol(""),nu=Symbol(""),ru=Symbol(""),iu={[yc]:"Fragment",[vc]:"Teleport",[Ec]:"Suspense",[xc]:"KeepAlive",[Sc]:"BaseTransition",[wc]:"openBlock",[Tc]:"createBlock",[Ac]:"createElementBlock",[Cc]:"createVNode",[Ic]:"createElementVNode",[Rc]:"createCommentVNode",[kc]:"createTextVNode",[Pc]:"createStaticVNode",[Oc]:"resolveComponent",[Nc]:"resolveDynamicComponent",[Mc]:"resolveDirective",[Dc]:"resolveFilter",[Lc]:"withDirectives",[Fc]:"renderList",[Bc]:"renderSlot",[Uc]:"createSlots",[Gc]:"toDisplayString",[$c]:"mergeProps",[zc]:"normalizeClass",[Hc]:"normalizeStyle",[Vc]:"normalizeProps",[jc]:"guardReactiveProps",[Wc]:"toHandlers",[qc]:"camelize",[Xc]:"capitalize",[Yc]:"toHandlerKey",[Kc]:"setBlockTracking",[Zc]:"pushScopeId",[Qc]:"popScopeId",[Jc]:"withCtx",[eu]:"unref",[tu]:"isRef",[nu]:"withMemo",[ru]:"isMemoSame"};function su(e){Object.getOwnPropertySymbols(e).forEach((t=>{iu[t]=e[t]}))}const ou={source:"",start:{line:1,column:1,offset:0},end:{line:1,column:1,offset:0}};function au(e,t=ou){return{type:0,children:e,helpers:new Set,components:[],directives:[],hoists:[],imports:[],cached:0,temps:0,codegenNode:void 0,loc:t}}function lu(e,t,n,r,i,s,o,a=!1,l=!1,c=!1,u=ou){return e&&(a?(e.helper(wc),e.helper(vu(e.inSSR,c))):e.helper(yu(e.inSSR,c)),o&&e.helper(Lc)),{type:13,tag:t,props:n,children:r,patchFlag:i,dynamicProps:s,directives:o,isBlock:a,disableTracking:l,isComponent:c,loc:u}}function cu(e,t=ou){return{type:17,loc:t,elements:e}}function uu(e,t=ou){return{type:15,loc:t,properties:e}}function du(e,t){return{type:16,loc:ou,key:x(e)?hu(e,!0):e,value:t}}function hu(e,t=!1,n=ou,r=0){return{type:4,loc:n,content:e,isStatic:t,constType:t?3:r}}function pu(e,t=ou){return{type:8,loc:t,children:e}}function fu(e,t=[],n=ou){return{type:14,loc:n,callee:e,arguments:t}}function gu(e,t=undefined,n=!1,r=!1,i=ou){return{type:18,params:e,returns:t,newline:n,isSlot:r,loc:i}}function mu(e,t,n,r=!0){return{type:19,test:e,consequent:t,alternate:n,newline:r,loc:ou}}function bu(e,t,n=!1){return{type:20,index:e,value:t,isVNode:n,loc:ou}}function _u(e){return{type:21,body:e,loc:ou}}function yu(e,t){return e||t?Cc:Ic}function vu(e,t){return e||t?Tc:Ac}function Eu(e,{helper:t,removeHelper:n,inSSR:r}){e.isBlock||(e.isBlock=!0,n(yu(r,e.isComponent)),t(wc),t(vu(r,e.isComponent)))}const xu=e=>4===e.type&&e.isStatic,Su=(e,t)=>e===t||e===F(t);function wu(e){return Su(e,"Teleport")?vc:Su(e,"Suspense")?Ec:Su(e,"KeepAlive")?xc:Su(e,"BaseTransition")?Sc:void 0}const Tu=/^\d|[^\$\w]/,Au=e=>!Tu.test(e),Cu=/[A-Za-z_$\xA0-\uFFFF]/,Iu=/[\.\?\w$\xA0-\uFFFF]/,Ru=/\s+[.[]\s*|\s*[.[]\s+/g,ku=e=>{e=e.trim().replace(Ru,(e=>e.trim()));let t=0,n=[],r=0,i=0,s=null;for(let o=0;o7===e.type&&"bind"===e.name&&(!e.arg||4!==e.arg.type||!e.arg.isStatic)))}function Uu(e){return 5===e.type||2===e.type}function Gu(e){return 7===e.type&&"slot"===e.name}function $u(e){return 1===e.type&&3===e.tagType}function zu(e){return 1===e.type&&2===e.tagType}const Hu=new Set([Vc,jc]);function Vu(e,t=[]){if(e&&!x(e)&&14===e.type){const n=e.callee;if(!x(n)&&Hu.has(n))return Vu(e.arguments[0],t.concat(e))}return[e,t]}function ju(e,t,n){let r,i,s=13===e.type?e.props:e.arguments[2],o=[];if(s&&!x(s)&&14===s.type){const e=Vu(s);s=e[0],o=e[1],i=o[o.length-1]}if(null==s||x(s))r=uu([t]);else if(14===s.type){const e=s.arguments[0];x(e)||15!==e.type?s.callee===Wc?r=fu(n.helper($c),[uu([t]),s]):s.arguments.unshift(uu([t])):Wu(t,e)||e.properties.unshift(t),!r&&(r=s)}else 15===s.type?(Wu(t,s)||s.properties.unshift(t),r=s):(r=fu(n.helper($c),[uu([t]),s]),i&&i.callee===jc&&(i=o[o.length-2]));13===e.type?i?i.arguments[0]=r:e.props=r:i?i.arguments[0]=r:e.arguments[2]=r}function Wu(e,t){let n=!1;if(4===e.key.type){const r=e.key.content;n=t.properties.some((e=>4===e.key.type&&e.key.content===r))}return n}function qu(e,t){return`_${t}_${e.replace(/[^\w]/g,((t,n)=>"-"===t?"_":e.charCodeAt(n).toString()))}`}function Xu(e){return 14===e.type&&e.callee===nu?e.arguments[1].returns:e}function Yu(e,t){const n=t.options?t.options.compatConfig:t.compatConfig,r=n&&n[e];return"MODE"===e?r||3:r}function Ku(e,t){const n=Yu("MODE",t),r=Yu(e,t);return 3===n?!0===r:!1!==r}function Zu(e,t,n,...r){const i=Ku(e,t);return i}const Qu=/&(gt|lt|amp|apos|quot);/g,Ju={gt:">",lt:"<",amp:"&",apos:"'",quot:'"'},ed={delimiters:["{{","}}"],getNamespace:()=>0,getTextMode:()=>0,isVoidTag:l,isPreTag:l,isCustomElement:l,decodeEntities:e=>e.replace(Qu,((e,t)=>Ju[t])),onError:mc,onWarn:bc,comments:!1};function td(e,t={}){const n=nd(e,t),r=yd(n);return au(rd(n,0,[]),vd(n,r))}function nd(e,t){const n=h({},ed);let r;for(r in t)n[r]=void 0===t[r]?ed[r]:t[r];return{options:n,column:1,line:1,offset:0,originalSource:e,source:e,inPre:!1,inVPre:!1,onWarn:n.onWarn}}function rd(e,t,n){const r=Ed(n),i=r?r.ns:0,s=[];while(!Cd(e,t,n)){const o=e.source;let a;if(0===t||1===t)if(!e.inVPre&&xd(o,e.options.delimiters[0]))a=md(e,t);else if(0===t&&"<"===o[0])if(1===o.length)Ad(e,5,1);else if("!"===o[1])xd(o,"\x3c!--")?a=od(e):xd(o,""===o[2]){Ad(e,14,2),Sd(e,3);continue}if(/[a-z]/i.test(o[2])){Ad(e,23),dd(e,cd.End,r);continue}Ad(e,12,2),a=ad(e)}else/[a-z]/i.test(o[1])?(a=ld(e,n),Ku("COMPILER_NATIVE_TEMPLATE",e)&&a&&"template"===a.tag&&!a.props.some((e=>7===e.type&&ud(e.name)))&&(a=a.children)):"?"===o[1]?(Ad(e,21,1),a=ad(e)):Ad(e,12,1);if(a||(a=bd(e,t)),m(a))for(let e=0;e/.exec(e.source);if(r){r.index<=3&&Ad(e,0),r[1]&&Ad(e,10),n=e.source.slice(4,r.index);const t=e.source.slice(0,r.index);let i=1,s=0;while(-1!==(s=t.indexOf("\x3c!--",i)))Sd(e,s-i+1),s+4");return-1===i?(r=e.source.slice(n),Sd(e,e.source.length)):(r=e.source.slice(n,i),Sd(e,i+1)),{type:3,content:r,loc:vd(e,t)}}function ld(e,t){const n=e.inPre,r=e.inVPre,i=Ed(t),s=dd(e,cd.Start,i),o=e.inPre&&!n,a=e.inVPre&&!r;if(s.isSelfClosing||e.options.isVoidTag(s.tag))return o&&(e.inPre=!1),a&&(e.inVPre=!1),s;t.push(s);const l=e.options.getTextMode(s,i),c=rd(e,l,t);t.pop();{const t=s.props.find((e=>6===e.type&&"inline-template"===e.name));if(t&&Zu("COMPILER_INLINE_TEMPLATE",e,t.loc)){const n=vd(e,s.loc.end);t.value={type:2,content:n.source,loc:n}}}if(s.children=c,Id(e.source,s.tag))dd(e,cd.End,i);else if(Ad(e,24,0,s.loc.start),0===e.source.length&&"script"===s.tag.toLowerCase()){const t=c[0];t&&xd(t.loc.source,"\x3c!--")&&Ad(e,8)}return s.loc=vd(e,s.loc.start),o&&(e.inPre=!1),a&&(e.inVPre=!1),s}var cd=(e=>(e[e["Start"]=0]="Start",e[e["End"]=1]="End",e))(cd||{});const ud=i("if,else,else-if,for,slot");function dd(e,t,n){const r=yd(e),i=/^<\/?([a-z][^\t\r\n\f />]*)/i.exec(e.source),s=i[1],o=e.options.getNamespace(s,n);Sd(e,i[0].length),wd(e);const a=yd(e),l=e.source;e.options.isPreTag(s)&&(e.inPre=!0);let c=pd(e,t);0===t&&!e.inVPre&&c.some((e=>7===e.type&&"pre"===e.name))&&(e.inVPre=!0,h(e,a),e.source=l,c=pd(e,t).filter((e=>"v-pre"!==e.name)));let u=!1;if(0===e.source.length?Ad(e,9):(u=xd(e.source,"/>"),1===t&&u&&Ad(e,4),Sd(e,u?2:1)),1===t)return;let d=0;return e.inVPre||("slot"===s?d=2:"template"===s?c.some((e=>7===e.type&&ud(e.name)))&&(d=3):hd(s,c,e)&&(d=1)),{type:1,ns:o,tag:s,tagType:d,props:c,isSelfClosing:u,children:[],loc:vd(e,r),codegenNode:void 0}}function hd(e,t,n){const r=n.options;if(r.isCustomElement(e))return!1;if("component"===e||/^[A-Z]/.test(e)||wu(e)||r.isBuiltInComponent&&r.isBuiltInComponent(e)||r.isNativeTag&&!r.isNativeTag(e))return!0;for(let i=0;i0&&!xd(e.source,">")&&!xd(e.source,"/>")){if(xd(e.source,"/")){Ad(e,22),Sd(e,1),wd(e);continue}1===t&&Ad(e,3);const i=fd(e,r);6===i.type&&i.value&&"class"===i.name&&(i.value.content=i.value.content.replace(/\s+/g," ").trim()),0===t&&n.push(i),/^[^\t\r\n\f />]/.test(e.source)&&Ad(e,15),wd(e)}return n}function fd(e,t){var n;const r=yd(e),i=/^[^\t\r\n\f />][^\t\r\n\f />=]*/.exec(e.source),s=i[0];t.has(s)&&Ad(e,2),t.add(s),"="===s[0]&&Ad(e,19);{const t=/["'<]/g;let n;while(n=t.exec(s))Ad(e,17,n.index)}let o;Sd(e,s.length),/^[\t\r\n\f ]*=/.test(e.source)&&(wd(e),Sd(e,1),wd(e),o=gd(e),o||Ad(e,13));const a=vd(e,r);if(!e.inVPre&&/^(v-[A-Za-z0-9-]|:|\.|@|#)/.test(s)){const t=/(?:^v-([a-z0-9-]+))?(?:(?::|^\.|^@|^#)(\[[^\]]+\]|[^\.]+))?(.+)?$/i.exec(s);let i,l=xd(s,"."),c=t[1]||(l||xd(s,":")?"bind":xd(s,"@")?"on":"slot");if(t[2]){const o="slot"===c,a=s.lastIndexOf(t[2],s.length-((null==(n=t[3])?void 0:n.length)||0)),l=vd(e,Td(e,r,a),Td(e,r,a+t[2].length+(o&&t[3]||"").length));let u=t[2],d=!0;u.startsWith("[")?(d=!1,u.endsWith("]")?u=u.slice(1,u.length-1):(Ad(e,27),u=u.slice(1))):o&&(u+=t[3]||""),i={type:4,content:u,isStatic:d,constType:d?3:0,loc:l}}if(o&&o.isQuoted){const e=o.loc;e.start.offset++,e.start.column++,e.end=Nu(e.start,o.content),e.source=e.source.slice(1,-1)}const u=t[3]?t[3].slice(1).split("."):[];return l&&u.push("prop"),"bind"===c&&i&&u.includes("sync")&&Zu("COMPILER_V_BIND_SYNC",e,a,i.loc.source)&&(c="model",u.splice(u.indexOf("sync"),1)),{type:7,name:c,exp:o&&{type:4,content:o.content,isStatic:!1,constType:0,loc:o.loc},arg:i,modifiers:u,loc:a}}return!e.inVPre&&xd(s,"v-")&&Ad(e,26),{type:6,name:s,value:o&&{type:2,content:o.content,loc:o.loc},loc:a}}function gd(e){const t=yd(e);let n;const r=e.source[0],i='"'===r||"'"===r;if(i){Sd(e,1);const t=e.source.indexOf(r);-1===t?n=_d(e,e.source.length,4):(n=_d(e,t,4),Sd(e,1))}else{const t=/^[^\t\r\n\f >]+/.exec(e.source);if(!t)return;const r=/["'<=`]/g;let i;while(i=r.exec(t[0]))Ad(e,18,i.index);n=_d(e,t[0].length,4)}return{content:n,isQuoted:i,loc:vd(e,t)}}function md(e,t){const[n,r]=e.options.delimiters,i=e.source.indexOf(r,n.length);if(-1===i)return void Ad(e,25);const s=yd(e);Sd(e,n.length);const o=yd(e),a=yd(e),l=i-n.length,c=e.source.slice(0,l),u=_d(e,l,t),d=u.trim(),h=u.indexOf(d);h>0&&Mu(o,c,h);const p=l-(u.length-d.length-h);return Mu(a,c,p),Sd(e,r.length),{type:5,content:{type:4,isStatic:!1,constType:0,content:d,loc:vd(e,o,a)},loc:vd(e,s)}}function bd(e,t){const n=3===t?["]]>"]:["<",e.options.delimiters[0]];let r=e.source.length;for(let o=0;ot&&(r=t)}const i=yd(e),s=_d(e,r,t);return{type:2,content:s,loc:vd(e,i)}}function _d(e,t,n){const r=e.source.slice(0,t);return Sd(e,t),2!==n&&3!==n&&r.includes("&")?e.options.decodeEntities(r,4===n):r}function yd(e){const{column:t,line:n,offset:r}=e;return{column:t,line:n,offset:r}}function vd(e,t,n){return n=n||yd(e),{start:t,end:n,source:e.originalSource.slice(t.offset,n.offset)}}function Ed(e){return e[e.length-1]}function xd(e,t){return e.startsWith(t)}function Sd(e,t){const{source:n}=e;Mu(e,n,t),e.source=n.slice(t)}function wd(e){const t=/^[\t\r\n\f ]+/.exec(e.source);t&&Sd(e,t[0].length)}function Td(e,t,n){return Nu(t,e.originalSource.slice(t.offset,n),n)}function Ad(e,t,n,r=yd(e)){n&&(r.offset+=n,r.column+=n),e.options.onError(_c(t,{start:r,end:r,source:""}))}function Cd(e,t,n){const r=e.source;switch(t){case 0:if(xd(r,"=0;--e)if(Id(r,n[e].tag))return!0;break;case 1:case 2:{const e=Ed(n);if(e&&Id(r,e.tag))return!0;break}case 3:if(xd(r,"]]>"))return!0;break}return!r}function Id(e,t){return xd(e,"]/.test(e[2+t.length]||">")}function Rd(e,t){Pd(e,t,kd(e,e.children[0]))}function kd(e,t){const{children:n}=e;return 1===n.length&&1===t.type&&!zu(t)}function Pd(e,t,n=!1){const{children:r}=e,i=r.length;let s=0;for(let o=0;o0){if(r>=2){e.codegenNode.patchFlag="-1",e.codegenNode=t.hoist(e.codegenNode),s++;continue}}else{const n=e.codegenNode;if(13===n.type){const r=Fd(n);if((!r||512===r||1===r)&&Dd(e,t)>=2){const r=Ld(e);r&&(n.props=t.hoist(r))}n.dynamicProps&&(n.dynamicProps=t.hoist(n.dynamicProps))}}}if(1===e.type){const n=1===e.tagType;n&&t.scopes.vSlot++,Pd(e,t),n&&t.scopes.vSlot--}else if(11===e.type)Pd(e,t,1===e.children.length);else if(9===e.type)for(let n=0;n1)for(let i=0;in&&(A.childIndex--,A.onNodeRemoved()):(A.currentNode=null,A.onNodeRemoved()),A.parent.children.splice(n,1)},onNodeRemoved:()=>{},addIdentifiers(e){},removeIdentifiers(e){},hoist(e){x(e)&&(e=hu(e)),A.hoists.push(e);const t=hu(`_hoisted_${A.hoists.length}`,!1,e.loc,2);return t.hoisted=e,t},cache(e,t=!1){return bu(A.cached++,e,t)}};return A.filters=new Set,A}function Ud(e,t){const n=Bd(e,t);zd(e,n),t.hoistStatic&&Rd(e,n),t.ssr||Gd(e,n),e.helpers=new Set([...n.helpers.keys()]),e.components=[...n.components],e.directives=[...n.directives],e.imports=n.imports,e.hoists=n.hoists,e.temps=n.temps,e.cached=n.cached,e.filters=[...n.filters]}function Gd(e,t){const{helper:n}=t,{children:r}=e;if(1===r.length){const n=r[0];if(kd(e,n)&&n.codegenNode){const r=n.codegenNode;13===r.type&&Eu(r,t),e.codegenNode=r}else e.codegenNode=n}else if(r.length>1){let r=64;q[64];0,e.codegenNode=lu(t,n(yc),void 0,e.children,r+"",void 0,void 0,!0,void 0,!1)}}function $d(e,t){let n=0;const r=()=>{n--};for(;nt===e:t=>e.test(t);return(e,r)=>{if(1===e.type){const{props:i}=e;if(3===e.tagType&&i.some(Gu))return;const s=[];for(let o=0;o`${iu[e]}: _${iu[e]}`;function Wd(e,{mode:t="function",prefixIdentifiers:n="module"===t,sourceMap:r=!1,filename:i="template.vue.html",scopeId:s=null,optimizeImports:o=!1,runtimeGlobalName:a="Vue",runtimeModuleName:l="vue",ssrRuntimeModuleName:c="vue/server-renderer",ssr:u=!1,isTS:d=!1,inSSR:h=!1}){const p={mode:t,prefixIdentifiers:n,sourceMap:r,filename:i,scopeId:s,optimizeImports:o,runtimeGlobalName:a,runtimeModuleName:l,ssrRuntimeModuleName:c,ssr:u,isTS:d,inSSR:h,source:e.loc.source,code:"",column:1,line:1,offset:0,indentLevel:0,pure:!1,map:void 0,helper(e){return`_${iu[e]}`},push(e,t){p.code+=e},indent(){f(++p.indentLevel)},deindent(e=!1){e?--p.indentLevel:f(--p.indentLevel)},newline(){f(p.indentLevel)}};function f(e){p.push("\n"+" ".repeat(e))}return p}function qd(e,t={}){const n=Wd(e,t);t.onContextCreated&&t.onContextCreated(n);const{mode:r,push:i,prefixIdentifiers:s,indent:o,deindent:a,newline:l,scopeId:c,ssr:u}=n,d=Array.from(e.helpers),h=d.length>0,p=!s&&"module"!==r,f=!1,g=f?Wd(e,t):n;Xd(e,g);const m=u?"ssrRender":"render",b=u?["_ctx","_push","_parent","_attrs"]:["_ctx","_cache"],_=b.join(", ");if(i(`function ${m}(${_}) {`),o(),p&&(i("with (_ctx) {"),o(),h&&(i(`const { ${d.map(jd).join(", ")} } = _Vue`),i("\n"),l())),e.components.length&&(Yd(e.components,"component",n),(e.directives.length||e.temps>0)&&l()),e.directives.length&&(Yd(e.directives,"directive",n),e.temps>0&&l()),e.filters&&e.filters.length&&(l(),Yd(e.filters,"filter",n),l()),e.temps>0){i("let ");for(let t=0;t0?", ":""}_temp${t}`)}return(e.components.length||e.directives.length||e.temps)&&(i("\n"),l()),u||i("return "),e.codegenNode?Jd(e.codegenNode,n):i("null"),p&&(a(),i("}")),a(),i("}"),{ast:e,code:n.code,preamble:f?g.code:"",map:n.map?n.map.toJSON():void 0}}function Xd(e,t){const{ssr:n,prefixIdentifiers:r,push:i,newline:s,runtimeModuleName:o,runtimeGlobalName:a,ssrRuntimeModuleName:l}=t,c=a,u=Array.from(e.helpers);if(u.length>0&&(i(`const _Vue = ${c}\n`),e.hoists.length)){const e=[Cc,Ic,Rc,kc,Pc].filter((e=>u.includes(e))).map(jd).join(", ");i(`const { ${e} } = _Vue\n`)}Kd(e.hoists,t),s(),i("return ")}function Yd(e,t,{helper:n,push:r,newline:i,isTS:s}){const o=n("filter"===t?Dc:"component"===t?Oc:Mc);for(let a=0;a3||!1;t.push("["),n&&t.indent(),Qd(e,t,n),n&&t.deindent(),t.push("]")}function Qd(e,t,n=!1,r=!0){const{push:i,newline:s}=t;for(let o=0;oe||"null"))}function lh(e,t){const{push:n,helper:r,pure:i}=t,s=x(e.callee)?e.callee:r(e.callee);i&&n(Vd),n(s+"(",e),Qd(e.arguments,t),n(")")}function ch(e,t){const{push:n,indent:r,deindent:i,newline:s}=t,{properties:o}=e;if(!o.length)return void n("{}",e);const a=o.length>1||!1;n(a?"{":"{ "),a&&r();for(let l=0;l "),(l||a)&&(n("{"),r()),o?(l&&n("return "),m(o)?Zd(o,t):Jd(o,t)):a&&Jd(a,t),(l||a)&&(i(),n("}")),c&&(e.isNonScopedSlot&&n(", undefined, true"),n(")"))}function hh(e,t){const{test:n,consequent:r,alternate:i,newline:s}=e,{push:o,indent:a,deindent:l,newline:c}=t;if(4===n.type){const e=!Au(n.content);e&&o("("),th(n,t),e&&o(")")}else o("("),Jd(n,t),o(")");s&&a(),t.indentLevel++,s||o(" "),o("? "),Jd(r,t),t.indentLevel--,s&&c(),s||o(" "),o(": ");const u=19===i.type;u||t.indentLevel++,Jd(i,t),u||t.indentLevel--,s&&l(!0)}function ph(e,t){const{push:n,helper:r,indent:i,deindent:s,newline:o}=t;n(`_cache[${e.index}] || (`),e.isVNode&&(i(),n(`${r(Kc)}(-1),`),o()),n(`_cache[${e.index}] = `),Jd(e.value,t),e.isVNode&&(n(","),o(),n(`${r(Kc)}(1),`),o(),n(`_cache[${e.index}]`),s()),n(")")}new RegExp("\\b"+"arguments,await,break,case,catch,class,const,continue,debugger,default,delete,do,else,export,extends,finally,for,function,if,import,let,new,return,super,switch,throw,try,var,void,while,with,yield".split(",").join("\\b|\\b")+"\\b");const fh=Hd(/^(if|else|else-if)$/,((e,t,n)=>gh(e,t,n,((e,t,r)=>{const i=n.parent.children;let s=i.indexOf(e),o=0;while(s-- >=0){const e=i[s];e&&9===e.type&&(o+=e.branches.length)}return()=>{if(r)e.codegenNode=bh(t,o,n);else{const r=yh(e.codegenNode);r.alternate=bh(t,o+e.branches.length-1,n)}}}))));function gh(e,t,n,r){if("else"!==t.name&&(!t.exp||!t.exp.content.trim())){const r=t.exp?t.exp.loc:e.loc;n.onError(_c(28,t.loc)),t.exp=hu("true",!1,r)}if("if"===t.name){const i=mh(e,t),s={type:9,loc:e.loc,branches:[i]};if(n.replaceNode(s),r)return r(s,i,!0)}else{const i=n.parent.children;let s=i.indexOf(e);while(s-- >=-1){const o=i[s];if(o&&3===o.type)n.removeNode(o);else{if(!o||2!==o.type||o.content.trim().length){if(o&&9===o.type){"else-if"===t.name&&void 0===o.branches[o.branches.length-1].condition&&n.onError(_c(30,e.loc)),n.removeNode();const i=mh(e,t);0,o.branches.push(i);const s=r&&r(o,i,!1);zd(i,n),s&&s(),n.currentNode=null}else n.onError(_c(30,e.loc));break}n.removeNode(o)}}}}function mh(e,t){const n=3===e.tagType;return{type:10,loc:e.loc,condition:"else"===t.name?void 0:t.exp,children:n&&!Du(e,"for")?e.children:[e],userKey:Lu(e,"key"),isTemplateIf:n}}function bh(e,t,n){return e.condition?mu(e.condition,_h(e,t,n),fu(n.helper(Rc),['""',"true"])):_h(e,t,n)}function _h(e,t,n){const{helper:r}=n,i=du("key",hu(`${t}`,!1,ou,2)),{children:s}=e,o=s[0],a=1!==s.length||1!==o.type;if(a){if(1===s.length&&11===o.type){const e=o.codegenNode;return ju(e,i,n),e}{let t=64;q[64];return lu(n,r(yc),uu([i]),s,t+"",void 0,void 0,!0,!1,!1,e.loc)}}{const e=o.codegenNode,t=Xu(e);return 13===t.type&&Eu(t,n),ju(t,i,n),e}}function yh(e){while(1)if(19===e.type){if(19!==e.alternate.type)return e;e=e.alternate}else 20===e.type&&(e=e.value)}const vh=Hd("for",((e,t,n)=>{const{helper:r,removeHelper:i}=n;return Eh(e,t,n,(t=>{const s=fu(r(Fc),[t.source]),o=$u(e),a=Du(e,"memo"),l=Lu(e,"key"),c=l&&(6===l.type?hu(l.value.content,!0):l.exp),u=l?du("key",c):null,d=4===t.source.type&&t.source.constType>0,h=d?64:l?128:256;return t.codegenNode=lu(n,r(yc),void 0,s,h+"",void 0,void 0,!0,!d,!1,e.loc),()=>{let l;const{children:h}=t;const p=1!==h.length||1!==h[0].type,f=zu(e)?e:o&&1===e.children.length&&zu(e.children[0])?e.children[0]:null;if(f?(l=f.codegenNode,o&&u&&ju(l,u,n)):p?l=lu(n,r(yc),u?uu([u]):void 0,e.children,"64",void 0,void 0,!0,void 0,!1):(l=h[0].codegenNode,o&&u&&ju(l,u,n),l.isBlock!==!d&&(l.isBlock?(i(wc),i(vu(n.inSSR,l.isComponent))):i(yu(n.inSSR,l.isComponent))),l.isBlock=!d,l.isBlock?(r(wc),r(vu(n.inSSR,l.isComponent))):r(yu(n.inSSR,l.isComponent))),a){const e=gu(Ch(t.parseResult,[hu("_cached")]));e.body=_u([pu(["const _memo = (",a.exp,")"]),pu(["if (_cached",...c?[" && _cached.key === ",c]:[],` && ${n.helperString(ru)}(_cached, _memo)) return _cached`]),pu(["const _item = ",l]),hu("_item.memo = _memo"),hu("return _item")]),s.arguments.push(e,hu("_cache"),hu(String(n.cached++)))}else s.arguments.push(gu(Ch(t.parseResult),l,!0))}}))}));function Eh(e,t,n,r){if(!t.exp)return void n.onError(_c(31,t.loc));const i=Th(t.exp,n);if(!i)return void n.onError(_c(32,t.loc));const{addIdentifiers:s,removeIdentifiers:o,scopes:a}=n,{source:l,value:c,key:u,index:d}=i,h={type:11,loc:t.loc,source:l,valueAlias:c,keyAlias:u,objectIndexAlias:d,parseResult:i,children:$u(e)?e.children:[e]};n.replaceNode(h),a.vFor++;const p=r&&r(h);return()=>{a.vFor--,p&&p()}}const xh=/([\s\S]*?)\s+(?:in|of)\s+([\s\S]*)/,Sh=/,([^,\}\]]*)(?:,([^,\}\]]*))?$/,wh=/^\(|\)$/g;function Th(e,t){const n=e.loc,r=e.content,i=r.match(xh);if(!i)return;const[,s,o]=i,a={source:Ah(n,o.trim(),r.indexOf(o,s.length)),value:void 0,key:void 0,index:void 0};let l=s.trim().replace(wh,"").trim();const c=s.indexOf(l),u=l.match(Sh);if(u){l=l.replace(Sh,"").trim();const e=u[1].trim();let t;if(e&&(t=r.indexOf(e,c+l.length),a.key=Ah(n,e,t)),u[2]){const i=u[2].trim();i&&(a.index=Ah(n,i,r.indexOf(i,a.key?t+e.length:c+l.length)))}}return l&&(a.value=Ah(n,l,c)),a}function Ah(e,t,n){return hu(t,!1,Ou(e,n,t.length))}function Ch({value:e,key:t,index:n},r=[]){return Ih([e,t,n,...r])}function Ih(e){let t=e.length;while(t--)if(e[t])break;return e.slice(0,t+1).map(((e,t)=>e||hu("_".repeat(t+1),!1)))}const Rh=hu("undefined",!1),kh=(e,t)=>{if(1===e.type&&(1===e.tagType||3===e.tagType)){const n=Du(e,"slot");if(n)return n.exp,t.scopes.vSlot++,()=>{t.scopes.vSlot--}}},Ph=(e,t,n)=>gu(e,t,!1,!0,t.length?t[0].loc:n);function Oh(e,t,n=Ph){t.helper(Jc);const{children:r,loc:i}=e,s=[],o=[];let a=t.scopes.vSlot>0||t.scopes.vFor>0;const l=Du(e,"slot",!0);if(l){const{arg:e,exp:t}=l;e&&!xu(e)&&(a=!0),s.push(du(e||hu("default",!0),n(t,r,i)))}let c=!1,u=!1;const d=[],h=new Set;let p=0;for(let m=0;m{const s=n(e,r,i);return t.compatConfig&&(s.isNonScopedSlot=!0),du("default",s)};c?d.length&&d.some((e=>Dh(e)))&&(u?t.onError(_c(39,d[0].loc)):s.push(e(void 0,d))):s.push(e(void 0,r))}const f=a?2:Mh(e.children)?3:1;let g=uu(s.concat(du("_",hu(f+"",!1))),i);return o.length&&(g=fu(t.helper(Uc),[g,cu(o)])),{slots:g,hasDynamicSlots:a}}function Nh(e,t,n){const r=[du("name",e),du("fn",t)];return null!=n&&r.push(du("key",hu(String(n),!0))),uu(r)}function Mh(e){for(let t=0;tfunction(){if(e=t.currentNode,1!==e.type||0!==e.tagType&&1!==e.tagType)return;const{tag:n,props:r}=e,i=1===e.tagType;let s=i?Bh(e,t):`"${n}"`;const o=w(s)&&s.callee===Nc;let a,l,c,u,d,h,p=0,f=o||s===vc||s===Ec||!i&&("svg"===n||"foreignObject"===n);if(r.length>0){const n=Uh(e,t,void 0,i,o);a=n.props,p=n.patchFlag,d=n.dynamicPropNames;const r=n.directives;h=r&&r.length?cu(r.map((e=>zh(e,t)))):void 0,n.shouldUseBlock&&(f=!0)}if(e.children.length>0){s===xc&&(f=!0,p|=1024);const n=i&&s!==vc&&s!==xc;if(n){const{slots:n,hasDynamicSlots:r}=Oh(e,t);l=n,r&&(p|=1024)}else if(1===e.children.length&&s!==vc){const n=e.children[0],r=n.type,i=5===r||8===r;i&&0===Od(n,t)&&(p|=1),l=i||2===r?n:e.children}else l=e.children}0!==p&&(c=String(p),d&&d.length&&(u=Hh(d))),e.codegenNode=lu(t,s,a,l,c,u,h,!!f,!1,i,e.loc)};function Bh(e,t,n=!1){let{tag:r}=e;const i=Vh(r),s=Lu(e,"is");if(s)if(i||Ku("COMPILER_IS_ON_ELEMENT",t)){const e=6===s.type?s.value&&hu(s.value.content,!0):s.exp;if(e)return fu(t.helper(Nc),[e])}else 6===s.type&&s.value.content.startsWith("vue:")&&(r=s.value.content.slice(4));const o=!i&&Du(e,"is");if(o&&o.exp)return fu(t.helper(Nc),[o.exp]);const a=wu(r)||t.isBuiltInComponent(r);return a?(n||t.helper(a),a):(t.helper(Oc),t.components.add(r),qu(r,"component"))}function Uh(e,t,n=e.props,r,i,s=!1){const{tag:o,loc:a,children:l}=e;let c=[];const d=[],h=[],p=l.length>0;let f=!1,g=0,m=!1,b=!1,_=!1,y=!1,v=!1,E=!1;const x=[],w=e=>{c.length&&(d.push(uu(Gh(c),a)),c=[]),e&&d.push(e)},T=({key:e,value:n})=>{if(xu(e)){const s=e.content,o=u(s);if(!o||r&&!i||"onclick"===s.toLowerCase()||"onUpdate:modelValue"===s||P(s)||(y=!0),o&&P(s)&&(E=!0),20===n.type||(4===n.type||8===n.type)&&Od(n,t)>0)return;"ref"===s?m=!0:"class"===s?b=!0:"style"===s?_=!0:"key"===s||x.includes(s)||x.push(s),!r||"class"!==s&&"style"!==s||x.includes(s)||x.push(s)}else v=!0};for(let u=0;u0&&c.push(du(hu("ref_for",!0),hu("true")))),"is"===n&&(Vh(o)||r&&r.content.startsWith("vue:")||Ku("COMPILER_IS_ON_ELEMENT",t)))continue;c.push(du(hu(n,!0,Ou(e,0,n.length)),hu(r?r.content:"",s,r?r.loc:e)))}else{const{name:n,arg:l,exp:u,loc:g}=i,m="bind"===n,b="on"===n;if("slot"===n){r||t.onError(_c(40,g));continue}if("once"===n||"memo"===n)continue;if("is"===n||m&&Fu(l,"is")&&(Vh(o)||Ku("COMPILER_IS_ON_ELEMENT",t)))continue;if(b&&s)continue;if((m&&Fu(l,"key")||b&&p&&Fu(l,"vue:before-update"))&&(f=!0),m&&Fu(l,"ref")&&t.scopes.vFor>0&&c.push(du(hu("ref_for",!0),hu("true"))),!l&&(m||b)){if(v=!0,u)if(m){if(w(),Ku("COMPILER_V_BIND_OBJECT_ORDER",t)){d.unshift(u);continue}d.push(u)}else w({type:14,loc:g,callee:t.helper(Wc),arguments:r?[u]:[u,"true"]});else t.onError(_c(m?34:35,g));continue}const _=t.directiveTransforms[n];if(_){const{props:n,needRuntime:r}=_(i,e,t);!s&&n.forEach(T),b&&l&&!xu(l)?w(uu(n,a)):c.push(...n),r&&(h.push(i),S(r)&&Lh.set(i,r))}else O(n)||(h.push(i),p&&(f=!0))}}let A;if(d.length?(w(),A=d.length>1?fu(t.helper($c),d,a):d[0]):c.length&&(A=uu(Gh(c),a)),v?g|=16:(b&&!r&&(g|=2),_&&!r&&(g|=4),x.length&&(g|=8),y&&(g|=32)),f||0!==g&&32!==g||!(m||E||h.length>0)||(g|=512),!t.inSSR&&A)switch(A.type){case 15:let e=-1,n=-1,r=!1;for(let t=0;tdu(e,t))),i))}return cu(n,e.loc)}function Hh(e){let t="[";for(let n=0,r=e.length;n{if(zu(e)){const{children:n,loc:r}=e,{slotName:i,slotProps:s}=Wh(e,t),o=[t.prefixIdentifiers?"_ctx.$slots":"$slots",i,"{}","undefined","true"];let a=2;s&&(o[2]=s,a=3),n.length&&(o[3]=gu([],n,!1,!1,r),a=4),t.scopeId&&!t.slotted&&(a=5),o.splice(a),e.codegenNode=fu(t.helper(Bc),o,r)}};function Wh(e,t){let n,r='"default"';const i=[];for(let s=0;s0){const{props:r,directives:s}=Uh(e,t,i,!1,!1);n=r,s.length&&t.onError(_c(36,s[0].loc))}return{slotName:r,slotProps:n}}const qh=/^\s*([\w$_]+|(async\s*)?\([^)]*?\))\s*(:[^=]+)?=>|^\s*(async\s+)?function(?:\s+[\w$]+)?\s*\(/,Xh=(e,t,n,r)=>{const{loc:i,modifiers:s,arg:o}=e;let a;if(e.exp||s.length||n.onError(_c(35,i)),4===o.type)if(o.isStatic){let e=o.content;0,e.startsWith("vue:")&&(e=`vnode-${e.slice(4)}`);const n=0!==t.tagType||e.startsWith("vnode")||!/[A-Z]/.test(e)?U(D(e)):`on:${e}`;a=hu(n,!0,o.loc)}else a=pu([`${n.helperString(Yc)}(`,o,")"]);else a=o,a.children.unshift(`${n.helperString(Yc)}(`),a.children.push(")");let l=e.exp;l&&!l.content.trim()&&(l=void 0);let c=n.cacheHandlers&&!l&&!n.inVOnce;if(l){const e=Pu(l.content),t=!(e||qh.test(l.content)),n=l.content.includes(";");0,(t||c&&e)&&(l=pu([`${t?"$event":"(...args)"} => ${n?"{":"("}`,l,n?"}":")"]))}let u={props:[du(a,l||hu("() => {}",!1,i))]};return r&&(u=r(u)),c&&(u.props[0].value=n.cache(u.props[0].value)),u.props.forEach((e=>e.key.isHandlerKey=!0)),u},Yh=(e,t,n)=>{const{exp:r,modifiers:i,loc:s}=e,o=e.arg;return 4!==o.type?(o.children.unshift("("),o.children.push(') || ""')):o.isStatic||(o.content=`${o.content} || ""`),i.includes("camel")&&(4===o.type?o.isStatic?o.content=D(o.content):o.content=`${n.helperString(qc)}(${o.content})`:(o.children.unshift(`${n.helperString(qc)}(`),o.children.push(")"))),n.inSSR||(i.includes("prop")&&Kh(o,"."),i.includes("attr")&&Kh(o,"^")),!r||4===r.type&&!r.content.trim()?(n.onError(_c(34,s)),{props:[du(o,hu("",!0,s))]}):{props:[du(o,r)]}},Kh=(e,t)=>{4===e.type?e.isStatic?e.content=t+e.content:e.content=`\`${t}\${${e.content}}\``:(e.children.unshift(`'${t}' + (`),e.children.push(")"))},Zh=(e,t)=>{if(0===e.type||1===e.type||11===e.type||10===e.type)return()=>{const n=e.children;let r,i=!1;for(let e=0;e7===e.type&&!t.directiveTransforms[e.name]))||"template"===e.tag)))for(let e=0;e{if(1===e.type&&Du(e,"once",!0)){if(Qh.has(e)||t.inVOnce||t.inSSR)return;return Qh.add(e),t.inVOnce=!0,t.helper(Kc),()=>{t.inVOnce=!1;const e=t.currentNode;e.codegenNode&&(e.codegenNode=t.cache(e.codegenNode,!0))}}},ep=(e,t,n)=>{const{exp:r,arg:i}=e;if(!r)return n.onError(_c(41,e.loc)),tp();const s=r.loc.source,o=4===r.type?r.content:s,a=n.bindingMetadata[s];if("props"===a||"props-aliased"===a)return n.onError(_c(44,r.loc)),tp();const l=!1;if(!o.trim()||!Pu(o)&&!l)return n.onError(_c(42,r.loc)),tp();const c=i||hu("modelValue",!0),u=i?xu(i)?`onUpdate:${D(i.content)}`:pu(['"onUpdate:" + ',i]):"onUpdate:modelValue";let d;const h=n.isTS?"($event: any)":"$event";d=pu([`${h} => ((`,r,") = $event)"]);const p=[du(c,e.exp),du(u,d)];if(e.modifiers.length&&1===t.tagType){const t=e.modifiers.map((e=>(Au(e)?e:JSON.stringify(e))+": true")).join(", "),n=i?xu(i)?`${i.content}Modifiers`:pu([i,' + "Modifiers"']):"modelModifiers";p.push(du(n,hu(`{ ${t} }`,!1,e.loc,2)))}return tp(p)};function tp(e=[]){return{props:e}}const np=/[\w).+\-_$\]]/,rp=(e,t)=>{Ku("COMPILER_FILTER",t)&&(5===e.type&&ip(e.content,t),1===e.type&&e.props.forEach((e=>{7===e.type&&"for"!==e.name&&e.exp&&ip(e.exp,t)})))};function ip(e,t){if(4===e.type)sp(e,t);else for(let n=0;n=0;t--)if(e=n.charAt(t)," "!==e)break;e&&np.test(e)||(u=!0)}}else void 0===o?(f=s+1,o=n.slice(0,s).trim()):m();function m(){g.push(n.slice(f,s).trim()),f=s+1}if(void 0===o?o=n.slice(0,s).trim():0!==f&&m(),g.length){for(s=0;s{if(1===e.type){const n=Du(e,"memo");if(!n||ap.has(e))return;return ap.add(e),()=>{const r=e.codegenNode||t.currentNode.codegenNode;r&&13===r.type&&(1!==e.tagType&&Eu(r,t),e.codegenNode=fu(t.helper(nu),[n.exp,gu(void 0,r),"_cache",String(t.cached++)]))}}};function cp(e){return[[Jh,fh,lp,vh,rp,jh,Fh,kh,Zh],{on:Xh,bind:Yh,model:ep}]}function up(e,t={}){const n=t.onError||mc,r="module"===t.mode;!0===t.prefixIdentifiers?n(_c(47)):r&&n(_c(48));const i=!1;t.cacheHandlers&&n(_c(49)),t.scopeId&&!r&&n(_c(50));const s=x(e)?td(e,t):e,[o,a]=cp();return Ud(s,h({},t,{prefixIdentifiers:i,nodeTransforms:[...o,...t.nodeTransforms||[]],directiveTransforms:h({},a,t.directiveTransforms||{})})),qd(s,h({},t,{prefixIdentifiers:i}))}const dp=()=>({props:[]}),hp=Symbol(""),pp=Symbol(""),fp=Symbol(""),gp=Symbol(""),mp=Symbol(""),bp=Symbol(""),_p=Symbol(""),yp=Symbol(""),vp=Symbol(""),Ep=Symbol("");let xp;function Sp(e,t=!1){return xp||(xp=document.createElement("div")),t?(xp.innerHTML=`
    `,xp.children[0].getAttribute("foo")):(xp.innerHTML=e,xp.textContent)}su({[hp]:"vModelRadio",[pp]:"vModelCheckbox",[fp]:"vModelText",[gp]:"vModelSelect",[mp]:"vModelDynamic",[bp]:"withModifiers",[_p]:"withKeys",[yp]:"vShow",[vp]:"Transition",[Ep]:"TransitionGroup"});const wp=i("style,iframe,script,noscript",!0),Tp={isVoidTag:le,isNativeTag:e=>oe(e)||ae(e),isPreTag:e=>"pre"===e,decodeEntities:Sp,isBuiltInComponent:e=>Su(e,"Transition")?vp:Su(e,"TransitionGroup")?Ep:void 0,getNamespace(e,t){let n=t?t.ns:0;if(t&&2===n)if("annotation-xml"===t.tag){if("svg"===e)return 1;t.props.some((e=>6===e.type&&"encoding"===e.name&&null!=e.value&&("text/html"===e.value.content||"application/xhtml+xml"===e.value.content)))&&(n=0)}else/^m(?:[ions]|text)$/.test(t.tag)&&"mglyph"!==e&&"malignmark"!==e&&(n=0);else t&&1===n&&("foreignObject"!==t.tag&&"desc"!==t.tag&&"title"!==t.tag||(n=0));if(0===n){if("svg"===e)return 1;if("math"===e)return 2}return n},getTextMode({tag:e,ns:t}){if(0===t){if("textarea"===e||"title"===e)return 1;if(wp(e))return 2}return 0}},Ap=e=>{1===e.type&&e.props.forEach(((t,n)=>{6===t.type&&"style"===t.name&&t.value&&(e.props[n]={type:7,name:"bind",arg:hu("style",!0,t.loc),exp:Cp(t.value.content,t.loc),modifiers:[],loc:t.loc})}))},Cp=(e,t)=>{const n=ee(e);return hu(JSON.stringify(n),!1,t,3)};function Ip(e,t){return _c(e,t,void 0)}const Rp=(e,t,n)=>{const{exp:r,loc:i}=e;return r||n.onError(Ip(53,i)),t.children.length&&(n.onError(Ip(54,i)),t.children.length=0),{props:[du(hu("innerHTML",!0,i),r||hu("",!0))]}},kp=(e,t,n)=>{const{exp:r,loc:i}=e;return r||n.onError(Ip(55,i)),t.children.length&&(n.onError(Ip(56,i)),t.children.length=0),{props:[du(hu("textContent",!0),r?Od(r,n)>0?r:fu(n.helperString(Gc),[r],i):hu("",!0))]}},Pp=(e,t,n)=>{const r=ep(e,t,n);if(!r.props.length||1===t.tagType)return r;e.arg&&n.onError(Ip(58,e.arg.loc));const{tag:i}=t,s=n.isCustomElement(i);if("input"===i||"textarea"===i||"select"===i||s){let o=fp,a=!1;if("input"===i||s){const r=Lu(t,"type");if(r){if(7===r.type)o=mp;else if(r.value)switch(r.value.content){case"radio":o=hp;break;case"checkbox":o=pp;break;case"file":a=!0,n.onError(Ip(59,e.loc));break;default:break}}else Bu(t)&&(o=mp)}else"select"===i&&(o=gp);a||(r.needRuntime=n.helper(o))}else n.onError(Ip(57,e.loc));return r.props=r.props.filter((e=>!(4===e.key.type&&"modelValue"===e.key.content))),r},Op=i("passive,once,capture"),Np=i("stop,prevent,self,ctrl,shift,alt,meta,exact,middle"),Mp=i("left,right"),Dp=i("onkeyup,onkeydown,onkeypress",!0),Lp=(e,t,n,r)=>{const i=[],s=[],o=[];for(let a=0;a{const n=xu(e)&&"onclick"===e.content.toLowerCase();return n?hu(t,!0):4!==e.type?pu(["(",e,`) === "onClick" ? "${t}" : (`,e,")"]):e},Bp=(e,t,n)=>Xh(e,t,n,(t=>{const{modifiers:r}=e;if(!r.length)return t;let{key:i,value:s}=t.props[0];const{keyModifiers:o,nonKeyModifiers:a,eventOptionModifiers:l}=Lp(i,r,n,e.loc);if(a.includes("right")&&(i=Fp(i,"onContextmenu")),a.includes("middle")&&(i=Fp(i,"onMouseup")),a.length&&(s=fu(n.helper(bp),[s,JSON.stringify(a)])),!o.length||xu(i)&&!Dp(i.content)||(s=fu(n.helper(_p),[s,JSON.stringify(o)])),l.length){const e=l.map(B).join("");i=xu(i)?hu(`${i.content}${e}`,!0):pu(["(",i,`) + "${e}"`])}return{props:[du(i,s)]}})),Up=(e,t,n)=>{const{exp:r,loc:i}=e;return r||n.onError(Ip(61,i)),{props:[],needRuntime:n.helper(yp)}};const Gp=(e,t)=>{1!==e.type||0!==e.tagType||"script"!==e.tag&&"style"!==e.tag||t.removeNode()},$p=[Ap],zp={cloak:dp,html:Rp,text:kp,model:Pp,on:Bp,show:Up};function Hp(e,t={}){return up(e,h({},Tp,t,{nodeTransforms:[Gp,...$p,...t.nodeTransforms||[]],directiveTransforms:h({},zp,t.directiveTransforms||{}),transformHoist:null}))}const Vp=Object.create(null);function jp(e,t){if(!x(e)){if(!e.nodeType)return a;e=e.innerHTML}const n=e,i=Vp[n];if(i)return i;if("#"===e[0]){const t=document.querySelector(e);0,e=t?t.innerHTML:""}const s=h({hoistStatic:!0,onError:void 0,onWarn:a},t);s.isCustomElement||"undefined"===typeof customElements||(s.isCustomElement=e=>!!customElements.get(e));const{code:o}=Hp(e,s);const l=new Function("Vue",o)(r);return l._rc=!0,Vp[n]=l}aa(jp)},4569:function(e){function t(e,t,n,r,i,s,o){try{var a=e[s](o),l=a.value}catch(c){return void n(c)}a.done?t(l):Promise.resolve(l).then(r,i)}function n(e){return function(){var n=this,r=arguments;return new Promise((function(i,s){var o=e.apply(n,r);function a(e){t(o,i,s,a,l,"next",e)}function l(e){t(o,i,s,a,l,"throw",e)}a(void 0)}))}}e.exports=n,e.exports.__esModule=!0,e.exports["default"]=e.exports},9514:function(e){function t(){return e.exports=t=Object.assign?Object.assign.bind():function(e){for(var t=1;t=0;--r){var i=this.tryEntries[r],o=i.completion;if("root"===i.tryLoc)return n("end");if(i.tryLoc<=this.prev){var a=s.call(i,"catchLoc"),l=s.call(i,"finallyLoc");if(a&&l){if(this.prev=0;--n){var r=this.tryEntries[n];if(r.tryLoc<=this.prev&&s.call(r,"finallyLoc")&&this.prev=0;--t){var n=this.tryEntries[t];if(n.finallyLoc===e)return this.complete(n.completion,n.afterLoc),C(n),f}},catch:function(e){for(var t=this.tryEntries.length-1;t>=0;--t){var n=this.tryEntries[t];if(n.tryLoc===e){var r=n.completion;if("throw"===r.type){var i=r.arg;C(n)}return i}}throw new Error("illegal catch attempt")},delegateYield:function(e,t,n){return this.delegate={iterator:R(e),resultName:t,nextLoc:n},"next"===this.method&&(this.arg=void 0),f}},t}e.exports=i,e.exports.__esModule=!0,e.exports["default"]=e.exports},9107:function(e){function t(n){return e.exports=t="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(e){return typeof e}:function(e){return e&&"function"==typeof Symbol&&e.constructor===Symbol&&e!==Symbol.prototype?"symbol":typeof e},e.exports.__esModule=!0,e.exports["default"]=e.exports,t(n)}e.exports=t,e.exports.__esModule=!0,e.exports["default"]=e.exports},4684:function(e,t,n){var r=n(4138)();e.exports=r;try{regeneratorRuntime=r}catch(i){"object"===typeof globalThis?globalThis.regeneratorRuntime=r:Function("r","regeneratorRuntime = r")(r)}},6278:function(e,t,n){"use strict";n.d(t,{I:function(){return M}});var r={grad:.9,turn:360,rad:360/(2*Math.PI)},i=function(e){return"string"==typeof e?e.length>0:"number"==typeof e},s=function(e,t,n){return void 0===t&&(t=0),void 0===n&&(n=Math.pow(10,t)),Math.round(n*e)/n+0},o=function(e,t,n){return void 0===t&&(t=0),void 0===n&&(n=1),e>n?n:e>t?e:t},a=function(e){return(e=isFinite(e)?e%360:0)>0?e:e+360},l=function(e){return{r:o(e.r,0,255),g:o(e.g,0,255),b:o(e.b,0,255),a:o(e.a)}},c=function(e){return{r:s(e.r),g:s(e.g),b:s(e.b),a:s(e.a,3)}},u=/^#([0-9a-f]{3,8})$/i,d=function(e){var t=e.toString(16);return t.length<2?"0"+t:t},h=function(e){var t=e.r,n=e.g,r=e.b,i=e.a,s=Math.max(t,n,r),o=s-Math.min(t,n,r),a=o?s===t?(n-r)/o:s===n?2+(r-t)/o:4+(t-n)/o:0;return{h:60*(a<0?a+6:a),s:s?o/s*100:0,v:s/255*100,a:i}},p=function(e){var t=e.h,n=e.s,r=e.v,i=e.a;t=t/360*6,n/=100,r/=100;var s=Math.floor(t),o=r*(1-n),a=r*(1-(t-s)*n),l=r*(1-(1-t+s)*n),c=s%6;return{r:255*[r,a,o,o,l,r][c],g:255*[l,r,r,a,o,o][c],b:255*[o,o,l,r,r,a][c],a:i}},f=function(e){return{h:a(e.h),s:o(e.s,0,100),l:o(e.l,0,100),a:o(e.a)}},g=function(e){return{h:s(e.h),s:s(e.s),l:s(e.l),a:s(e.a,3)}},m=function(e){return p((n=(t=e).s,{h:t.h,s:(n*=((r=t.l)<50?r:100-r)/100)>0?2*n/(r+n)*100:0,v:r+n,a:t.a}));var t,n,r},b=function(e){return{h:(t=h(e)).h,s:(i=(200-(n=t.s))*(r=t.v)/100)>0&&i<200?n*r/100/(i<=100?i:200-i)*100:0,l:i/2,a:t.a};var t,n,r,i},_=/^hsla?\(\s*([+-]?\d*\.?\d+)(deg|rad|grad|turn)?\s*,\s*([+-]?\d*\.?\d+)%\s*,\s*([+-]?\d*\.?\d+)%\s*(?:,\s*([+-]?\d*\.?\d+)(%)?\s*)?\)$/i,y=/^hsla?\(\s*([+-]?\d*\.?\d+)(deg|rad|grad|turn)?\s+([+-]?\d*\.?\d+)%\s+([+-]?\d*\.?\d+)%\s*(?:\/\s*([+-]?\d*\.?\d+)(%)?\s*)?\)$/i,v=/^rgba?\(\s*([+-]?\d*\.?\d+)(%)?\s*,\s*([+-]?\d*\.?\d+)(%)?\s*,\s*([+-]?\d*\.?\d+)(%)?\s*(?:,\s*([+-]?\d*\.?\d+)(%)?\s*)?\)$/i,E=/^rgba?\(\s*([+-]?\d*\.?\d+)(%)?\s+([+-]?\d*\.?\d+)(%)?\s+([+-]?\d*\.?\d+)(%)?\s*(?:\/\s*([+-]?\d*\.?\d+)(%)?\s*)?\)$/i,x={string:[[function(e){var t=u.exec(e);return t?(e=t[1]).length<=4?{r:parseInt(e[0]+e[0],16),g:parseInt(e[1]+e[1],16),b:parseInt(e[2]+e[2],16),a:4===e.length?s(parseInt(e[3]+e[3],16)/255,2):1}:6===e.length||8===e.length?{r:parseInt(e.substr(0,2),16),g:parseInt(e.substr(2,2),16),b:parseInt(e.substr(4,2),16),a:8===e.length?s(parseInt(e.substr(6,2),16)/255,2):1}:null:null},"hex"],[function(e){var t=v.exec(e)||E.exec(e);return t?t[2]!==t[4]||t[4]!==t[6]?null:l({r:Number(t[1])/(t[2]?100/255:1),g:Number(t[3])/(t[4]?100/255:1),b:Number(t[5])/(t[6]?100/255:1),a:void 0===t[7]?1:Number(t[7])/(t[8]?100:1)}):null},"rgb"],[function(e){var t=_.exec(e)||y.exec(e);if(!t)return null;var n,i,s=f({h:(n=t[1],i=t[2],void 0===i&&(i="deg"),Number(n)*(r[i]||1)),s:Number(t[3]),l:Number(t[4]),a:void 0===t[5]?1:Number(t[5])/(t[6]?100:1)});return m(s)},"hsl"]],object:[[function(e){var t=e.r,n=e.g,r=e.b,s=e.a,o=void 0===s?1:s;return i(t)&&i(n)&&i(r)?l({r:Number(t),g:Number(n),b:Number(r),a:Number(o)}):null},"rgb"],[function(e){var t=e.h,n=e.s,r=e.l,s=e.a,o=void 0===s?1:s;if(!i(t)||!i(n)||!i(r))return null;var a=f({h:Number(t),s:Number(n),l:Number(r),a:Number(o)});return m(a)},"hsl"],[function(e){var t=e.h,n=e.s,r=e.v,s=e.a,l=void 0===s?1:s;if(!i(t)||!i(n)||!i(r))return null;var c=function(e){return{h:a(e.h),s:o(e.s,0,100),v:o(e.v,0,100),a:o(e.a)}}({h:Number(t),s:Number(n),v:Number(r),a:Number(l)});return p(c)},"hsv"]]},S=function(e,t){for(var n=0;n=.5},e.prototype.toHex=function(){return e=c(this.rgba),t=e.r,n=e.g,r=e.b,o=(i=e.a)<1?d(s(255*i)):"","#"+d(t)+d(n)+d(r)+o;var e,t,n,r,i,o},e.prototype.toRgb=function(){return c(this.rgba)},e.prototype.toRgbString=function(){return e=c(this.rgba),t=e.r,n=e.g,r=e.b,(i=e.a)<1?"rgba("+t+", "+n+", "+r+", "+i+")":"rgb("+t+", "+n+", "+r+")";var e,t,n,r,i},e.prototype.toHsl=function(){return g(b(this.rgba))},e.prototype.toHslString=function(){return e=g(b(this.rgba)),t=e.h,n=e.s,r=e.l,(i=e.a)<1?"hsla("+t+", "+n+"%, "+r+"%, "+i+")":"hsl("+t+", "+n+"%, "+r+"%)";var e,t,n,r,i},e.prototype.toHsv=function(){return e=h(this.rgba),{h:s(e.h),s:s(e.s),v:s(e.v),a:s(e.a,3)};var e},e.prototype.invert=function(){return R({r:255-(e=this.rgba).r,g:255-e.g,b:255-e.b,a:e.a});var e},e.prototype.saturate=function(e){return void 0===e&&(e=.1),R(T(this.rgba,e))},e.prototype.desaturate=function(e){return void 0===e&&(e=.1),R(T(this.rgba,-e))},e.prototype.grayscale=function(){return R(T(this.rgba,-1))},e.prototype.lighten=function(e){return void 0===e&&(e=.1),R(C(this.rgba,e))},e.prototype.darken=function(e){return void 0===e&&(e=.1),R(C(this.rgba,-e))},e.prototype.rotate=function(e){return void 0===e&&(e=15),this.hue(this.hue()+e)},e.prototype.alpha=function(e){return"number"==typeof e?R({r:(t=this.rgba).r,g:t.g,b:t.b,a:e}):s(this.rgba.a,3);var t},e.prototype.hue=function(e){var t=b(this.rgba);return"number"==typeof e?R({h:e,s:t.s,l:t.l,a:t.a}):s(t.h)},e.prototype.isEqual=function(e){return this.toHex()===R(e).toHex()},e}(),R=function(e){return e instanceof I?e:new I(e)},k=[],P=function(e){e.forEach((function(e){k.indexOf(e)<0&&(e(I,x),k.push(e))}))};function O(e,t){var n={white:"#ffffff",bisque:"#ffe4c4",blue:"#0000ff",cadetblue:"#5f9ea0",chartreuse:"#7fff00",chocolate:"#d2691e",coral:"#ff7f50",antiquewhite:"#faebd7",aqua:"#00ffff",azure:"#f0ffff",whitesmoke:"#f5f5f5",papayawhip:"#ffefd5",plum:"#dda0dd",blanchedalmond:"#ffebcd",black:"#000000",gold:"#ffd700",goldenrod:"#daa520",gainsboro:"#dcdcdc",cornsilk:"#fff8dc",cornflowerblue:"#6495ed",burlywood:"#deb887",aquamarine:"#7fffd4",beige:"#f5f5dc",crimson:"#dc143c",cyan:"#00ffff",darkblue:"#00008b",darkcyan:"#008b8b",darkgoldenrod:"#b8860b",darkkhaki:"#bdb76b",darkgray:"#a9a9a9",darkgreen:"#006400",darkgrey:"#a9a9a9",peachpuff:"#ffdab9",darkmagenta:"#8b008b",darkred:"#8b0000",darkorchid:"#9932cc",darkorange:"#ff8c00",darkslateblue:"#483d8b",gray:"#808080",darkslategray:"#2f4f4f",darkslategrey:"#2f4f4f",deeppink:"#ff1493",deepskyblue:"#00bfff",wheat:"#f5deb3",firebrick:"#b22222",floralwhite:"#fffaf0",ghostwhite:"#f8f8ff",darkviolet:"#9400d3",magenta:"#ff00ff",green:"#008000",dodgerblue:"#1e90ff",grey:"#808080",honeydew:"#f0fff0",hotpink:"#ff69b4",blueviolet:"#8a2be2",forestgreen:"#228b22",lawngreen:"#7cfc00",indianred:"#cd5c5c",indigo:"#4b0082",fuchsia:"#ff00ff",brown:"#a52a2a",maroon:"#800000",mediumblue:"#0000cd",lightcoral:"#f08080",darkturquoise:"#00ced1",lightcyan:"#e0ffff",ivory:"#fffff0",lightyellow:"#ffffe0",lightsalmon:"#ffa07a",lightseagreen:"#20b2aa",linen:"#faf0e6",mediumaquamarine:"#66cdaa",lemonchiffon:"#fffacd",lime:"#00ff00",khaki:"#f0e68c",mediumseagreen:"#3cb371",limegreen:"#32cd32",mediumspringgreen:"#00fa9a",lightskyblue:"#87cefa",lightblue:"#add8e6",midnightblue:"#191970",lightpink:"#ffb6c1",mistyrose:"#ffe4e1",moccasin:"#ffe4b5",mintcream:"#f5fffa",lightslategray:"#778899",lightslategrey:"#778899",navajowhite:"#ffdead",navy:"#000080",mediumvioletred:"#c71585",powderblue:"#b0e0e6",palegoldenrod:"#eee8aa",oldlace:"#fdf5e6",paleturquoise:"#afeeee",mediumturquoise:"#48d1cc",mediumorchid:"#ba55d3",rebeccapurple:"#663399",lightsteelblue:"#b0c4de",mediumslateblue:"#7b68ee",thistle:"#d8bfd8",tan:"#d2b48c",orchid:"#da70d6",mediumpurple:"#9370db",purple:"#800080",pink:"#ffc0cb",skyblue:"#87ceeb",springgreen:"#00ff7f",palegreen:"#98fb98",red:"#ff0000",yellow:"#ffff00",slateblue:"#6a5acd",lavenderblush:"#fff0f5",peru:"#cd853f",palevioletred:"#db7093",violet:"#ee82ee",teal:"#008080",slategray:"#708090",slategrey:"#708090",aliceblue:"#f0f8ff",darkseagreen:"#8fbc8f",darkolivegreen:"#556b2f",greenyellow:"#adff2f",seagreen:"#2e8b57",seashell:"#fff5ee",tomato:"#ff6347",silver:"#c0c0c0",sienna:"#a0522d",lavender:"#e6e6fa",lightgreen:"#90ee90",orange:"#ffa500",orangered:"#ff4500",steelblue:"#4682b4",royalblue:"#4169e1",turquoise:"#40e0d0",yellowgreen:"#9acd32",salmon:"#fa8072",saddlebrown:"#8b4513",sandybrown:"#f4a460",rosybrown:"#bc8f8f",darksalmon:"#e9967a",lightgoldenrodyellow:"#fafad2",snow:"#fffafa",lightgrey:"#d3d3d3",lightgray:"#d3d3d3",dimgray:"#696969",dimgrey:"#696969",olivedrab:"#6b8e23",olive:"#808000"},r={};for(var i in n)r[n[i]]=i;var s={};e.prototype.toName=function(t){if(!(this.rgba.a||this.rgba.r||this.rgba.g||this.rgba.b))return"transparent";var i,o,a=r[this.toHex()];if(a)return a;if(null==t?void 0:t.closest){var l=this.toRgb(),c=1/0,u="black";if(!s.length)for(var d in n)s[d]=new e(n[d]).toRgb();for(var h in n){var p=(i=l,o=s[h],Math.pow(i.r-o.r,2)+Math.pow(i.g-o.g,2)+Math.pow(i.b-o.b,2));pe===t[n]));if(null!==e&&null!==t){const n=Object.keys(e),r=Object.keys(t);return n.length===r.length&&n.every((n=>e[n]===t[n]))}return e===t}toRgba(){const[e,t,n,r]=this._components;return{r:e,g:t,b:n,a:r}}toRgb(){const[e,t,n]=this._components;return{r:e,g:t,b:n}}toRgbaString(){const[e,t,n]=this.toUint8RgbArray();return`rgba(${e},${t},${n},${this.alpha})`}toUint8RgbArray(e){const[t,n,r]=this._components;return e=e??[],e[0]=Math.round(255*t),e[1]=Math.round(255*n),e[2]=Math.round(255*r),e}toRgbArray(e){e=e??[];const[t,n,r]=this._components;return e[0]=t,e[1]=n,e[2]=r,e}toNumber(){return this._int}toLittleEndianNumber(){const e=this._int;return(e>>16)+(65280&e)+((255&e)<<16)}multiply(e){const[t,n,r,i]=N.temp.setValue(e)._components;return this._components[0]*=t,this._components[1]*=n,this._components[2]*=r,this._components[3]*=i,this.refreshInt(),this._value=null,this}premultiply(e,t=!0){return t&&(this._components[0]*=e,this._components[1]*=e,this._components[2]*=e),this._components[3]=e,this.refreshInt(),this._value=null,this}toPremultiplied(e,t=!0){if(1===e)return(255<<24)+this._int;if(0===e)return t?0:this._int;let n=this._int>>16&255,r=this._int>>8&255,i=255&this._int;return t&&(n=n*e+.5|0,r=r*e+.5|0,i=i*e+.5|0),(255*e<<24)+(n<<16)+(r<<8)+i}toHex(){const e=this._int.toString(16);return`#${"000000".substring(0,6-e.length)+e}`}toHexa(){const e=Math.round(255*this._components[3]),t=e.toString(16);return this.toHex()+"00".substring(0,2-t.length)+t}setAlpha(e){return this._components[3]=this._clamp(e),this}round(e){const[t,n,r]=this._components;return this._components[0]=Math.round(t*e)/e,this._components[1]=Math.round(n*e)/e,this._components[2]=Math.round(r*e)/e,this.refreshInt(),this._value=null,this}toArray(e){e=e??[];const[t,n,r,i]=this._components;return e[0]=t,e[1]=n,e[2]=r,e[3]=i,e}normalize(e){let t,n,r,i;if(("number"===typeof e||e instanceof Number)&&e>=0&&e<=16777215){const s=e;t=(s>>16&255)/255,n=(s>>8&255)/255,r=(255&s)/255,i=1}else if((Array.isArray(e)||e instanceof Float32Array)&&e.length>=3&&e.length<=4)e=this._clamp(e),[t,n,r,i=1]=e;else if((e instanceof Uint8Array||e instanceof Uint8ClampedArray)&&e.length>=3&&e.length<=4)e=this._clamp(e,0,255),[t,n,r,i=255]=e,t/=255,n/=255,r/=255,i/=255;else if("string"===typeof e||"object"===typeof e){if("string"===typeof e){const t=N.HEX_PATTERN.exec(e);t&&(e=`#${t[2]}`)}const s=R(e);s.isValid()&&(({r:t,g:n,b:r,a:i}=s.rgba),t/=255,n/=255,r/=255)}if(void 0===t)throw new Error(`Unable to convert color ${e}`);this._components[0]=t,this._components[1]=n,this._components[2]=r,this._components[3]=i,this.refreshInt()}refreshInt(){this._clamp(this._components);const[e,t,n]=this._components;this._int=(255*e<<16)+(255*t<<8)+(255*n|0)}_clamp(e,t=0,n=1){return"number"===typeof e?Math.min(Math.max(e,t),n):(e.forEach(((r,i)=>{e[i]=Math.min(Math.max(r,t),n)})),e)}};let M=N;M.shared=new N,M.temp=new N,M.HEX_PATTERN=/^(#|0x)?(([a-f0-9]{3}){1,2}([a-f0-9]{2})?)$/i},7361:function(e,t,n){"use strict";n.d(t,{A7:function(){return y},G5:function(){return v},I2:function(){return l},N3:function(){return i},Nt:function(){return p},T$:function(){return o},UN:function(){return b},V0:function(){return s},Vi:function(){return r},WB:function(){return f},aH:function(){return h},cB:function(){return _},iw:function(){return g},lg:function(){return a},mr:function(){return E},oT:function(){return d},sp:function(){return c},vK:function(){return u},yl:function(){return m}});var r=(e=>(e[e["WEBGL_LEGACY"]=0]="WEBGL_LEGACY",e[e["WEBGL"]=1]="WEBGL",e[e["WEBGL2"]=2]="WEBGL2",e))(r||{}),i=(e=>(e[e["UNKNOWN"]=0]="UNKNOWN",e[e["WEBGL"]=1]="WEBGL",e[e["CANVAS"]=2]="CANVAS",e))(i||{}),s=(e=>(e[e["COLOR"]=16384]="COLOR",e[e["DEPTH"]=256]="DEPTH",e[e["STENCIL"]=1024]="STENCIL",e))(s||{}),o=(e=>(e[e["NORMAL"]=0]="NORMAL",e[e["ADD"]=1]="ADD",e[e["MULTIPLY"]=2]="MULTIPLY",e[e["SCREEN"]=3]="SCREEN",e[e["OVERLAY"]=4]="OVERLAY",e[e["DARKEN"]=5]="DARKEN",e[e["LIGHTEN"]=6]="LIGHTEN",e[e["COLOR_DODGE"]=7]="COLOR_DODGE",e[e["COLOR_BURN"]=8]="COLOR_BURN",e[e["HARD_LIGHT"]=9]="HARD_LIGHT",e[e["SOFT_LIGHT"]=10]="SOFT_LIGHT",e[e["DIFFERENCE"]=11]="DIFFERENCE",e[e["EXCLUSION"]=12]="EXCLUSION",e[e["HUE"]=13]="HUE",e[e["SATURATION"]=14]="SATURATION",e[e["COLOR"]=15]="COLOR",e[e["LUMINOSITY"]=16]="LUMINOSITY",e[e["NORMAL_NPM"]=17]="NORMAL_NPM",e[e["ADD_NPM"]=18]="ADD_NPM",e[e["SCREEN_NPM"]=19]="SCREEN_NPM",e[e["NONE"]=20]="NONE",e[e["SRC_OVER"]=0]="SRC_OVER",e[e["SRC_IN"]=21]="SRC_IN",e[e["SRC_OUT"]=22]="SRC_OUT",e[e["SRC_ATOP"]=23]="SRC_ATOP",e[e["DST_OVER"]=24]="DST_OVER",e[e["DST_IN"]=25]="DST_IN",e[e["DST_OUT"]=26]="DST_OUT",e[e["DST_ATOP"]=27]="DST_ATOP",e[e["ERASE"]=26]="ERASE",e[e["SUBTRACT"]=28]="SUBTRACT",e[e["XOR"]=29]="XOR",e))(o||{}),a=(e=>(e[e["POINTS"]=0]="POINTS",e[e["LINES"]=1]="LINES",e[e["LINE_LOOP"]=2]="LINE_LOOP",e[e["LINE_STRIP"]=3]="LINE_STRIP",e[e["TRIANGLES"]=4]="TRIANGLES",e[e["TRIANGLE_STRIP"]=5]="TRIANGLE_STRIP",e[e["TRIANGLE_FAN"]=6]="TRIANGLE_FAN",e))(a||{}),l=(e=>(e[e["RGBA"]=6408]="RGBA",e[e["RGB"]=6407]="RGB",e[e["RG"]=33319]="RG",e[e["RED"]=6403]="RED",e[e["RGBA_INTEGER"]=36249]="RGBA_INTEGER",e[e["RGB_INTEGER"]=36248]="RGB_INTEGER",e[e["RG_INTEGER"]=33320]="RG_INTEGER",e[e["RED_INTEGER"]=36244]="RED_INTEGER",e[e["ALPHA"]=6406]="ALPHA",e[e["LUMINANCE"]=6409]="LUMINANCE",e[e["LUMINANCE_ALPHA"]=6410]="LUMINANCE_ALPHA",e[e["DEPTH_COMPONENT"]=6402]="DEPTH_COMPONENT",e[e["DEPTH_STENCIL"]=34041]="DEPTH_STENCIL",e))(l||{}),c=(e=>(e[e["TEXTURE_2D"]=3553]="TEXTURE_2D",e[e["TEXTURE_CUBE_MAP"]=34067]="TEXTURE_CUBE_MAP",e[e["TEXTURE_2D_ARRAY"]=35866]="TEXTURE_2D_ARRAY",e[e["TEXTURE_CUBE_MAP_POSITIVE_X"]=34069]="TEXTURE_CUBE_MAP_POSITIVE_X",e[e["TEXTURE_CUBE_MAP_NEGATIVE_X"]=34070]="TEXTURE_CUBE_MAP_NEGATIVE_X",e[e["TEXTURE_CUBE_MAP_POSITIVE_Y"]=34071]="TEXTURE_CUBE_MAP_POSITIVE_Y",e[e["TEXTURE_CUBE_MAP_NEGATIVE_Y"]=34072]="TEXTURE_CUBE_MAP_NEGATIVE_Y",e[e["TEXTURE_CUBE_MAP_POSITIVE_Z"]=34073]="TEXTURE_CUBE_MAP_POSITIVE_Z",e[e["TEXTURE_CUBE_MAP_NEGATIVE_Z"]=34074]="TEXTURE_CUBE_MAP_NEGATIVE_Z",e))(c||{}),u=(e=>(e[e["UNSIGNED_BYTE"]=5121]="UNSIGNED_BYTE",e[e["UNSIGNED_SHORT"]=5123]="UNSIGNED_SHORT",e[e["UNSIGNED_SHORT_5_6_5"]=33635]="UNSIGNED_SHORT_5_6_5",e[e["UNSIGNED_SHORT_4_4_4_4"]=32819]="UNSIGNED_SHORT_4_4_4_4",e[e["UNSIGNED_SHORT_5_5_5_1"]=32820]="UNSIGNED_SHORT_5_5_5_1",e[e["UNSIGNED_INT"]=5125]="UNSIGNED_INT",e[e["UNSIGNED_INT_10F_11F_11F_REV"]=35899]="UNSIGNED_INT_10F_11F_11F_REV",e[e["UNSIGNED_INT_2_10_10_10_REV"]=33640]="UNSIGNED_INT_2_10_10_10_REV",e[e["UNSIGNED_INT_24_8"]=34042]="UNSIGNED_INT_24_8",e[e["UNSIGNED_INT_5_9_9_9_REV"]=35902]="UNSIGNED_INT_5_9_9_9_REV",e[e["BYTE"]=5120]="BYTE",e[e["SHORT"]=5122]="SHORT",e[e["INT"]=5124]="INT",e[e["FLOAT"]=5126]="FLOAT",e[e["FLOAT_32_UNSIGNED_INT_24_8_REV"]=36269]="FLOAT_32_UNSIGNED_INT_24_8_REV",e[e["HALF_FLOAT"]=36193]="HALF_FLOAT",e))(u||{}),d=(e=>(e[e["FLOAT"]=0]="FLOAT",e[e["INT"]=1]="INT",e[e["UINT"]=2]="UINT",e))(d||{}),h=(e=>(e[e["NEAREST"]=0]="NEAREST",e[e["LINEAR"]=1]="LINEAR",e))(h||{}),p=(e=>(e[e["CLAMP"]=33071]="CLAMP",e[e["REPEAT"]=10497]="REPEAT",e[e["MIRRORED_REPEAT"]=33648]="MIRRORED_REPEAT",e))(p||{}),f=(e=>(e[e["OFF"]=0]="OFF",e[e["POW2"]=1]="POW2",e[e["ON"]=2]="ON",e[e["ON_MANUAL"]=3]="ON_MANUAL",e))(f||{}),g=(e=>(e[e["NPM"]=0]="NPM",e[e["UNPACK"]=1]="UNPACK",e[e["PMA"]=2]="PMA",e[e["NO_PREMULTIPLIED_ALPHA"]=0]="NO_PREMULTIPLIED_ALPHA",e[e["PREMULTIPLY_ON_UPLOAD"]=1]="PREMULTIPLY_ON_UPLOAD",e[e["PREMULTIPLIED_ALPHA"]=2]="PREMULTIPLIED_ALPHA",e))(g||{}),m=(e=>(e[e["NO"]=0]="NO",e[e["YES"]=1]="YES",e[e["AUTO"]=2]="AUTO",e[e["BLEND"]=0]="BLEND",e[e["CLEAR"]=1]="CLEAR",e[e["BLIT"]=2]="BLIT",e))(m||{}),b=(e=>(e[e["AUTO"]=0]="AUTO",e[e["MANUAL"]=1]="MANUAL",e))(b||{}),_=(e=>(e["LOW"]="lowp",e["MEDIUM"]="mediump",e["HIGH"]="highp",e))(_||{}),y=(e=>(e[e["NONE"]=0]="NONE",e[e["SCISSOR"]=1]="SCISSOR",e[e["STENCIL"]=2]="STENCIL",e[e["SPRITE"]=3]="SPRITE",e[e["COLOR"]=4]="COLOR",e))(y||{}),v=(e=>(e[e["NONE"]=0]="NONE",e[e["LOW"]=2]="LOW",e[e["MEDIUM"]=4]="MEDIUM",e[e["HIGH"]=8]="HIGH",e))(v||{}),E=(e=>(e[e["ELEMENT_ARRAY_BUFFER"]=34963]="ELEMENT_ARRAY_BUFFER",e[e["ARRAY_BUFFER"]=34962]="ARRAY_BUFFER",e[e["UNIFORM_BUFFER"]=35345]="UNIFORM_BUFFER",e))(E||{})},2848:function(e,t,n){"use strict";n.d(t,{iw:function(){return r.iw},T$:function(){return r.T$},VL:function(){return R},a$:function(){return k},JZ:function(){return G},Ie:function(){return pe},lW:function(){return O},qm:function(){return A},yl:function(){return r.yl},Cd:function(){return $.Cd},Il:function(){return o.I},ZX:function(){return $.ZX},lg:function(){return r.lg},Pj:function(){return $.Pj},nw:function(){return a},I2:function(){return r.I2},wn:function(){return we},wG:function(){return U},A7:function(){return r.A7},KI:function(){return r.WB},y3:function(){return $.y3},bO:function(){return me},AB:function(){return $.AB},_b:function(){return $._b},E9:function(){return $.E9},mg:function(){return $.mg},$r:function(){return le},ud:function(){return $e},jl:function(){return $.jl},Ae:function(){return $.Ae},TI:function(){return Be},c9:function(){return $.c9},HS:function(){return $.HS},pX:function(){return An},ex:function(){return de},ZM:function(){return E},vK:function(){return r.vK},xE:function(){return Fe},UX:function(){return Qe},vB:function(){return sn},wx:function(){return $.wx},uF:function(){return tn},oo:function(){return ue},Rv:function(){return d},Nt:function(){return r.Nt},e6:function(){return ln},Y9:function(){return hn},kP:function(){return dn},Rw:function(){return u},Xd:function(){return i.Xd},P6:function(){return s}});var r=n(7361),i=n(8706),s=n(4038),o=n(6278),a=(e=>(e["Renderer"]="renderer",e["Application"]="application",e["RendererSystem"]="renderer-webgl-system",e["RendererPlugin"]="renderer-webgl-plugin",e["CanvasRendererSystem"]="renderer-canvas-system",e["CanvasRendererPlugin"]="renderer-canvas-plugin",e["Asset"]="asset",e["LoadParser"]="load-parser",e["ResolveParser"]="resolve-parser",e["CacheParser"]="cache-parser",e["DetectionParser"]="detection-parser",e))(a||{});const l=e=>{if("function"===typeof e||"object"===typeof e&&e.extension){if(!e.extension)throw new Error("Extension class must have an extension object");const t="object"!==typeof e.extension?{type:e.extension}:e.extension;e={...t,ref:e}}if("object"!==typeof e)throw new Error("Invalid extension type");return e={...e},"string"===typeof e.type&&(e.type=[e.type]),e},c=(e,t)=>l(e).priority??t,u={_addHandlers:{},_removeHandlers:{},_queue:{},remove(...e){return e.map(l).forEach((e=>{e.type.forEach((t=>this._removeHandlers[t]?.(e)))})),this},add(...e){return e.map(l).forEach((e=>{e.type.forEach((t=>{const n=this._addHandlers,r=this._queue;n[t]?n[t](e):(r[t]=r[t]||[],r[t].push(e))}))})),this},handle(e,t,n){const r=this._addHandlers,i=this._removeHandlers;if(r[e]||i[e])throw new Error(`Extension type ${e} already has a handler`);r[e]=t,i[e]=n;const s=this._queue;return s[e]&&(s[e].forEach((e=>t(e))),delete s[e]),this},handleByMap(e,t){return this.handle(e,(e=>{t[e.name]=e.ref}),(e=>{delete t[e.name]}))},handleByList(e,t,n=-1){return this.handle(e,(e=>{t.includes(e.ref)||(t.push(e.ref),t.sort(((e,t)=>c(t,n)-c(e,n))))}),(e=>{const n=t.indexOf(e.ref);-1!==n&&t.splice(n,1)}))}};class d{constructor(e){"number"===typeof e?this.rawBinaryData=new ArrayBuffer(e):e instanceof Uint8Array?this.rawBinaryData=e.buffer:this.rawBinaryData=e,this.uint32View=new Uint32Array(this.rawBinaryData),this.float32View=new Float32Array(this.rawBinaryData)}get int8View(){return this._int8View||(this._int8View=new Int8Array(this.rawBinaryData)),this._int8View}get uint8View(){return this._uint8View||(this._uint8View=new Uint8Array(this.rawBinaryData)),this._uint8View}get int16View(){return this._int16View||(this._int16View=new Int16Array(this.rawBinaryData)),this._int16View}get uint16View(){return this._uint16View||(this._uint16View=new Uint16Array(this.rawBinaryData)),this._uint16View}get int32View(){return this._int32View||(this._int32View=new Int32Array(this.rawBinaryData)),this._int32View}view(e){return this[`${e}View`]}destroy(){this.rawBinaryData=null,this._int8View=null,this._uint8View=null,this._int16View=null,this._uint16View=null,this._int32View=null,this.uint32View=null,this.float32View=null}static sizeOf(e){switch(e){case"int8":case"uint8":return 1;case"int16":case"uint16":return 2;case"int32":case"uint32":case"float32":return 4;default:throw new Error(`${e} isn't a valid view type`)}}}const h=["precision mediump float;","void main(void){","float test = 0.1;","%forloop%","gl_FragColor = vec4(0.0);","}"].join("\n");function p(e){let t="";for(let n=0;n0&&(t+="\nelse "),n=0;--r){const i=x[r];if(i.test&&i.test(e,n))return new i(e,t)}throw new Error("Unrecognized source type to auto-detect Resource")}class w{constructor(e){this.items=[],this._name=e,this._aliasCount=0}emit(e,t,n,r,i,s,o,a){if(arguments.length>8)throw new Error("max arguments reached");const{name:l,items:c}=this;this._aliasCount++;for(let u=0,d=c.length;u0&&this.items.length>1&&(this._aliasCount=0,this.items=this.items.slice(0))}add(e){return e[this._name]&&(this.ensureNonAliasedItems(),this.remove(e),this.items.push(e)),this}remove(e){const t=this.items.indexOf(e);return-1!==t&&(this.ensureNonAliasedItems(),this.items.splice(t,1)),this}contains(e){return this.items.includes(e)}removeAll(){return this.ensureNonAliasedItems(),this.items.length=0,this}destroy(){this.removeAll(),this.items=null,this._name=null}get empty(){return 0===this.items.length}get name(){return this._name}}Object.defineProperties(w.prototype,{dispatch:{value:w.prototype.emit},run:{value:w.prototype.emit}});class T{constructor(e=0,t=0){this._width=e,this._height=t,this.destroyed=!1,this.internal=!1,this.onResize=new w("setRealSize"),this.onUpdate=new w("update"),this.onError=new w("onError")}bind(e){this.onResize.add(e),this.onUpdate.add(e),this.onError.add(e),(this._width||this._height)&&this.onResize.emit(this._width,this._height)}unbind(e){this.onResize.remove(e),this.onUpdate.remove(e),this.onError.remove(e)}resize(e,t){e===this._width&&t===this._height||(this._width=e,this._height=t,this.onResize.emit(e,t))}get valid(){return!!this._width&&!!this._height}update(){this.destroyed||this.onUpdate.emit()}load(){return Promise.resolve(this)}get width(){return this._width}get height(){return this._height}style(e,t,n){return!1}dispose(){}destroy(){this.destroyed||(this.destroyed=!0,this.dispose(),this.onError.removeAll(),this.onError=null,this.onResize.removeAll(),this.onResize=null,this.onUpdate.removeAll(),this.onUpdate=null)}static test(e,t){return!1}}class A extends T{constructor(e,t){const{width:n,height:r}=t||{};if(!n||!r)throw new Error("BufferResource width or height invalid");super(n,r),this.data=e}upload(e,t,n){const i=e.gl;i.pixelStorei(i.UNPACK_PREMULTIPLY_ALPHA_WEBGL,t.alphaMode===r.iw.UNPACK);const s=t.realWidth,o=t.realHeight;return n.width===s&&n.height===o?i.texSubImage2D(t.target,0,0,0,s,o,t.format,n.type,this.data):(n.width=s,n.height=o,i.texImage2D(t.target,0,n.internalFormat,s,o,0,t.format,n.type,this.data)),!0}dispose(){this.data=null}static test(e){return e instanceof Float32Array||e instanceof Uint8Array||e instanceof Uint32Array}}const C={scaleMode:r.aH.NEAREST,format:r.I2.RGBA,alphaMode:r.iw.NPM},I=class extends s.EventEmitter{constructor(e=null,t=null){super(),t=Object.assign({},I.defaultOptions,t);const{alphaMode:n,mipmap:r,anisotropicLevel:o,scaleMode:a,width:l,height:c,wrapMode:u,format:d,type:h,target:p,resolution:f,resourceOptions:g}=t;!e||e instanceof T||(e=S(e,g),e.internal=!0),this.resolution=f||i.Xd.RESOLUTION,this.width=Math.round((l||0)*this.resolution)/this.resolution,this.height=Math.round((c||0)*this.resolution)/this.resolution,this._mipmap=r,this.anisotropicLevel=o,this._wrapMode=u,this._scaleMode=a,this.format=d,this.type=h,this.target=p,this.alphaMode=n,this.uid=(0,s.uid)(),this.touched=0,this.isPowerOfTwo=!1,this._refreshPOT(),this._glTextures={},this.dirtyId=0,this.dirtyStyleId=0,this.cacheId=null,this.valid=l>0&&c>0,this.textureCacheIds=[],this.destroyed=!1,this.resource=null,this._batchEnabled=0,this._batchLocation=0,this.parentTextureArray=null,this.setResource(e)}get realWidth(){return Math.round(this.width*this.resolution)}get realHeight(){return Math.round(this.height*this.resolution)}get mipmap(){return this._mipmap}set mipmap(e){this._mipmap!==e&&(this._mipmap=e,this.dirtyStyleId++)}get scaleMode(){return this._scaleMode}set scaleMode(e){this._scaleMode!==e&&(this._scaleMode=e,this.dirtyStyleId++)}get wrapMode(){return this._wrapMode}set wrapMode(e){this._wrapMode!==e&&(this._wrapMode=e,this.dirtyStyleId++)}setStyle(e,t){let n;return void 0!==e&&e!==this.scaleMode&&(this.scaleMode=e,n=!0),void 0!==t&&t!==this.mipmap&&(this.mipmap=t,n=!0),n&&this.dirtyStyleId++,this}setSize(e,t,n){return n=n||this.resolution,this.setRealSize(e*n,t*n,n)}setRealSize(e,t,n){return this.resolution=n||this.resolution,this.width=Math.round(e)/this.resolution,this.height=Math.round(t)/this.resolution,this._refreshPOT(),this.update(),this}_refreshPOT(){this.isPowerOfTwo=(0,s.isPow2)(this.realWidth)&&(0,s.isPow2)(this.realHeight)}setResolution(e){const t=this.resolution;return t===e||(this.resolution=e,this.valid&&(this.width=Math.round(this.width*t)/e,this.height=Math.round(this.height*t)/e,this.emit("update",this)),this._refreshPOT()),this}setResource(e){if(this.resource===e)return this;if(this.resource)throw new Error("Resource can be set only once");return e.bind(this),this.resource=e,this}update(){this.valid?(this.dirtyId++,this.dirtyStyleId++,this.emit("update",this)):this.width>0&&this.height>0&&(this.valid=!0,this.emit("loaded",this),this.emit("update",this))}onError(e){this.emit("error",this,e)}destroy(){this.resource&&(this.resource.unbind(this),this.resource.internal&&this.resource.destroy(),this.resource=null),this.cacheId&&(delete s.BaseTextureCache[this.cacheId],delete s.TextureCache[this.cacheId],this.cacheId=null),this.dispose(),I.removeFromCache(this),this.textureCacheIds=null,this.destroyed=!0}dispose(){this.emit("dispose",this)}castToBaseTexture(){return this}static from(e,t,n=i.Xd.STRICT_TEXTURE_CACHE){const r="string"===typeof e;let o=null;if(r)o=e;else{if(!e._pixiId){const n=t?.pixiIdPrefix||"pixiid";e._pixiId=`${n}_${(0,s.uid)()}`}o=e._pixiId}let a=s.BaseTextureCache[o];if(r&&n&&!a)throw new Error(`The cacheId "${o}" does not exist in BaseTextureCache.`);return a||(a=new I(e,t),a.cacheId=o,I.addToCache(a,o)),a}static fromBuffer(e,t,n,i){e=e||new Float32Array(t*n*4);const s=new A(e,{width:t,height:n}),o=e instanceof Float32Array?r.vK.FLOAT:r.vK.UNSIGNED_BYTE;return new I(s,Object.assign({},C,{type:o},i))}static addToCache(e,t){t&&(e.textureCacheIds.includes(t)||e.textureCacheIds.push(t),s.BaseTextureCache[t]&&s.BaseTextureCache[t]!==e&&console.warn(`BaseTexture added to the cache with an id [${t}] that already had an entry`),s.BaseTextureCache[t]=e)}static removeFromCache(e){if("string"===typeof e){const t=s.BaseTextureCache[e];if(t){const n=t.textureCacheIds.indexOf(e);return n>-1&&t.textureCacheIds.splice(n,1),delete s.BaseTextureCache[e],t}}else if(e?.textureCacheIds){for(let t=0;t1){for(let e=0;e"float"===e.type&&1===e.size&&!e.isArray,code:e=>`\n if(uv["${e}"] !== ud["${e}"].value)\n {\n ud["${e}"].value = uv["${e}"]\n gl.uniform1f(ud["${e}"].location, uv["${e}"])\n }\n `},{test:(e,t)=>("sampler2D"===e.type||"samplerCube"===e.type||"sampler2DArray"===e.type)&&1===e.size&&!e.isArray&&(null==t||void 0!==t.castToBaseTexture),code:e=>`t = syncData.textureCount++;\n\n renderer.texture.bind(uv["${e}"], t);\n\n if(ud["${e}"].value !== t)\n {\n ud["${e}"].value = t;\n gl.uniform1i(ud["${e}"].location, t);\n; // eslint-disable-line max-len\n }`},{test:(e,t)=>"mat3"===e.type&&1===e.size&&!e.isArray&&void 0!==t.a,code:e=>`\n gl.uniformMatrix3fv(ud["${e}"].location, false, uv["${e}"].toArray(true));\n `,codeUbo:e=>`\n var ${e}_matrix = uv.${e}.toArray(true);\n\n data[offset] = ${e}_matrix[0];\n data[offset+1] = ${e}_matrix[1];\n data[offset+2] = ${e}_matrix[2];\n \n data[offset + 4] = ${e}_matrix[3];\n data[offset + 5] = ${e}_matrix[4];\n data[offset + 6] = ${e}_matrix[5];\n \n data[offset + 8] = ${e}_matrix[6];\n data[offset + 9] = ${e}_matrix[7];\n data[offset + 10] = ${e}_matrix[8];\n `},{test:(e,t)=>"vec2"===e.type&&1===e.size&&!e.isArray&&void 0!==t.x,code:e=>`\n cv = ud["${e}"].value;\n v = uv["${e}"];\n\n if(cv[0] !== v.x || cv[1] !== v.y)\n {\n cv[0] = v.x;\n cv[1] = v.y;\n gl.uniform2f(ud["${e}"].location, v.x, v.y);\n }`,codeUbo:e=>`\n v = uv.${e};\n\n data[offset] = v.x;\n data[offset+1] = v.y;\n `},{test:e=>"vec2"===e.type&&1===e.size&&!e.isArray,code:e=>`\n cv = ud["${e}"].value;\n v = uv["${e}"];\n\n if(cv[0] !== v[0] || cv[1] !== v[1])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n gl.uniform2f(ud["${e}"].location, v[0], v[1]);\n }\n `},{test:(e,t)=>"vec4"===e.type&&1===e.size&&!e.isArray&&void 0!==t.width,code:e=>`\n cv = ud["${e}"].value;\n v = uv["${e}"];\n\n if(cv[0] !== v.x || cv[1] !== v.y || cv[2] !== v.width || cv[3] !== v.height)\n {\n cv[0] = v.x;\n cv[1] = v.y;\n cv[2] = v.width;\n cv[3] = v.height;\n gl.uniform4f(ud["${e}"].location, v.x, v.y, v.width, v.height)\n }`,codeUbo:e=>`\n v = uv.${e};\n\n data[offset] = v.x;\n data[offset+1] = v.y;\n data[offset+2] = v.width;\n data[offset+3] = v.height;\n `},{test:(e,t)=>"vec4"===e.type&&1===e.size&&!e.isArray&&void 0!==t.red,code:e=>`\n cv = ud["${e}"].value;\n v = uv["${e}"];\n\n if(cv[0] !== v.red || cv[1] !== v.green || cv[2] !== v.blue || cv[3] !== v.alpha)\n {\n cv[0] = v.red;\n cv[1] = v.green;\n cv[2] = v.blue;\n cv[3] = v.alpha;\n gl.uniform4f(ud["${e}"].location, v.red, v.green, v.blue, v.alpha)\n }`,codeUbo:e=>`\n v = uv.${e};\n\n data[offset] = v.red;\n data[offset+1] = v.green;\n data[offset+2] = v.blue;\n data[offset+3] = v.alpha;\n `},{test:(e,t)=>"vec3"===e.type&&1===e.size&&!e.isArray&&void 0!==t.red,code:e=>`\n cv = ud["${e}"].value;\n v = uv["${e}"];\n\n if(cv[0] !== v.red || cv[1] !== v.green || cv[2] !== v.blue || cv[3] !== v.a)\n {\n cv[0] = v.red;\n cv[1] = v.green;\n cv[2] = v.blue;\n \n gl.uniform3f(ud["${e}"].location, v.red, v.green, v.blue)\n }`,codeUbo:e=>`\n v = uv.${e};\n\n data[offset] = v.red;\n data[offset+1] = v.green;\n data[offset+2] = v.blue;\n `},{test:e=>"vec4"===e.type&&1===e.size&&!e.isArray,code:e=>`\n cv = ud["${e}"].value;\n v = uv["${e}"];\n\n if(cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2] || cv[3] !== v[3])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n cv[3] = v[3];\n\n gl.uniform4f(ud["${e}"].location, v[0], v[1], v[2], v[3])\n }`}],j={float:"\n if (cv !== v)\n {\n cu.value = v;\n gl.uniform1f(location, v);\n }",vec2:"\n if (cv[0] !== v[0] || cv[1] !== v[1])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n\n gl.uniform2f(location, v[0], v[1])\n }",vec3:"\n if (cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n\n gl.uniform3f(location, v[0], v[1], v[2])\n }",vec4:"\n if (cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2] || cv[3] !== v[3])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n cv[3] = v[3];\n\n gl.uniform4f(location, v[0], v[1], v[2], v[3]);\n }",int:"\n if (cv !== v)\n {\n cu.value = v;\n\n gl.uniform1i(location, v);\n }",ivec2:"\n if (cv[0] !== v[0] || cv[1] !== v[1])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n\n gl.uniform2i(location, v[0], v[1]);\n }",ivec3:"\n if (cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n\n gl.uniform3i(location, v[0], v[1], v[2]);\n }",ivec4:"\n if (cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2] || cv[3] !== v[3])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n cv[3] = v[3];\n\n gl.uniform4i(location, v[0], v[1], v[2], v[3]);\n }",uint:"\n if (cv !== v)\n {\n cu.value = v;\n\n gl.uniform1ui(location, v);\n }",uvec2:"\n if (cv[0] !== v[0] || cv[1] !== v[1])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n\n gl.uniform2ui(location, v[0], v[1]);\n }",uvec3:"\n if (cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n\n gl.uniform3ui(location, v[0], v[1], v[2]);\n }",uvec4:"\n if (cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2] || cv[3] !== v[3])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n cv[3] = v[3];\n\n gl.uniform4ui(location, v[0], v[1], v[2], v[3]);\n }",bool:"\n if (cv !== v)\n {\n cu.value = v;\n gl.uniform1i(location, v);\n }",bvec2:"\n if (cv[0] != v[0] || cv[1] != v[1])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n\n gl.uniform2i(location, v[0], v[1]);\n }",bvec3:"\n if (cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n\n gl.uniform3i(location, v[0], v[1], v[2]);\n }",bvec4:"\n if (cv[0] !== v[0] || cv[1] !== v[1] || cv[2] !== v[2] || cv[3] !== v[3])\n {\n cv[0] = v[0];\n cv[1] = v[1];\n cv[2] = v[2];\n cv[3] = v[3];\n\n gl.uniform4i(location, v[0], v[1], v[2], v[3]);\n }",mat2:"gl.uniformMatrix2fv(location, false, v)",mat3:"gl.uniformMatrix3fv(location, false, v)",mat4:"gl.uniformMatrix4fv(location, false, v)",sampler2D:"\n if (cv !== v)\n {\n cu.value = v;\n\n gl.uniform1i(location, v);\n }",samplerCube:"\n if (cv !== v)\n {\n cu.value = v;\n\n gl.uniform1i(location, v);\n }",sampler2DArray:"\n if (cv !== v)\n {\n cu.value = v;\n\n gl.uniform1i(location, v);\n }"},W={float:"gl.uniform1fv(location, v)",vec2:"gl.uniform2fv(location, v)",vec3:"gl.uniform3fv(location, v)",vec4:"gl.uniform4fv(location, v)",mat4:"gl.uniformMatrix4fv(location, false, v)",mat3:"gl.uniformMatrix3fv(location, false, v)",mat2:"gl.uniformMatrix2fv(location, false, v)",int:"gl.uniform1iv(location, v)",ivec2:"gl.uniform2iv(location, v)",ivec3:"gl.uniform3iv(location, v)",ivec4:"gl.uniform4iv(location, v)",uint:"gl.uniform1uiv(location, v)",uvec2:"gl.uniform2uiv(location, v)",uvec3:"gl.uniform3uiv(location, v)",uvec4:"gl.uniform4uiv(location, v)",bool:"gl.uniform1iv(location, v)",bvec2:"gl.uniform2iv(location, v)",bvec3:"gl.uniform3iv(location, v)",bvec4:"gl.uniform4iv(location, v)",sampler2D:"gl.uniform1iv(location, v)",samplerCube:"gl.uniform1iv(location, v)",sampler2DArray:"gl.uniform1iv(location, v)"};function q(e,t){const n=["\n var v = null;\n var cv = null;\n var cu = null;\n var t = 0;\n var gl = renderer.gl;\n "];for(const r in e.uniforms){const i=t[r];if(!i){e.uniforms[r]?.group&&(e.uniforms[r].ubo?n.push(`\n renderer.shader.syncUniformBufferGroup(uv.${r}, '${r}');\n `):n.push(`\n renderer.shader.syncUniformGroup(uv.${r}, syncData);\n `));continue}const s=e.uniforms[r];let o=!1;for(let e=0;e=r.Vi.WEBGL2&&(t=e.getContext("webgl2",{})),t||(t=e.getContext("webgl",{})||e.getContext("experimental-webgl",{}),t?t.getExtension("WEBGL_draw_buffers"):t=null),K=t}return K}function Q(){if(!Y){Y=r.cB.MEDIUM;const e=Z();if(e&&e.getShaderPrecisionFormat){const t=e.getShaderPrecisionFormat(e.FRAGMENT_SHADER,e.HIGH_FLOAT);Y=t.precision?r.cB.HIGH:r.cB.MEDIUM}}return Y}const J={float:1,vec2:2,vec3:3,vec4:4,int:1,ivec2:2,ivec3:3,ivec4:4,uint:1,uvec2:2,uvec3:3,uvec4:4,bool:1,bvec2:2,bvec3:3,bvec4:4,mat2:4,mat3:9,mat4:16,sampler2D:1};function ee(e){return J[e]}let te=null;const ne={FLOAT:"float",FLOAT_VEC2:"vec2",FLOAT_VEC3:"vec3",FLOAT_VEC4:"vec4",INT:"int",INT_VEC2:"ivec2",INT_VEC3:"ivec3",INT_VEC4:"ivec4",UNSIGNED_INT:"uint",UNSIGNED_INT_VEC2:"uvec2",UNSIGNED_INT_VEC3:"uvec3",UNSIGNED_INT_VEC4:"uvec4",BOOL:"bool",BOOL_VEC2:"bvec2",BOOL_VEC3:"bvec3",BOOL_VEC4:"bvec4",FLOAT_MAT2:"mat2",FLOAT_MAT3:"mat3",FLOAT_MAT4:"mat4",SAMPLER_2D:"sampler2D",INT_SAMPLER_2D:"sampler2D",UNSIGNED_INT_SAMPLER_2D:"sampler2D",SAMPLER_CUBE:"samplerCube",INT_SAMPLER_CUBE:"samplerCube",UNSIGNED_INT_SAMPLER_CUBE:"samplerCube",SAMPLER_2D_ARRAY:"sampler2DArray",INT_SAMPLER_2D_ARRAY:"sampler2DArray",UNSIGNED_INT_SAMPLER_2D_ARRAY:"sampler2DArray"};function re(e,t){if(!te){const t=Object.keys(ne);te={};for(let n=0;n0&&(t+="\nelse "),nthis.size&&this.flush(),this._vertexCount+=e.vertexData.length/2,this._indexCount+=e.indices.length,this._bufferedTextures[this._bufferSize]=e._texture.baseTexture,this._bufferedElements[this._bufferSize++]=e)}buildTexturesAndDrawCalls(){const{_bufferedTextures:e,maxTextures:t}=this,n=ye._textureArrayPool,r=this.renderer.batch,i=this._tempBoundTextures,s=this.renderer.textureGC.count;let o=++R._globalBatch,a=0,l=n[0],c=0;r.copyBoundTextures(i,t);for(let u=0;u=t&&(r.boundArray(l,i,o,t),this.buildDrawCalls(l,c,u),c=u,l=n[++a],++o),d._batchEnabled=o,d.touched=s,l.elements[l.count++]=d)}l.count>0&&(r.boundArray(l,i,o,t),this.buildDrawCalls(l,c,this._bufferSize),++a,++o);for(let u=0;u0);for(let o=0;o=0;--r)e[r]=n[r]||null,e[r]&&(e[r]._batchLocation=r)}boundArray(e,t,n,r){const{elements:i,ids:s,count:o}=e;let a=0;for(let l=0;l=0&&o=r.Vi.WEBGL2&&(n=e.getContext("webgl2",t)),n)this.webGLVersion=2;else if(this.webGLVersion=1,n=e.getContext("webgl",t)||e.getContext("experimental-webgl",t),!n)throw new Error("This browser does not support WebGL. Try using the canvas renderer");return this.gl=n,this.getExtensions(),this.gl}getExtensions(){const{gl:e}=this,t={loseContext:e.getExtension("WEBGL_lose_context"),anisotropicFiltering:e.getExtension("EXT_texture_filter_anisotropic"),floatTextureLinear:e.getExtension("OES_texture_float_linear"),s3tc:e.getExtension("WEBGL_compressed_texture_s3tc"),s3tc_sRGB:e.getExtension("WEBGL_compressed_texture_s3tc_srgb"),etc:e.getExtension("WEBGL_compressed_texture_etc"),etc1:e.getExtension("WEBGL_compressed_texture_etc1"),pvrtc:e.getExtension("WEBGL_compressed_texture_pvrtc")||e.getExtension("WEBKIT_WEBGL_compressed_texture_pvrtc"),atc:e.getExtension("WEBGL_compressed_texture_atc"),astc:e.getExtension("WEBGL_compressed_texture_astc")};1===this.webGLVersion?Object.assign(this.extensions,t,{drawBuffers:e.getExtension("WEBGL_draw_buffers"),depthTexture:e.getExtension("WEBGL_depth_texture"),vertexArrayObject:e.getExtension("OES_vertex_array_object")||e.getExtension("MOZ_OES_vertex_array_object")||e.getExtension("WEBKIT_OES_vertex_array_object"),uint32ElementIndex:e.getExtension("OES_element_index_uint"),floatTexture:e.getExtension("OES_texture_float"),floatTextureLinear:e.getExtension("OES_texture_float_linear"),textureHalfFloat:e.getExtension("OES_texture_half_float"),textureHalfFloatLinear:e.getExtension("OES_texture_half_float_linear")}):2===this.webGLVersion&&Object.assign(this.extensions,t,{colorBufferFloat:e.getExtension("EXT_color_buffer_float")})}handleContextLost(e){e.preventDefault(),setTimeout((()=>{this.gl.isContextLost()&&this.extensions.loseContext&&this.extensions.loseContext.restoreContext()}),0)}handleContextRestored(){this.renderer.runners.contextChange.emit(this.gl)}destroy(){const e=this.renderer.view;this.renderer=null,void 0!==e.removeEventListener&&(e.removeEventListener("webglcontextlost",this.handleContextLost),e.removeEventListener("webglcontextrestored",this.handleContextRestored)),this.gl.useProgram(null),this.extensions.loseContext&&this.extensions.loseContext.loseContext()}postrender(){this.renderer.objectRenderer.renderingToScreen&&this.gl.flush()}validateContext(e){const t=e.getContextAttributes(),n="WebGL2RenderingContext"in globalThis&&e instanceof globalThis.WebGL2RenderingContext;n&&(this.webGLVersion=2),t&&!t.stencil&&console.warn("Provided WebGL context does not have a stencil buffer, masks may not render correctly");const r=n||!!e.getExtension("OES_element_index_uint");this.supports.uint32Indices=r,r||console.warn("Provided WebGL context does not support 32 index buffer, complex graphics may not render correctly")}}Ie.defaultOptions={context:null,antialias:!1,premultipliedAlpha:!0,preserveDrawingBuffer:!1,powerPreference:"default"},Ie.extension={type:a.RendererSystem,name:"context"},u.add(Ie);class Re extends A{upload(e,t,n){const i=e.gl;i.pixelStorei(i.UNPACK_PREMULTIPLY_ALPHA_WEBGL,t.alphaMode===r.iw.UNPACK);const s=t.realWidth,o=t.realHeight;return n.width===s&&n.height===o?i.texSubImage2D(t.target,0,0,0,s,o,t.format,n.type,this.data):(n.width=s,n.height=o,i.texImage2D(t.target,0,n.internalFormat,s,o,0,t.format,n.type,this.data)),!0}}class ke{constructor(e,t){this.width=Math.round(e||100),this.height=Math.round(t||100),this.stencil=!1,this.depth=!1,this.dirtyId=0,this.dirtyFormat=0,this.dirtySize=0,this.depthTexture=null,this.colorTextures=[],this.glFramebuffers={},this.disposeRunner=new w("disposeFramebuffer"),this.multisample=r.G5.NONE}get colorTexture(){return this.colorTextures[0]}addColorTexture(e=0,t){return this.colorTextures[e]=t||new R(null,{scaleMode:r.aH.NEAREST,resolution:1,mipmap:r.WB.OFF,width:this.width,height:this.height}),this.dirtyId++,this.dirtyFormat++,this}addDepthTexture(e){return this.depthTexture=e||new R(new Re(null,{width:this.width,height:this.height}),{scaleMode:r.aH.NEAREST,resolution:1,width:this.width,height:this.height,mipmap:r.WB.OFF,format:r.I2.DEPTH_COMPONENT,type:r.vK.UNSIGNED_SHORT}),this.dirtyId++,this.dirtyFormat++,this}enableDepth(){return this.depth=!0,this.dirtyId++,this.dirtyFormat++,this}enableStencil(){return this.stencil=!0,this.dirtyId++,this.dirtyFormat++,this}resize(e,t){if(e=Math.round(e),t=Math.round(t),e!==this.width||t!==this.height){this.width=e,this.height=t,this.dirtyId++,this.dirtySize++;for(let n=0;n{const n=this.source;this.url=n.src;const r=()=>{this.destroyed||(n.onload=null,n.onerror=null,this.resize(n.width,n.height),this._load=null,this.createBitmap?e(this.process()):e(this))};n.complete&&n.src?r():(n.onload=r,n.onerror=e=>{t(e),this.onError.emit(e)})}))),this._load}process(){const e=this.source;if(null!==this._process)return this._process;if(null!==this.bitmap||!globalThis.createImageBitmap)return Promise.resolve(this);const t=globalThis.createImageBitmap,n=!e.crossOrigin||"anonymous"===e.crossOrigin;return this._process=fetch(e.src,{mode:n?"cors":"no-cors"}).then((e=>e.blob())).then((n=>t(n,0,0,e.width,e.height,{premultiplyAlpha:null===this.alphaMode||this.alphaMode===r.iw.UNPACK?"premultiply":"none"}))).then((e=>this.destroyed?Promise.reject():(this.bitmap=e,this.update(),this._process=null,Promise.resolve(this)))),this._process}upload(e,t,n){if("number"===typeof this.alphaMode&&(t.alphaMode=this.alphaMode),!this.createBitmap)return super.upload(e,t,n);if(!this.bitmap&&(this.process(),!this.bitmap))return!1;if(super.upload(e,t,n,this.bitmap),!this.preserveBitmap){let e=!0;const r=t._glTextures;for(const i in r){const s=r[i];if(s!==n&&s.dirtyId!==t.dirtyId){e=!1;break}}e&&(this.bitmap.close&&this.bitmap.close(),this.bitmap=null)}return!0}dispose(){this.source.onload=null,this.source.onerror=null,super.dispose(),this.bitmap&&(this.bitmap.close(),this.bitmap=null),this._process=null,this._load=null}static test(e){return"undefined"!==typeof HTMLImageElement&&("string"===typeof e||e instanceof HTMLImageElement)}}class Me{constructor(){this.x0=0,this.y0=0,this.x1=1,this.y1=0,this.x2=1,this.y2=1,this.x3=0,this.y3=1,this.uvsFloat32=new Float32Array(8)}set(e,t,n){const r=t.width,i=t.height;if(n){const t=e.width/2/r,s=e.height/2/i,o=e.x/r+t,a=e.y/i+s;n=$.Lv.add(n,$.Lv.NW),this.x0=o+t*$.Lv.uX(n),this.y0=a+s*$.Lv.uY(n),n=$.Lv.add(n,2),this.x1=o+t*$.Lv.uX(n),this.y1=a+s*$.Lv.uY(n),n=$.Lv.add(n,2),this.x2=o+t*$.Lv.uX(n),this.y2=a+s*$.Lv.uY(n),n=$.Lv.add(n,2),this.x3=o+t*$.Lv.uX(n),this.y3=a+s*$.Lv.uY(n)}else this.x0=e.x/r,this.y0=e.y/i,this.x1=(e.x+e.width)/r,this.y1=e.y/i,this.x2=(e.x+e.width)/r,this.y2=(e.y+e.height)/i,this.x3=e.x/r,this.y3=(e.y+e.height)/i;this.uvsFloat32[0]=this.x0,this.uvsFloat32[1]=this.y0,this.uvsFloat32[2]=this.x1,this.uvsFloat32[3]=this.y1,this.uvsFloat32[4]=this.x2,this.uvsFloat32[5]=this.y2,this.uvsFloat32[6]=this.x3,this.uvsFloat32[7]=this.y3}toString(){return`[@pixi/core:TextureUvs x0=${this.x0} y0=${this.y0} x1=${this.x1} y1=${this.y1} x2=${this.x2} y2=${this.y2} x3=${this.x3} y3=${this.y3}]`}}const De=new Me;function Le(e){e.destroy=function(){},e.on=function(){},e.once=function(){},e.emit=function(){}}class Fe extends s.EventEmitter{constructor(e,t,n,r,i,s,o){if(super(),this.noFrame=!1,t||(this.noFrame=!0,t=new $.Ae(0,0,1,1)),e instanceof Fe&&(e=e.baseTexture),this.baseTexture=e,this._frame=t,this.trim=r,this.valid=!1,this._uvs=De,this.uvMatrix=null,this.orig=n||t,this._rotate=Number(i||0),!0===i)this._rotate=2;else if(this._rotate%2!==0)throw new Error("attempt to use diamond-shaped UVs. If you are sure, set rotation manually");this.defaultAnchor=s?new $.E9(s.x,s.y):new $.E9(0,0),this.defaultBorders=o,this._updateID=0,this.textureCacheIds=[],e.valid?this.noFrame?e.valid&&this.onBaseTextureUpdated(e):this.frame=t:e.once("loaded",this.onBaseTextureUpdated,this),this.noFrame&&e.on("update",this.onBaseTextureUpdated,this)}update(){this.baseTexture.resource&&this.baseTexture.resource.update()}onBaseTextureUpdated(e){if(this.noFrame){if(!this.baseTexture.valid)return;this._frame.width=e.width,this._frame.height=e.height,this.valid=!0,this.updateUvs()}else this.frame=this._frame;this.emit("update",this)}destroy(e){if(this.baseTexture){if(e){const{resource:e}=this.baseTexture;e?.url&&s.TextureCache[e.url]&&Fe.removeFromCache(e.url),this.baseTexture.destroy()}this.baseTexture.off("loaded",this.onBaseTextureUpdated,this),this.baseTexture.off("update",this.onBaseTextureUpdated,this),this.baseTexture=null}this._frame=null,this._uvs=null,this.trim=null,this.orig=null,this.valid=!1,Fe.removeFromCache(this),this.textureCacheIds=null}clone(){const e=this._frame.clone(),t=this._frame===this.orig?e:this.orig.clone(),n=new Fe(this.baseTexture,!this.noFrame&&e,t,this.trim?.clone(),this.rotate,this.defaultAnchor,this.defaultBorders);return this.noFrame&&(n._frame=e),n}updateUvs(){this._uvs===De&&(this._uvs=new Me),this._uvs.set(this._frame,this.baseTexture,this.rotate),this._updateID++}static from(e,t={},n=i.Xd.STRICT_TEXTURE_CACHE){const r="string"===typeof e;let o=null;if(r)o=e;else if(e instanceof R){if(!e.cacheId){const n=t?.pixiIdPrefix||"pixiid";e.cacheId=`${n}-${(0,s.uid)()}`,R.addToCache(e,e.cacheId)}o=e.cacheId}else{if(!e._pixiId){const n=t?.pixiIdPrefix||"pixiid";e._pixiId=`${n}_${(0,s.uid)()}`}o=e._pixiId}let a=s.TextureCache[o];if(r&&n&&!a)throw new Error(`The cacheId "${o}" does not exist in TextureCache.`);return a||e instanceof R?!a&&e instanceof R&&(a=new Fe(e),Fe.addToCache(a,o)):(t.resolution||(t.resolution=(0,s.getResolutionOfUrl)(e)),a=new Fe(new R(e,t)),a.baseTexture.cacheId=o,R.addToCache(a.baseTexture,o),Fe.addToCache(a,o)),a}static fromURL(e,t){const n=Object.assign({autoLoad:!1},t?.resourceOptions),r=Fe.from(e,Object.assign({resourceOptions:n},t),!1),i=r.baseTexture.resource;return r.baseTexture.valid?Promise.resolve(r):i.load().then((()=>Promise.resolve(r)))}static fromBuffer(e,t,n,r){return new Fe(R.fromBuffer(e,t,n,r))}static fromLoader(e,t,n,r){const i=new R(e,Object.assign({scaleMode:R.defaultOptions.scaleMode,resolution:(0,s.getResolutionOfUrl)(t)},r)),{resource:o}=i;o instanceof Ne&&(o.url=t);const a=new Fe(i);return n||(n=t),R.addToCache(a.baseTexture,n),Fe.addToCache(a,n),n!==t&&(R.addToCache(a.baseTexture,t),Fe.addToCache(a,t)),a.baseTexture.valid?Promise.resolve(a):new Promise((e=>{a.baseTexture.once("loaded",(()=>e(a)))}))}static addToCache(e,t){t&&(e.textureCacheIds.includes(t)||e.textureCacheIds.push(t),s.TextureCache[t]&&s.TextureCache[t]!==e&&console.warn(`Texture added to the cache with an id [${t}] that already had an entry`),s.TextureCache[t]=e)}static removeFromCache(e){if("string"===typeof e){const t=s.TextureCache[e];if(t){const n=t.textureCacheIds.indexOf(e);return n>-1&&t.textureCacheIds.splice(n,1),delete s.TextureCache[e],t}}else if(e?.textureCacheIds){for(let t=0;tthis.baseTexture.width,o=n+i>this.baseTexture.height;if(s||o){const e=s&&o?"and":"or",a=`X: ${t} + ${r} = ${t+r} > ${this.baseTexture.width}`,l=`Y: ${n} + ${i} = ${n+i} > ${this.baseTexture.height}`;throw new Error(`Texture Error: frame does not fit inside the base Texture dimensions: ${a} ${e} ${l}`)}this.valid=r&&i&&this.baseTexture.valid,this.trim||this.rotate||(this.orig=e),this.valid&&this.updateUvs()}get rotate(){return this._rotate}set rotate(e){this._rotate=e,this.valid&&this.updateUvs()}get width(){return this.orig.width}get height(){return this.orig.height}castToBaseTexture(){return this.baseTexture}static get EMPTY(){return Fe._EMPTY||(Fe._EMPTY=new Fe(new R),Le(Fe._EMPTY),Le(Fe._EMPTY.baseTexture)),Fe._EMPTY}static get WHITE(){if(!Fe._WHITE){const e=i.Xd.ADAPTER.createCanvas(16,16),t=e.getContext("2d");e.width=16,e.height=16,t.fillStyle="white",t.fillRect(0,0,16,16),Fe._WHITE=new Fe(R.from(e)),Le(Fe._WHITE),Le(Fe._WHITE.baseTexture)}return Fe._WHITE}}class Be extends Fe{constructor(e,t){super(e,t),this.valid=!0,this.filterFrame=null,this.filterPoolKey=null,this.updateUvs()}get framebuffer(){return this.baseTexture.framebuffer}get multisample(){return this.framebuffer.multisample}set multisample(e){this.framebuffer.multisample=e}resize(e,t,n=!0){const r=this.baseTexture.resolution,i=Math.round(e*r)/r,s=Math.round(t*r)/r;this.valid=i>0&&s>0,this._frame.width=this.orig.width=i,this._frame.height=this.orig.height=s,n&&this.baseTexture.resize(i,s),this.updateUvs()}setResolution(e){const{baseTexture:t}=this;t.resolution!==e&&(t.setResolution(e),this.resize(t.width,t.height,!1))}static create(e){return new Be(new Pe(e))}}class Ue{constructor(e){this.texturePool={},this.textureOptions=e||{},this.enableFullScreen=!1,this._pixelsWidth=0,this._pixelsHeight=0}createTexture(e,t,n=r.G5.NONE){const i=new Pe(Object.assign({width:e,height:t,resolution:1,multisample:n},this.textureOptions));return new Be(i)}getOptimalTexture(e,t,n=1,i=r.G5.NONE){let o;e=Math.ceil(e*n-1e-6),t=Math.ceil(t*n-1e-6),this.enableFullScreen&&e===this._pixelsWidth&&t===this._pixelsHeight?o=i>1?-i:-1:(e=(0,s.nextPow2)(e),t=(0,s.nextPow2)(t),o=((65535&e)<<16|65535&t)>>>0,i>1&&(o+=4294967296*i)),this.texturePool[o]||(this.texturePool[o]=[]);let a=this.texturePool[o].pop();return a||(a=this.createTexture(e,t,i)),a.filterPoolKey=o,a.setResolution(n),a}getFilterTexture(e,t,n){const i=this.getOptimalTexture(e.width,e.height,t||e.resolution,n||r.G5.NONE);return i.filterFrame=e.filterFrame,i}returnTexture(e){const t=e.filterPoolKey;e.filterFrame=null,this.texturePool[t].push(e)}returnFilterTexture(e){this.returnTexture(e)}clear(e){if(e=!1!==e,e)for(const t in this.texturePool){const e=this.texturePool[t];if(e)for(let t=0;t0&&e.height>0;for(const e in this.texturePool){if(!(Number(e)<0))continue;const t=this.texturePool[e];if(t)for(let e=0;e1&&(i=this.getOptimalFilterTexture(e.width,e.height,t.resolution),i.filterFrame=e.filterFrame),n[s].apply(this,e,i,r.yl.CLEAR,t);const o=e;e=i,i=o}n[s].apply(this,e,l.renderTexture,r.yl.BLEND,t),s>1&&t.multisample>1&&this.returnFilterTexture(t.renderTexture),this.returnFilterTexture(e),this.returnFilterTexture(i)}t.clear(),this.statePool.push(t)}bindAndClear(e,t=r.yl.CLEAR){const{renderTexture:n,state:i}=this.renderer;if(e===this.defaultFilterStack[this.defaultFilterStack.length-1].renderTexture?this.renderer.projection.transform=this.activeState.transform:this.renderer.projection.transform=null,e?.filterFrame){const t=this.tempRect;t.x=0,t.y=0,t.width=e.filterFrame.width,t.height=e.filterFrame.height,n.bind(e,e.filterFrame,t)}else e!==this.defaultFilterStack[this.defaultFilterStack.length-1].renderTexture?n.bind(e):this.renderer.renderTexture.bind(e,this.activeState.bindingSourceFrame,this.activeState.bindingDestinationFrame);const s=1&i.stateId||this.forceClear;(t===r.yl.CLEAR||t===r.yl.BLIT&&s)&&this.renderer.framebuffer.clear(0,0,0,0)}applyFilter(e,t,n,i){const s=this.renderer;s.state.set(e.state),this.bindAndClear(n,i),e.uniforms.uSampler=t,e.uniforms.filterGlobals=this.globalUniforms,s.shader.bind(e),e.legacy=!!e.program.attributeData.aTextureCoord,e.legacy?(this.quadUv.map(t._frame,t.filterFrame),s.geometry.bind(this.quadUv),s.geometry.draw(r.lg.TRIANGLES)):(s.geometry.bind(this.quad),s.geometry.draw(r.lg.TRIANGLE_STRIP))}calculateSpriteMatrix(e,t){const{sourceFrame:n,destinationFrame:r}=this.activeState,{orig:i}=t._texture,s=e.set(r.width,0,0,r.height,n.x,n.y),o=t.worldTransform.copyTo($.y3.TEMP_MATRIX);return o.invert(),s.prepend(o),s.scale(1/i.width,1/i.height),s.translate(t.anchor.x,t.anchor.y),s}destroy(){this.renderer=null,this.texturePool.clear(!1)}getOptimalFilterTexture(e,t,n=1,i=r.G5.NONE){return this.texturePool.getOptimalTexture(e,t,n,i)}getFilterTexture(e,t,n){if("number"===typeof e){const n=e;e=t,t=n}e=e||this.activeState.renderTexture;const i=this.texturePool.getOptimalTexture(e.width,e.height,t||e.resolution,n||r.G5.NONE);return i.filterFrame=e.filterFrame,i}returnFilterTexture(e){this.texturePool.returnTexture(e)}emptyPool(){this.texturePool.clear(!0)}resize(){this.texturePool.setScreenSize(this.renderer.view)}transformAABB(e,t){const n=He[0],r=He[1],i=He[2],s=He[3];n.set(t.left,t.top),r.set(t.left,t.bottom),i.set(t.right,t.top),s.set(t.right,t.bottom),e.apply(n,n),e.apply(r,r),e.apply(i,i),e.apply(s,s);const o=Math.min(n.x,r.x,i.x,s.x),a=Math.min(n.y,r.y,i.y,s.y),l=Math.max(n.x,r.x,i.x,s.x),c=Math.max(n.y,r.y,i.y,s.y);t.x=o,t.y=a,t.width=l-o,t.height=c-a}roundFrame(e,t,n,r,i){if(!(e.width<=0||e.height<=0||n.width<=0||n.height<=0)){if(i){const{a:e,b:t,c:n,d:r}=i;if((Math.abs(t)>1e-4||Math.abs(n)>1e-4)&&(Math.abs(e)>1e-4||Math.abs(r)>1e-4))return}i=i?Ve.copyFrom(i):Ve.identity(),i.translate(-n.x,-n.y).scale(r.width/n.width,r.height/n.height).translate(r.x,r.y),this.transformAABB(i,e),e.ceil(t),this.transformAABB(i.invert(),e)}}}je.extension={type:a.RendererSystem,name:"filter"},u.add(je);class We{constructor(e){this.framebuffer=e,this.stencil=null,this.dirtyId=-1,this.dirtyFormat=-1,this.dirtySize=-1,this.multisample=r.G5.NONE,this.msaaBuffer=null,this.blitFramebuffer=null,this.mipLevel=0}}const qe=new $.Ae;class Xe{constructor(e){this.renderer=e,this.managedFramebuffers=[],this.unknownFramebuffer=new ke(10,10),this.msaaSamples=null}contextChange(){this.disposeAll(!0);const e=this.gl=this.renderer.gl;if(this.CONTEXT_UID=this.renderer.CONTEXT_UID,this.current=this.unknownFramebuffer,this.viewport=new $.Ae,this.hasMRT=!0,this.writeDepthTexture=!0,1===this.renderer.context.webGLVersion){let t=this.renderer.context.extensions.drawBuffers,n=this.renderer.context.extensions.depthTexture;i.Xd.PREFER_ENV===r.Vi.WEBGL_LEGACY&&(t=null,n=null),t?e.drawBuffers=e=>t.drawBuffersWEBGL(e):(this.hasMRT=!1,e.drawBuffers=()=>{}),n||(this.writeDepthTexture=!1)}else this.msaaSamples=e.getInternalformatParameter(e.RENDERBUFFER,e.RGBA8,e.SAMPLES)}bind(e,t,n=0){const{gl:r}=this;if(e){const i=e.glFramebuffers[this.CONTEXT_UID]||this.initFramebuffer(e);this.current!==e&&(this.current=e,r.bindFramebuffer(r.FRAMEBUFFER,i.framebuffer)),i.mipLevel!==n&&(e.dirtyId++,e.dirtyFormat++,i.mipLevel=n),i.dirtyId!==e.dirtyId&&(i.dirtyId=e.dirtyId,i.dirtyFormat!==e.dirtyFormat?(i.dirtyFormat=e.dirtyFormat,i.dirtySize=e.dirtySize,this.updateFramebuffer(e,n)):i.dirtySize!==e.dirtySize&&(i.dirtySize=e.dirtySize,this.resizeFramebuffer(e)));for(let t=0;t>n,r=t.height>>n,i=e/t.width;this.setViewport(t.x*i,t.y*i,e,r)}else{const t=e.width>>n,r=e.height>>n;this.setViewport(0,0,t,r)}}else this.current&&(this.current=null,r.bindFramebuffer(r.FRAMEBUFFER,null)),t?this.setViewport(t.x,t.y,t.width,t.height):this.setViewport(0,0,this.renderer.width,this.renderer.height)}setViewport(e,t,n,r){const i=this.viewport;e=Math.round(e),t=Math.round(t),n=Math.round(n),r=Math.round(r),i.width===n&&i.height===r&&i.x===e&&i.y===t||(i.x=e,i.y=t,i.width=n,i.height=r,this.gl.viewport(e,t,n,r))}get size(){return this.current?{x:0,y:0,width:this.current.width,height:this.current.height}:{x:0,y:0,width:this.renderer.width,height:this.renderer.height}}clear(e,t,n,i,s=r.V0.COLOR|r.V0.DEPTH){const{gl:o}=this;o.clearColor(e,t,n,i),o.clear(s)}initFramebuffer(e){const{gl:t}=this,n=new We(t.createFramebuffer());return n.multisample=this.detectSamples(e.multisample),e.glFramebuffers[this.CONTEXT_UID]=n,this.managedFramebuffers.push(e),e.disposeRunner.add(this),n}resizeFramebuffer(e){const{gl:t}=this,n=e.glFramebuffers[this.CONTEXT_UID];n.stencil&&(t.bindRenderbuffer(t.RENDERBUFFER,n.stencil),n.msaaBuffer?t.renderbufferStorageMultisample(t.RENDERBUFFER,n.multisample,t.DEPTH24_STENCIL8,e.width,e.height):t.renderbufferStorage(t.RENDERBUFFER,t.DEPTH_STENCIL,e.width,e.height));const r=e.colorTextures;let i=r.length;t.drawBuffers||(i=Math.min(i,1));for(let s=0;s1&&this.canMultisampleFramebuffer(e)?r.msaaBuffer=r.msaaBuffer||n.createRenderbuffer():r.msaaBuffer&&(n.deleteRenderbuffer(r.msaaBuffer),r.msaaBuffer=null,r.blitFramebuffer&&(r.blitFramebuffer.dispose(),r.blitFramebuffer=null));const o=[];for(let a=0;a1&&n.drawBuffers(o),e.depthTexture){const r=this.writeDepthTexture;if(r){const r=e.depthTexture;this.renderer.texture.bind(r,0),n.framebufferTexture2D(n.FRAMEBUFFER,n.DEPTH_ATTACHMENT,n.TEXTURE_2D,r._glTextures[this.CONTEXT_UID].texture,t)}}!e.stencil&&!e.depth||e.depthTexture&&this.writeDepthTexture?r.stencil&&(n.deleteRenderbuffer(r.stencil),r.stencil=null):(r.stencil=r.stencil||n.createRenderbuffer(),n.bindRenderbuffer(n.RENDERBUFFER,r.stencil),r.msaaBuffer?n.renderbufferStorageMultisample(n.RENDERBUFFER,r.multisample,n.DEPTH24_STENCIL8,e.width,e.height):n.renderbufferStorage(n.RENDERBUFFER,n.DEPTH_STENCIL,e.width,e.height),n.framebufferRenderbuffer(n.FRAMEBUFFER,n.DEPTH_STENCIL_ATTACHMENT,n.RENDERBUFFER,r.stencil))}canMultisampleFramebuffer(e){return 1!==this.renderer.context.webGLVersion&&e.colorTextures.length<=1&&!e.depthTexture}detectSamples(e){const{msaaSamples:t}=this;let n=r.G5.NONE;if(e<=1||null===t)return n;for(let r=0;r=0&&this.managedFramebuffers.splice(i,1),e.disposeRunner.remove(this),t||(r.deleteFramebuffer(n.framebuffer),n.msaaBuffer&&r.deleteRenderbuffer(n.msaaBuffer),n.stencil&&r.deleteRenderbuffer(n.stencil)),n.blitFramebuffer&&this.disposeFramebuffer(n.blitFramebuffer,t)}disposeAll(e){const t=this.managedFramebuffers;this.managedFramebuffers=[];for(let n=0;nt.createVertexArrayOES(),e.bindVertexArray=e=>t.bindVertexArrayOES(e),e.deleteVertexArray=e=>t.deleteVertexArrayOES(e)):(this.hasVao=!1,e.createVertexArray=()=>null,e.bindVertexArray=()=>null,e.deleteVertexArray=()=>null)}if(2!==t.webGLVersion){const t=e.getExtension("ANGLE_instanced_arrays");t?(e.vertexAttribDivisor=(e,n)=>t.vertexAttribDivisorANGLE(e,n),e.drawElementsInstanced=(e,n,r,i,s)=>t.drawElementsInstancedANGLE(e,n,r,i,s),e.drawArraysInstanced=(e,n,r,i)=>t.drawArraysInstancedANGLE(e,n,r,i)):this.hasInstance=!1}this.canUseUInt32ElementIndex=2===t.webGLVersion||!!t.extensions.uint32ElementIndex}bind(e,t){t=t||this.renderer.shader.shader;const{gl:n}=this;let r=e.glVertexArrayObjects[this.CONTEXT_UID],i=!1;r||(this.managedGeometries[e.id]=e,e.disposeRunner.add(this),e.glVertexArrayObjects[this.CONTEXT_UID]=r={},i=!0);const s=r[t.program.id]||this.initGeometryVao(e,t,i);this._activeGeometry=e,this._activeVao!==s&&(this._activeVao=s,this.hasVao?n.bindVertexArray(s):this.activateVao(e,t.program)),this.updateBuffers()}reset(){this.unbind()}updateBuffers(){const e=this._activeGeometry,t=this.renderer.buffer;for(let n=0;n0?this.maskStack[this.maskStack.length-1]._colorMask:15;n!==t&&this.renderer.gl.colorMask(0!==(1&n),0!==(2&n),0!==(4&n),0!==(8&n))}destroy(){this.renderer=null}}rt.extension={type:a.RendererSystem,name:"mask"},u.add(rt);class it{constructor(e){this.renderer=e,this.maskStack=[],this.glConst=0}getStackLength(){return this.maskStack.length}setMaskStack(e){const{gl:t}=this.renderer,n=this.getStackLength();this.maskStack=e;const r=this.getStackLength();r!==n&&(0===r?t.disable(this.glConst):(t.enable(this.glConst),this._useCurrent()))}_useCurrent(){}destroy(){this.renderer=null,this.maskStack=null}}const st=new $.y3,ot=[],at=class extends it{constructor(e){super(e),this.glConst=i.Xd.ADAPTER.getWebGLRenderingContext().SCISSOR_TEST}getStackLength(){const e=this.maskStack[this.maskStack.length-1];return e?e._scissorCounter:0}calcScissorRect(e){if(e._scissorRectLocal)return;const t=e._scissorRect,{maskObject:n}=e,{renderer:r}=this,i=r.renderTexture,s=n.getBounds(!0,ot.pop()??new $.Ae);this.roundFrameToPixels(s,i.current?i.current.resolution:r.resolution,i.sourceFrame,i.destinationFrame,r.projection.transform),t&&s.fit(t),e._scissorRectLocal=s}static isMatrixRotated(e){if(!e)return!1;const{a:t,b:n,c:r,d:i}=e;return(Math.abs(n)>1e-4||Math.abs(r)>1e-4)&&(Math.abs(t)>1e-4||Math.abs(i)>1e-4)}testScissor(e){const{maskObject:t}=e;if(!t.isFastRect||!t.isFastRect())return!1;if(at.isMatrixRotated(t.worldTransform))return!1;if(at.isMatrixRotated(this.renderer.projection.transform))return!1;this.calcScissorRect(e);const n=e._scissorRectLocal;return n.width>0&&n.height>0}roundFrameToPixels(e,t,n,r,i){at.isMatrixRotated(i)||(i=i?st.copyFrom(i):st.identity(),i.translate(-n.x,-n.y).scale(r.width/n.width,r.height/n.height).translate(r.x,r.y),this.renderer.filter.transformAABB(i,e),e.fit(r),e.x=Math.round(e.x*t),e.y=Math.round(e.y*t),e.width=Math.round(e.width*t),e.height=Math.round(e.height*t))}push(e){e._scissorRectLocal||this.calcScissorRect(e);const{gl:t}=this.renderer;e._scissorRect||t.enable(t.SCISSOR_TEST),e._scissorCounter++,e._scissorRect=e._scissorRectLocal,this._useCurrent()}pop(e){const{gl:t}=this.renderer;e&&ot.push(e._scissorRectLocal),this.getStackLength()>0?this._useCurrent():t.disable(t.SCISSOR_TEST)}_useCurrent(){const e=this.maskStack[this.maskStack.length-1]._scissorRect;let t;t=this.renderer.renderTexture.current?e.y:this.renderer.height-e.height-e.y,this.renderer.gl.scissor(e.x,t,e.width,e.height)}};let lt=at;lt.extension={type:a.RendererSystem,name:"scissor"},u.add(lt);class ct extends it{constructor(e){super(e),this.glConst=i.Xd.ADAPTER.getWebGLRenderingContext().STENCIL_TEST}getStackLength(){const e=this.maskStack[this.maskStack.length-1];return e?e._stencilCounter:0}push(e){const t=e.maskObject,{gl:n}=this.renderer,r=e._stencilCounter;0===r&&(this.renderer.framebuffer.forceStencil(),n.clearStencil(0),n.clear(n.STENCIL_BUFFER_BIT),n.enable(n.STENCIL_TEST)),e._stencilCounter++;const i=e._colorMask;0!==i&&(e._colorMask=0,n.colorMask(!1,!1,!1,!1)),n.stencilFunc(n.EQUAL,r,4294967295),n.stencilOp(n.KEEP,n.KEEP,n.INCR),t.renderable=!0,t.render(this.renderer),this.renderer.batch.flush(),t.renderable=!1,0!==i&&(e._colorMask=i,n.colorMask(0!==(1&i),0!==(2&i),0!==(4&i),0!==(8&i))),this._useCurrent()}pop(e){const t=this.renderer.gl;if(0===this.getStackLength())t.disable(t.STENCIL_TEST);else{const n=0!==this.maskStack.length?this.maskStack[this.maskStack.length-1]:null,r=n?n._colorMask:15;0!==r&&(n._colorMask=0,t.colorMask(!1,!1,!1,!1)),t.stencilOp(t.KEEP,t.KEEP,t.DECR),e.renderable=!0,e.render(this.renderer),this.renderer.batch.flush(),e.renderable=!1,0!==r&&(n._colorMask=r,t.colorMask(0!==(1&r),0!==(2&r),0!==(4&r),0!==(8&r))),this._useCurrent()}}_useCurrent(){const e=this.renderer.gl;e.stencilFunc(e.EQUAL,this.getStackLength(),4294967295),e.stencilOp(e.KEEP,e.KEEP,e.KEEP)}}ct.extension={type:a.RendererSystem,name:"stencil"},u.add(ct);class ut{constructor(e){this.renderer=e,this.plugins={},Object.defineProperties(this.plugins,{extract:{enumerable:!1,get(){return(0,s.deprecation)("7.0.0","renderer.plugins.extract has moved to renderer.extract"),e.extract}},prepare:{enumerable:!1,get(){return(0,s.deprecation)("7.0.0","renderer.plugins.prepare has moved to renderer.prepare"),e.prepare}},interaction:{enumerable:!1,get(){return(0,s.deprecation)("7.0.0","renderer.plugins.interaction has been deprecated, use renderer.events"),e.events}}})}init(){const e=this.rendererPlugins;for(const t in e)this.plugins[t]=new e[t](this.renderer)}destroy(){for(const e in this.plugins)this.plugins[e].destroy(),this.plugins[e]=null}}ut.extension={type:[a.RendererSystem,a.CanvasRendererSystem],name:"_plugin"},u.add(ut);class dt{constructor(e){this.renderer=e,this.destinationFrame=null,this.sourceFrame=null,this.defaultFrame=null,this.projectionMatrix=new $.y3,this.transform=null}update(e,t,n,r){this.destinationFrame=e||this.destinationFrame||this.defaultFrame,this.sourceFrame=t||this.sourceFrame||e,this.calculateProjection(this.destinationFrame,this.sourceFrame,n,r),this.transform&&this.projectionMatrix.append(this.transform);const i=this.renderer;i.globalUniforms.uniforms.projectionMatrix=this.projectionMatrix,i.globalUniforms.update(),i.shader.shader&&i.shader.syncUniformGroup(i.shader.shader.uniforms.globals)}calculateProjection(e,t,n,r){const i=this.projectionMatrix,s=r?-1:1;i.identity(),i.a=1/t.width*2,i.d=s*(1/t.height*2),i.tx=-1-t.x*i.a,i.ty=-s-t.y*i.d}setTransform(e){}destroy(){this.renderer=null}}dt.extension={type:a.RendererSystem,name:"projection"},u.add(dt);const ht=new $.wx;class pt{constructor(e){this.renderer=e,this._tempMatrix=new $.y3}generateTexture(e,t){const{region:n,...r}=t||{},i=n||e.getLocalBounds(null,!0);0===i.width&&(i.width=1),0===i.height&&(i.height=1);const s=Be.create({width:i.width,height:i.height,...r});this._tempMatrix.tx=-i.x,this._tempMatrix.ty=-i.y;const o=e.transform;return e.transform=ht,this.renderer.render(e,{renderTexture:s,transform:this._tempMatrix,skipUpdateTransform:!!e.parent,blit:!0}),e.transform=o,s}destroy(){}}pt.extension={type:[a.RendererSystem,a.CanvasRendererSystem],name:"textureGenerator"},u.add(pt);const ft=new $.Ae,gt=new $.Ae;class mt{constructor(e){this.renderer=e,this.defaultMaskStack=[],this.current=null,this.sourceFrame=new $.Ae,this.destinationFrame=new $.Ae,this.viewportFrame=new $.Ae}contextChange(){const e=this.renderer?.gl.getContextAttributes();this._rendererPremultipliedAlpha=!!(e&&e.alpha&&e.premultipliedAlpha)}bind(e=null,t,n){const r=this.renderer;let i,s,o;this.current=e,e?(i=e.baseTexture,o=i.resolution,t||(ft.width=e.frame.width,ft.height=e.frame.height,t=ft),n||(gt.x=e.frame.x,gt.y=e.frame.y,gt.width=t.width,gt.height=t.height,n=gt),s=i.framebuffer):(o=r.resolution,t||(ft.width=r._view.screen.width,ft.height=r._view.screen.height,t=ft),n||(n=ft,n.width=t.width,n.height=t.height));const a=this.viewportFrame;a.x=n.x*o,a.y=n.y*o,a.width=n.width*o,a.height=n.height*o,e||(a.y=r.view.height-(a.y+a.height)),a.ceil(),this.renderer.framebuffer.bind(s,a),this.renderer.projection.update(n,t,o,!s),e?this.renderer.mask.setMaskStack(i.maskStack):this.renderer.mask.setMaskStack(this.defaultMaskStack),this.sourceFrame.copyFrom(t),this.destinationFrame.copyFrom(n)}clear(e,t){const n=this.current?this.current.baseTexture.clear:this.renderer.background.backgroundColor,r=o.I.shared.setValue(e||n);(this.current&&this.current.baseTexture.alphaMode>0||!this.current&&this._rendererPremultipliedAlpha)&&r.premultiply(r.alpha);const i=this.destinationFrame,s=this.current?this.current.baseTexture:this.renderer._view.screen,a=i.width!==s.width||i.height!==s.height;if(a){let{x:e,y:t,width:n,height:r}=this.viewportFrame;e=Math.round(e),t=Math.round(t),n=Math.round(n),r=Math.round(r),this.renderer.gl.enable(this.renderer.gl.SCISSOR_TEST),this.renderer.gl.scissor(e,t,n,r)}this.renderer.framebuffer.clear(r.red,r.green,r.blue,r.alpha,t),a&&this.renderer.scissor.pop()}resize(){this.bind(null)}reset(){this.bind(null)}destroy(){this.renderer=null}}mt.extension={type:a.RendererSystem,name:"renderTexture"},u.add(mt);class bt{constructor(e,t){this.program=e,this.uniformData=t,this.uniformGroups={},this.uniformDirtyGroups={},this.uniformBufferBindings={}}destroy(){this.uniformData=null,this.uniformGroups=null,this.uniformDirtyGroups=null,this.uniformBufferBindings=null,this.program=null}}function _t(e,t,n){const r=e.createShader(t);return e.shaderSource(r,n),e.compileShader(r),r}function yt(e){const t=new Array(e);for(let n=0;n`${t}: ${e}`)),r=e.getShaderInfoLog(t),i=r.split("\n"),s={},o=i.map((e=>parseFloat(e.replace(/^ERROR\: 0\:([\d]+)\:.*$/,"$1")))).filter((e=>!(!e||s[e])&&(s[e]=!0,!0))),a=[""];o.forEach((e=>{n[e-1]=`%c${n[e-1]}%c`,a.push("background: #FF0000; color:#FFFFFF; font-size: 10px","font-size: 10px")}));const l=n.join("\n");a[0]=l,console.error(r),console.groupCollapsed("click to view full shader code"),console.warn(...a),console.groupEnd()}function wt(e,t,n,r){e.getProgramParameter(t,e.LINK_STATUS)||(e.getShaderParameter(n,e.COMPILE_STATUS)||St(e,n),e.getShaderParameter(r,e.COMPILE_STATUS)||St(e,r),console.error("PixiJS Error: Could not initialize shader."),""!==e.getProgramInfoLog(t)&&console.warn("PixiJS Warning: gl.getProgramInfoLog()",e.getProgramInfoLog(t)))}function Tt(e,t){const n=_t(e,e.VERTEX_SHADER,t.vertexSrc),r=_t(e,e.FRAGMENT_SHADER,t.fragmentSrc),i=e.createProgram();e.attachShader(i,n),e.attachShader(i,r);const s=t.extra?.transformFeedbackVaryings;if(s&&("function"!==typeof e.transformFeedbackVaryings?console.warn("TransformFeedback is not supported but TransformFeedbackVaryings are given."):e.transformFeedbackVaryings(i,s.names,"separate"===s.bufferMode?e.SEPARATE_ATTRIBS:e.INTERLEAVED_ATTRIBS)),e.linkProgram(i),e.getProgramParameter(i,e.LINK_STATUS)||wt(e,i,n,r),t.attributeData=Et(i,e),t.uniformData=xt(i,e),!/^[ \t]*#[ \t]*version[ \t]+300[ \t]+es[ \t]*$/m.test(t.vertexSrc)){const n=Object.keys(t.attributeData);n.sort(((e,t)=>e>t?1:-1));for(let r=0;r({data:e,offset:0,dataLen:0,dirty:0})));let n=0,r=0,i=0;for(let s=0;s1&&(n=Math.max(n,16)*e.data.size),e.dataLen=n,r%n!==0&&r<16){const e=r%n%16;r+=e,i+=e}r+n>16?(i=16*Math.ceil(i/16),e.offset=i,i+=n,r=n):(e.offset=i,r+=n,i+=n)}return i=16*Math.ceil(i/16),{uboElements:t,size:i}}function kt(e,t){const n=[];for(const r in e)t[r]&&n.push(t[r]);return n.sort(((e,t)=>e.index-t.index)),n}function Pt(e,t){if(!e.autoManage)return{size:0,syncFunc:At};const n=kt(e.uniforms,t),{uboElements:r,size:i}=Rt(n),s=["\n var v = null;\n var v2 = null;\n var cv = null;\n var t = 0;\n var gl = renderer.gl\n var index = 0;\n var data = buffer.data;\n "];for(let o=0;o1){const e=ee(t.data.type),n=Math.max(It[t.data.type]/16,1),r=e/n,o=(4-r%4)%4;s.push(`\n cv = ud.${i}.value;\n v = uv.${i};\n offset = ${t.offset/4};\n\n t = 0;\n\n for(var i=0; i < ${t.data.size*n}; i++)\n {\n for(var j = 0; j < ${r}; j++)\n {\n data[offset++] = v[t++];\n }\n offset += ${o};\n }\n\n `)}else{const e=Ct[t.data.type];s.push(`\n cv = ud.${i}.value;\n v = uv.${i};\n offset = ${t.offset/4};\n ${e};\n `)}}return s.push("\n renderer.buffer.update(buffer);\n "),{size:i,syncFunc:new Function("ud","uv","renderer","syncData","buffer",s.join("\n"))}}let Ot;function Nt(){if("boolean"===typeof Ot)return Ot;try{const e=new Function("param1","param2","param3","return param1[param2] === param3;");Ot=!0===e({a:"b"},"a","b")}catch(e){Ot=!1}return Ot}let Mt=0;const Dt={textureCount:0,uboCount:0};class Lt{constructor(e){this.destroyed=!1,this.renderer=e,this.systemCheck(),this.gl=null,this.shader=null,this.program=null,this.cache={},this._uboCache={},this.id=Mt++}systemCheck(){if(!Nt())throw new Error("Current environment does not allow unsafe-eval, please use @pixi/unsafe-eval module to enable support.")}contextChange(e){this.gl=e,this.reset()}bind(e,t){e.disposeRunner.add(this),e.uniforms.globals=this.renderer.globalUniforms;const n=e.program,r=n.glPrograms[this.renderer.CONTEXT_UID]||this.generateProgram(e);return this.shader=e,this.program!==n&&(this.program=n,this.gl.useProgram(r.program)),t||(Dt.textureCount=0,Dt.uboCount=0,this.syncUniformGroup(e.uniformGroup,Dt)),r}setUniforms(e){const t=this.shader.program,n=t.glPrograms[this.renderer.CONTEXT_UID];t.syncUniforms(n.uniformData,e,this.renderer)}syncUniformGroup(e,t){const n=this.getGlProgram();e.static&&e.dirtyId===n.uniformDirtyGroups[e.id]||(n.uniformDirtyGroups[e.id]=e.dirtyId,this.syncUniforms(e,n,t))}syncUniforms(e,t,n){const r=e.syncUniforms[this.shader.program.id]||this.createSyncGroups(e);r(t.uniformData,e.uniforms,this.renderer,n)}createSyncGroups(e){const t=this.getSignature(e,this.shader.program.uniformData,"u");return this.cache[t]||(this.cache[t]=q(e,this.shader.program.uniformData)),e.syncUniforms[this.shader.program.id]=this.cache[t],e.syncUniforms[this.shader.program.id]}syncUniformBufferGroup(e,t){const n=this.getGlProgram();if(!e.static||0!==e.dirtyId||!n.uniformGroups[e.id]){e.dirtyId=0;const r=n.uniformGroups[e.id]||this.createSyncBufferGroup(e,n,t);e.buffer.update(),r(n.uniformData,e.uniforms,this.renderer,Dt,e.buffer)}this.renderer.buffer.bindBufferBase(e.buffer,n.uniformBufferBindings[t])}createSyncBufferGroup(e,t,n){const{gl:r}=this.renderer;this.renderer.buffer.bind(e.buffer);const i=this.gl.getUniformBlockIndex(t.program,n);t.uniformBufferBindings[n]=this.shader.uniformBindCount,r.uniformBlockBinding(t.program,i,this.shader.uniformBindCount),this.shader.uniformBindCount++;const s=this.getSignature(e,this.shader.program.uniformData,"ubo");let o=this._uboCache[s];if(o||(o=this._uboCache[s]=Pt(e,this.shader.program.uniformData)),e.autoManage){const t=new Float32Array(o.size/4);e.buffer.update(t)}return t.uniformGroups[e.id]=o.syncFunc,t.uniformGroups[e.id]}getSignature(e,t,n){const r=e.uniforms,i=[`${n}-`];for(const s in r)i.push(s),t[s]&&i.push(t[s].type);return i.join("-")}getGlProgram(){return this.shader?this.shader.program.glPrograms[this.renderer.CONTEXT_UID]:null}generateProgram(e){const t=this.gl,n=e.program,r=Tt(t,n);return n.glPrograms[this.renderer.CONTEXT_UID]=r,r}reset(){this.program=null,this.shader=null}disposeShader(e){this.shader===e&&(this.shader=null)}destroy(){this.renderer=null,this.destroyed=!0}}Lt.extension={type:a.RendererSystem,name:"shader"},u.add(Lt);class Ft{constructor(e){this.renderer=e}run(e){const{renderer:t}=this;t.runners.init.emit(t.options),e.hello&&console.log(`PixiJS 7.2.4 - ${t.rendererLogId} - https://pixijs.com`),t.resize(t.screen.width,t.screen.height)}destroy(){}}function Bt(e,t=[]){return t[r.T$.NORMAL]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.ADD]=[e.ONE,e.ONE],t[r.T$.MULTIPLY]=[e.DST_COLOR,e.ONE_MINUS_SRC_ALPHA,e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.SCREEN]=[e.ONE,e.ONE_MINUS_SRC_COLOR,e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.OVERLAY]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.DARKEN]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.LIGHTEN]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.COLOR_DODGE]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.COLOR_BURN]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.HARD_LIGHT]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.SOFT_LIGHT]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.DIFFERENCE]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.EXCLUSION]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.HUE]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.SATURATION]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.COLOR]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.LUMINOSITY]=[e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.NONE]=[0,0],t[r.T$.NORMAL_NPM]=[e.SRC_ALPHA,e.ONE_MINUS_SRC_ALPHA,e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.ADD_NPM]=[e.SRC_ALPHA,e.ONE,e.ONE,e.ONE],t[r.T$.SCREEN_NPM]=[e.SRC_ALPHA,e.ONE_MINUS_SRC_COLOR,e.ONE,e.ONE_MINUS_SRC_ALPHA],t[r.T$.SRC_IN]=[e.DST_ALPHA,e.ZERO],t[r.T$.SRC_OUT]=[e.ONE_MINUS_DST_ALPHA,e.ZERO],t[r.T$.SRC_ATOP]=[e.DST_ALPHA,e.ONE_MINUS_SRC_ALPHA],t[r.T$.DST_OVER]=[e.ONE_MINUS_DST_ALPHA,e.ONE],t[r.T$.DST_IN]=[e.ZERO,e.SRC_ALPHA],t[r.T$.DST_OUT]=[e.ZERO,e.ONE_MINUS_SRC_ALPHA],t[r.T$.DST_ATOP]=[e.ONE_MINUS_DST_ALPHA,e.SRC_ALPHA],t[r.T$.XOR]=[e.ONE_MINUS_DST_ALPHA,e.ONE_MINUS_SRC_ALPHA],t[r.T$.SUBTRACT]=[e.ONE,e.ONE,e.ONE,e.ONE,e.FUNC_REVERSE_SUBTRACT,e.FUNC_ADD],t}Ft.defaultOptions={hello:!1},Ft.extension={type:[a.RendererSystem,a.CanvasRendererSystem],name:"startup"},u.add(Ft);const Ut=0,Gt=1,$t=2,zt=3,Ht=4,Vt=5,jt=class{constructor(){this.gl=null,this.stateId=0,this.polygonOffset=0,this.blendMode=r.T$.NONE,this._blendEq=!1,this.map=[],this.map[Ut]=this.setBlend,this.map[Gt]=this.setOffset,this.map[$t]=this.setCullFace,this.map[zt]=this.setDepthTest,this.map[Ht]=this.setFrontFace,this.map[Vt]=this.setDepthMask,this.checks=[],this.defaultState=new E,this.defaultState.blend=!0}contextChange(e){this.gl=e,this.blendModes=Bt(e),this.set(this.defaultState),this.reset()}set(e){if(e=e||this.defaultState,this.stateId!==e.data){let t=this.stateId^e.data,n=0;while(t)1&t&&this.map[n].call(this,!!(e.data&1<>=1,n++;this.stateId=e.data}for(let t=0;te.systems[t])),n=[...t,...Object.keys(e.systems).filter((e=>!t.includes(e)))];for(const r of n)this.addSystem(e.systems[r],r)}addRunners(...e){e.forEach((e=>{this.runners[e]=new w(e)}))}addSystem(e,t){const n=new e(this);if(this[t])throw new Error(`Whoops! The name "${t}" is already in use`);this[t]=n,this._systemsHash[t]=n;for(const r in this.runners)this.runners[r].add(n);return this}emitWithCustomOptions(e,t){const n=Object.keys(this._systemsHash);e.items.forEach((r=>{const i=n.find((e=>this._systemsHash[e]===r));r[e.name](t[i])}))}destroy(){Object.values(this.runners).forEach((e=>{e.destroy()})),this._systemsHash={}}}const Xt=class{constructor(e){this.renderer=e,this.count=0,this.checkCount=0,this.maxIdle=Xt.defaultMaxIdle,this.checkCountMax=Xt.defaultCheckCountMax,this.mode=Xt.defaultMode}postrender(){this.renderer.objectRenderer.renderingToScreen&&(this.count++,this.mode!==r.UN.MANUAL&&(this.checkCount++,this.checkCount>this.checkCountMax&&(this.checkCount=0,this.run())))}run(){const e=this.renderer.texture,t=e.managedTextures;let n=!1;for(let r=0;rthis.maxIdle&&(e.destroyTexture(i,!0),t[r]=null,n=!0)}if(n){let e=0;for(let n=0;n=0;r--)this.unload(e.children[r])}destroy(){this.renderer=null}};let Yt=Xt;Yt.defaultMode=r.UN.AUTO,Yt.defaultMaxIdle=3600,Yt.defaultCheckCountMax=600,Yt.extension={type:a.RendererSystem,name:"textureGC"},u.add(Yt);class Kt{constructor(e){this.texture=e,this.width=-1,this.height=-1,this.dirtyId=-1,this.dirtyStyleId=-1,this.mipmap=!1,this.wrapMode=33071,this.type=r.vK.UNSIGNED_BYTE,this.internalFormat=r.I2.RGBA,this.samplerType=0}}function Zt(e){let t;return t="WebGL2RenderingContext"in globalThis&&e instanceof globalThis.WebGL2RenderingContext?{[r.vK.UNSIGNED_BYTE]:{[r.I2.RGBA]:e.RGBA8,[r.I2.RGB]:e.RGB8,[r.I2.RG]:e.RG8,[r.I2.RED]:e.R8,[r.I2.RGBA_INTEGER]:e.RGBA8UI,[r.I2.RGB_INTEGER]:e.RGB8UI,[r.I2.RG_INTEGER]:e.RG8UI,[r.I2.RED_INTEGER]:e.R8UI,[r.I2.ALPHA]:e.ALPHA,[r.I2.LUMINANCE]:e.LUMINANCE,[r.I2.LUMINANCE_ALPHA]:e.LUMINANCE_ALPHA},[r.vK.BYTE]:{[r.I2.RGBA]:e.RGBA8_SNORM,[r.I2.RGB]:e.RGB8_SNORM,[r.I2.RG]:e.RG8_SNORM,[r.I2.RED]:e.R8_SNORM,[r.I2.RGBA_INTEGER]:e.RGBA8I,[r.I2.RGB_INTEGER]:e.RGB8I,[r.I2.RG_INTEGER]:e.RG8I,[r.I2.RED_INTEGER]:e.R8I},[r.vK.UNSIGNED_SHORT]:{[r.I2.RGBA_INTEGER]:e.RGBA16UI,[r.I2.RGB_INTEGER]:e.RGB16UI,[r.I2.RG_INTEGER]:e.RG16UI,[r.I2.RED_INTEGER]:e.R16UI,[r.I2.DEPTH_COMPONENT]:e.DEPTH_COMPONENT16},[r.vK.SHORT]:{[r.I2.RGBA_INTEGER]:e.RGBA16I,[r.I2.RGB_INTEGER]:e.RGB16I,[r.I2.RG_INTEGER]:e.RG16I,[r.I2.RED_INTEGER]:e.R16I},[r.vK.UNSIGNED_INT]:{[r.I2.RGBA_INTEGER]:e.RGBA32UI,[r.I2.RGB_INTEGER]:e.RGB32UI,[r.I2.RG_INTEGER]:e.RG32UI,[r.I2.RED_INTEGER]:e.R32UI,[r.I2.DEPTH_COMPONENT]:e.DEPTH_COMPONENT24},[r.vK.INT]:{[r.I2.RGBA_INTEGER]:e.RGBA32I,[r.I2.RGB_INTEGER]:e.RGB32I,[r.I2.RG_INTEGER]:e.RG32I,[r.I2.RED_INTEGER]:e.R32I},[r.vK.FLOAT]:{[r.I2.RGBA]:e.RGBA32F,[r.I2.RGB]:e.RGB32F,[r.I2.RG]:e.RG32F,[r.I2.RED]:e.R32F,[r.I2.DEPTH_COMPONENT]:e.DEPTH_COMPONENT32F},[r.vK.HALF_FLOAT]:{[r.I2.RGBA]:e.RGBA16F,[r.I2.RGB]:e.RGB16F,[r.I2.RG]:e.RG16F,[r.I2.RED]:e.R16F},[r.vK.UNSIGNED_SHORT_5_6_5]:{[r.I2.RGB]:e.RGB565},[r.vK.UNSIGNED_SHORT_4_4_4_4]:{[r.I2.RGBA]:e.RGBA4},[r.vK.UNSIGNED_SHORT_5_5_5_1]:{[r.I2.RGBA]:e.RGB5_A1},[r.vK.UNSIGNED_INT_2_10_10_10_REV]:{[r.I2.RGBA]:e.RGB10_A2,[r.I2.RGBA_INTEGER]:e.RGB10_A2UI},[r.vK.UNSIGNED_INT_10F_11F_11F_REV]:{[r.I2.RGB]:e.R11F_G11F_B10F},[r.vK.UNSIGNED_INT_5_9_9_9_REV]:{[r.I2.RGB]:e.RGB9_E5},[r.vK.UNSIGNED_INT_24_8]:{[r.I2.DEPTH_STENCIL]:e.DEPTH24_STENCIL8},[r.vK.FLOAT_32_UNSIGNED_INT_24_8_REV]:{[r.I2.DEPTH_STENCIL]:e.DEPTH32F_STENCIL8}}:{[r.vK.UNSIGNED_BYTE]:{[r.I2.RGBA]:e.RGBA,[r.I2.RGB]:e.RGB,[r.I2.ALPHA]:e.ALPHA,[r.I2.LUMINANCE]:e.LUMINANCE,[r.I2.LUMINANCE_ALPHA]:e.LUMINANCE_ALPHA},[r.vK.UNSIGNED_SHORT_5_6_5]:{[r.I2.RGB]:e.RGB},[r.vK.UNSIGNED_SHORT_4_4_4_4]:{[r.I2.RGBA]:e.RGBA},[r.vK.UNSIGNED_SHORT_5_5_5_1]:{[r.I2.RGBA]:e.RGBA}},t}class Qt{constructor(e){this.renderer=e,this.boundTextures=[],this.currentLocation=-1,this.managedTextures=[],this._unknownBoundTextures=!1,this.unknownTexture=new R,this.hasIntegerTextures=!1}contextChange(){const e=this.gl=this.renderer.gl;this.CONTEXT_UID=this.renderer.CONTEXT_UID,this.webGLVersion=this.renderer.context.webGLVersion,this.internalFormats=Zt(e);const t=e.getParameter(e.MAX_TEXTURE_IMAGE_UNITS);this.boundTextures.length=t;for(let r=0;r=0;--s){const e=t[s];if(e){const t=e._glTextures[i];t.samplerType!==r.oT.FLOAT&&this.renderer.texture.unbind(e)}}}initTexture(e){const t=new Kt(this.gl.createTexture());return t.dirtyId=-1,e._glTextures[this.CONTEXT_UID]=t,this.managedTextures.push(e),e.on("dispose",this.destroyTexture,this),t}initTextureType(e,t){t.internalFormat=this.internalFormats[e.type]?.[e.format]??e.format,2===this.webGLVersion&&e.type===r.vK.HALF_FLOAT?t.type=this.gl.HALF_FLOAT:t.type=e.type}updateTexture(e){const t=e._glTextures[this.CONTEXT_UID];if(!t)return;const n=this.renderer;if(this.initTextureType(e,t),e.resource?.upload(n,e,t))t.samplerType!==r.oT.FLOAT&&(this.hasIntegerTextures=!0);else{const r=e.realWidth,i=e.realHeight,s=n.gl;(t.width!==r||t.height!==i||t.dirtyId<0)&&(t.width=r,t.height=i,s.texImage2D(e.target,0,t.internalFormat,r,i,0,e.format,t.type,null))}e.dirtyStyleId!==t.dirtyStyleId&&this.updateTextureStyle(e),t.dirtyId=e.dirtyId}destroyTexture(e,t){const{gl:n}=this;if(e=e.castToBaseTexture(),e._glTextures[this.CONTEXT_UID]&&(this.unbind(e),n.deleteTexture(e._glTextures[this.CONTEXT_UID].texture),e.off("dispose",this.destroyTexture,this),delete e._glTextures[this.CONTEXT_UID],!t)){const t=this.managedTextures.indexOf(e);-1!==t&&(0,s.removeItems)(this.managedTextures,t,1)}}updateTextureStyle(e){const t=e._glTextures[this.CONTEXT_UID];t&&(e.mipmap!==r.WB.POW2&&2===this.webGLVersion||e.isPowerOfTwo?t.mipmap=e.mipmap>=1:t.mipmap=!1,2===this.webGLVersion||e.isPowerOfTwo?t.wrapMode=e.wrapMode:t.wrapMode=r.Nt.CLAMP,e.resource?.style(this.renderer,e,t)||this.setStyle(e,t),t.dirtyStyleId=e.dirtyStyleId)}setStyle(e,t){const n=this.gl;if(t.mipmap&&e.mipmap!==r.WB.ON_MANUAL&&n.generateMipmap(e.target),n.texParameteri(e.target,n.TEXTURE_WRAP_S,t.wrapMode),n.texParameteri(e.target,n.TEXTURE_WRAP_T,t.wrapMode),t.mipmap){n.texParameteri(e.target,n.TEXTURE_MIN_FILTER,e.scaleMode===r.aH.LINEAR?n.LINEAR_MIPMAP_LINEAR:n.NEAREST_MIPMAP_NEAREST);const t=this.renderer.context.extensions.anisotropicFiltering;if(t&&e.anisotropicLevel>0&&e.scaleMode===r.aH.LINEAR){const r=Math.min(e.anisotropicLevel,n.getParameter(t.MAX_TEXTURE_MAX_ANISOTROPY_EXT));n.texParameterf(e.target,t.TEXTURE_MAX_ANISOTROPY_EXT,r)}}else n.texParameteri(e.target,n.TEXTURE_MIN_FILTER,e.scaleMode===r.aH.LINEAR?n.LINEAR:n.NEAREST);n.texParameteri(e.target,n.TEXTURE_MAG_FILTER,e.scaleMode===r.aH.LINEAR?n.LINEAR:n.NEAREST)}destroy(){this.renderer=null}}Qt.extension={type:a.RendererSystem,name:"texture"},u.add(Qt);class Jt{constructor(e){this.renderer=e}contextChange(){this.gl=this.renderer.gl,this.CONTEXT_UID=this.renderer.CONTEXT_UID}bind(e){const{gl:t,CONTEXT_UID:n}=this,r=e._glTransformFeedbacks[n]||this.createGLTransformFeedback(e);t.bindTransformFeedback(t.TRANSFORM_FEEDBACK,r)}unbind(){const{gl:e}=this;e.bindTransformFeedback(e.TRANSFORM_FEEDBACK,null)}beginTransformFeedback(e,t){const{gl:n,renderer:r}=this;t&&r.shader.bind(t),n.beginTransformFeedback(e)}endTransformFeedback(){const{gl:e}=this;e.endTransformFeedback()}createGLTransformFeedback(e){const{gl:t,renderer:n,CONTEXT_UID:r}=this,i=t.createTransformFeedback();e._glTransformFeedbacks[r]=i,t.bindTransformFeedback(t.TRANSFORM_FEEDBACK,i);for(let s=0;s(e[e["INTERACTION"]=50]="INTERACTION",e[e["HIGH"]=25]="HIGH",e[e["NORMAL"]=0]="NORMAL",e[e["LOW"]=-25]="LOW",e[e["UTILITY"]=-50]="UTILITY",e))(tn||{});class nn{constructor(e,t=null,n=0,r=!1){this.next=null,this.previous=null,this._destroyed=!1,this.fn=e,this.context=t,this.priority=n,this.once=r}match(e,t=null){return this.fn===e&&this.context===t}emit(e){this.fn&&(this.context?this.fn.call(this.context,e):this.fn(e));const t=this.next;return this.once&&this.destroy(!0),this._destroyed&&(this.next=null),t}connect(e){this.previous=e,e.next&&(e.next.previous=this),this.next=e.next,e.next=this}destroy(e=!1){this._destroyed=!0,this.fn=null,this.context=null,this.previous&&(this.previous.next=this.next),this.next&&(this.next.previous=this.previous);const t=this.next;return this.next=e?null:t,this.previous=null,t}}const rn=class{constructor(){this.autoStart=!1,this.deltaTime=1,this.lastTime=-1,this.speed=1,this.started=!1,this._requestId=null,this._maxElapsedMS=100,this._minElapsedMS=0,this._protected=!1,this._lastFrame=-1,this._head=new nn(null,null,1/0),this.deltaMS=1/rn.targetFPMS,this.elapsedMS=1/rn.targetFPMS,this._tick=e=>{this._requestId=null,this.started&&(this.update(e),this.started&&null===this._requestId&&this._head.next&&(this._requestId=requestAnimationFrame(this._tick)))}}_requestIfNeeded(){null===this._requestId&&this._head.next&&(this.lastTime=performance.now(),this._lastFrame=this.lastTime,this._requestId=requestAnimationFrame(this._tick))}_cancelIfNeeded(){null!==this._requestId&&(cancelAnimationFrame(this._requestId),this._requestId=null)}_startIfPossible(){this.started?this._requestIfNeeded():this.autoStart&&this.start()}add(e,t,n=tn.NORMAL){return this._addListener(new nn(e,t,n))}addOnce(e,t,n=tn.NORMAL){return this._addListener(new nn(e,t,n,!0))}_addListener(e){let t=this._head.next,n=this._head;if(t){while(t){if(e.priority>t.priority){e.connect(n);break}n=t,t=t.next}e.previous||e.connect(n)}else e.connect(n);return this._startIfPossible(),this}remove(e,t){let n=this._head.next;while(n)n=n.match(e,t)?n.destroy():n.next;return this._head.next||this._cancelIfNeeded(),this}get count(){if(!this._head)return 0;let e=0,t=this._head;while(t=t.next)e++;return e}start(){this.started||(this.started=!0,this._requestIfNeeded())}stop(){this.started&&(this.started=!1,this._cancelIfNeeded())}destroy(){if(!this._protected){this.stop();let e=this._head.next;while(e)e=e.destroy(!0);this._head.destroy(),this._head=null}}update(e=performance.now()){let t;if(e>this.lastTime){if(t=this.elapsedMS=e-this.lastTime,t>this._maxElapsedMS&&(t=this._maxElapsedMS),t*=this.speed,this._minElapsedMS){const t=e-this._lastFrame|0;if(t{this._ticker.stop()},this.start=()=>{this._ticker.start()},this._ticker=null,this.ticker=e.sharedTicker?sn.shared:new sn,e.autoStart&&this.start()}static destroy(){if(this._ticker){const e=this._ticker;this.ticker=null,e.destroy()}}}on.extension=a.Application,u.add(on);const an=[];function ln(e){for(const t of an)if(t.test(e))return new t(e);throw new Error("Unable to auto-detect a suitable renderer.")}u.handleByList(a.Renderer,an);var cn="attribute vec2 aVertexPosition;\nattribute vec2 aTextureCoord;\n\nuniform mat3 projectionMatrix;\n\nvarying vec2 vTextureCoord;\n\nvoid main(void)\n{\n gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);\n vTextureCoord = aTextureCoord;\n}",un="attribute vec2 aVertexPosition;\n\nuniform mat3 projectionMatrix;\n\nvarying vec2 vTextureCoord;\n\nuniform vec4 inputSize;\nuniform vec4 outputFrame;\n\nvec4 filterVertexPosition( void )\n{\n vec2 position = aVertexPosition * max(outputFrame.zw, vec2(0.)) + outputFrame.xy;\n\n return vec4((projectionMatrix * vec3(position, 1.0)).xy, 0.0, 1.0);\n}\n\nvec2 filterTextureCoord( void )\n{\n return aVertexPosition * (outputFrame.zw * inputSize.zw);\n}\n\nvoid main(void)\n{\n gl_Position = filterVertexPosition();\n vTextureCoord = filterTextureCoord();\n}\n";const dn=cn,hn=un;class pn{constructor(e){this.renderer=e}contextChange(e){let t;if(1===this.renderer.context.webGLVersion){const n=e.getParameter(e.FRAMEBUFFER_BINDING);e.bindFramebuffer(e.FRAMEBUFFER,null),t=e.getParameter(e.SAMPLES),e.bindFramebuffer(e.FRAMEBUFFER,n)}else{const n=e.getParameter(e.DRAW_FRAMEBUFFER_BINDING);e.bindFramebuffer(e.DRAW_FRAMEBUFFER,null),t=e.getParameter(e.SAMPLES),e.bindFramebuffer(e.DRAW_FRAMEBUFFER,n)}t>=r.G5.HIGH?this.multisample=r.G5.HIGH:t>=r.G5.MEDIUM?this.multisample=r.G5.MEDIUM:t>=r.G5.LOW?this.multisample=r.G5.LOW:this.multisample=r.G5.NONE}destroy(){}}pn.extension={type:a.RendererSystem,name:"_multisample"},u.add(pn);class fn{constructor(e){this.buffer=e||null,this.updateID=-1,this.byteLength=-1,this.refCount=0}}class gn{constructor(e){this.renderer=e,this.managedBuffers={},this.boundBufferBases={}}destroy(){this.renderer=null}contextChange(){this.disposeAll(!0),this.gl=this.renderer.gl,this.CONTEXT_UID=this.renderer.CONTEXT_UID}bind(e){const{gl:t,CONTEXT_UID:n}=this,r=e._glBuffers[n]||this.createGLBuffer(e);t.bindBuffer(e.type,r.buffer)}unbind(e){const{gl:t}=this;t.bindBuffer(e,null)}bindBufferBase(e,t){const{gl:n,CONTEXT_UID:r}=this;if(this.boundBufferBases[t]!==e){const i=e._glBuffers[r]||this.createGLBuffer(e);this.boundBufferBases[t]=e,n.bindBufferBase(n.UNIFORM_BUFFER,t,i.buffer)}}bindBufferRange(e,t,n){const{gl:r,CONTEXT_UID:i}=this;n=n||0;const s=e._glBuffers[i]||this.createGLBuffer(e);r.bindBufferRange(r.UNIFORM_BUFFER,t||0,s.buffer,256*n,256)}update(e){const{gl:t,CONTEXT_UID:n}=this,r=e._glBuffers[n]||this.createGLBuffer(e);if(e._updateID!==r.updateID)if(r.updateID=e._updateID,t.bindBuffer(e.type,r.buffer),r.byteLength>=e.data.byteLength)t.bufferSubData(e.type,0,e.data);else{const n=e.static?t.STATIC_DRAW:t.DYNAMIC_DRAW;r.byteLength=e.data.byteLength,t.bufferData(e.type,e.data,n)}}dispose(e,t){if(!this.managedBuffers[e.id])return;delete this.managedBuffers[e.id];const n=e._glBuffers[this.CONTEXT_UID],r=this.gl;e.disposeRunner.remove(this),n&&(t||r.deleteBuffer(n.buffer),delete e._glBuffers[this.CONTEXT_UID])}disposeAll(e){const t=Object.keys(this.managedBuffers);for(let n=0;ne.resource)).filter((e=>e)),t=e.map((e=>e.load()));return this._load=Promise.all(t).then((()=>{const{realWidth:e,realHeight:t}=this.items[0];return this.resize(e,t),Promise.resolve(this)})),this._load}}class vn extends yn{constructor(e,t){const{width:n,height:r}=t||{};let i,s;Array.isArray(e)?(i=e,s=e.length):s=e,super(s,{width:n,height:r}),i&&this.initFromArray(i,t)}addBaseTextureAt(e,t){if(!e.resource)throw new Error("ArrayResource does not support RenderTexture");return this.addResourceAt(e.resource,t),this}bind(e){super.bind(e),e.target=r.sp.TEXTURE_2D_ARRAY}upload(e,t,n){const{length:r,itemDirtyIds:i,items:s}=this,{gl:o}=e;n.dirtyId<0&&o.texImage3D(o.TEXTURE_2D_ARRAY,0,n.internalFormat,this._width,this._height,r,0,t.format,n.type,null);for(let a=0;a0){if(!e.resource)throw new Error("CubeResource does not support copying of renderTexture.");this.addResourceAt(e.resource,t)}else e.target=r.sp.TEXTURE_CUBE_MAP_POSITIVE_X+t,e.parentTextureArray=this.baseTexture,this.items[t]=e;return e.valid&&!this.valid&&this.resize(e.realWidth,e.realHeight),this.items[t]=e,this}upload(e,t,n){const r=this.itemDirtyIds;for(let i=0;i{if(null!==this.url)try{const t=await i.Xd.ADAPTER.fetch(this.url,{mode:this.crossOrigin?"cors":"no-cors"});if(this.destroyed)return;const n=await t.blob();if(this.destroyed)return;const s=await createImageBitmap(n,{premultiplyAlpha:null===this.alphaMode||this.alphaMode===r.iw.UNPACK?"premultiply":"none"});if(this.destroyed)return;this.source=s,this.update(),e(this)}catch(n){if(this.destroyed)return;t(n),this.onError.emit(n)}else e(this)}))),this._load}upload(e,t,n){return this.source instanceof ImageBitmap?("number"===typeof this.alphaMode&&(t.alphaMode=this.alphaMode),super.upload(e,t,n)):(this.load(),!1)}dispose(){this.source instanceof ImageBitmap&&this.source.close(),super.dispose(),this._load=null}static test(e){return!!globalThis.createImageBitmap&&"undefined"!==typeof ImageBitmap&&("string"===typeof e||e instanceof ImageBitmap)}static get EMPTY(){return wn._EMPTY=wn._EMPTY??i.Xd.ADAPTER.createCanvas(0,0),wn._EMPTY}}const Tn=class extends Oe{constructor(e,t){t=t||{},super(i.Xd.ADAPTER.createCanvas()),this._width=0,this._height=0,this.svg=e,this.scale=t.scale||1,this._overrideWidth=t.width,this._overrideHeight=t.height,this._resolve=null,this._crossorigin=t.crossorigin,this._load=null,!1!==t.autoLoad&&this.load()}load(){return this._load||(this._load=new Promise((e=>{if(this._resolve=()=>{this.resize(this.source.width,this.source.height),e(this)},Tn.SVG_XML.test(this.svg.trim())){if(!btoa)throw new Error("Your browser doesn't support base64 conversions.");this.svg=`data:image/svg+xml;base64,${btoa(unescape(encodeURIComponent(this.svg)))}`}this._loadSvg()}))),this._load}_loadSvg(){const e=new Image;Oe.crossOrigin(e,this.svg,this._crossorigin),e.src=this.svg,e.onerror=t=>{this._resolve&&(e.onerror=null,this.onError.emit(t))},e.onload=()=>{if(!this._resolve)return;const t=e.width,n=e.height;if(!t||!n)throw new Error("The SVG image must have width and height defined (in pixels), canvas API needs them.");let r=t*this.scale,i=n*this.scale;(this._overrideWidth||this._overrideHeight)&&(r=this._overrideWidth||this._overrideHeight/n*t,i=this._overrideHeight||this._overrideWidth/t*n),r=Math.round(r),i=Math.round(i);const o=this.source;o.width=r,o.height=i,o._pixiId=`canvas_${(0,s.uid)()}`,o.getContext("2d").drawImage(e,0,0,t,n,0,0,r,i),this._resolve(),this._resolve=null}}static getSize(e){const t=Tn.SVG_SIZE.exec(e),n={};return t&&(n[t[1]]=Math.round(parseFloat(t[3])),n[t[5]]=Math.round(parseFloat(t[7]))),n}dispose(){super.dispose(),this._resolve=null,this._crossorigin=null}static test(e,t){return"svg"===t||"string"===typeof e&&e.startsWith("data:image/svg+xml")||"string"===typeof e&&Tn.SVG_XML.test(e)}};let An=Tn;An.SVG_XML=/^(<\?xml[^?]+\?>)?\s*()]*-->)?\s*\]*(?:\s(width|height)=('|")(\d*(?:\.\d+)?)(?:px)?('|"))[^>]*(?:\s(width|height)=('|")(\d*(?:\.\d+)?)(?:px)?('|"))[^>]*>/i;const Cn=class extends Oe{constructor(e,t){if(t=t||{},!(e instanceof HTMLVideoElement)){const n=document.createElement("video");n.setAttribute("preload","auto"),n.setAttribute("webkit-playsinline",""),n.setAttribute("playsinline",""),"string"===typeof e&&(e=[e]);const r=e[0].src||e[0];Oe.crossOrigin(n,r,t.crossorigin);for(let t=0;t{this.valid?t(this):(this._resolve=t,e.load())})),this._load}_onError(e){this.source.removeEventListener("error",this._onError,!0),this.onError.emit(e)}_isSourcePlaying(){const e=this.source;return!e.paused&&!e.ended&&this._isSourceReady()}_isSourceReady(){const e=this.source;return e.readyState>2}_onPlayStart(){this.valid||this._onCanPlay(),this.autoUpdate&&!this._isConnectedToTicker&&(sn.shared.add(this.update,this),this._isConnectedToTicker=!0)}_onPlayStop(){this._isConnectedToTicker&&(sn.shared.remove(this.update,this),this._isConnectedToTicker=!1)}_onCanPlay(){const e=this.source;e.removeEventListener("canplay",this._onCanPlay),e.removeEventListener("canplaythrough",this._onCanPlay);const t=this.valid;this.resize(e.videoWidth,e.videoHeight),!t&&this._resolve&&(this._resolve(this),this._resolve=null),this._isSourcePlaying()?this._onPlayStart():this.autoPlay&&e.play()}dispose(){this._isConnectedToTicker&&(sn.shared.remove(this.update,this),this._isConnectedToTicker=!1);const e=this.source;e&&(e.removeEventListener("error",this._onError,!0),e.pause(),e.src="",e.load()),super.dispose()}get autoUpdate(){return this._autoUpdate}set autoUpdate(e){e!==this._autoUpdate&&(this._autoUpdate=e,!this._autoUpdate&&this._isConnectedToTicker?(sn.shared.remove(this.update,this),this._isConnectedToTicker=!1):this._autoUpdate&&!this._isConnectedToTicker&&this._isSourcePlaying()&&(sn.shared.add(this.update,this),this._isConnectedToTicker=!0))}get updateFPS(){return this._updateFPS}set updateFPS(e){e!==this._updateFPS&&(this._updateFPS=e)}static test(e,t){return globalThis.HTMLVideoElement&&e instanceof HTMLVideoElement||Cn.TYPES.includes(t)}};let In=Cn;In.TYPES=["mp4","m4v","webm","ogg","ogv","h264","avi","mov"],In.MIME_TYPES={ogv:"video/ogg",mov:"video/quicktime",m4v:"video/mp4"},x.push(wn,Ne,En,In,An,A,Sn,vn)},8276:function(e,t,n){"use strict";n.d(t,{YZ:function(){return i},W2:function(){return u},s$:function(){return s}});var r=n(2848);class i{constructor(){this.minX=1/0,this.minY=1/0,this.maxX=-1/0,this.maxY=-1/0,this.rect=null,this.updateID=-1}isEmpty(){return this.minX>this.maxX||this.minY>this.maxY}clear(){this.minX=1/0,this.minY=1/0,this.maxX=-1/0,this.maxY=-1/0}getRectangle(e){return this.minX>this.maxX||this.minY>this.maxY?r.Ae.EMPTY:(e=e||new r.Ae(0,0,1,1),e.x=this.minX,e.y=this.minY,e.width=this.maxX-this.minX,e.height=this.maxY-this.minY,e)}addPoint(e){this.minX=Math.min(this.minX,e.x),this.maxX=Math.max(this.maxX,e.x),this.minY=Math.min(this.minY,e.y),this.maxY=Math.max(this.maxY,e.y)}addPointMatrix(e,t){const{a:n,b:r,c:i,d:s,tx:o,ty:a}=e,l=n*t.x+i*t.y+o,c=r*t.x+s*t.y+a;this.minX=Math.min(this.minX,l),this.maxX=Math.max(this.maxX,l),this.minY=Math.min(this.minY,c),this.maxY=Math.max(this.maxY,c)}addQuad(e){let t=this.minX,n=this.minY,r=this.maxX,i=this.maxY,s=e[0],o=e[1];t=sr?s:r,i=o>i?o:i,s=e[2],o=e[3],t=sr?s:r,i=o>i?o:i,s=e[4],o=e[5],t=sr?s:r,i=o>i?o:i,s=e[6],o=e[7],t=sr?s:r,i=o>i?o:i,this.minX=t,this.minY=n,this.maxX=r,this.maxY=i}addFrame(e,t,n,r,i){this.addFrameMatrix(e.worldTransform,t,n,r,i)}addFrameMatrix(e,t,n,r,i){const s=e.a,o=e.b,a=e.c,l=e.d,c=e.tx,u=e.ty;let d=this.minX,h=this.minY,p=this.maxX,f=this.maxY,g=s*t+a*n+c,m=o*t+l*n+u;d=gp?g:p,f=m>f?m:f,g=s*r+a*n+c,m=o*r+l*n+u,d=gp?g:p,f=m>f?m:f,g=s*t+a*i+c,m=o*t+l*i+u,d=gp?g:p,f=m>f?m:f,g=s*r+a*i+c,m=o*r+l*i+u,d=gp?g:p,f=m>f?m:f,this.minX=d,this.minY=h,this.maxX=p,this.maxY=f}addVertexData(e,t,n){let r=this.minX,i=this.minY,s=this.maxX,o=this.maxY;for(let a=t;as?t:s,o=n>o?n:o}this.minX=r,this.minY=i,this.maxX=s,this.maxY=o}addVertices(e,t,n,r){this.addVerticesMatrix(e.worldTransform,t,n,r)}addVerticesMatrix(e,t,n,r,i=0,s=i){const o=e.a,a=e.b,l=e.c,c=e.d,u=e.tx,d=e.ty;let h=this.minX,p=this.minY,f=this.maxX,g=this.maxY;for(let m=n;mr?e.maxX:r,this.maxY=e.maxY>i?e.maxY:i}addBoundsMask(e,t){const n=e.minX>t.minX?e.minX:t.minX,r=e.minY>t.minY?e.minY:t.minY,i=e.maxXo?i:o,this.maxY=s>a?s:a}}addBoundsMatrix(e,t){this.addFrameMatrix(t,e.minX,e.minY,e.maxX,e.maxY)}addBoundsArea(e,t){const n=e.minX>t.x?e.minX:t.x,r=e.minY>t.y?e.minY:t.y,i=e.maxXo?i:o,this.maxY=s>a?s:a}}pad(e=0,t=e){this.isEmpty()||(this.minX-=e,this.maxX+=e,this.minY-=t,this.maxY+=t)}addFramePad(e,t,n,r,i,s){e-=i,t-=s,n+=i,r+=s,this.minX=this.minXn?this.maxX:n,this.minY=this.minYr?this.maxY:r}}class s extends r.P6.EventEmitter{constructor(){super(),this.tempDisplayObjectParent=null,this.transform=new r.wx,this.alpha=1,this.visible=!0,this.renderable=!0,this.cullable=!1,this.cullArea=null,this.parent=null,this.worldAlpha=1,this._lastSortedIndex=0,this._zIndex=0,this.filterArea=null,this.filters=null,this._enabledFilters=null,this._bounds=new i,this._localBounds=null,this._boundsID=0,this._boundsRect=null,this._localBoundsRect=null,this._mask=null,this._maskRefCount=0,this._destroyed=!1,this.isSprite=!1,this.isMask=!1}static mixin(e){const t=Object.keys(e);for(let n=0;n1)for(let t=0;tthis.children.length)throw new Error(`${e}addChildAt: The index ${t} supplied is out of bounds ${this.children.length}`);return e.parent&&e.parent.removeChild(e),e.parent=this,this.sortDirty=!0,e.transform._parentID=-1,this.children.splice(t,0,e),this._boundsID++,this.onChildrenChange(t),e.emit("added",this),this.emit("childAdded",e,this,t),e}swapChildren(e,t){if(e===t)return;const n=this.getChildIndex(e),r=this.getChildIndex(t);this.children[n]=t,this.children[r]=e,this.onChildrenChange(n=this.children.length)throw new Error(`The index ${t} supplied is out of bounds ${this.children.length}`);const n=this.getChildIndex(e);r.P6.removeItems(this.children,n,1),this.children.splice(t,0,e),this.onChildrenChange(t)}getChildAt(e){if(e<0||e>=this.children.length)throw new Error(`getChildAt: Index (${e}) does not exist.`);return this.children[e]}removeChild(...e){if(e.length>1)for(let t=0;t0&&i<=r){s=this.children.splice(n,i);for(let e=0;e1&&this.children.sort(l),this.sortDirty=!1}updateTransform(){this.sortableChildren&&this.sortDirty&&this.sortChildren(),this._boundsID++,this.transform.updateTransform(this.parent.transform),this.worldAlpha=this.alpha*this.parent.worldAlpha;for(let e=0,t=this.children.length;e0&&t.height>0))return;let n,r;this.cullArea?(n=this.cullArea,r=this.worldTransform):this._render!==c.prototype._render&&(n=this.getBounds(!0));const i=e.projection.transform;if(i&&(r?(r=a.copyFrom(r),r.prepend(i)):r=i),n&&t.intersects(n,r))this._render(e);else if(this.cullArea)return;for(let s=0,o=this.children.length;s(e[e["POLY"]=0]="POLY",e[e["RECT"]=1]="RECT",e[e["CIRC"]=2]="CIRC",e[e["ELIP"]=3]="ELIP",e[e["RREC"]=4]="RREC",e))(o||{});class a{constructor(e=0,t=0){this.x=0,this.y=0,this.x=e,this.y=t}clone(){return new a(this.x,this.y)}copyFrom(e){return this.set(e.x,e.y),this}copyTo(e){return e.set(this.x,this.y),e}equals(e){return e.x===this.x&&e.y===this.y}set(e=0,t=e){return this.x=e,this.y=t,this}toString(){return`[@pixi/math:Point x=${this.x} y=${this.y}]`}}const l=[new a,new a,new a,new a];class c{constructor(e=0,t=0,n=0,r=0){this.x=Number(e),this.y=Number(t),this.width=Number(n),this.height=Number(r),this.type=o.RECT}get left(){return this.x}get right(){return this.x+this.width}get top(){return this.y}get bottom(){return this.y+this.height}static get EMPTY(){return new c(0,0,0,0)}clone(){return new c(this.x,this.y,this.width,this.height)}copyFrom(e){return this.x=e.x,this.y=e.y,this.width=e.width,this.height=e.height,this}copyTo(e){return e.x=this.x,e.y=this.y,e.width=this.width,e.height=this.height,e}contains(e,t){return!(this.width<=0||this.height<=0)&&(e>=this.x&&e=this.y&&te.right?e.right:this.right;if(n<=t)return!1;const r=this.ye.bottom?e.bottom:this.bottom;return i>r}const n=this.left,r=this.right,i=this.top,s=this.bottom;if(r<=n||s<=i)return!1;const o=l[0].set(e.left,e.top),a=l[1].set(e.left,e.bottom),c=l[2].set(e.right,e.top),u=l[3].set(e.right,e.bottom);if(c.x<=o.x||a.y<=o.y)return!1;const d=Math.sign(t.a*t.d-t.b*t.c);if(0===d)return!1;if(t.apply(o,o),t.apply(a,a),t.apply(c,c),t.apply(u,u),Math.max(o.x,a.x,c.x,u.x)<=n||Math.min(o.x,a.x,c.x,u.x)>=r||Math.max(o.y,a.y,c.y,u.y)<=i||Math.min(o.y,a.y,c.y,u.y)>=s)return!1;const h=d*(a.y-o.y),p=d*(o.x-a.x),f=h*n+p*i,g=h*r+p*i,m=h*n+p*s,b=h*r+p*s;if(Math.max(f,g,m,b)<=h*o.x+p*o.y||Math.min(f,g,m,b)>=h*u.x+p*u.y)return!1;const _=d*(o.y-c.y),y=d*(c.x-o.x),v=_*n+y*i,E=_*r+y*i,x=_*n+y*s,S=_*r+y*s;return!(Math.max(v,E,x,S)<=_*o.x+y*o.y||Math.min(v,E,x,S)>=_*u.x+y*u.y)}pad(e=0,t=e){return this.x-=e,this.y-=t,this.width+=2*e,this.height+=2*t,this}fit(e){const t=Math.max(this.x,e.x),n=Math.min(this.x+this.width,e.x+e.width),r=Math.max(this.y,e.y),i=Math.min(this.y+this.height,e.y+e.height);return this.x=t,this.width=Math.max(n-t,0),this.y=r,this.height=Math.max(i-r,0),this}ceil(e=1,t=.001){const n=Math.ceil((this.x+this.width-t)*e)/e,r=Math.ceil((this.y+this.height-t)*e)/e;return this.x=Math.floor((this.x+t)*e)/e,this.y=Math.floor((this.y+t)*e)/e,this.width=n-this.x,this.height=r-this.y,this}enlarge(e){const t=Math.min(this.x,e.x),n=Math.max(this.x+this.width,e.x+e.width),r=Math.min(this.y,e.y),i=Math.max(this.y+this.height,e.y+e.height);return this.x=t,this.width=n-t,this.y=r,this.height=i-r,this}toString(){return`[@pixi/math:Rectangle x=${this.x} y=${this.y} width=${this.width} height=${this.height}]`}}class u{constructor(e=0,t=0,n=0){this.x=e,this.y=t,this.radius=n,this.type=o.CIRC}clone(){return new u(this.x,this.y,this.radius)}contains(e,t){if(this.radius<=0)return!1;const n=this.radius*this.radius;let r=this.x-e,i=this.y-t;return r*=r,i*=i,r+i<=n}getBounds(){return new c(this.x-this.radius,this.y-this.radius,2*this.radius,2*this.radius)}toString(){return`[@pixi/math:Circle x=${this.x} y=${this.y} radius=${this.radius}]`}}class d{constructor(e=0,t=0,n=0,r=0){this.x=e,this.y=t,this.width=n,this.height=r,this.type=o.ELIP}clone(){return new d(this.x,this.y,this.width,this.height)}contains(e,t){if(this.width<=0||this.height<=0)return!1;let n=(e-this.x)/this.width,r=(t-this.y)/this.height;return n*=n,r*=r,n+r<=1}getBounds(){return new c(this.x-this.width,this.y-this.height,this.width,this.height)}toString(){return`[@pixi/math:Ellipse x=${this.x} y=${this.y} width=${this.width} height=${this.height}]`}}class h{constructor(...e){let t=Array.isArray(e[0])?e[0]:e;if("number"!==typeof t[0]){const e=[];for(let n=0,r=t.length;nt!==l>t&&e<(t-o)/(l-o)*(a-r)+r;c&&(n=!n)}return n}toString(){return`[@pixi/math:PolygoncloseStroke=${this.closeStroke}points=${this.points.reduce(((e,t)=>`${e}, ${t}`),"")}]`}}class p{constructor(e=0,t=0,n=0,r=0,i=20){this.x=e,this.y=t,this.width=n,this.height=r,this.radius=i,this.type=o.RREC}clone(){return new p(this.x,this.y,this.width,this.height,this.radius)}contains(e,t){if(this.width<=0||this.height<=0)return!1;if(e>=this.x&&e<=this.x+this.width&&t>=this.y&&t<=this.y+this.height){const n=Math.max(0,Math.min(this.radius,Math.min(this.width,this.height)/2));if(t>=this.y+n&&t<=this.y+this.height-n||e>=this.x+n&&e<=this.x+this.width-n)return!0;let r=e-(this.x+n),i=t-(this.y+n);const s=n*n;if(r*r+i*i<=s)return!0;if(r=e-(this.x+this.width-n),r*r+i*i<=s)return!0;if(i=t-(this.y+this.height-n),r*r+i*i<=s)return!0;if(r=e-(this.x+n),r*r+i*i<=s)return!0}return!1}toString(){return`[@pixi/math:RoundedRectangle x=${this.x} y=${this.y}width=${this.width} height=${this.height} radius=${this.radius}]`}}class f{constructor(e=1,t=0,n=0,r=1,i=0,s=0){this.array=null,this.a=e,this.b=t,this.c=n,this.d=r,this.tx=i,this.ty=s}fromArray(e){this.a=e[0],this.b=e[1],this.c=e[3],this.d=e[4],this.tx=e[2],this.ty=e[5]}set(e,t,n,r,i,s){return this.a=e,this.b=t,this.c=n,this.d=r,this.tx=i,this.ty=s,this}toArray(e,t){this.array||(this.array=new Float32Array(9));const n=t||this.array;return e?(n[0]=this.a,n[1]=this.b,n[2]=0,n[3]=this.c,n[4]=this.d,n[5]=0,n[6]=this.tx,n[7]=this.ty,n[8]=1):(n[0]=this.a,n[1]=this.c,n[2]=this.tx,n[3]=this.b,n[4]=this.d,n[5]=this.ty,n[6]=0,n[7]=0,n[8]=1),n}apply(e,t){t=t||new a;const n=e.x,r=e.y;return t.x=this.a*n+this.c*r+this.tx,t.y=this.b*n+this.d*r+this.ty,t}applyInverse(e,t){t=t||new a;const n=1/(this.a*this.d+this.c*-this.b),r=e.x,i=e.y;return t.x=this.d*n*r+-this.c*n*i+(this.ty*this.c-this.tx*this.d)*n,t.y=this.a*n*i+-this.b*n*r+(-this.ty*this.a+this.tx*this.b)*n,t}translate(e,t){return this.tx+=e,this.ty+=t,this}scale(e,t){return this.a*=e,this.d*=t,this.c*=e,this.b*=t,this.tx*=e,this.ty*=t,this}rotate(e){const t=Math.cos(e),n=Math.sin(e),r=this.a,i=this.c,s=this.tx;return this.a=r*t-this.b*n,this.b=r*n+this.b*t,this.c=i*t-this.d*n,this.d=i*n+this.d*t,this.tx=s*t-this.ty*n,this.ty=s*n+this.ty*t,this}append(e){const t=this.a,n=this.b,r=this.c,i=this.d;return this.a=e.a*t+e.b*r,this.b=e.a*n+e.b*i,this.c=e.c*t+e.d*r,this.d=e.c*n+e.d*i,this.tx=e.tx*t+e.ty*r+this.tx,this.ty=e.tx*n+e.ty*i+this.ty,this}setTransform(e,t,n,r,i,s,o,a,l){return this.a=Math.cos(o+l)*i,this.b=Math.sin(o+l)*i,this.c=-Math.sin(o-a)*s,this.d=Math.cos(o-a)*s,this.tx=e-(n*this.a+r*this.c),this.ty=t-(n*this.b+r*this.d),this}prepend(e){const t=this.tx;if(1!==e.a||0!==e.b||0!==e.c||1!==e.d){const t=this.a,n=this.c;this.a=t*e.a+this.b*e.c,this.b=t*e.b+this.b*e.d,this.c=n*e.a+this.d*e.c,this.d=n*e.b+this.d*e.d}return this.tx=t*e.a+this.ty*e.c+e.tx,this.ty=t*e.b+this.ty*e.d+e.ty,this}decompose(e){const t=this.a,n=this.b,i=this.c,s=this.d,o=e.pivot,a=-Math.atan2(-i,s),l=Math.atan2(n,t),c=Math.abs(a+l);return c<1e-5||Math.abs(r-c)<1e-5?(e.rotation=l,e.skew.x=e.skew.y=0):(e.rotation=0,e.skew.x=a,e.skew.y=l),e.scale.x=Math.sqrt(t*t+n*n),e.scale.y=Math.sqrt(i*i+s*s),e.position.x=this.tx+(o.x*t+o.y*i),e.position.y=this.ty+(o.x*n+o.y*s),e}invert(){const e=this.a,t=this.b,n=this.c,r=this.d,i=this.tx,s=e*r-t*n;return this.a=r/s,this.b=-t/s,this.c=-n/s,this.d=e/s,this.tx=(n*this.ty-r*i)/s,this.ty=-(e*this.ty-t*i)/s,this}identity(){return this.a=1,this.b=0,this.c=0,this.d=1,this.tx=0,this.ty=0,this}clone(){const e=new f;return e.a=this.a,e.b=this.b,e.c=this.c,e.d=this.d,e.tx=this.tx,e.ty=this.ty,e}copyTo(e){return e.a=this.a,e.b=this.b,e.c=this.c,e.d=this.d,e.tx=this.tx,e.ty=this.ty,e}copyFrom(e){return this.a=e.a,this.b=e.b,this.c=e.c,this.d=e.d,this.tx=e.tx,this.ty=e.ty,this}toString(){return`[@pixi/math:Matrix a=${this.a} b=${this.b} c=${this.c} d=${this.d} tx=${this.tx} ty=${this.ty}]`}static get IDENTITY(){return new f}static get TEMP_MATRIX(){return new f}}const g=[1,1,0,-1,-1,-1,0,1,1,1,0,-1,-1,-1,0,1],m=[0,1,1,1,0,-1,-1,-1,0,1,1,1,0,-1,-1,-1],b=[0,-1,-1,-1,0,1,1,1,0,1,1,1,0,-1,-1,-1],_=[1,1,0,-1,-1,-1,0,1,-1,-1,0,1,1,1,0,-1],y=[],v=[],E=Math.sign;function x(){for(let e=0;e<16;e++){const t=[];y.push(t);for(let n=0;n<16;n++){const r=E(g[e]*g[n]+b[e]*m[n]),i=E(m[e]*g[n]+_[e]*m[n]),s=E(g[e]*b[n]+b[e]*_[n]),o=E(m[e]*b[n]+_[e]*_[n]);for(let e=0;e<16;e++)if(g[e]===r&&m[e]===i&&b[e]===s&&_[e]===o){t.push(e);break}}}for(let e=0;e<16;e++){const t=new f;t.set(g[e],m[e],b[e],_[e],0,0),v.push(t)}}x();const S={E:0,SE:1,S:2,SW:3,W:4,NW:5,N:6,NE:7,MIRROR_VERTICAL:8,MAIN_DIAGONAL:10,MIRROR_HORIZONTAL:12,REVERSE_DIAGONAL:14,uX:e=>g[e],uY:e=>m[e],vX:e=>b[e],vY:e=>_[e],inv:e=>8&e?15&e:7&-e,add:(e,t)=>y[e][t],sub:(e,t)=>y[e][S.inv(t)],rotate180:e=>4^e,isVertical:e=>2===(3&e),byDirection:(e,t)=>2*Math.abs(e)<=Math.abs(t)?t>=0?S.S:S.N:2*Math.abs(t)<=Math.abs(e)?e>0?S.E:S.W:t>0?e>0?S.SE:S.SW:e>0?S.NE:S.NW,matrixAppendRotationInv:(e,t,n=0,r=0)=>{const i=v[S.inv(t)];i.tx=n,i.ty=r,e.append(i)}};class w{constructor(e,t,n=0,r=0){this._x=n,this._y=r,this.cb=e,this.scope=t}clone(e=this.cb,t=this.scope){return new w(e,t,this._x,this._y)}set(e=0,t=e){return this._x===e&&this._y===t||(this._x=e,this._y=t,this.cb.call(this.scope)),this}copyFrom(e){return this._x===e.x&&this._y===e.y||(this._x=e.x,this._y=e.y,this.cb.call(this.scope)),this}copyTo(e){return e.set(this._x,this._y),e}equals(e){return e.x===this._x&&e.y===this._y}toString(){return`[@pixi/math:ObservablePoint x=0 y=0 scope=${this.scope}]`}get x(){return this._x}set x(e){this._x!==e&&(this._x=e,this.cb.call(this.scope))}get y(){return this._y}set y(e){this._y!==e&&(this._y=e,this.cb.call(this.scope))}}const T=class{constructor(){this.worldTransform=new f,this.localTransform=new f,this.position=new w(this.onChange,this,0,0),this.scale=new w(this.onChange,this,1,1),this.pivot=new w(this.onChange,this,0,0),this.skew=new w(this.updateSkew,this,0,0),this._rotation=0,this._cx=1,this._sx=0,this._cy=0,this._sy=1,this._localID=0,this._currentLocalID=0,this._worldID=0,this._parentID=0}onChange(){this._localID++}updateSkew(){this._cx=Math.cos(this._rotation+this.skew.y),this._sx=Math.sin(this._rotation+this.skew.y),this._cy=-Math.sin(this._rotation-this.skew.x),this._sy=Math.cos(this._rotation-this.skew.x),this._localID++}toString(){return`[@pixi/math:Transform position=(${this.position.x}, ${this.position.y}) rotation=${this.rotation} scale=(${this.scale.x}, ${this.scale.y}) skew=(${this.skew.x}, ${this.skew.y}) ]`}updateLocalTransform(){const e=this.localTransform;this._localID!==this._currentLocalID&&(e.a=this._cx*this.scale.x,e.b=this._sx*this.scale.x,e.c=this._cy*this.scale.y,e.d=this._sy*this.scale.y,e.tx=this.position.x-(this.pivot.x*e.a+this.pivot.y*e.c),e.ty=this.position.y-(this.pivot.x*e.b+this.pivot.y*e.d),this._currentLocalID=this._localID,this._parentID=-1)}updateTransform(e){const t=this.localTransform;if(this._localID!==this._currentLocalID&&(t.a=this._cx*this.scale.x,t.b=this._sx*this.scale.x,t.c=this._cy*this.scale.y,t.d=this._sy*this.scale.y,t.tx=this.position.x-(this.pivot.x*t.a+this.pivot.y*t.c),t.ty=this.position.y-(this.pivot.x*t.b+this.pivot.y*t.d),this._currentLocalID=this._localID,this._parentID=-1),this._parentID!==e._worldID){const n=e.worldTransform,r=this.worldTransform;r.a=t.a*n.a+t.b*n.c,r.b=t.a*n.b+t.b*n.d,r.c=t.c*n.a+t.d*n.c,r.d=t.c*n.b+t.d*n.d,r.tx=t.tx*n.a+t.ty*n.c+n.tx,r.ty=t.tx*n.b+t.ty*n.d+n.ty,this._parentID=e._worldID,this._worldID++}}setFromMatrix(e){e.decompose(this),this._localID++}get rotation(){return this._rotation}set rotation(e){this._rotation!==e&&(this._rotation=e,this.updateSkew())}};let A=T;A.IDENTITY=new T},8706:function(e,t,n){"use strict";n.d(t,{tq:function(){return w},Xd:function(){return i}});const r={createCanvas:(e,t)=>{const n=document.createElement("canvas");return n.width=e,n.height=t,n},getCanvasRenderingContext2D:()=>CanvasRenderingContext2D,getWebGLRenderingContext:()=>WebGLRenderingContext,getNavigator:()=>navigator,getBaseUrl:()=>document.baseURI??window.location.href,getFontFaceSet:()=>document.fonts,fetch:(e,t)=>fetch(e,t),parseXML:e=>{const t=new DOMParser;return t.parseFromString(e,"text/xml")}},i={ADAPTER:r,RESOLUTION:1,CREATE_IMAGE_BITMAP:!1,ROUND_PIXELS:!1};var s=/iPhone/i,o=/iPod/i,a=/iPad/i,l=/\biOS-universal(?:.+)Mac\b/i,c=/\bAndroid(?:.+)Mobile\b/i,u=/Android/i,d=/(?:SD4930UR|\bSilk(?:.+)Mobile\b)/i,h=/Silk/i,p=/Windows Phone/i,f=/\bWindows(?:.+)ARM\b/i,g=/BlackBerry/i,m=/BB10/i,b=/Opera Mini/i,_=/\b(CriOS|Chrome)(?:.+)Mobile/i,y=/Mobile(?:.+)Firefox\b/i,v=function(e){return"undefined"!==typeof e&&"MacIntel"===e.platform&&"number"===typeof e.maxTouchPoints&&e.maxTouchPoints>1&&"undefined"===typeof MSStream};function E(e){return function(t){return t.test(e)}}function x(e){var t={userAgent:"",platform:"",maxTouchPoints:0};e||"undefined"===typeof navigator?"string"===typeof e?t.userAgent=e:e&&e.userAgent&&(t={userAgent:e.userAgent,platform:e.platform,maxTouchPoints:e.maxTouchPoints||0}):t={userAgent:navigator.userAgent,platform:navigator.platform,maxTouchPoints:navigator.maxTouchPoints||0};var n=t.userAgent,r=n.split("[FBAN");"undefined"!==typeof r[1]&&(n=r[0]),r=n.split("Twitter"),"undefined"!==typeof r[1]&&(n=r[0]);var i=E(n),x={apple:{phone:i(s)&&!i(p),ipod:i(o),tablet:!i(s)&&(i(a)||v(t))&&!i(p),universal:i(l),device:(i(s)||i(o)||i(a)||i(l)||v(t))&&!i(p)},amazon:{phone:i(d),tablet:!i(d)&&i(h),device:i(d)||i(h)},android:{phone:!i(p)&&i(d)||!i(p)&&i(c),tablet:!i(p)&&!i(d)&&!i(c)&&(i(h)||i(u)),device:!i(p)&&(i(d)||i(h)||i(c)||i(u))||i(/\bokhttp\b/i)},windows:{phone:i(p),tablet:i(f),device:i(p)||i(f)},other:{blackberry:i(g),blackberry10:i(m),opera:i(b),firefox:i(y),chrome:i(_),device:i(g)||i(m)||i(b)||i(y)||i(_)},any:!1,phone:!1,tablet:!1};return x.any=x.apple.device||x.android.device||x.windows.device||x.other.device,x.phone=x.apple.phone||x.android.phone||x.windows.phone,x.tablet=x.apple.tablet||x.android.tablet||x.windows.tablet,x}const S=x["default"]??x,w=S(globalThis.navigator)},4038:function(e,t,n){"use strict";n.r(t),n.d(t,{BaseTextureCache:function(){return X},BoundingBox:function(){return j},CanvasRenderTarget:function(){return Z},DATA_URI:function(){return O},EventEmitter:function(){return i},ProgramCache:function(){return W},TextureCache:function(){return q},clearTextureCache:function(){return K},correctBlendMode:function(){return I},createIndicesForQuads:function(){return N},decomposeDataUri:function(){return ne},deprecation:function(){return g},destroyTextureCache:function(){return Y},determineCrossOrigin:function(){return ie},earcut:function(){return s},getBufferType:function(){return M},getCanvasBoundingBox:function(){return ee},getResolutionOfUrl:function(){return se},hex2rgb:function(){return E},hex2string:function(){return x},interleaveTypedArrays:function(){return L},isMobile:function(){return r.tq},isPow2:function(){return B},isWebGLSupported:function(){return y},log2:function(){return U},nextPow2:function(){return F},path:function(){return p},premultiplyBlendMode:function(){return C},premultiplyRgba:function(){return R},premultiplyTint:function(){return k},premultiplyTintToRgba:function(){return P},removeItems:function(){return G},rgb2hex:function(){return w},sayHello:function(){return b},sign:function(){return $},skipHello:function(){return m},string2hex:function(){return S},trimCanvas:function(){return te},uid:function(){return H},url:function(){return a}});var r=n(8706);r.Xd.RETINA_PREFIX=/@([0-9\.]+)x/,r.Xd.FAIL_IF_MAJOR_PERFORMANCE_CAVEAT=!1;var i=n(6729),s=n(9187),o=n(8575);const a={parse:o.Qc,format:o.WU,resolve:o.DB};function l(e){if("string"!==typeof e)throw new TypeError(`Path must be a string. Received ${JSON.stringify(e)}`)}function c(e){const t=e.split("?")[0];return t.split("#")[0]}function u(e){return e.replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}function d(e,t,n){return e.replace(new RegExp(u(t),"g"),n)}function h(e,t){let n="",r=0,i=-1,s=0,o=-1;for(let a=0;a<=e.length;++a){if(a2){const e=n.lastIndexOf("/");if(e!==n.length-1){-1===e?(n="",r=0):(n=n.slice(0,e),r=n.length-1-n.lastIndexOf("/")),i=a,s=0;continue}}else if(2===n.length||1===n.length){n="",r=0,i=a,s=0;continue}t&&(n.length>0?n+="/..":n="..",r=2)}else n.length>0?n+=`/${e.slice(i+1,a)}`:n=e.slice(i+1,a),r=a-i-1;i=a,s=0}else 46===o&&-1!==s?++s:s=-1}return n}const p={toPosix(e){return d(e,"\\","/")},isUrl(e){return/^https?:/.test(this.toPosix(e))},isDataUrl(e){return/^data:([a-z]+\/[a-z0-9-+.]+(;[a-z0-9-.!#$%*+.{}|~`]+=[a-z0-9-.!#$%*+.{}()_|~`]+)*)?(;base64)?,([a-z0-9!$&',()*+;=\-._~:@\/?%\s<>]*?)$/i.test(e)},hasProtocol(e){return/^[^/:]+:\//.test(this.toPosix(e))},getProtocol(e){l(e),e=this.toPosix(e);let t="";const n=/^file:\/\/\//.exec(e),r=/^[^/:]+:\/\//.exec(e),i=/^[^/:]+:\//.exec(e);if(n||r||i){const s=n?.[0]||r?.[0]||i?.[0];t=s,e=e.slice(s.length)}return t},toAbsolute(e,t,n){if(this.isDataUrl(e))return e;const i=c(this.toPosix(t??r.Xd.ADAPTER.getBaseUrl())),s=c(this.toPosix(n??this.rootname(i)));if(l(e),e=this.toPosix(e),e.startsWith("/"))return p.join(s,e.slice(1));const o=this.isAbsolute(e)?e:this.join(i,e);return o},normalize(e){if(e=this.toPosix(e),l(e),0===e.length)return".";let t="";const n=e.startsWith("/");this.hasProtocol(e)&&(t=this.rootname(e),e=e.slice(t.length));const r=e.endsWith("/");return e=h(e,!1),e.length>0&&r&&(e+="/"),n?`/${e}`:t+e},isAbsolute(e){return l(e),e=this.toPosix(e),!!this.hasProtocol(e)||e.startsWith("/")},join(...e){if(0===e.length)return".";let t;for(let n=0;n0)if(void 0===t)t=r;else{const i=e[n-1]??"";this.extname(i)?t+=`/../${r}`:t+=`/${r}`}}return void 0===t?".":this.normalize(t)},dirname(e){if(l(e),0===e.length)return".";e=this.toPosix(e);let t=e.charCodeAt(0);const n=47===t;let r=-1,i=!0;const s=this.getProtocol(e),o=e;e=e.slice(s.length);for(let a=e.length-1;a>=1;--a)if(t=e.charCodeAt(a),47===t){if(!i){r=a;break}}else i=!1;return-1===r?n?"/":this.isUrl(o)?s+e:s:n&&1===r?"//":s+e.slice(0,r)},rootname(e){l(e),e=this.toPosix(e);let t="";if(t=e.startsWith("/")?"/":this.getProtocol(e),this.isUrl(e)){const n=e.indexOf("/",t.length);t=-1!==n?e.slice(0,n):e,t.endsWith("/")||(t+="/")}return t},basename(e,t){l(e),t&&l(t),e=c(this.toPosix(e));let n,r=0,i=-1,s=!0;if(void 0!==t&&t.length>0&&t.length<=e.length){if(t.length===e.length&&t===e)return"";let o=t.length-1,a=-1;for(n=e.length-1;n>=0;--n){const l=e.charCodeAt(n);if(47===l){if(!s){r=n+1;break}}else-1===a&&(s=!1,a=n+1),o>=0&&(l===t.charCodeAt(o)?-1===--o&&(i=n):(o=-1,i=a))}return r===i?i=a:-1===i&&(i=e.length),e.slice(r,i)}for(n=e.length-1;n>=0;--n)if(47===e.charCodeAt(n)){if(!s){r=n+1;break}}else-1===i&&(s=!1,i=n+1);return-1===i?"":e.slice(r,i)},extname(e){l(e),e=c(this.toPosix(e));let t=-1,n=0,r=-1,i=!0,s=0;for(let o=e.length-1;o>=0;--o){const a=e.charCodeAt(o);if(47!==a)-1===r&&(i=!1,r=o+1),46===a?-1===t?t=o:1!==s&&(s=1):-1!==t&&(s=-1);else if(!i){n=o+1;break}}return-1===t||-1===r||0===s||1===s&&t===r-1&&t===n+1?"":e.slice(t,r)},parse(e){l(e);const t={root:"",dir:"",base:"",ext:"",name:""};if(0===e.length)return t;e=c(this.toPosix(e));let n=e.charCodeAt(0);const r=this.isAbsolute(e);let i;const s="";t.root=this.rootname(e),i=r||this.hasProtocol(e)?1:0;let o=-1,a=0,u=-1,d=!0,h=e.length-1,p=0;for(;h>=i;--h)if(n=e.charCodeAt(h),47!==n)-1===u&&(d=!1,u=h+1),46===n?-1===o?o=h:1!==p&&(p=1):-1!==o&&(p=-1);else if(!d){a=h+1;break}return-1===o||-1===u||0===p||1===p&&o===u-1&&o===a+1?-1!==u&&(t.base=t.name=0===a&&r?e.slice(1,u):e.slice(a,u)):(0===a&&r?(t.name=e.slice(1,o),t.base=e.slice(1,u)):(t.name=e.slice(a,o),t.base=e.slice(a,u)),t.ext=e.slice(o,u)),t.dir=this.dirname(e),s&&(t.dir=s+t.dir),t},sep:"/",delimiter:":"},f={};function g(e,t,n=3){if(f[t])return;let r=(new Error).stack;"undefined"===typeof r?console.warn("PixiJS Deprecation Warning: ",`${t}\nDeprecated since v${e}`):(r=r.split("\n").splice(n).join("\n"),console.groupCollapsed?(console.groupCollapsed("%cPixiJS Deprecation Warning: %c%s","color:#614108;background:#fffbe6","font-weight:normal;color:#614108;background:#fffbe6",`${t}\nDeprecated since v${e}`),console.warn(r),console.groupEnd()):(console.warn("PixiJS Deprecation Warning: ",`${t}\nDeprecated since v${e}`),console.warn(r))),f[t]=!0}function m(){g("7.0.0","skipHello is deprecated, please use settings.RENDER_OPTIONS.hello")}function b(){g("7.0.0",'sayHello is deprecated, please use Renderer\'s "hello" option')}let _;function y(){return"undefined"===typeof _&&(_=function(){const e={stencil:!0,failIfMajorPerformanceCaveat:r.Xd.FAIL_IF_MAJOR_PERFORMANCE_CAVEAT};try{if(!r.Xd.ADAPTER.getWebGLRenderingContext())return!1;const t=r.Xd.ADAPTER.createCanvas();let n=t.getContext("webgl",e)||t.getContext("experimental-webgl",e);const i=!!n?.getContextAttributes()?.stencil;if(n){const e=n.getExtension("WEBGL_lose_context");e&&e.loseContext()}return n=null,i}catch(t){return!1}}()),_}var v=n(6278);function E(e,t=[]){return g("7.2.0","utils.hex2rgb is deprecated, use Color#toRgbArray instead"),v.I.shared.setValue(e).toRgbArray(t)}function x(e){return g("7.2.0","utils.hex2string is deprecated, use Color#toHex instead"),v.I.shared.setValue(e).toHex()}function S(e){return g("7.2.0","utils.string2hex is deprecated, use Color#toNumber instead"),v.I.shared.setValue(e).toNumber()}function w(e){return g("7.2.0","utils.rgb2hex is deprecated, use Color#toNumber instead"),v.I.shared.setValue(e).toNumber()}var T=n(7361);function A(){const e=[],t=[];for(let r=0;r<32;r++)e[r]=r,t[r]=r;e[T.T$.NORMAL_NPM]=T.T$.NORMAL,e[T.T$.ADD_NPM]=T.T$.ADD,e[T.T$.SCREEN_NPM]=T.T$.SCREEN,t[T.T$.NORMAL]=T.T$.NORMAL_NPM,t[T.T$.ADD]=T.T$.ADD_NPM,t[T.T$.SCREEN]=T.T$.SCREEN_NPM;const n=[];return n.push(t),n.push(e),n}const C=A();function I(e,t){return C[t?1:0][e]}function R(e,t,n,r=!0){return g("7.2.0","utils.premultiplyRgba has moved to Color.premultiply"),v.I.shared.setValue(e).premultiply(t,r).toArray(n??new Float32Array(4))}function k(e,t){return g("7.2.0","utils.premultiplyTint has moved to Color.toPremultiplied"),v.I.shared.setValue(e).toPremultiplied(t)}function P(e,t,n,r=!0){return g("7.2.0","utils.premultiplyTintToRgba has moved to Color.premultiply"),v.I.shared.setValue(e).premultiply(t,r).toArray(n??new Float32Array(4))}const O=/^\s*data:(?:([\w-]+)\/([\w+.-]+))?(?:;charset=([\w-]+))?(?:;(base64))?,(.*)/i;function N(e,t=null){const n=6*e;if(t=t||new Uint16Array(n),t.length!==n)throw new Error(`Out buffer length is incorrect, got ${t.length} and expected ${n}`);for(let r=0,i=0;r>>1,e|=e>>>2,e|=e>>>4,e|=e>>>8,e|=e>>>16,e+1}function B(e){return!(e&e-1)&&!!e}function U(e){let t=(e>65535?1:0)<<4;e>>>=t;let n=(e>255?1:0)<<3;return e>>>=n,t|=n,n=(e>15?1:0)<<2,e>>>=n,t|=n,n=(e>3?1:0)<<1,e>>>=n,t|=n,t|e>>1}function G(e,t,n){const r=e.length;let i;if(t>=r||0===n)return;n=t+n>r?r-t:n;const s=r-n;for(i=t;it=>{const n=i.call(t);return e[n]||(e[n]=n.slice(8,-1).toLowerCase())})(Object.create(null)),a=e=>(e=e.toLowerCase(),t=>o(t)===e),l=e=>t=>typeof t===e,{isArray:c}=Array,u=l("undefined");function d(e){return null!==e&&!u(e)&&null!==e.constructor&&!u(e.constructor)&&g(e.constructor.isBuffer)&&e.constructor.isBuffer(e)}const h=a("ArrayBuffer");function p(e){let t;return t="undefined"!==typeof ArrayBuffer&&ArrayBuffer.isView?ArrayBuffer.isView(e):e&&e.buffer&&h(e.buffer),t}const f=l("string"),g=l("function"),m=l("number"),b=e=>null!==e&&"object"===typeof e,_=e=>!0===e||!1===e,y=e=>{if("object"!==o(e))return!1;const t=s(e);return(null===t||t===Object.prototype||null===Object.getPrototypeOf(t))&&!(Symbol.toStringTag in e)&&!(Symbol.iterator in e)},v=a("Date"),E=a("File"),x=a("Blob"),S=a("FileList"),w=e=>b(e)&&g(e.pipe),T=e=>{let t;return e&&("function"===typeof FormData&&e instanceof FormData||g(e.append)&&("formdata"===(t=o(e))||"object"===t&&g(e.toString)&&"[object FormData]"===e.toString()))},A=a("URLSearchParams"),C=e=>e.trim?e.trim():e.replace(/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g,"");function I(e,t,{allOwnKeys:n=!1}={}){if(null===e||"undefined"===typeof e)return;let r,i;if("object"!==typeof e&&(e=[e]),c(e))for(r=0,i=e.length;r0)if(r=n[i],t===r.toLowerCase())return r;return null}const k=(()=>"undefined"!==typeof globalThis?globalThis:"undefined"!==typeof self?self:"undefined"!==typeof window?window:global)(),P=e=>!u(e)&&e!==k;function O(){const{caseless:e}=P(this)&&this||{},t={},n=(n,r)=>{const i=e&&R(t,r)||r;y(t[i])&&y(n)?t[i]=O(t[i],n):y(n)?t[i]=O({},n):c(n)?t[i]=n.slice():t[i]=n};for(let r=0,i=arguments.length;r(I(t,((t,i)=>{n&&g(t)?e[i]=r(t,n):e[i]=t}),{allOwnKeys:i}),e),M=e=>(65279===e.charCodeAt(0)&&(e=e.slice(1)),e),D=(e,t,n,r)=>{e.prototype=Object.create(t.prototype,r),e.prototype.constructor=e,Object.defineProperty(e,"super",{value:t.prototype}),n&&Object.assign(e.prototype,n)},L=(e,t,n,r)=>{let i,o,a;const l={};if(t=t||{},null==e)return t;do{i=Object.getOwnPropertyNames(e),o=i.length;while(o-- >0)a=i[o],r&&!r(a,e,t)||l[a]||(t[a]=e[a],l[a]=!0);e=!1!==n&&s(e)}while(e&&(!n||n(e,t))&&e!==Object.prototype);return t},F=(e,t,n)=>{e=String(e),(void 0===n||n>e.length)&&(n=e.length),n-=t.length;const r=e.indexOf(t,n);return-1!==r&&r===n},B=e=>{if(!e)return null;if(c(e))return e;let t=e.length;if(!m(t))return null;const n=new Array(t);while(t-- >0)n[t]=e[t];return n},U=(e=>t=>e&&t instanceof e)("undefined"!==typeof Uint8Array&&s(Uint8Array)),G=(e,t)=>{const n=e&&e[Symbol.iterator],r=n.call(e);let i;while((i=r.next())&&!i.done){const n=i.value;t.call(e,n[0],n[1])}},$=(e,t)=>{let n;const r=[];while(null!==(n=e.exec(t)))r.push(n);return r},z=a("HTMLFormElement"),H=e=>e.toLowerCase().replace(/[-_\s]([a-z\d])(\w*)/g,(function(e,t,n){return t.toUpperCase()+n})),V=(({hasOwnProperty:e})=>(t,n)=>e.call(t,n))(Object.prototype),j=a("RegExp"),W=(e,t)=>{const n=Object.getOwnPropertyDescriptors(e),r={};I(n,((n,i)=>{!1!==t(n,i,e)&&(r[i]=n)})),Object.defineProperties(e,r)},q=e=>{W(e,((t,n)=>{if(g(e)&&-1!==["arguments","caller","callee"].indexOf(n))return!1;const r=e[n];g(r)&&(t.enumerable=!1,"writable"in t?t.writable=!1:t.set||(t.set=()=>{throw Error("Can not rewrite read-only method '"+n+"'")}))}))},X=(e,t)=>{const n={},r=e=>{e.forEach((e=>{n[e]=!0}))};return c(e)?r(e):r(String(e).split(t)),n},Y=()=>{},K=(e,t)=>(e=+e,Number.isFinite(e)?e:t),Z="abcdefghijklmnopqrstuvwxyz",Q="0123456789",J={DIGIT:Q,ALPHA:Z,ALPHA_DIGIT:Z+Z.toUpperCase()+Q},ee=(e=16,t=J.ALPHA_DIGIT)=>{let n="";const{length:r}=t;while(e--)n+=t[Math.random()*r|0];return n};function te(e){return!!(e&&g(e.append)&&"FormData"===e[Symbol.toStringTag]&&e[Symbol.iterator])}const ne=e=>{const t=new Array(10),n=(e,r)=>{if(b(e)){if(t.indexOf(e)>=0)return;if(!("toJSON"in e)){t[r]=e;const i=c(e)?[]:{};return I(e,((e,t)=>{const s=n(e,r+1);!u(s)&&(i[t]=s)})),t[r]=void 0,i}}return e};return n(e,0)},re=a("AsyncFunction"),ie=e=>e&&(b(e)||g(e))&&g(e.then)&&g(e.catch);var se={isArray:c,isArrayBuffer:h,isBuffer:d,isFormData:T,isArrayBufferView:p,isString:f,isNumber:m,isBoolean:_,isObject:b,isPlainObject:y,isUndefined:u,isDate:v,isFile:E,isBlob:x,isRegExp:j,isFunction:g,isStream:w,isURLSearchParams:A,isTypedArray:U,isFileList:S,forEach:I,merge:O,extend:N,trim:C,stripBOM:M,inherits:D,toFlatObject:L,kindOf:o,kindOfTest:a,endsWith:F,toArray:B,forEachEntry:G,matchAll:$,isHTMLForm:z,hasOwnProperty:V,hasOwnProp:V,reduceDescriptors:W,freezeMethods:q,toObjectSet:X,toCamelCase:H,noop:Y,toFiniteNumber:K,findKey:R,global:k,isContextDefined:P,ALPHABET:J,generateString:ee,isSpecCompliantForm:te,toJSONObject:ne,isAsyncFn:re,isThenable:ie};function oe(e,t,n,r,i){Error.call(this),Error.captureStackTrace?Error.captureStackTrace(this,this.constructor):this.stack=(new Error).stack,this.message=e,this.name="AxiosError",t&&(this.code=t),n&&(this.config=n),r&&(this.request=r),i&&(this.response=i)}se.inherits(oe,Error,{toJSON:function(){return{message:this.message,name:this.name,description:this.description,number:this.number,fileName:this.fileName,lineNumber:this.lineNumber,columnNumber:this.columnNumber,stack:this.stack,config:se.toJSONObject(this.config),code:this.code,status:this.response&&this.response.status?this.response.status:null}}});const ae=oe.prototype,le={};["ERR_BAD_OPTION_VALUE","ERR_BAD_OPTION","ECONNABORTED","ETIMEDOUT","ERR_NETWORK","ERR_FR_TOO_MANY_REDIRECTS","ERR_DEPRECATED","ERR_BAD_RESPONSE","ERR_BAD_REQUEST","ERR_CANCELED","ERR_NOT_SUPPORT","ERR_INVALID_URL"].forEach((e=>{le[e]={value:e}})),Object.defineProperties(oe,le),Object.defineProperty(ae,"isAxiosError",{value:!0}),oe.from=(e,t,n,r,i,s)=>{const o=Object.create(ae);return se.toFlatObject(e,o,(function(e){return e!==Error.prototype}),(e=>"isAxiosError"!==e)),oe.call(o,e.message,t,n,r,i),o.cause=e,o.name=e.name,s&&Object.assign(o,s),o};var ce=oe,ue=null;function de(e){return se.isPlainObject(e)||se.isArray(e)}function he(e){return se.endsWith(e,"[]")?e.slice(0,-2):e}function pe(e,t,n){return e?e.concat(t).map((function(e,t){return e=he(e),!n&&t?"["+e+"]":e})).join(n?".":""):t}function fe(e){return se.isArray(e)&&!e.some(de)}const ge=se.toFlatObject(se,{},null,(function(e){return/^is[A-Z]/.test(e)}));function me(e,t,n){if(!se.isObject(e))throw new TypeError("target must be an object");t=t||new(ue||FormData),n=se.toFlatObject(n,{metaTokens:!0,dots:!1,indexes:!1},!1,(function(e,t){return!se.isUndefined(t[e])}));const r=n.metaTokens,i=n.visitor||u,s=n.dots,o=n.indexes,a=n.Blob||"undefined"!==typeof Blob&&Blob,l=a&&se.isSpecCompliantForm(t);if(!se.isFunction(i))throw new TypeError("visitor must be a function");function c(e){if(null===e)return"";if(se.isDate(e))return e.toISOString();if(!l&&se.isBlob(e))throw new ce("Blob is not supported. Use a Buffer instead.");return se.isArrayBuffer(e)||se.isTypedArray(e)?l&&"function"===typeof Blob?new Blob([e]):Buffer.from(e):e}function u(e,n,i){let a=e;if(e&&!i&&"object"===typeof e)if(se.endsWith(n,"{}"))n=r?n:n.slice(0,-2),e=JSON.stringify(e);else if(se.isArray(e)&&fe(e)||(se.isFileList(e)||se.endsWith(n,"[]"))&&(a=se.toArray(e)))return n=he(n),a.forEach((function(e,r){!se.isUndefined(e)&&null!==e&&t.append(!0===o?pe([n],r,s):null===o?n:n+"[]",c(e))})),!1;return!!de(e)||(t.append(pe(i,n,s),c(e)),!1)}const d=[],h=Object.assign(ge,{defaultVisitor:u,convertValue:c,isVisitable:de});function p(e,n){if(!se.isUndefined(e)){if(-1!==d.indexOf(e))throw Error("Circular reference detected in "+n.join("."));d.push(e),se.forEach(e,(function(e,r){const s=!(se.isUndefined(e)||null===e)&&i.call(t,e,se.isString(r)?r.trim():r,n,h);!0===s&&p(e,n?n.concat(r):[r])})),d.pop()}}if(!se.isObject(e))throw new TypeError("data must be an object");return p(e),t}var be=me;function _e(e){const t={"!":"%21","'":"%27","(":"%28",")":"%29","~":"%7E","%20":"+","%00":"\0"};return encodeURIComponent(e).replace(/[!'()~]|%20|%00/g,(function(e){return t[e]}))}function ye(e,t){this._pairs=[],e&&be(e,this,t)}const ve=ye.prototype;ve.append=function(e,t){this._pairs.push([e,t])},ve.toString=function(e){const t=e?function(t){return e.call(this,t,_e)}:_e;return this._pairs.map((function(e){return t(e[0])+"="+t(e[1])}),"").join("&")};var Ee=ye;function xe(e){return encodeURIComponent(e).replace(/%3A/gi,":").replace(/%24/g,"$").replace(/%2C/gi,",").replace(/%20/g,"+").replace(/%5B/gi,"[").replace(/%5D/gi,"]")}function Se(e,t,n){if(!t)return e;const r=n&&n.encode||xe,i=n&&n.serialize;let s;if(s=i?i(t,n):se.isURLSearchParams(t)?t.toString():new Ee(t,n).toString(r),s){const t=e.indexOf("#");-1!==t&&(e=e.slice(0,t)),e+=(-1===e.indexOf("?")?"?":"&")+s}return e}class we{constructor(){this.handlers=[]}use(e,t,n){return this.handlers.push({fulfilled:e,rejected:t,synchronous:!!n&&n.synchronous,runWhen:n?n.runWhen:null}),this.handlers.length-1}eject(e){this.handlers[e]&&(this.handlers[e]=null)}clear(){this.handlers&&(this.handlers=[])}forEach(e){se.forEach(this.handlers,(function(t){null!==t&&e(t)}))}}var Te=we,Ae={silentJSONParsing:!0,forcedJSONParsing:!0,clarifyTimeoutError:!1},Ce="undefined"!==typeof URLSearchParams?URLSearchParams:Ee,Ie="undefined"!==typeof FormData?FormData:null,Re="undefined"!==typeof Blob?Blob:null;const ke=(()=>{let e;return("undefined"===typeof navigator||"ReactNative"!==(e=navigator.product)&&"NativeScript"!==e&&"NS"!==e)&&("undefined"!==typeof window&&"undefined"!==typeof document)})(),Pe=(()=>"undefined"!==typeof WorkerGlobalScope&&self instanceof WorkerGlobalScope&&"function"===typeof self.importScripts)();var Oe={isBrowser:!0,classes:{URLSearchParams:Ce,FormData:Ie,Blob:Re},isStandardBrowserEnv:ke,isStandardBrowserWebWorkerEnv:Pe,protocols:["http","https","file","blob","url","data"]};function Ne(e,t){return be(e,new Oe.classes.URLSearchParams,Object.assign({visitor:function(e,t,n,r){return Oe.isNode&&se.isBuffer(e)?(this.append(t,e.toString("base64")),!1):r.defaultVisitor.apply(this,arguments)}},t))}function Me(e){return se.matchAll(/\w+|\[(\w*)]/g,e).map((e=>"[]"===e[0]?"":e[1]||e[0]))}function De(e){const t={},n=Object.keys(e);let r;const i=n.length;let s;for(r=0;r=e.length;if(s=!s&&se.isArray(r)?r.length:s,a)return se.hasOwnProp(r,s)?r[s]=[r[s],n]:r[s]=n,!o;r[s]&&se.isObject(r[s])||(r[s]=[]);const l=t(e,n,r[s],i);return l&&se.isArray(r[s])&&(r[s]=De(r[s])),!o}if(se.isFormData(e)&&se.isFunction(e.entries)){const n={};return se.forEachEntry(e,((e,r)=>{t(Me(e),r,n,0)})),n}return null}var Fe=Le;const Be={"Content-Type":void 0};function Ue(e,t,n){if(se.isString(e))try{return(t||JSON.parse)(e),se.trim(e)}catch(r){if("SyntaxError"!==r.name)throw r}return(n||JSON.stringify)(e)}const Ge={transitional:Ae,adapter:["xhr","http"],transformRequest:[function(e,t){const n=t.getContentType()||"",r=n.indexOf("application/json")>-1,i=se.isObject(e);i&&se.isHTMLForm(e)&&(e=new FormData(e));const s=se.isFormData(e);if(s)return r&&r?JSON.stringify(Fe(e)):e;if(se.isArrayBuffer(e)||se.isBuffer(e)||se.isStream(e)||se.isFile(e)||se.isBlob(e))return e;if(se.isArrayBufferView(e))return e.buffer;if(se.isURLSearchParams(e))return t.setContentType("application/x-www-form-urlencoded;charset=utf-8",!1),e.toString();let o;if(i){if(n.indexOf("application/x-www-form-urlencoded")>-1)return Ne(e,this.formSerializer).toString();if((o=se.isFileList(e))||n.indexOf("multipart/form-data")>-1){const t=this.env&&this.env.FormData;return be(o?{"files[]":e}:e,t&&new t,this.formSerializer)}}return i||r?(t.setContentType("application/json",!1),Ue(e)):e}],transformResponse:[function(e){const t=this.transitional||Ge.transitional,n=t&&t.forcedJSONParsing,r="json"===this.responseType;if(e&&se.isString(e)&&(n&&!this.responseType||r)){const n=t&&t.silentJSONParsing,s=!n&&r;try{return JSON.parse(e)}catch(i){if(s){if("SyntaxError"===i.name)throw ce.from(i,ce.ERR_BAD_RESPONSE,this,null,this.response);throw i}}}return e}],timeout:0,xsrfCookieName:"XSRF-TOKEN",xsrfHeaderName:"X-XSRF-TOKEN",maxContentLength:-1,maxBodyLength:-1,env:{FormData:Oe.classes.FormData,Blob:Oe.classes.Blob},validateStatus:function(e){return e>=200&&e<300},headers:{common:{Accept:"application/json, text/plain, */*"}}};se.forEach(["delete","get","head"],(function(e){Ge.headers[e]={}})),se.forEach(["post","put","patch"],(function(e){Ge.headers[e]=se.merge(Be)}));var $e=Ge;const ze=se.toObjectSet(["age","authorization","content-length","content-type","etag","expires","from","host","if-modified-since","if-unmodified-since","last-modified","location","max-forwards","proxy-authorization","referer","retry-after","user-agent"]);var He=e=>{const t={};let n,r,i;return e&&e.split("\n").forEach((function(e){i=e.indexOf(":"),n=e.substring(0,i).trim().toLowerCase(),r=e.substring(i+1).trim(),!n||t[n]&&ze[n]||("set-cookie"===n?t[n]?t[n].push(r):t[n]=[r]:t[n]=t[n]?t[n]+", "+r:r)})),t};const Ve=Symbol("internals");function je(e){return e&&String(e).trim().toLowerCase()}function We(e){return!1===e||null==e?e:se.isArray(e)?e.map(We):String(e)}function qe(e){const t=Object.create(null),n=/([^\s,;=]+)\s*(?:=\s*([^,;]+))?/g;let r;while(r=n.exec(e))t[r[1]]=r[2];return t}const Xe=e=>/^[-_a-zA-Z0-9^`|~,!#$%&'*+.]+$/.test(e.trim());function Ye(e,t,n,r,i){return se.isFunction(r)?r.call(this,t,n):(i&&(t=n),se.isString(t)?se.isString(r)?-1!==t.indexOf(r):se.isRegExp(r)?r.test(t):void 0:void 0)}function Ke(e){return e.trim().toLowerCase().replace(/([a-z\d])(\w*)/g,((e,t,n)=>t.toUpperCase()+n))}function Ze(e,t){const n=se.toCamelCase(" "+t);["get","set","has"].forEach((r=>{Object.defineProperty(e,r+n,{value:function(e,n,i){return this[r].call(this,t,e,n,i)},configurable:!0})}))}class Qe{constructor(e){e&&this.set(e)}set(e,t,n){const r=this;function i(e,t,n){const i=je(t);if(!i)throw new Error("header name must be a non-empty string");const s=se.findKey(r,i);(!s||void 0===r[s]||!0===n||void 0===n&&!1!==r[s])&&(r[s||t]=We(e))}const s=(e,t)=>se.forEach(e,((e,n)=>i(e,n,t)));return se.isPlainObject(e)||e instanceof this.constructor?s(e,t):se.isString(e)&&(e=e.trim())&&!Xe(e)?s(He(e),t):null!=e&&i(t,e,n),this}get(e,t){if(e=je(e),e){const n=se.findKey(this,e);if(n){const e=this[n];if(!t)return e;if(!0===t)return qe(e);if(se.isFunction(t))return t.call(this,e,n);if(se.isRegExp(t))return t.exec(e);throw new TypeError("parser must be boolean|regexp|function")}}}has(e,t){if(e=je(e),e){const n=se.findKey(this,e);return!(!n||void 0===this[n]||t&&!Ye(this,this[n],n,t))}return!1}delete(e,t){const n=this;let r=!1;function i(e){if(e=je(e),e){const i=se.findKey(n,e);!i||t&&!Ye(n,n[i],i,t)||(delete n[i],r=!0)}}return se.isArray(e)?e.forEach(i):i(e),r}clear(e){const t=Object.keys(this);let n=t.length,r=!1;while(n--){const i=t[n];e&&!Ye(this,this[i],i,e,!0)||(delete this[i],r=!0)}return r}normalize(e){const t=this,n={};return se.forEach(this,((r,i)=>{const s=se.findKey(n,i);if(s)return t[s]=We(r),void delete t[i];const o=e?Ke(i):String(i).trim();o!==i&&delete t[i],t[o]=We(r),n[o]=!0})),this}concat(...e){return this.constructor.concat(this,...e)}toJSON(e){const t=Object.create(null);return se.forEach(this,((n,r)=>{null!=n&&!1!==n&&(t[r]=e&&se.isArray(n)?n.join(", "):n)})),t}[Symbol.iterator](){return Object.entries(this.toJSON())[Symbol.iterator]()}toString(){return Object.entries(this.toJSON()).map((([e,t])=>e+": "+t)).join("\n")}get[Symbol.toStringTag](){return"AxiosHeaders"}static from(e){return e instanceof this?e:new this(e)}static concat(e,...t){const n=new this(e);return t.forEach((e=>n.set(e))),n}static accessor(e){const t=this[Ve]=this[Ve]={accessors:{}},n=t.accessors,r=this.prototype;function i(e){const t=je(e);n[t]||(Ze(r,e),n[t]=!0)}return se.isArray(e)?e.forEach(i):i(e),this}}Qe.accessor(["Content-Type","Content-Length","Accept","Accept-Encoding","User-Agent","Authorization"]),se.freezeMethods(Qe.prototype),se.freezeMethods(Qe);var Je=Qe;function et(e,t){const n=this||$e,r=t||n,i=Je.from(r.headers);let s=r.data;return se.forEach(e,(function(e){s=e.call(n,s,i.normalize(),t?t.status:void 0)})),i.normalize(),s}function tt(e){return!(!e||!e.__CANCEL__)}function nt(e,t,n){ce.call(this,null==e?"canceled":e,ce.ERR_CANCELED,t,n),this.name="CanceledError"}se.inherits(nt,ce,{__CANCEL__:!0});var rt=nt;function it(e,t,n){const r=n.config.validateStatus;n.status&&r&&!r(n.status)?t(new ce("Request failed with status code "+n.status,[ce.ERR_BAD_REQUEST,ce.ERR_BAD_RESPONSE][Math.floor(n.status/100)-4],n.config,n.request,n)):e(n)}var st=Oe.isStandardBrowserEnv?function(){return{write:function(e,t,n,r,i,s){const o=[];o.push(e+"="+encodeURIComponent(t)),se.isNumber(n)&&o.push("expires="+new Date(n).toGMTString()),se.isString(r)&&o.push("path="+r),se.isString(i)&&o.push("domain="+i),!0===s&&o.push("secure"),document.cookie=o.join("; ")},read:function(e){const t=document.cookie.match(new RegExp("(^|;\\s*)("+e+")=([^;]*)"));return t?decodeURIComponent(t[3]):null},remove:function(e){this.write(e,"",Date.now()-864e5)}}}():function(){return{write:function(){},read:function(){return null},remove:function(){}}}();function ot(e){return/^([a-z][a-z\d+\-.]*:)?\/\//i.test(e)}function at(e,t){return t?e.replace(/\/+$/,"")+"/"+t.replace(/^\/+/,""):e}function lt(e,t){return e&&!ot(t)?at(e,t):t}var ct=Oe.isStandardBrowserEnv?function(){const e=/(msie|trident)/i.test(navigator.userAgent),t=document.createElement("a");let n;function r(n){let r=n;return e&&(t.setAttribute("href",r),r=t.href),t.setAttribute("href",r),{href:t.href,protocol:t.protocol?t.protocol.replace(/:$/,""):"",host:t.host,search:t.search?t.search.replace(/^\?/,""):"",hash:t.hash?t.hash.replace(/^#/,""):"",hostname:t.hostname,port:t.port,pathname:"/"===t.pathname.charAt(0)?t.pathname:"/"+t.pathname}}return n=r(window.location.href),function(e){const t=se.isString(e)?r(e):e;return t.protocol===n.protocol&&t.host===n.host}}():function(){return function(){return!0}}();function ut(e){const t=/^([-+\w]{1,25})(:?\/\/|:)/.exec(e);return t&&t[1]||""}function dt(e,t){e=e||10;const n=new Array(e),r=new Array(e);let i,s=0,o=0;return t=void 0!==t?t:1e3,function(a){const l=Date.now(),c=r[o];i||(i=l),n[s]=a,r[s]=l;let u=o,d=0;while(u!==s)d+=n[u++],u%=e;if(s=(s+1)%e,s===o&&(o=(o+1)%e),l-i{const s=i.loaded,o=i.lengthComputable?i.total:void 0,a=s-n,l=r(a),c=s<=o;n=s;const u={loaded:s,total:o,progress:o?s/o:void 0,bytes:a,rate:l||void 0,estimated:l&&o&&c?(o-s)/l:void 0,event:i};u[t?"download":"upload"]=!0,e(u)}}const ft="undefined"!==typeof XMLHttpRequest;var gt=ft&&function(e){return new Promise((function(t,n){let r=e.data;const i=Je.from(e.headers).normalize(),s=e.responseType;let o;function a(){e.cancelToken&&e.cancelToken.unsubscribe(o),e.signal&&e.signal.removeEventListener("abort",o)}se.isFormData(r)&&(Oe.isStandardBrowserEnv||Oe.isStandardBrowserWebWorkerEnv?i.setContentType(!1):i.setContentType("multipart/form-data;",!1));let l=new XMLHttpRequest;if(e.auth){const t=e.auth.username||"",n=e.auth.password?unescape(encodeURIComponent(e.auth.password)):"";i.set("Authorization","Basic "+btoa(t+":"+n))}const c=lt(e.baseURL,e.url);function u(){if(!l)return;const r=Je.from("getAllResponseHeaders"in l&&l.getAllResponseHeaders()),i=s&&"text"!==s&&"json"!==s?l.response:l.responseText,o={data:i,status:l.status,statusText:l.statusText,headers:r,config:e,request:l};it((function(e){t(e),a()}),(function(e){n(e),a()}),o),l=null}if(l.open(e.method.toUpperCase(),Se(c,e.params,e.paramsSerializer),!0),l.timeout=e.timeout,"onloadend"in l?l.onloadend=u:l.onreadystatechange=function(){l&&4===l.readyState&&(0!==l.status||l.responseURL&&0===l.responseURL.indexOf("file:"))&&setTimeout(u)},l.onabort=function(){l&&(n(new ce("Request aborted",ce.ECONNABORTED,e,l)),l=null)},l.onerror=function(){n(new ce("Network Error",ce.ERR_NETWORK,e,l)),l=null},l.ontimeout=function(){let t=e.timeout?"timeout of "+e.timeout+"ms exceeded":"timeout exceeded";const r=e.transitional||Ae;e.timeoutErrorMessage&&(t=e.timeoutErrorMessage),n(new ce(t,r.clarifyTimeoutError?ce.ETIMEDOUT:ce.ECONNABORTED,e,l)),l=null},Oe.isStandardBrowserEnv){const t=(e.withCredentials||ct(c))&&e.xsrfCookieName&&st.read(e.xsrfCookieName);t&&i.set(e.xsrfHeaderName,t)}void 0===r&&i.setContentType(null),"setRequestHeader"in l&&se.forEach(i.toJSON(),(function(e,t){l.setRequestHeader(t,e)})),se.isUndefined(e.withCredentials)||(l.withCredentials=!!e.withCredentials),s&&"json"!==s&&(l.responseType=e.responseType),"function"===typeof e.onDownloadProgress&&l.addEventListener("progress",pt(e.onDownloadProgress,!0)),"function"===typeof e.onUploadProgress&&l.upload&&l.upload.addEventListener("progress",pt(e.onUploadProgress)),(e.cancelToken||e.signal)&&(o=t=>{l&&(n(!t||t.type?new rt(null,e,l):t),l.abort(),l=null)},e.cancelToken&&e.cancelToken.subscribe(o),e.signal&&(e.signal.aborted?o():e.signal.addEventListener("abort",o)));const d=ut(c);d&&-1===Oe.protocols.indexOf(d)?n(new ce("Unsupported protocol "+d+":",ce.ERR_BAD_REQUEST,e)):l.send(r||null)}))};const mt={http:ue,xhr:gt};se.forEach(mt,((e,t)=>{if(e){try{Object.defineProperty(e,"name",{value:t})}catch(n){}Object.defineProperty(e,"adapterName",{value:t})}}));var bt={getAdapter:e=>{e=se.isArray(e)?e:[e];const{length:t}=e;let n,r;for(let i=0;ie instanceof Je?e.toJSON():e;function Et(e,t){t=t||{};const n={};function r(e,t,n){return se.isPlainObject(e)&&se.isPlainObject(t)?se.merge.call({caseless:n},e,t):se.isPlainObject(t)?se.merge({},t):se.isArray(t)?t.slice():t}function i(e,t,n){return se.isUndefined(t)?se.isUndefined(e)?void 0:r(void 0,e,n):r(e,t,n)}function s(e,t){if(!se.isUndefined(t))return r(void 0,t)}function o(e,t){return se.isUndefined(t)?se.isUndefined(e)?void 0:r(void 0,e):r(void 0,t)}function a(n,i,s){return s in t?r(n,i):s in e?r(void 0,n):void 0}const l={url:s,method:s,data:s,baseURL:o,transformRequest:o,transformResponse:o,paramsSerializer:o,timeout:o,timeoutMessage:o,withCredentials:o,adapter:o,responseType:o,xsrfCookieName:o,xsrfHeaderName:o,onUploadProgress:o,onDownloadProgress:o,decompress:o,maxContentLength:o,maxBodyLength:o,beforeRedirect:o,transport:o,httpAgent:o,httpsAgent:o,cancelToken:o,socketPath:o,responseEncoding:o,validateStatus:a,headers:(e,t)=>i(vt(e),vt(t),!0)};return se.forEach(Object.keys(Object.assign({},e,t)),(function(r){const s=l[r]||i,o=s(e[r],t[r],r);se.isUndefined(o)&&s!==a||(n[r]=o)})),n}const xt="1.4.0",St={};["object","boolean","number","function","string","symbol"].forEach(((e,t)=>{St[e]=function(n){return typeof n===e||"a"+(t<1?"n ":" ")+e}}));const wt={};function Tt(e,t,n){if("object"!==typeof e)throw new ce("options must be an object",ce.ERR_BAD_OPTION_VALUE);const r=Object.keys(e);let i=r.length;while(i-- >0){const s=r[i],o=t[s];if(o){const t=e[s],n=void 0===t||o(t,s,e);if(!0!==n)throw new ce("option "+s+" must be "+n,ce.ERR_BAD_OPTION_VALUE)}else if(!0!==n)throw new ce("Unknown option "+s,ce.ERR_BAD_OPTION)}}St.transitional=function(e,t,n){function r(e,t){return"[Axios v"+xt+"] Transitional option '"+e+"'"+t+(n?". "+n:"")}return(n,i,s)=>{if(!1===e)throw new ce(r(i," has been removed"+(t?" in "+t:"")),ce.ERR_DEPRECATED);return t&&!wt[i]&&(wt[i]=!0,console.warn(r(i," has been deprecated since v"+t+" and will be removed in the near future"))),!e||e(n,i,s)}};var At={assertOptions:Tt,validators:St};const Ct=At.validators;class It{constructor(e){this.defaults=e,this.interceptors={request:new Te,response:new Te}}request(e,t){"string"===typeof e?(t=t||{},t.url=e):t=e||{},t=Et(this.defaults,t);const{transitional:n,paramsSerializer:r,headers:i}=t;let s;void 0!==n&&At.assertOptions(n,{silentJSONParsing:Ct.transitional(Ct.boolean),forcedJSONParsing:Ct.transitional(Ct.boolean),clarifyTimeoutError:Ct.transitional(Ct.boolean)},!1),null!=r&&(se.isFunction(r)?t.paramsSerializer={serialize:r}:At.assertOptions(r,{encode:Ct.function,serialize:Ct.function},!0)),t.method=(t.method||this.defaults.method||"get").toLowerCase(),s=i&&se.merge(i.common,i[t.method]),s&&se.forEach(["delete","get","head","post","put","patch","common"],(e=>{delete i[e]})),t.headers=Je.concat(s,i);const o=[];let a=!0;this.interceptors.request.forEach((function(e){"function"===typeof e.runWhen&&!1===e.runWhen(t)||(a=a&&e.synchronous,o.unshift(e.fulfilled,e.rejected))}));const l=[];let c;this.interceptors.response.forEach((function(e){l.push(e.fulfilled,e.rejected)}));let u,d=0;if(!a){const e=[yt.bind(this),void 0];e.unshift.apply(e,o),e.push.apply(e,l),u=e.length,c=Promise.resolve(t);while(d{if(!n._listeners)return;let t=n._listeners.length;while(t-- >0)n._listeners[t](e);n._listeners=null})),this.promise.then=e=>{let t;const r=new Promise((e=>{n.subscribe(e),t=e})).then(e);return r.cancel=function(){n.unsubscribe(t)},r},e((function(e,r,i){n.reason||(n.reason=new rt(e,r,i),t(n.reason))}))}throwIfRequested(){if(this.reason)throw this.reason}subscribe(e){this.reason?e(this.reason):this._listeners?this._listeners.push(e):this._listeners=[e]}unsubscribe(e){if(!this._listeners)return;const t=this._listeners.indexOf(e);-1!==t&&this._listeners.splice(t,1)}static source(){let e;const t=new kt((function(t){e=t}));return{token:t,cancel:e}}}var Pt=kt;function Ot(e){return function(t){return e.apply(null,t)}}function Nt(e){return se.isObject(e)&&!0===e.isAxiosError}const Mt={Continue:100,SwitchingProtocols:101,Processing:102,EarlyHints:103,Ok:200,Created:201,Accepted:202,NonAuthoritativeInformation:203,NoContent:204,ResetContent:205,PartialContent:206,MultiStatus:207,AlreadyReported:208,ImUsed:226,MultipleChoices:300,MovedPermanently:301,Found:302,SeeOther:303,NotModified:304,UseProxy:305,Unused:306,TemporaryRedirect:307,PermanentRedirect:308,BadRequest:400,Unauthorized:401,PaymentRequired:402,Forbidden:403,NotFound:404,MethodNotAllowed:405,NotAcceptable:406,ProxyAuthenticationRequired:407,RequestTimeout:408,Conflict:409,Gone:410,LengthRequired:411,PreconditionFailed:412,PayloadTooLarge:413,UriTooLong:414,UnsupportedMediaType:415,RangeNotSatisfiable:416,ExpectationFailed:417,ImATeapot:418,MisdirectedRequest:421,UnprocessableEntity:422,Locked:423,FailedDependency:424,TooEarly:425,UpgradeRequired:426,PreconditionRequired:428,TooManyRequests:429,RequestHeaderFieldsTooLarge:431,UnavailableForLegalReasons:451,InternalServerError:500,NotImplemented:501,BadGateway:502,ServiceUnavailable:503,GatewayTimeout:504,HttpVersionNotSupported:505,VariantAlsoNegotiates:506,InsufficientStorage:507,LoopDetected:508,NotExtended:510,NetworkAuthenticationRequired:511};Object.entries(Mt).forEach((([e,t])=>{Mt[t]=e}));var Dt=Mt;function Lt(e){const t=new Rt(e),n=r(Rt.prototype.request,t);return se.extend(n,Rt.prototype,t,{allOwnKeys:!0}),se.extend(n,t,null,{allOwnKeys:!0}),n.create=function(t){return Lt(Et(e,t))},n}const Ft=Lt($e);Ft.Axios=Rt,Ft.CanceledError=rt,Ft.CancelToken=Pt,Ft.isCancel=tt,Ft.VERSION=xt,Ft.toFormData=be,Ft.AxiosError=ce,Ft.Cancel=Ft.CanceledError,Ft.all=function(e){return Promise.all(e)},Ft.spread=Ot,Ft.isAxiosError=Nt,Ft.mergeConfig=Et,Ft.AxiosHeaders=Je,Ft.formToJSON=e=>Fe(se.isHTMLForm(e)?new FormData(e):e),Ft.HttpStatusCode=Dt,Ft.default=Ft;var Bt=Ft},5750:function(e,t,n){"use strict"; -/*! - * @kurkle/color v0.3.2 - * https://github.com/kurkle/color#readme - * (c) 2023 Jukka Kurkela - * Released under the MIT License - */ -function r(e){return e+.5|0}n.d(t,{uw:function(){return ra},kL:function(){return ro},De:function(){return Ro},ST:function(){return Xr},jn:function(){return _o},f$:function(){return aa},od:function(){return vo},Dx:function(){return Oo},u:function(){return Qo}});const i=(e,t,n)=>Math.max(Math.min(e,n),t);function s(e){return i(r(2.55*e),0,255)}function o(e){return i(r(255*e),0,255)}function a(e){return i(r(e/2.55)/100,0,1)}function l(e){return i(r(100*e),0,100)}const c={0:0,1:1,2:2,3:3,4:4,5:5,6:6,7:7,8:8,9:9,A:10,B:11,C:12,D:13,E:14,F:15,a:10,b:11,c:12,d:13,e:14,f:15},u=[..."0123456789ABCDEF"],d=e=>u[15&e],h=e=>u[(240&e)>>4]+u[15&e],p=e=>(240&e)>>4===(15&e),f=e=>p(e.r)&&p(e.g)&&p(e.b)&&p(e.a);function g(e){var t,n=e.length;return"#"===e[0]&&(4===n||5===n?t={r:255&17*c[e[1]],g:255&17*c[e[2]],b:255&17*c[e[3]],a:5===n?17*c[e[4]]:255}:7!==n&&9!==n||(t={r:c[e[1]]<<4|c[e[2]],g:c[e[3]]<<4|c[e[4]],b:c[e[5]]<<4|c[e[6]],a:9===n?c[e[7]]<<4|c[e[8]]:255})),t}const m=(e,t)=>e<255?t(e):"";function b(e){var t=f(e)?d:h;return e?"#"+t(e.r)+t(e.g)+t(e.b)+m(e.a,t):void 0}const _=/^(hsla?|hwb|hsv)\(\s*([-+.e\d]+)(?:deg)?[\s,]+([-+.e\d]+)%[\s,]+([-+.e\d]+)%(?:[\s,]+([-+.e\d]+)(%)?)?\s*\)$/;function y(e,t,n){const r=t*Math.min(n,1-n),i=(t,i=(t+e/30)%12)=>n-r*Math.max(Math.min(i-3,9-i,1),-1);return[i(0),i(8),i(4)]}function v(e,t,n){const r=(r,i=(r+e/60)%6)=>n-n*t*Math.max(Math.min(i,4-i,1),0);return[r(5),r(3),r(1)]}function E(e,t,n){const r=y(e,1,.5);let i;for(t+n>1&&(i=1/(t+n),t*=i,n*=i),i=0;i<3;i++)r[i]*=1-t-n,r[i]+=t;return r}function x(e,t,n,r,i){return e===i?(t-n)/r+(t.5?u/(2-s-o):u/(s+o),l=x(n,r,i,u,s),l=60*l+.5),[0|l,c||0,a]}function w(e,t,n,r){return(Array.isArray(t)?e(t[0],t[1],t[2]):e(t,n,r)).map(o)}function T(e,t,n){return w(y,e,t,n)}function A(e,t,n){return w(E,e,t,n)}function C(e,t,n){return w(v,e,t,n)}function I(e){return(e%360+360)%360}function R(e){const t=_.exec(e);let n,r=255;if(!t)return;t[5]!==n&&(r=t[6]?s(+t[5]):o(+t[5]));const i=I(+t[2]),a=+t[3]/100,l=+t[4]/100;return n="hwb"===t[1]?A(i,a,l):"hsv"===t[1]?C(i,a,l):T(i,a,l),{r:n[0],g:n[1],b:n[2],a:r}}function k(e,t){var n=S(e);n[0]=I(n[0]+t),n=T(n),e.r=n[0],e.g=n[1],e.b=n[2]}function P(e){if(!e)return;const t=S(e),n=t[0],r=l(t[1]),i=l(t[2]);return e.a<255?`hsla(${n}, ${r}%, ${i}%, ${a(e.a)})`:`hsl(${n}, ${r}%, ${i}%)`}const O={x:"dark",Z:"light",Y:"re",X:"blu",W:"gr",V:"medium",U:"slate",A:"ee",T:"ol",S:"or",B:"ra",C:"lateg",D:"ights",R:"in",Q:"turquois",E:"hi",P:"ro",O:"al",N:"le",M:"de",L:"yello",F:"en",K:"ch",G:"arks",H:"ea",I:"ightg",J:"wh"},N={OiceXe:"f0f8ff",antiquewEte:"faebd7",aqua:"ffff",aquamarRe:"7fffd4",azuY:"f0ffff",beige:"f5f5dc",bisque:"ffe4c4",black:"0",blanKedOmond:"ffebcd",Xe:"ff",XeviTet:"8a2be2",bPwn:"a52a2a",burlywood:"deb887",caMtXe:"5f9ea0",KartYuse:"7fff00",KocTate:"d2691e",cSO:"ff7f50",cSnflowerXe:"6495ed",cSnsilk:"fff8dc",crimson:"dc143c",cyan:"ffff",xXe:"8b",xcyan:"8b8b",xgTMnPd:"b8860b",xWay:"a9a9a9",xgYF:"6400",xgYy:"a9a9a9",xkhaki:"bdb76b",xmagFta:"8b008b",xTivegYF:"556b2f",xSange:"ff8c00",xScEd:"9932cc",xYd:"8b0000",xsOmon:"e9967a",xsHgYF:"8fbc8f",xUXe:"483d8b",xUWay:"2f4f4f",xUgYy:"2f4f4f",xQe:"ced1",xviTet:"9400d3",dAppRk:"ff1493",dApskyXe:"bfff",dimWay:"696969",dimgYy:"696969",dodgerXe:"1e90ff",fiYbrick:"b22222",flSOwEte:"fffaf0",foYstWAn:"228b22",fuKsia:"ff00ff",gaRsbSo:"dcdcdc",ghostwEte:"f8f8ff",gTd:"ffd700",gTMnPd:"daa520",Way:"808080",gYF:"8000",gYFLw:"adff2f",gYy:"808080",honeyMw:"f0fff0",hotpRk:"ff69b4",RdianYd:"cd5c5c",Rdigo:"4b0082",ivSy:"fffff0",khaki:"f0e68c",lavFMr:"e6e6fa",lavFMrXsh:"fff0f5",lawngYF:"7cfc00",NmoncEffon:"fffacd",ZXe:"add8e6",ZcSO:"f08080",Zcyan:"e0ffff",ZgTMnPdLw:"fafad2",ZWay:"d3d3d3",ZgYF:"90ee90",ZgYy:"d3d3d3",ZpRk:"ffb6c1",ZsOmon:"ffa07a",ZsHgYF:"20b2aa",ZskyXe:"87cefa",ZUWay:"778899",ZUgYy:"778899",ZstAlXe:"b0c4de",ZLw:"ffffe0",lime:"ff00",limegYF:"32cd32",lRF:"faf0e6",magFta:"ff00ff",maPon:"800000",VaquamarRe:"66cdaa",VXe:"cd",VScEd:"ba55d3",VpurpN:"9370db",VsHgYF:"3cb371",VUXe:"7b68ee",VsprRggYF:"fa9a",VQe:"48d1cc",VviTetYd:"c71585",midnightXe:"191970",mRtcYam:"f5fffa",mistyPse:"ffe4e1",moccasR:"ffe4b5",navajowEte:"ffdead",navy:"80",Tdlace:"fdf5e6",Tive:"808000",TivedBb:"6b8e23",Sange:"ffa500",SangeYd:"ff4500",ScEd:"da70d6",pOegTMnPd:"eee8aa",pOegYF:"98fb98",pOeQe:"afeeee",pOeviTetYd:"db7093",papayawEp:"ffefd5",pHKpuff:"ffdab9",peru:"cd853f",pRk:"ffc0cb",plum:"dda0dd",powMrXe:"b0e0e6",purpN:"800080",YbeccapurpN:"663399",Yd:"ff0000",Psybrown:"bc8f8f",PyOXe:"4169e1",saddNbPwn:"8b4513",sOmon:"fa8072",sandybPwn:"f4a460",sHgYF:"2e8b57",sHshell:"fff5ee",siFna:"a0522d",silver:"c0c0c0",skyXe:"87ceeb",UXe:"6a5acd",UWay:"708090",UgYy:"708090",snow:"fffafa",sprRggYF:"ff7f",stAlXe:"4682b4",tan:"d2b48c",teO:"8080",tEstN:"d8bfd8",tomato:"ff6347",Qe:"40e0d0",viTet:"ee82ee",JHt:"f5deb3",wEte:"ffffff",wEtesmoke:"f5f5f5",Lw:"ffff00",LwgYF:"9acd32"};function M(){const e={},t=Object.keys(N),n=Object.keys(O);let r,i,s,o,a;for(r=0;r>16&255,s>>8&255,255&s]}return e}let D;function L(e){D||(D=M(),D.transparent=[0,0,0,0]);const t=D[e.toLowerCase()];return t&&{r:t[0],g:t[1],b:t[2],a:4===t.length?t[3]:255}}const F=/^rgba?\(\s*([-+.\d]+)(%)?[\s,]+([-+.e\d]+)(%)?[\s,]+([-+.e\d]+)(%)?(?:[\s,/]+([-+.e\d]+)(%)?)?\s*\)$/;function B(e){const t=F.exec(e);let n,r,o,a=255;if(t){if(t[7]!==n){const e=+t[7];a=t[8]?s(e):i(255*e,0,255)}return n=+t[1],r=+t[3],o=+t[5],n=255&(t[2]?s(n):i(n,0,255)),r=255&(t[4]?s(r):i(r,0,255)),o=255&(t[6]?s(o):i(o,0,255)),{r:n,g:r,b:o,a:a}}}function U(e){return e&&(e.a<255?`rgba(${e.r}, ${e.g}, ${e.b}, ${a(e.a)})`:`rgb(${e.r}, ${e.g}, ${e.b})`)}const G=e=>e<=.0031308?12.92*e:1.055*Math.pow(e,1/2.4)-.055,$=e=>e<=.04045?e/12.92:Math.pow((e+.055)/1.055,2.4);function z(e,t,n){const r=$(a(e.r)),i=$(a(e.g)),s=$(a(e.b));return{r:o(G(r+n*($(a(t.r))-r))),g:o(G(i+n*($(a(t.g))-i))),b:o(G(s+n*($(a(t.b))-s))),a:e.a+n*(t.a-e.a)}}function H(e,t,n){if(e){let r=S(e);r[t]=Math.max(0,Math.min(r[t]+r[t]*n,0===t?360:1)),r=T(r),e.r=r[0],e.g=r[1],e.b=r[2]}}function V(e,t){return e?Object.assign(t||{},e):e}function j(e){var t={r:0,g:0,b:0,a:255};return Array.isArray(e)?e.length>=3&&(t={r:e[0],g:e[1],b:e[2],a:255},e.length>3&&(t.a=o(e[3]))):(t=V(e,{r:0,g:0,b:0,a:1}),t.a=o(t.a)),t}function W(e){return"r"===e.charAt(0)?B(e):R(e)}class q{constructor(e){if(e instanceof q)return e;const t=typeof e;let n;"object"===t?n=j(e):"string"===t&&(n=g(e)||L(e)||W(e)),this._rgb=n,this._valid=!!n}get valid(){return this._valid}get rgb(){var e=V(this._rgb);return e&&(e.a=a(e.a)),e}set rgb(e){this._rgb=j(e)}rgbString(){return this._valid?U(this._rgb):void 0}hexString(){return this._valid?b(this._rgb):void 0}hslString(){return this._valid?P(this._rgb):void 0}mix(e,t){if(e){const n=this.rgb,r=e.rgb;let i;const s=t===i?.5:t,o=2*s-1,a=n.a-r.a,l=((o*a===-1?o:(o+a)/(1+o*a))+1)/2;i=1-l,n.r=255&l*n.r+i*r.r+.5,n.g=255&l*n.g+i*r.g+.5,n.b=255&l*n.b+i*r.b+.5,n.a=s*n.a+(1-s)*r.a,this.rgb=n}return this}interpolate(e,t){return e&&(this._rgb=z(this._rgb,e._rgb,t)),this}clone(){return new q(this.rgb)}alpha(e){return this._rgb.a=o(e),this}clearer(e){const t=this._rgb;return t.a*=1-e,this}greyscale(){const e=this._rgb,t=r(.3*e.r+.59*e.g+.11*e.b);return e.r=e.g=e.b=t,this}opaquer(e){const t=this._rgb;return t.a*=1+e,this}negate(){const e=this._rgb;return e.r=255-e.r,e.g=255-e.g,e.b=255-e.b,this}lighten(e){return H(this._rgb,2,e),this}darken(e){return H(this._rgb,2,-e),this}saturate(e){return H(this._rgb,1,e),this}desaturate(e){return H(this._rgb,1,-e),this}rotate(e){return k(this._rgb,e),this}} -/*! - * Chart.js v4.3.0 - * https://www.chartjs.org - * (c) 2023 Chart.js Contributors - * Released under the MIT License - */ -function X(){}const Y=(()=>{let e=0;return()=>e++})();function K(e){return null===e||"undefined"===typeof e}function Z(e){if(Array.isArray&&Array.isArray(e))return!0;const t=Object.prototype.toString.call(e);return"[object"===t.slice(0,7)&&"Array]"===t.slice(-6)}function Q(e){return null!==e&&"[object Object]"===Object.prototype.toString.call(e)}function J(e){return("number"===typeof e||e instanceof Number)&&isFinite(+e)}function ee(e,t){return J(e)?e:t}function te(e,t){return"undefined"===typeof e?t:e}const ne=(e,t)=>"string"===typeof e&&e.endsWith("%")?parseFloat(e)/100*t:+e;function re(e,t,n){if(e&&"function"===typeof e.call)return e.apply(n,t)}function ie(e,t,n,r){let i,s,o;if(Z(e))if(s=e.length,r)for(i=s-1;i>=0;i--)t.call(n,e[i],i);else for(i=0;ie,x:e=>e.x,y:e=>e.y};function pe(e){const t=e.split("."),n=[];let r="";for(const i of t)r+=i,r.endsWith("\\")?r=r.slice(0,-1)+".":(n.push(r),r="");return n}function fe(e){const t=pe(e);return e=>{for(const n of t){if(""===n)break;e=e&&e[n]}return e}}function ge(e,t){const n=he[t]||(he[t]=fe(t));return n(e)}function me(e){return e.charAt(0).toUpperCase()+e.slice(1)}const be=e=>"undefined"!==typeof e,_e=e=>"function"===typeof e,ye=(e,t)=>{if(e.size!==t.size)return!1;for(const n of e)if(!t.has(n))return!1;return!0};function ve(e){return"mouseup"===e.type||"click"===e.type||"contextmenu"===e.type}const Ee=Math.PI,xe=2*Ee,Se=xe+Ee,we=Number.POSITIVE_INFINITY,Te=Ee/180,Ae=Ee/2,Ce=Ee/4,Ie=2*Ee/3,Re=Math.log10,ke=Math.sign;function Pe(e,t,n){return Math.abs(e-t)e-t)).pop(),t}function Me(e){return!isNaN(parseFloat(e))&&isFinite(e)}function De(e,t){const n=Math.round(e);return n-t<=e&&n+t>=e}function Le(e,t,n){let r,i,s;for(r=0,i=e.length;rl&&c=Math.min(t,n)-r&&e<=Math.max(t,n)+r}function Xe(e,t,n){n=n||(n=>e[n]1)r=s+i>>1,n(r)?s=r:i=r;return{lo:s,hi:i}}const Ye=(e,t,n,r)=>Xe(e,n,r?r=>{const i=e[r][t];return ie[r][t]Xe(e,n,(r=>e[r][t]>=n));function Ze(e,t,n){let r=0,i=e.length;while(rr&&e[i-1]>n)i--;return r>0||i{const n="_onData"+me(t),r=e[t];Object.defineProperty(e,t,{configurable:!0,enumerable:!1,value(...t){const i=r.apply(this,t);return e._chartjs.listeners.forEach((e=>{"function"===typeof e[n]&&e[n](...t)})),i}})})))}function et(e,t){const n=e._chartjs;if(!n)return;const r=n.listeners,i=r.indexOf(t);-1!==i&&r.splice(i,1),r.length>0||(Qe.forEach((t=>{delete e[t]})),delete e._chartjs)}function tt(e){const t=new Set(e);return t.size===e.length?e:Array.from(t)}const nt=function(){return"undefined"===typeof window?function(e){return e()}:window.requestAnimationFrame}();function rt(e,t){let n=[],r=!1;return function(...i){n=i,r||(r=!0,nt.call(window,(()=>{r=!1,e.apply(t,n)})))}}function it(e,t){let n;return function(...r){return t?(clearTimeout(n),n=setTimeout(e,t,r)):e.apply(this,r),t}}const st=e=>"start"===e?"left":"end"===e?"right":"center",ot=(e,t,n)=>"start"===e?t:"end"===e?n:(t+n)/2,at=(e,t,n,r)=>{const i=r?"left":"right";return e===i?n:"center"===e?(t+n)/2:t};function lt(e,t,n){const r=t.length;let i=0,s=r;if(e._sorted){const{iScale:o,_parsed:a}=e,l=o.axis,{min:c,max:u,minDefined:d,maxDefined:h}=o.getUserBounds();d&&(i=je(Math.min(Ye(a,o.axis,c).lo,n?r:Ye(t,l,o.getPixelForValue(c)).lo),0,r-1)),s=h?je(Math.max(Ye(a,o.axis,u,!0).hi+1,n?0:Ye(t,l,o.getPixelForValue(u),!0).hi+1),i,r)-i:r-i}return{start:i,count:s}}function ct(e){const{xScale:t,yScale:n,_scaleRanges:r}=e,i={xmin:t.min,xmax:t.max,ymin:n.min,ymax:n.max};if(!r)return e._scaleRanges=i,!0;const s=r.xmin!==t.min||r.xmax!==t.max||r.ymin!==n.min||r.ymax!==n.max;return Object.assign(r,i),s}const ut=e=>0===e||1===e,dt=(e,t,n)=>-Math.pow(2,10*(e-=1))*Math.sin((e-t)*xe/n),ht=(e,t,n)=>Math.pow(2,-10*e)*Math.sin((e-t)*xe/n)+1,pt={linear:e=>e,easeInQuad:e=>e*e,easeOutQuad:e=>-e*(e-2),easeInOutQuad:e=>(e/=.5)<1?.5*e*e:-.5*(--e*(e-2)-1),easeInCubic:e=>e*e*e,easeOutCubic:e=>(e-=1)*e*e+1,easeInOutCubic:e=>(e/=.5)<1?.5*e*e*e:.5*((e-=2)*e*e+2),easeInQuart:e=>e*e*e*e,easeOutQuart:e=>-((e-=1)*e*e*e-1),easeInOutQuart:e=>(e/=.5)<1?.5*e*e*e*e:-.5*((e-=2)*e*e*e-2),easeInQuint:e=>e*e*e*e*e,easeOutQuint:e=>(e-=1)*e*e*e*e+1,easeInOutQuint:e=>(e/=.5)<1?.5*e*e*e*e*e:.5*((e-=2)*e*e*e*e+2),easeInSine:e=>1-Math.cos(e*Ae),easeOutSine:e=>Math.sin(e*Ae),easeInOutSine:e=>-.5*(Math.cos(Ee*e)-1),easeInExpo:e=>0===e?0:Math.pow(2,10*(e-1)),easeOutExpo:e=>1===e?1:1-Math.pow(2,-10*e),easeInOutExpo:e=>ut(e)?e:e<.5?.5*Math.pow(2,10*(2*e-1)):.5*(2-Math.pow(2,-10*(2*e-1))),easeInCirc:e=>e>=1?e:-(Math.sqrt(1-e*e)-1),easeOutCirc:e=>Math.sqrt(1-(e-=1)*e),easeInOutCirc:e=>(e/=.5)<1?-.5*(Math.sqrt(1-e*e)-1):.5*(Math.sqrt(1-(e-=2)*e)+1),easeInElastic:e=>ut(e)?e:dt(e,.075,.3),easeOutElastic:e=>ut(e)?e:ht(e,.075,.3),easeInOutElastic(e){const t=.1125,n=.45;return ut(e)?e:e<.5?.5*dt(2*e,t,n):.5+.5*ht(2*e-1,t,n)},easeInBack(e){const t=1.70158;return e*e*((t+1)*e-t)},easeOutBack(e){const t=1.70158;return(e-=1)*e*((t+1)*e+t)+1},easeInOutBack(e){let t=1.70158;return(e/=.5)<1?e*e*((1+(t*=1.525))*e-t)*.5:.5*((e-=2)*e*((1+(t*=1.525))*e+t)+2)},easeInBounce:e=>1-pt.easeOutBounce(1-e),easeOutBounce(e){const t=7.5625,n=2.75;return e<1/n?t*e*e:e<2/n?t*(e-=1.5/n)*e+.75:e<2.5/n?t*(e-=2.25/n)*e+.9375:t*(e-=2.625/n)*e+.984375},easeInOutBounce:e=>e<.5?.5*pt.easeInBounce(2*e):.5*pt.easeOutBounce(2*e-1)+.5};function ft(e){if(e&&"object"===typeof e){const t=e.toString();return"[object CanvasPattern]"===t||"[object CanvasGradient]"===t}return!1}function gt(e){return ft(e)?e:new q(e)}function mt(e){return ft(e)?e:new q(e).saturate(.5).darken(.1).hexString()}const bt=["x","y","borderWidth","radius","tension"],_t=["color","borderColor","backgroundColor"];function yt(e){e.set("animation",{delay:void 0,duration:1e3,easing:"easeOutQuart",fn:void 0,from:void 0,loop:void 0,to:void 0,type:void 0}),e.describe("animation",{_fallback:!1,_indexable:!1,_scriptable:e=>"onProgress"!==e&&"onComplete"!==e&&"fn"!==e}),e.set("animations",{colors:{type:"color",properties:_t},numbers:{type:"number",properties:bt}}),e.describe("animations",{_fallback:"animation"}),e.set("transitions",{active:{animation:{duration:400}},resize:{animation:{duration:0}},show:{animations:{colors:{from:"transparent"},visible:{type:"boolean",duration:0}}},hide:{animations:{colors:{to:"transparent"},visible:{type:"boolean",easing:"linear",fn:e=>0|e}}}})}function vt(e){e.set("layout",{autoPadding:!0,padding:{top:0,right:0,bottom:0,left:0}})}const Et=new Map;function xt(e,t){t=t||{};const n=e+JSON.stringify(t);let r=Et.get(n);return r||(r=new Intl.NumberFormat(e,t),Et.set(n,r)),r}function St(e,t,n){return xt(t,n).format(e)}const wt={values(e){return Z(e)?e:""+e},numeric(e,t,n){if(0===e)return"0";const r=this.chart.options.locale;let i,s=e;if(n.length>1){const t=Math.max(Math.abs(n[0].value),Math.abs(n[n.length-1].value));(t<1e-4||t>1e15)&&(i="scientific"),s=Tt(e,n)}const o=Re(Math.abs(s)),a=isNaN(o)?1:Math.max(Math.min(-1*Math.floor(o),20),0),l={notation:i,minimumFractionDigits:a,maximumFractionDigits:a};return Object.assign(l,this.options.ticks.format),St(e,r,l)},logarithmic(e,t,n){if(0===e)return"0";const r=n[t].significand||e/Math.pow(10,Math.floor(Re(e)));return[1,2,3,5,10,15].includes(r)||t>.8*n.length?wt.numeric.call(this,e,t,n):""}};function Tt(e,t){let n=t.length>3?t[2].value-t[1].value:t[1].value-t[0].value;return Math.abs(n)>=1&&e!==Math.floor(e)&&(n=e-Math.floor(e)),n}var At={formatters:wt};function Ct(e){e.set("scale",{display:!0,offset:!1,reverse:!1,beginAtZero:!1,bounds:"ticks",grace:0,grid:{display:!0,lineWidth:1,drawOnChartArea:!0,drawTicks:!0,tickLength:8,tickWidth:(e,t)=>t.lineWidth,tickColor:(e,t)=>t.color,offset:!1},border:{display:!0,dash:[],dashOffset:0,width:1},title:{display:!1,text:"",padding:{top:4,bottom:4}},ticks:{minRotation:0,maxRotation:50,mirror:!1,textStrokeWidth:0,textStrokeColor:"",padding:3,display:!0,autoSkip:!0,autoSkipPadding:3,labelOffset:0,callback:At.formatters.values,minor:{},major:{},align:"center",crossAlign:"near",showLabelBackdrop:!1,backdropColor:"rgba(255, 255, 255, 0.75)",backdropPadding:2}}),e.route("scale.ticks","color","","color"),e.route("scale.grid","color","","borderColor"),e.route("scale.border","color","","borderColor"),e.route("scale.title","color","","color"),e.describe("scale",{_fallback:!1,_scriptable:e=>!e.startsWith("before")&&!e.startsWith("after")&&"callback"!==e&&"parser"!==e,_indexable:e=>"borderDash"!==e&&"tickBorderDash"!==e&&"dash"!==e}),e.describe("scales",{_fallback:"scale"}),e.describe("scale.ticks",{_scriptable:e=>"backdropPadding"!==e&&"callback"!==e,_indexable:e=>"backdropPadding"!==e})}const It=Object.create(null),Rt=Object.create(null);function kt(e,t){if(!t)return e;const n=t.split(".");for(let r=0,i=n.length;re.chart.platform.getDevicePixelRatio(),this.elements={},this.events=["mousemove","mouseout","click","touchstart","touchmove"],this.font={family:"'Helvetica Neue', 'Helvetica', 'Arial', sans-serif",size:12,style:"normal",lineHeight:1.2,weight:null},this.hover={},this.hoverBackgroundColor=(e,t)=>mt(t.backgroundColor),this.hoverBorderColor=(e,t)=>mt(t.borderColor),this.hoverColor=(e,t)=>mt(t.color),this.indexAxis="x",this.interaction={mode:"nearest",intersect:!0,includeInvisible:!1},this.maintainAspectRatio=!0,this.onHover=null,this.onClick=null,this.parsing=!0,this.plugins={},this.responsive=!0,this.scale=void 0,this.scales={},this.showLine=!0,this.drawActiveElementsOnTop=!0,this.describe(e),this.apply(t)}set(e,t){return Pt(this,e,t)}get(e){return kt(this,e)}describe(e,t){return Pt(Rt,e,t)}override(e,t){return Pt(It,e,t)}route(e,t,n,r){const i=kt(this,e),s=kt(this,n),o="_"+t;Object.defineProperties(i,{[o]:{value:i[t],writable:!0},[t]:{enumerable:!0,get(){const e=this[o],t=s[r];return Q(e)?Object.assign({},t,e):te(e,t)},set(e){this[o]=e}}})}apply(e){e.forEach((e=>e(this)))}}var Nt=new Ot({_scriptable:e=>!e.startsWith("on"),_indexable:e=>"events"!==e,hover:{_fallback:"interaction"},interaction:{_scriptable:!1,_indexable:!1}},[yt,vt,Ct]);function Mt(e){return!e||K(e.size)||K(e.family)?null:(e.style?e.style+" ":"")+(e.weight?e.weight+" ":"")+e.size+"px "+e.family}function Dt(e,t,n,r,i){let s=t[i];return s||(s=t[i]=e.measureText(i).width,n.push(i)),s>r&&(r=s),r}function Lt(e,t,n){const r=e.currentDevicePixelRatio,i=0!==n?Math.max(n/2,.5):0;return Math.round((t-i)*r)/r+i}function Ft(e,t){t=t||e.getContext("2d"),t.save(),t.resetTransform(),t.clearRect(0,0,e.width,e.height),t.restore()}function Bt(e,t,n,r){Ut(e,t,n,r,null)}function Ut(e,t,n,r,i){let s,o,a,l,c,u,d,h;const p=t.pointStyle,f=t.rotation,g=t.radius;let m=(f||0)*Te;if(p&&"object"===typeof p&&(s=p.toString(),"[object HTMLImageElement]"===s||"[object HTMLCanvasElement]"===s))return e.save(),e.translate(n,r),e.rotate(m),e.drawImage(p,-p.width/2,-p.height/2,p.width,p.height),void e.restore();if(!(isNaN(g)||g<=0)){switch(e.beginPath(),p){default:i?e.ellipse(n,r,i/2,g,0,0,xe):e.arc(n,r,g,0,xe),e.closePath();break;case"triangle":u=i?i/2:g,e.moveTo(n+Math.sin(m)*u,r-Math.cos(m)*g),m+=Ie,e.lineTo(n+Math.sin(m)*u,r-Math.cos(m)*g),m+=Ie,e.lineTo(n+Math.sin(m)*u,r-Math.cos(m)*g),e.closePath();break;case"rectRounded":c=.516*g,l=g-c,o=Math.cos(m+Ce)*l,d=Math.cos(m+Ce)*(i?i/2-c:l),a=Math.sin(m+Ce)*l,h=Math.sin(m+Ce)*(i?i/2-c:l),e.arc(n-d,r-a,c,m-Ee,m-Ae),e.arc(n+h,r-o,c,m-Ae,m),e.arc(n+d,r+a,c,m,m+Ae),e.arc(n-h,r+o,c,m+Ae,m+Ee),e.closePath();break;case"rect":if(!f){l=Math.SQRT1_2*g,u=i?i/2:l,e.rect(n-u,r-l,2*u,2*l);break}m+=Ce;case"rectRot":d=Math.cos(m)*(i?i/2:g),o=Math.cos(m)*g,a=Math.sin(m)*g,h=Math.sin(m)*(i?i/2:g),e.moveTo(n-d,r-a),e.lineTo(n+h,r-o),e.lineTo(n+d,r+a),e.lineTo(n-h,r+o),e.closePath();break;case"crossRot":m+=Ce;case"cross":d=Math.cos(m)*(i?i/2:g),o=Math.cos(m)*g,a=Math.sin(m)*g,h=Math.sin(m)*(i?i/2:g),e.moveTo(n-d,r-a),e.lineTo(n+d,r+a),e.moveTo(n+h,r-o),e.lineTo(n-h,r+o);break;case"star":d=Math.cos(m)*(i?i/2:g),o=Math.cos(m)*g,a=Math.sin(m)*g,h=Math.sin(m)*(i?i/2:g),e.moveTo(n-d,r-a),e.lineTo(n+d,r+a),e.moveTo(n+h,r-o),e.lineTo(n-h,r+o),m+=Ce,d=Math.cos(m)*(i?i/2:g),o=Math.cos(m)*g,a=Math.sin(m)*g,h=Math.sin(m)*(i?i/2:g),e.moveTo(n-d,r-a),e.lineTo(n+d,r+a),e.moveTo(n+h,r-o),e.lineTo(n-h,r+o);break;case"line":o=i?i/2:Math.cos(m)*g,a=Math.sin(m)*g,e.moveTo(n-o,r-a),e.lineTo(n+o,r+a);break;case"dash":e.moveTo(n,r),e.lineTo(n+Math.cos(m)*(i?i/2:g),r+Math.sin(m)*g);break;case!1:e.closePath();break}e.fill(),t.borderWidth>0&&e.stroke()}}function Gt(e,t,n){return n=n||.5,!t||e&&e.x>t.left-n&&e.xt.top-n&&e.y0&&""!==s.strokeColor;let l,c;for(e.save(),e.font=i.string,jt(e,s),l=0;l+e||0;function en(e,t){const n={},r=Q(t),i=r?Object.keys(t):t,s=Q(e)?r?n=>te(e[n],e[t[n]]):t=>e[t]:()=>e;for(const o of i)n[o]=Jt(s(o));return n}function tn(e){return en(e,{top:"y",right:"x",bottom:"y",left:"x"})}function nn(e){return en(e,["topLeft","topRight","bottomLeft","bottomRight"])}function rn(e){const t=tn(e);return t.width=t.left+t.right,t.height=t.top+t.bottom,t}function sn(e,t){e=e||{},t=t||Nt.font;let n=te(e.size,t.size);"string"===typeof n&&(n=parseInt(n,10));let r=te(e.style,t.style);r&&!(""+r).match(Zt)&&(console.warn('Invalid font style specified: "'+r+'"'),r=void 0);const i={family:te(e.family,t.family),lineHeight:Qt(te(e.lineHeight,t.lineHeight),n),size:n,style:r,weight:te(e.weight,t.weight),string:""};return i.string=Mt(i),i}function on(e,t,n,r){let i,s,o,a=!0;for(i=0,s=e.length;in&&0===e?0:e+t;return{min:o(r,-Math.abs(s)),max:o(i,s)}}function ln(e,t){return Object.assign(Object.create(e),t)}function cn(e,t=[""],n,r,i=(()=>e[0])){const s=n||e;"undefined"===typeof r&&(r=Tn("_fallback",e));const o={[Symbol.toStringTag]:"Object",_cacheable:!0,_scopes:e,_rootScopes:s,_fallback:r,_getTarget:i,override:n=>cn([n,...e],t,s,r)};return new Proxy(o,{deleteProperty(t,n){return delete t[n],delete t._keys,delete e[0][n],!0},get(n,r){return fn(n,r,(()=>wn(r,t,e,n)))},getOwnPropertyDescriptor(e,t){return Reflect.getOwnPropertyDescriptor(e._scopes[0],t)},getPrototypeOf(){return Reflect.getPrototypeOf(e[0])},has(e,t){return An(e).includes(t)},ownKeys(e){return An(e)},set(e,t,n){const r=e._storage||(e._storage=i());return e[t]=r[t]=n,delete e._keys,!0}})}function un(e,t,n,r){const i={_cacheable:!1,_proxy:e,_context:t,_subProxy:n,_stack:new Set,_descriptors:dn(e,r),setContext:t=>un(e,t,n,r),override:i=>un(e.override(i),t,n,r)};return new Proxy(i,{deleteProperty(t,n){return delete t[n],delete e[n],!0},get(e,t,n){return fn(e,t,(()=>gn(e,t,n)))},getOwnPropertyDescriptor(t,n){return t._descriptors.allKeys?Reflect.has(e,n)?{enumerable:!0,configurable:!0}:void 0:Reflect.getOwnPropertyDescriptor(e,n)},getPrototypeOf(){return Reflect.getPrototypeOf(e)},has(t,n){return Reflect.has(e,n)},ownKeys(){return Reflect.ownKeys(e)},set(t,n,r){return e[n]=r,delete t[n],!0}})}function dn(e,t={scriptable:!0,indexable:!0}){const{_scriptable:n=t.scriptable,_indexable:r=t.indexable,_allKeys:i=t.allKeys}=e;return{allKeys:i,scriptable:n,indexable:r,isScriptable:_e(n)?n:()=>n,isIndexable:_e(r)?r:()=>r}}const hn=(e,t)=>e?e+me(t):t,pn=(e,t)=>Q(t)&&"adapters"!==e&&(null===Object.getPrototypeOf(t)||t.constructor===Object);function fn(e,t,n){if(Object.prototype.hasOwnProperty.call(e,t))return e[t];const r=n();return e[t]=r,r}function gn(e,t,n){const{_proxy:r,_context:i,_subProxy:s,_descriptors:o}=e;let a=r[t];return _e(a)&&o.isScriptable(t)&&(a=mn(t,a,e,n)),Z(a)&&a.length&&(a=bn(t,a,e,o.isIndexable)),pn(t,a)&&(a=un(a,i,s&&s[t],o)),a}function mn(e,t,n,r){const{_proxy:i,_context:s,_subProxy:o,_stack:a}=n;if(a.has(e))throw new Error("Recursion detected: "+Array.from(a).join("->")+"->"+e);a.add(e);let l=t(s,o||r);return a.delete(e),pn(e,l)&&(l=En(i._scopes,i,e,l)),l}function bn(e,t,n,r){const{_proxy:i,_context:s,_subProxy:o,_descriptors:a}=n;if("undefined"!==typeof s.index&&r(e))return t[s.index%t.length];if(Q(t[0])){const n=t,r=i._scopes.filter((e=>e!==n));t=[];for(const l of n){const n=En(r,i,e,l);t.push(un(n,s,o&&o[e],a))}}return t}function _n(e,t,n){return _e(e)?e(t,n):e}const yn=(e,t)=>!0===e?t:"string"===typeof e?ge(t,e):void 0;function vn(e,t,n,r,i){for(const s of t){const t=yn(n,s);if(t){e.add(t);const s=_n(t._fallback,n,i);if("undefined"!==typeof s&&s!==n&&s!==r)return s}else if(!1===t&&"undefined"!==typeof r&&n!==r)return null}return!1}function En(e,t,n,r){const i=t._rootScopes,s=_n(t._fallback,n,r),o=[...e,...i],a=new Set;a.add(r);let l=xn(a,o,n,s||n,r);return null!==l&&(("undefined"===typeof s||s===n||(l=xn(a,o,s,l,r),null!==l))&&cn(Array.from(a),[""],i,s,(()=>Sn(t,n,r))))}function xn(e,t,n,r,i){while(n)n=vn(e,t,n,r,i);return n}function Sn(e,t,n){const r=e._getTarget();t in r||(r[t]={});const i=r[t];return Z(i)&&Q(n)?n:i||{}}function wn(e,t,n,r){let i;for(const s of t)if(i=Tn(hn(s,e),n),"undefined"!==typeof i)return pn(e,i)?En(n,r,e,i):i}function Tn(e,t){for(const n of t){if(!n)continue;const t=n[e];if("undefined"!==typeof t)return t}}function An(e){let t=e._keys;return t||(t=e._keys=Cn(e._scopes)),t}function Cn(e){const t=new Set;for(const n of e)for(const e of Object.keys(n).filter((e=>!e.startsWith("_"))))t.add(e);return Array.from(t)}const In=Number.EPSILON||1e-14,Rn=(e,t)=>t"x"===e?"y":"x";function Pn(e,t,n,r){const i=e.skip?t:e,s=t,o=n.skip?t:n,a=$e(s,i),l=$e(o,s);let c=a/(a+l),u=l/(a+l);c=isNaN(c)?0:c,u=isNaN(u)?0:u;const d=r*c,h=r*u;return{previous:{x:s.x-d*(o.x-i.x),y:s.y-d*(o.y-i.y)},next:{x:s.x+h*(o.x-i.x),y:s.y+h*(o.y-i.y)}}}function On(e,t,n){const r=e.length;let i,s,o,a,l,c=Rn(e,0);for(let u=0;u!e.skip))),"monotone"===t.cubicInterpolationMode)Mn(e,i);else{let n=r?e[e.length-1]:e[0];for(s=0,o=e.length;se.ownerDocument.defaultView.getComputedStyle(e,null);function zn(e,t){return $n(e).getPropertyValue(t)}const Hn=["top","right","bottom","left"];function Vn(e,t,n){const r={};n=n?"-"+n:"";for(let i=0;i<4;i++){const s=Hn[i];r[s]=parseFloat(e[t+"-"+s+n])||0}return r.width=r.left+r.right,r.height=r.top+r.bottom,r}const jn=(e,t,n)=>(e>0||t>0)&&(!n||!n.shadowRoot);function Wn(e,t){const n=e.touches,r=n&&n.length?n[0]:e,{offsetX:i,offsetY:s}=r;let o,a,l=!1;if(jn(i,s,e.target))o=i,a=s;else{const e=t.getBoundingClientRect();o=r.clientX-e.left,a=r.clientY-e.top,l=!0}return{x:o,y:a,box:l}}function qn(e,t){if("native"in e)return e;const{canvas:n,currentDevicePixelRatio:r}=t,i=$n(n),s="border-box"===i.boxSizing,o=Vn(i,"padding"),a=Vn(i,"border","width"),{x:l,y:c,box:u}=Wn(e,n),d=o.left+(u&&a.left),h=o.top+(u&&a.top);let{width:p,height:f}=t;return s&&(p-=o.width+a.width,f-=o.height+a.height),{x:Math.round((l-d)/p*n.width/r),y:Math.round((c-h)/f*n.height/r)}}function Xn(e,t,n){let r,i;if(void 0===t||void 0===n){const s=Un(e);if(s){const e=s.getBoundingClientRect(),o=$n(s),a=Vn(o,"border","width"),l=Vn(o,"padding");t=e.width-l.width-a.width,n=e.height-l.height-a.height,r=Gn(o.maxWidth,s,"clientWidth"),i=Gn(o.maxHeight,s,"clientHeight")}else t=e.clientWidth,n=e.clientHeight}return{width:t,height:n,maxWidth:r||we,maxHeight:i||we}}const Yn=e=>Math.round(10*e)/10;function Kn(e,t,n,r){const i=$n(e),s=Vn(i,"margin"),o=Gn(i.maxWidth,e,"clientWidth")||we,a=Gn(i.maxHeight,e,"clientHeight")||we,l=Xn(e,t,n);let{width:c,height:u}=l;if("content-box"===i.boxSizing){const e=Vn(i,"border","width"),t=Vn(i,"padding");c-=t.width+e.width,u-=t.height+e.height}c=Math.max(0,c-s.width),u=Math.max(0,r?c/r:u-s.height),c=Yn(Math.min(c,o,l.maxWidth)),u=Yn(Math.min(u,a,l.maxHeight)),c&&!u&&(u=Yn(c/2));const d=void 0!==t||void 0!==n;return d&&r&&l.height&&u>l.height&&(u=l.height,c=Yn(Math.floor(u*r))),{width:c,height:u}}function Zn(e,t,n){const r=t||1,i=Math.floor(e.height*r),s=Math.floor(e.width*r);e.height=Math.floor(e.height),e.width=Math.floor(e.width);const o=e.canvas;return o.style&&(n||!o.style.height&&!o.style.width)&&(o.style.height=`${e.height}px`,o.style.width=`${e.width}px`),(e.currentDevicePixelRatio!==r||o.height!==i||o.width!==s)&&(e.currentDevicePixelRatio=r,o.height=i,o.width=s,e.ctx.setTransform(r,0,0,r,0,0),!0)}const Qn=function(){let e=!1;try{const t={get passive(){return e=!0,!1}};window.addEventListener("test",null,t),window.removeEventListener("test",null,t)}catch(t){}return e}();function Jn(e,t){const n=zn(e,t),r=n&&n.match(/^(\d+)(\.\d+)?px$/);return r?+r[1]:void 0}function er(e,t,n,r){return{x:e.x+n*(t.x-e.x),y:e.y+n*(t.y-e.y)}}function tr(e,t,n,r){return{x:e.x+n*(t.x-e.x),y:"middle"===r?n<.5?e.y:t.y:"after"===r?n<1?e.y:t.y:n>0?t.y:e.y}}function nr(e,t,n,r){const i={x:e.cp2x,y:e.cp2y},s={x:t.cp1x,y:t.cp1y},o=er(e,i,n),a=er(i,s,n),l=er(s,t,n),c=er(o,a,n),u=er(a,l,n);return er(c,u,n)}const rr=function(e,t){return{x(n){return e+e+t-n},setWidth(e){t=e},textAlign(e){return"center"===e?e:"right"===e?"left":"right"},xPlus(e,t){return e-t},leftForLtr(e,t){return e-t}}},ir=function(){return{x(e){return e},setWidth(e){},textAlign(e){return e},xPlus(e,t){return e+t},leftForLtr(e,t){return e}}};function sr(e,t,n){return e?rr(t,n):ir()}function or(e,t){let n,r;"ltr"!==t&&"rtl"!==t||(n=e.canvas.style,r=[n.getPropertyValue("direction"),n.getPropertyPriority("direction")],n.setProperty("direction",t,"important"),e.prevTextDirection=r)}function ar(e,t){void 0!==t&&(delete e.prevTextDirection,e.canvas.style.setProperty("direction",t[0],t[1]))}function lr(e){return"angle"===e?{between:Ve,compare:ze,normalize:He}:{between:qe,compare:(e,t)=>e-t,normalize:e=>e}}function cr({start:e,end:t,count:n,loop:r,style:i}){return{start:e%n,end:t%n,loop:r&&(t-e+1)%n===0,style:i}}function ur(e,t,n){const{property:r,start:i,end:s}=n,{between:o,normalize:a}=lr(r),l=t.length;let c,u,{start:d,end:h,loop:p}=e;if(p){for(d+=l,h+=l,c=0,u=l;cl(i,b,g)&&0!==a(i,b),E=()=>0===a(s,g)||l(s,b,g),x=()=>_||v(),S=()=>!_||E();for(let w=u,T=u;w<=d;++w)m=t[w%o],m.skip||(g=c(m[r]),g!==b&&(_=l(g,i,s),null===y&&x()&&(y=0===a(g,i)?w:T),null!==y&&S()&&(f.push(cr({start:y,end:w,loop:h,count:o,style:p})),y=null),T=w,b=g));return null!==y&&f.push(cr({start:y,end:d,loop:h,count:o,style:p})),f}function hr(e,t){const n=[],r=e.segments;for(let i=0;ii&&e[s%t].skip)s--;return s%=t,{start:i,end:s}}function fr(e,t,n,r){const i=e.length,s=[];let o,a=t,l=e[t];for(o=t+1;o<=n;++o){const n=e[o%i];n.skip||n.stop?l.skip||(r=!1,s.push({start:t%i,end:(o-1)%i,loop:r}),t=a=n.stop?o:null):(a=o,l.skip&&(t=o)),l=n}return null!==a&&s.push({start:t%i,end:a%i,loop:r}),s}function gr(e,t){const n=e.points,r=e.options.spanGaps,i=n.length;if(!i)return[];const s=!!e._loop,{start:o,end:a}=pr(n,i,s,r);if(!0===r)return mr(e,[{start:o,end:a,loop:s}],n,t);const l=ar({chart:e,initial:t.initial,numSteps:s,currentStep:Math.min(n-t.start,s)})))}_refresh(){this._request||(this._running=!0,this._request=nt.call(window,(()=>{this._update(),this._request=null,this._running&&this._refresh()})))}_update(e=Date.now()){let t=0;this._charts.forEach(((n,r)=>{if(!n.running||!n.items.length)return;const i=n.items;let s,o=i.length-1,a=!1;for(;o>=0;--o)s=i[o],s._active?(s._total>n.duration&&(n.duration=s._total),s.tick(e),a=!0):(i[o]=i[i.length-1],i.pop());a&&(r.draw(),this._notify(r,n,e,"progress")),i.length||(n.running=!1,this._notify(r,n,e,"complete"),n.initial=!1),t+=i.length})),this._lastDate=e,0===t&&(this._running=!1)}_getAnims(e){const t=this._charts;let n=t.get(e);return n||(n={running:!1,initial:!0,items:[],listeners:{complete:[],progress:[]}},t.set(e,n)),n}listen(e,t,n){this._getAnims(e).listeners[t].push(n)}add(e,t){t&&t.length&&this._getAnims(e).items.push(...t)}has(e){return this._getAnims(e).items.length>0}start(e){const t=this._charts.get(e);t&&(t.running=!0,t.start=Date.now(),t.duration=t.items.reduce(((e,t)=>Math.max(e,t._duration)),0),this._refresh())}running(e){if(!this._running)return!1;const t=this._charts.get(e);return!!(t&&t.running&&t.items.length)}stop(e){const t=this._charts.get(e);if(!t||!t.items.length)return;const n=t.items;let r=n.length-1;for(;r>=0;--r)n[r].cancel();t.items=[],this._notify(e,t,Date.now(),"complete")}remove(e){return this._charts.delete(e)}}var Er=new vr;const xr="transparent",Sr={boolean(e,t,n){return n>.5?t:e},color(e,t,n){const r=gt(e||xr),i=r.valid&>(t||xr);return i&&i.valid?i.mix(r,n).hexString():t},number(e,t,n){return e+(t-e)*n}};class wr{constructor(e,t,n,r){const i=t[n];r=on([e.to,r,i,e.from]);const s=on([e.from,i,r]);this._active=!0,this._fn=e.fn||Sr[e.type||typeof s],this._easing=pt[e.easing]||pt.linear,this._start=Math.floor(Date.now()+(e.delay||0)),this._duration=this._total=Math.floor(e.duration),this._loop=!!e.loop,this._target=t,this._prop=n,this._from=s,this._to=r,this._promises=void 0}active(){return this._active}update(e,t,n){if(this._active){this._notify(!1);const r=this._target[this._prop],i=n-this._start,s=this._duration-i;this._start=n,this._duration=Math.floor(Math.max(s,e.duration)),this._total+=i,this._loop=!!e.loop,this._to=on([e.to,t,r,e.from]),this._from=on([e.from,r,t])}}cancel(){this._active&&(this.tick(Date.now()),this._active=!1,this._notify(!1))}tick(e){const t=e-this._start,n=this._duration,r=this._prop,i=this._from,s=this._loop,o=this._to;let a;if(this._active=i!==o&&(s||t1?2-a:a,a=this._easing(Math.min(1,Math.max(0,a))),this._target[r]=this._fn(i,o,a))}wait(){const e=this._promises||(this._promises=[]);return new Promise(((t,n)=>{e.push({res:t,rej:n})}))}_notify(e){const t=e?"res":"rej",n=this._promises||[];for(let r=0;r{const i=e[r];if(!Q(i))return;const s={};for(const e of t)s[e]=i[e];(Z(i.properties)&&i.properties||[r]).forEach((e=>{e!==r&&n.has(e)||n.set(e,s)}))}))}_animateOptions(e,t){const n=t.options,r=Cr(e,n);if(!r)return[];const i=this._createAnimations(r,n);return n.$shared&&Ar(e.options.$animations,n).then((()=>{e.options=n}),(()=>{})),i}_createAnimations(e,t){const n=this._properties,r=[],i=e.$animations||(e.$animations={}),s=Object.keys(t),o=Date.now();let a;for(a=s.length-1;a>=0;--a){const l=s[a];if("$"===l.charAt(0))continue;if("options"===l){r.push(...this._animateOptions(e,t));continue}const c=t[l];let u=i[l];const d=n.get(l);if(u){if(d&&u.active()){u.update(d,c,o);continue}u.cancel()}d&&d.duration?(i[l]=u=new wr(d,e,l,c),r.push(u)):e[l]=c}return r}update(e,t){if(0===this._properties.size)return void Object.assign(e,t);const n=this._createAnimations(e,t);return n.length?(Er.add(this._chart,n),!0):void 0}}function Ar(e,t){const n=[],r=Object.keys(t);for(let i=0;i0||!n&&t<0)return i.index}return null}function Ur(e,t){const{chart:n,_cachedMeta:r}=e,i=n._stacks||(n._stacks={}),{iScale:s,vScale:o,index:a}=r,l=s.axis,c=o.axis,u=Dr(s,o,r),d=t.length;let h;for(let p=0;pn[e].axis===t)).shift()}function $r(e,t){return ln(e,{active:!1,dataset:void 0,datasetIndex:t,index:t,mode:"default",type:"dataset"})}function zr(e,t,n){return ln(e,{active:!1,dataIndex:t,parsed:void 0,raw:void 0,element:n,index:t,mode:"default",type:"data"})}function Hr(e,t){const n=e.controller.index,r=e.vScale&&e.vScale.axis;if(r){t=t||e._parsed;for(const e of t){const t=e._stacks;if(!t||void 0===t[r]||void 0===t[r][n])return;delete t[r][n],void 0!==t[r]._visualValues&&void 0!==t[r]._visualValues[n]&&delete t[r]._visualValues[n]}}}const Vr=e=>"reset"===e||"none"===e,jr=(e,t)=>t?e:Object.assign({},e),Wr=(e,t,n)=>e&&!t.hidden&&t._stacked&&{keys:Pr(n,!0),values:null};class qr{static defaults={};static datasetElementType=null;static dataElementType=null;constructor(e,t){this.chart=e,this._ctx=e.ctx,this.index=t,this._cachedDataOpts={},this._cachedMeta=this.getMeta(),this._type=this._cachedMeta.type,this.options=void 0,this._parsing=!1,this._data=void 0,this._objectData=void 0,this._sharedOptions=void 0,this._drawStart=void 0,this._drawCount=void 0,this.enableOptionSharing=!1,this.supportsDecimation=!1,this.$context=void 0,this._syncList=[],this.datasetElementType=new.target.datasetElementType,this.dataElementType=new.target.dataElementType,this.initialize()}initialize(){const e=this._cachedMeta;this.configure(),this.linkScales(),e._stacked=Mr(e.vScale,e),this.addElements(),this.options.fill&&!this.chart.isPluginEnabled("filler")&&console.warn("Tried to use the 'fill' option without the 'Filler' plugin enabled. Please import and register the 'Filler' plugin and make sure it is not disabled in the options")}updateIndex(e){this.index!==e&&Hr(this._cachedMeta),this.index=e}linkScales(){const e=this.chart,t=this._cachedMeta,n=this.getDataset(),r=(e,t,n,r)=>"x"===e?t:"r"===e?r:n,i=t.xAxisID=te(n.xAxisID,Gr(e,"x")),s=t.yAxisID=te(n.yAxisID,Gr(e,"y")),o=t.rAxisID=te(n.rAxisID,Gr(e,"r")),a=t.indexAxis,l=t.iAxisID=r(a,i,s,o),c=t.vAxisID=r(a,s,i,o);t.xScale=this.getScaleForId(i),t.yScale=this.getScaleForId(s),t.rScale=this.getScaleForId(o),t.iScale=this.getScaleForId(l),t.vScale=this.getScaleForId(c)}getDataset(){return this.chart.data.datasets[this.index]}getMeta(){return this.chart.getDatasetMeta(this.index)}getScaleForId(e){return this.chart.scales[e]}_getOtherScale(e){const t=this._cachedMeta;return e===t.iScale?t.vScale:t.iScale}reset(){this._update("reset")}_destroy(){const e=this._cachedMeta;this._data&&et(this._data,this),e._stacked&&Hr(e)}_dataCheck(){const e=this.getDataset(),t=e.data||(e.data=[]),n=this._data;if(Q(t))this._data=Nr(t);else if(n!==t){if(n){et(n,this);const e=this._cachedMeta;Hr(e),e._parsed=[]}t&&Object.isExtensible(t)&&Je(t,this),this._syncList=[],this._data=t}}addElements(){const e=this._cachedMeta;this._dataCheck(),this.datasetElementType&&(e.dataset=new this.datasetElementType)}buildOrUpdateElements(e){const t=this._cachedMeta,n=this.getDataset();let r=!1;this._dataCheck();const i=t._stacked;t._stacked=Mr(t.vScale,t),t.stack!==n.stack&&(r=!0,Hr(t),t.stack=n.stack),this._resyncElements(e),(r||i!==t._stacked)&&Ur(this,t._parsed)}configure(){const e=this.chart.config,t=e.datasetScopeKeys(this._type),n=e.getOptionScopes(this.getDataset(),t,!0);this.options=e.createResolver(n,this.getContext()),this._parsing=this.options.parsing,this._cachedDataOpts={}}parse(e,t){const{_cachedMeta:n,_data:r}=this,{iScale:i,_stacked:s}=n,o=i.axis;let a,l,c,u=0===e&&t===r.length||n._sorted,d=e>0&&n._parsed[e-1];if(!1===this._parsing)n._parsed=r,n._sorted=!0,c=r;else{c=Z(r[e])?this.parseArrayData(n,r,e,t):Q(r[e])?this.parseObjectData(n,r,e,t):this.parsePrimitiveData(n,r,e,t);const i=()=>null===l[o]||d&&l[o]t||u=0;--d)if(!p()){this.updateRangeFromParsed(l,e,h,a);break}return l}getAllParsedValues(e){const t=this._cachedMeta._parsed,n=[];let r,i,s;for(r=0,i=t.length;r=0&&ethis.getContext(n,r,t),f=l.resolveNamedOptions(d,h,p,u);return f.$shared&&(f.$shared=a,i[s]=Object.freeze(jr(f,a))),f}_resolveAnimations(e,t,n){const r=this.chart,i=this._cachedDataOpts,s=`animation-${t}`,o=i[s];if(o)return o;let a;if(!1!==r.options.animation){const r=this.chart.config,i=r.datasetAnimationScopeKeys(this._type,t),s=r.getOptionScopes(this.getDataset(),i);a=r.createResolver(s,this.getContext(e,n,t))}const l=new Tr(r,a&&a.animations);return a&&a._cacheable&&(i[s]=Object.freeze(l)),l}getSharedOptions(e){if(e.$shared)return this._sharedOptions||(this._sharedOptions=Object.assign({},e))}includeOptions(e,t){return!t||Vr(e)||this.chart._animationsDisabled}_getSharedOptions(e,t){const n=this.resolveDataElementOptions(e,t),r=this._sharedOptions,i=this.getSharedOptions(n),s=this.includeOptions(t,i)||i!==r;return this.updateSharedOptions(i,t,n),{sharedOptions:i,includeOptions:s}}updateElement(e,t,n,r){Vr(r)?Object.assign(e,n):this._resolveAnimations(t,r).update(e,n)}updateSharedOptions(e,t,n){e&&!Vr(t)&&this._resolveAnimations(void 0,t).update(e,n)}_setStyle(e,t,n,r){e.active=r;const i=this.getStyle(t,r);this._resolveAnimations(t,n,r).update(e,{options:!r&&this.getSharedOptions(i)||i})}removeHoverStyle(e,t,n){this._setStyle(e,n,"active",!1)}setHoverStyle(e,t,n){this._setStyle(e,n,"active",!0)}_removeDatasetHoverStyle(){const e=this._cachedMeta.dataset;e&&this._setStyle(e,void 0,"active",!1)}_setDatasetHoverStyle(){const e=this._cachedMeta.dataset;e&&this._setStyle(e,void 0,"active",!0)}_resyncElements(e){const t=this._data,n=this._cachedMeta.data;for(const[o,a,l]of this._syncList)this[o](a,l);this._syncList=[];const r=n.length,i=t.length,s=Math.min(i,r);s&&this.parse(0,s),i>r?this._insertElements(r,i-r,e):i{for(e.length+=t,o=e.length-1;o>=s;o--)e[o]=e[o-t]};for(a(i),o=e;o0&&this.getParsed(t-1);for(let v=0;v<_;++v){const n=e[v],p=m?n:{};if(v=b){p.skip=!0;continue}const _=this.getParsed(v),E=K(_[h]),x=p[d]=s.getPixelForValue(_[d],v),S=p[h]=i||E?o.getBasePixel():o.getPixelForValue(a?this.applyStack(o,_,a):_[h],v);p.skip=isNaN(x)||isNaN(S)||E,p.stop=v>0&&Math.abs(_[d]-y[d])>g,f&&(p.parsed=_,p.raw=l.data[v]),u&&(p.options=c||this.resolveDataElementOptions(v,n.active?"active":r)),m||this.updateElement(n,v,p,r),y=_}}getMaxOverflow(){const e=this._cachedMeta,t=e.dataset,n=t.options&&t.options.borderWidth||0,r=e.data||[];if(!r.length)return n;const i=r[0].size(this.resolveDataElementOptions(0)),s=r[r.length-1].size(this.resolveDataElementOptions(r.length-1));return Math.max(n,i,s)/2}draw(){const e=this._cachedMeta;e.dataset.updateControlPoints(this.chart.chartArea,e.iScale.axis),super.draw()}}function Yr(){throw new Error("This method is not implemented: Check that a complete date adapter is provided.")}class Kr{static override(e){Object.assign(Kr.prototype,e)}options;constructor(e){this.options=e||{}}init(){}formats(){return Yr()}parse(){return Yr()}format(){return Yr()}add(){return Yr()}diff(){return Yr()}startOf(){return Yr()}endOf(){return Yr()}}var Zr={_date:Kr};function Qr(e,t,n,r){const{controller:i,data:s,_sorted:o}=e,a=i._cachedMeta.iScale;if(a&&t===a.axis&&"r"!==t&&o&&s.length){const e=a._reversePixels?Ke:Ye;if(!r)return e(s,t,n);if(i._sharedOptions){const r=s[0],i="function"===typeof r.getRange&&r.getRange(t);if(i){const r=e(s,t,n-i),o=e(s,t,n+i);return{lo:r.lo,hi:o.hi}}}}return{lo:0,hi:s.length-1}}function Jr(e,t,n,r,i){const s=e.getSortedVisibleDatasetMetas(),o=n[t];for(let a=0,l=s.length;a{e[o](t[n],i)&&(s.push({element:e,datasetIndex:r,index:l}),a=a||e.inRange(t.x,t.y,i))})),r&&!a?[]:s}var oi={evaluateInteractionItems:Jr,modes:{index(e,t,n,r){const i=qn(t,e),s=n.axis||"x",o=n.includeInvisible||!1,a=n.intersect?ti(e,i,s,r,o):ii(e,i,s,!1,r,o),l=[];return a.length?(e.getSortedVisibleDatasetMetas().forEach((e=>{const t=a[0].index,n=e.data[t];n&&!n.skip&&l.push({element:n,datasetIndex:e.index,index:t})})),l):[]},dataset(e,t,n,r){const i=qn(t,e),s=n.axis||"xy",o=n.includeInvisible||!1;let a=n.intersect?ti(e,i,s,r,o):ii(e,i,s,!1,r,o);if(a.length>0){const t=a[0].datasetIndex,n=e.getDatasetMeta(t).data;a=[];for(let e=0;ee.pos===t))}function ci(e,t){return e.filter((e=>-1===ai.indexOf(e.pos)&&e.box.axis===t))}function ui(e,t){return e.sort(((e,n)=>{const r=t?n:e,i=t?e:n;return r.weight===i.weight?r.index-i.index:r.weight-i.weight}))}function di(e){const t=[];let n,r,i,s,o,a;for(n=0,r=(e||[]).length;ne.box.fullSize)),!0),r=ui(li(t,"left"),!0),i=ui(li(t,"right")),s=ui(li(t,"top"),!0),o=ui(li(t,"bottom")),a=ci(t,"x"),l=ci(t,"y");return{fullSize:n,leftAndTop:r.concat(s),rightAndBottom:i.concat(l).concat(o).concat(a),chartArea:li(t,"chartArea"),vertical:r.concat(i).concat(l),horizontal:s.concat(o).concat(a)}}function gi(e,t,n,r){return Math.max(e[n],t[n])+Math.max(e[r],t[r])}function mi(e,t){e.top=Math.max(e.top,t.top),e.left=Math.max(e.left,t.left),e.bottom=Math.max(e.bottom,t.bottom),e.right=Math.max(e.right,t.right)}function bi(e,t,n,r){const{pos:i,box:s}=n,o=e.maxPadding;if(!Q(i)){n.size&&(e[i]-=n.size);const t=r[n.stack]||{size:0,count:1};t.size=Math.max(t.size,n.horizontal?s.height:s.width),n.size=t.size/t.count,e[i]+=n.size}s.getPadding&&mi(o,s.getPadding());const a=Math.max(0,t.outerWidth-gi(o,e,"left","right")),l=Math.max(0,t.outerHeight-gi(o,e,"top","bottom")),c=a!==e.w,u=l!==e.h;return e.w=a,e.h=l,n.horizontal?{same:c,other:u}:{same:u,other:c}}function _i(e){const t=e.maxPadding;function n(n){const r=Math.max(t[n]-e[n],0);return e[n]+=r,r}e.y+=n("top"),e.x+=n("left"),n("right"),n("bottom")}function yi(e,t){const n=t.maxPadding;function r(e){const r={left:0,top:0,right:0,bottom:0};return e.forEach((e=>{r[e]=Math.max(t[e],n[e])})),r}return r(e?["left","right"]:["top","bottom"])}function vi(e,t,n,r){const i=[];let s,o,a,l,c,u;for(s=0,o=e.length,c=0;s{"function"===typeof e.beforeLayout&&e.beforeLayout()}));const u=l.reduce(((e,t)=>t.box.options&&!1===t.box.options.display?e:e+1),0)||1,d=Object.freeze({outerWidth:t,outerHeight:n,padding:i,availableWidth:s,availableHeight:o,vBoxMaxWidth:s/2/u,hBoxMaxHeight:o/2}),h=Object.assign({},i);mi(h,rn(r));const p=Object.assign({maxPadding:h,w:s,h:o,x:i.left,y:i.top},i),f=pi(l.concat(c),d);vi(a.fullSize,p,d,f),vi(l,p,d,f),vi(c,p,d,f)&&vi(l,p,d,f),_i(p),xi(a.leftAndTop,p,d,f),p.x+=p.w,p.y+=p.h,xi(a.rightAndBottom,p,d,f),e.chartArea={left:p.left,top:p.top,right:p.left+p.w,bottom:p.top+p.h,height:p.h,width:p.w},ie(a.chartArea,(t=>{const n=t.box;Object.assign(n,e.chartArea),n.update(p.w,p.h,{left:0,top:0,right:0,bottom:0})}))}};class wi{acquireContext(e,t){}releaseContext(e){return!1}addEventListener(e,t,n){}removeEventListener(e,t,n){}getDevicePixelRatio(){return 1}getMaximumSize(e,t,n,r){return t=Math.max(0,t||e.width),n=n||e.height,{width:t,height:Math.max(0,r?Math.floor(t/r):n)}}isAttached(e){return!0}updateConfig(e){}}class Ti extends wi{acquireContext(e){return e&&e.getContext&&e.getContext("2d")||null}updateConfig(e){e.options.animation=!1}}const Ai="$chartjs",Ci={touchstart:"mousedown",touchmove:"mousemove",touchend:"mouseup",pointerenter:"mouseenter",pointerdown:"mousedown",pointermove:"mousemove",pointerup:"mouseup",pointerleave:"mouseout",pointerout:"mouseout"},Ii=e=>null===e||""===e;function Ri(e,t){const n=e.style,r=e.getAttribute("height"),i=e.getAttribute("width");if(e[Ai]={initial:{height:r,width:i,style:{display:n.display,height:n.height,width:n.width}}},n.display=n.display||"block",n.boxSizing=n.boxSizing||"border-box",Ii(i)){const t=Jn(e,"width");void 0!==t&&(e.width=t)}if(Ii(r))if(""===e.style.height)e.height=e.width/(t||2);else{const t=Jn(e,"height");void 0!==t&&(e.height=t)}return e}const ki=!!Qn&&{passive:!0};function Pi(e,t,n){e.addEventListener(t,n,ki)}function Oi(e,t,n){e.canvas.removeEventListener(t,n,ki)}function Ni(e,t){const n=Ci[e.type]||e.type,{x:r,y:i}=qn(e,t);return{type:n,chart:t,native:e,x:void 0!==r?r:null,y:void 0!==i?i:null}}function Mi(e,t){for(const n of e)if(n===t||n.contains(t))return!0}function Di(e,t,n){const r=e.canvas,i=new MutationObserver((e=>{let t=!1;for(const n of e)t=t||Mi(n.addedNodes,r),t=t&&!Mi(n.removedNodes,r);t&&n()}));return i.observe(document,{childList:!0,subtree:!0}),i}function Li(e,t,n){const r=e.canvas,i=new MutationObserver((e=>{let t=!1;for(const n of e)t=t||Mi(n.removedNodes,r),t=t&&!Mi(n.addedNodes,r);t&&n()}));return i.observe(document,{childList:!0,subtree:!0}),i}const Fi=new Map;let Bi=0;function Ui(){const e=window.devicePixelRatio;e!==Bi&&(Bi=e,Fi.forEach(((t,n)=>{n.currentDevicePixelRatio!==e&&t()})))}function Gi(e,t){Fi.size||window.addEventListener("resize",Ui),Fi.set(e,t)}function $i(e){Fi.delete(e),Fi.size||window.removeEventListener("resize",Ui)}function zi(e,t,n){const r=e.canvas,i=r&&Un(r);if(!i)return;const s=rt(((e,t)=>{const r=i.clientWidth;n(e,t),r{const t=e[0],n=t.contentRect.width,r=t.contentRect.height;0===n&&0===r||s(n,r)}));return o.observe(i),Gi(e,s),o}function Hi(e,t,n){n&&n.disconnect(),"resize"===t&&$i(e)}function Vi(e,t,n){const r=e.canvas,i=rt((t=>{null!==e.ctx&&n(Ni(t,e))}),e);return Pi(r,t,i),i}class ji extends wi{acquireContext(e,t){const n=e&&e.getContext&&e.getContext("2d");return n&&n.canvas===e?(Ri(e,t),n):null}releaseContext(e){const t=e.canvas;if(!t[Ai])return!1;const n=t[Ai].initial;["height","width"].forEach((e=>{const r=n[e];K(r)?t.removeAttribute(e):t.setAttribute(e,r)}));const r=n.style||{};return Object.keys(r).forEach((e=>{t.style[e]=r[e]})),t.width=t.width,delete t[Ai],!0}addEventListener(e,t,n){this.removeEventListener(e,t);const r=e.$proxies||(e.$proxies={}),i={attach:Di,detach:Li,resize:zi},s=i[t]||Vi;r[t]=s(e,t,n)}removeEventListener(e,t){const n=e.$proxies||(e.$proxies={}),r=n[t];if(!r)return;const i={attach:Hi,detach:Hi,resize:Hi},s=i[t]||Oi;s(e,t,r),n[t]=void 0}getDevicePixelRatio(){return window.devicePixelRatio}getMaximumSize(e,t,n,r){return Kn(e,t,n,r)}isAttached(e){const t=Un(e);return!(!t||!t.isConnected)}}function Wi(e){return!Bn()||"undefined"!==typeof OffscreenCanvas&&e instanceof OffscreenCanvas?Ti:ji}class qi{static defaults={};static defaultRoutes=void 0;x;y;active=!1;options;$animations;tooltipPosition(e){const{x:t,y:n}=this.getProps(["x","y"],e);return{x:t,y:n}}hasValue(){return Me(this.x)&&Me(this.y)}getProps(e,t){const n=this.$animations;if(!t||!n)return this;const r={};return e.forEach((e=>{r[e]=n[e]&&n[e].active()?n[e]._to:this[e]})),r}}function Xi(e,t){const n=e.options.ticks,r=Yi(e),i=Math.min(n.maxTicksLimit||r,r),s=n.major.enabled?Zi(t):[],o=s.length,a=s[0],l=s[o-1],c=[];if(o>i)return Qi(t,c,s,o/i),c;const u=Ki(s,t,i);if(o>0){let e,n;const r=o>1?Math.round((l-a)/(o-1)):null;for(Ji(t,c,u,K(r)?0:a-r,a),e=0,n=o-1;ei)return e}return Math.max(i,1)}function Zi(e){const t=[];let n,r;for(n=0,r=e.length;n"left"===e?"right":"right"===e?"left":e,ns=(e,t,n)=>"top"===t||"left"===t?e[t]+n:e[t]-n,rs=(e,t)=>Math.min(t||e,e);function is(e,t){const n=[],r=e.length/t,i=e.length;let s=0;for(;so+a)))return c}function os(e,t){ie(e,(e=>{const n=e.gc,r=n.length/2;let i;if(r>t){for(i=0;ir?r:n,r=i&&n>r?n:r,{min:ee(n,ee(r,n)),max:ee(r,ee(n,r))}}getPadding(){return{left:this.paddingLeft||0,top:this.paddingTop||0,right:this.paddingRight||0,bottom:this.paddingBottom||0}}getTicks(){return this.ticks}getLabels(){const e=this.chart.data;return this.options.labels||(this.isHorizontal()?e.xLabels:e.yLabels)||e.labels||[]}getLabelItems(e=this.chart.chartArea){const t=this._labelItems||(this._labelItems=this._computeLabelItems(e));return t}beforeLayout(){this._cache={},this._dataLimitsCached=!1}beforeUpdate(){re(this.options.beforeUpdate,[this])}update(e,t,n){const{beginAtZero:r,grace:i,ticks:s}=this.options,o=s.sampleSize;this.beforeUpdate(),this.maxWidth=e,this.maxHeight=t,this._margins=n=Object.assign({left:0,right:0,top:0,bottom:0},n),this.ticks=null,this._labelSizes=null,this._gridLineItems=null,this._labelItems=null,this.beforeSetDimensions(),this.setDimensions(),this.afterSetDimensions(),this._maxLength=this.isHorizontal()?this.width+n.left+n.right:this.height+n.top+n.bottom,this._dataLimitsCached||(this.beforeDataLimits(),this.determineDataLimits(),this.afterDataLimits(),this._range=an(this,i,r),this._dataLimitsCached=!0),this.beforeBuildTicks(),this.ticks=this.buildTicks()||[],this.afterBuildTicks();const a=o=i||n<=1||!this.isHorizontal())return void(this.labelRotation=r);const c=this._getLabelSizes(),u=c.widest.width,d=c.highest.height,h=je(this.chart.width-u,0,this.maxWidth);s=e.offset?this.maxWidth/n:h/(n-1),u+6>s&&(s=h/(n-(e.offset?.5:1)),o=this.maxHeight-as(e.grid)-t.padding-ls(e.title,this.chart.options.font),a=Math.sqrt(u*u+d*d),l=Be(Math.min(Math.asin(je((c.highest.height+6)/s,-1,1)),Math.asin(je(o/a,-1,1))-Math.asin(je(d/a,-1,1)))),l=Math.max(r,Math.min(i,l))),this.labelRotation=l}afterCalculateLabelRotation(){re(this.options.afterCalculateLabelRotation,[this])}afterAutoSkip(){}beforeFit(){re(this.options.beforeFit,[this])}fit(){const e={width:0,height:0},{chart:t,options:{ticks:n,title:r,grid:i}}=this,s=this._isVisible(),o=this.isHorizontal();if(s){const s=ls(r,t.options.font);if(o?(e.width=this.maxWidth,e.height=as(i)+s):(e.height=this.maxHeight,e.width=as(i)+s),n.display&&this.ticks.length){const{first:t,last:r,widest:i,highest:s}=this._getLabelSizes(),a=2*n.padding,l=Fe(this.labelRotation),c=Math.cos(l),u=Math.sin(l);if(o){const t=n.mirror?0:u*i.width+c*s.height;e.height=Math.min(this.maxHeight,e.height+t+a)}else{const t=n.mirror?0:c*i.width+u*s.height;e.width=Math.min(this.maxWidth,e.width+t+a)}this._calculatePadding(t,r,u,c)}}this._handleMargins(),o?(this.width=this._length=t.width-this._margins.left-this._margins.right,this.height=e.height):(this.width=e.width,this.height=this._length=t.height-this._margins.top-this._margins.bottom)}_calculatePadding(e,t,n,r){const{ticks:{align:i,padding:s},position:o}=this.options,a=0!==this.labelRotation,l="top"!==o&&"x"===this.axis;if(this.isHorizontal()){const o=this.getPixelForTick(0)-this.left,c=this.right-this.getPixelForTick(this.ticks.length-1);let u=0,d=0;a?l?(u=r*e.width,d=n*t.height):(u=n*e.height,d=r*t.width):"start"===i?d=t.width:"end"===i?u=e.width:"inner"!==i&&(u=e.width/2,d=t.width/2),this.paddingLeft=Math.max((u-o+s)*this.width/(this.width-o),0),this.paddingRight=Math.max((d-c+s)*this.width/(this.width-c),0)}else{let n=t.height/2,r=e.height/2;"start"===i?(n=0,r=e.height):"end"===i&&(n=t.height,r=0),this.paddingTop=n+s,this.paddingBottom=r+s}}_handleMargins(){this._margins&&(this._margins.left=Math.max(this.paddingLeft,this._margins.left),this._margins.top=Math.max(this.paddingTop,this._margins.top),this._margins.right=Math.max(this.paddingRight,this._margins.right),this._margins.bottom=Math.max(this.paddingBottom,this._margins.bottom))}afterFit(){re(this.options.afterFit,[this])}isHorizontal(){const{axis:e,position:t}=this.options;return"top"===t||"bottom"===t||"x"===e}isFullSize(){return this.options.fullSize}_convertTicksToLabels(e){let t,n;for(this.beforeTickToLabelConversion(),this.generateTickLabels(e),t=0,n=e.length;t({width:s[e]||0,height:o[e]||0});return{first:S(0),last:S(t-1),widest:S(E),highest:S(x),widths:s,heights:o}}getLabelForValue(e){return e}getPixelForValue(e,t){return NaN}getValueForPixel(e){}getPixelForTick(e){const t=this.ticks;return e<0||e>t.length-1?null:this.getPixelForValue(t[e].value)}getPixelForDecimal(e){this._reversePixels&&(e=1-e);const t=this._startPixel+e*this._length;return We(this._alignToPixels?Lt(this.chart,t,0):t)}getDecimalForPixel(e){const t=(e-this._startPixel)/this._length;return this._reversePixels?1-t:t}getBasePixel(){return this.getPixelForValue(this.getBaseValue())}getBaseValue(){const{min:e,max:t}=this;return e<0&&t<0?t:e>0&&t>0?e:0}getContext(e){const t=this.ticks||[];if(e>=0&&eo*r?o/n:a/r:a*r0}_computeGridLineItems(e){const t=this.axis,n=this.chart,r=this.options,{grid:i,position:s,border:o}=r,a=i.offset,l=this.isHorizontal(),c=this.ticks,u=c.length+(a?1:0),d=as(i),h=[],p=o.setContext(this.getContext()),f=p.display?p.width:0,g=f/2,m=function(e){return Lt(n,e,f)};let b,_,y,v,E,x,S,w,T,A,C,I;if("top"===s)b=m(this.bottom),x=this.bottom-d,w=b-g,A=m(e.top)+g,I=e.bottom;else if("bottom"===s)b=m(this.top),A=e.top,I=m(e.bottom)-g,x=b+g,w=this.top+d;else if("left"===s)b=m(this.right),E=this.right-d,S=b-g,T=m(e.left)+g,C=e.right;else if("right"===s)b=m(this.left),T=e.left,C=m(e.right)-g,E=b+g,S=this.left+d;else if("x"===t){if("center"===s)b=m((e.top+e.bottom)/2+.5);else if(Q(s)){const e=Object.keys(s)[0],t=s[e];b=m(this.chart.scales[e].getPixelForValue(t))}A=e.top,I=e.bottom,x=b+g,w=x+d}else if("y"===t){if("center"===s)b=m((e.left+e.right)/2);else if(Q(s)){const e=Object.keys(s)[0],t=s[e];b=m(this.chart.scales[e].getPixelForValue(t))}E=b-g,S=E-d,T=e.left,C=e.right}const R=te(r.ticks.maxTicksLimit,u),k=Math.max(1,Math.ceil(u/R));for(_=0;_t.value===e));if(r>=0){const e=t.setContext(this.getContext(r));return e.lineWidth}return 0}drawGrid(e){const t=this.options.grid,n=this.ctx,r=this._gridLineItems||(this._gridLineItems=this._computeGridLineItems(e));let i,s;const o=(e,t,r)=>{r.width&&r.color&&(n.save(),n.lineWidth=r.width,n.strokeStyle=r.color,n.setLineDash(r.borderDash||[]),n.lineDashOffset=r.borderDashOffset,n.beginPath(),n.moveTo(e.x,e.y),n.lineTo(t.x,t.y),n.stroke(),n.restore())};if(t.display)for(i=0,s=r.length;i{this.drawBackground(),this.drawGrid(e),this.drawTitle()}},{z:r,draw:()=>{this.drawBorder()}},{z:t,draw:e=>{this.drawLabels(e)}}]:[{z:t,draw:e=>{this.draw(e)}}]}getMatchingVisibleMetas(e){const t=this.chart.getSortedVisibleDatasetMetas(),n=this.axis+"AxisID",r=[];let i,s;for(i=0,s=t.length;i{const r=n.split("."),i=r.pop(),s=[e].concat(r).join("."),o=t[n].split("."),a=o.pop(),l=o.join(".");Nt.route(s,i,l,a)}))}function bs(e){return"id"in e&&"defaults"in e}class _s{constructor(){this.controllers=new fs(qr,"datasets",!0),this.elements=new fs(qi,"elements"),this.plugins=new fs(Object,"plugins"),this.scales=new fs(ps,"scales"),this._typedRegistries=[this.controllers,this.scales,this.elements]}add(...e){this._each("register",e)}remove(...e){this._each("unregister",e)}addControllers(...e){this._each("register",e,this.controllers)}addElements(...e){this._each("register",e,this.elements)}addPlugins(...e){this._each("register",e,this.plugins)}addScales(...e){this._each("register",e,this.scales)}getController(e){return this._get(e,this.controllers,"controller")}getElement(e){return this._get(e,this.elements,"element")}getPlugin(e){return this._get(e,this.plugins,"plugin")}getScale(e){return this._get(e,this.scales,"scale")}removeControllers(...e){this._each("unregister",e,this.controllers)}removeElements(...e){this._each("unregister",e,this.elements)}removePlugins(...e){this._each("unregister",e,this.plugins)}removeScales(...e){this._each("unregister",e,this.scales)}_each(e,t,n){[...t].forEach((t=>{const r=n||this._getRegistryForType(t);n||r.isForType(t)||r===this.plugins&&t.id?this._exec(e,r,t):ie(t,(t=>{const r=n||this._getRegistryForType(t);this._exec(e,r,t)}))}))}_exec(e,t,n){const r=me(e);re(n["before"+r],[],n),t[e](n),re(n["after"+r],[],n)}_getRegistryForType(e){for(let t=0;te.filter((e=>!t.some((t=>e.plugin.id===t.plugin.id))));this._notify(r(t,n),e,"stop"),this._notify(r(n,t),e,"start")}}function Es(e){const t={},n=[],r=Object.keys(ys.plugins.items);for(let s=0;s1&&Is(e[0].toLowerCase());if(t)return t}throw new Error(`Cannot determine type of '${e}' axis. Please provide 'axis' or 'position' option.`)}function Ps(e,t,n){if(n[t+"AxisID"]===e)return{axis:t}}function Os(e,t){if(t.data&&t.data.datasets){const n=t.data.datasets.filter((t=>t.xAxisID===e||t.yAxisID===e));if(n.length)return Ps(e,"x",n[0])||Ps(e,"y",n[0])}return{}}function Ns(e,t){const n=It[e.type]||{scales:{}},r=t.scales||{},i=Ts(e.type,t),s=Object.create(null);return Object.keys(r).forEach((t=>{const o=r[t];if(!Q(o))return console.error(`Invalid scale configuration for scale: ${t}`);if(o._proxy)return console.warn(`Ignoring resolver passed as options for scale: ${t}`);const a=ks(t,o,Os(t,e),Nt.scales[o.type]),l=Cs(a,i),c=n.scales||{};s[t]=ue(Object.create(null),[{axis:a},o,c[a],c[l]])})),e.data.datasets.forEach((n=>{const i=n.type||e.type,o=n.indexAxis||Ts(i,t),a=It[i]||{},l=a.scales||{};Object.keys(l).forEach((e=>{const t=As(e,o),i=n[t+"AxisID"]||t;s[i]=s[i]||Object.create(null),ue(s[i],[{axis:t},r[i],l[e]])}))})),Object.keys(s).forEach((e=>{const t=s[e];ue(t,[Nt.scales[t.type],Nt.scale])})),s}function Ms(e){const t=e.options||(e.options={});t.plugins=te(t.plugins,{}),t.scales=Ns(e,t)}function Ds(e){return e=e||{},e.datasets=e.datasets||[],e.labels=e.labels||[],e}function Ls(e){return e=e||{},e.data=Ds(e.data),Ms(e),e}const Fs=new Map,Bs=new Set;function Us(e,t){let n=Fs.get(e);return n||(n=t(),Fs.set(e,n),Bs.add(n)),n}const Gs=(e,t,n)=>{const r=ge(t,n);void 0!==r&&e.add(r)};class $s{constructor(e){this._config=Ls(e),this._scopeCache=new Map,this._resolverCache=new Map}get platform(){return this._config.platform}get type(){return this._config.type}set type(e){this._config.type=e}get data(){return this._config.data}set data(e){this._config.data=Ds(e)}get options(){return this._config.options}set options(e){this._config.options=e}get plugins(){return this._config.plugins}update(){const e=this._config;this.clearCache(),Ms(e)}clearCache(){this._scopeCache.clear(),this._resolverCache.clear()}datasetScopeKeys(e){return Us(e,(()=>[[`datasets.${e}`,""]]))}datasetAnimationScopeKeys(e,t){return Us(`${e}.transition.${t}`,(()=>[[`datasets.${e}.transitions.${t}`,`transitions.${t}`],[`datasets.${e}`,""]]))}datasetElementScopeKeys(e,t){return Us(`${e}-${t}`,(()=>[[`datasets.${e}.elements.${t}`,`datasets.${e}`,`elements.${t}`,""]]))}pluginScopeKeys(e){const t=e.id,n=this.type;return Us(`${n}-plugin-${t}`,(()=>[[`plugins.${t}`,...e.additionalOptionScopes||[]]]))}_cachedScopes(e,t){const n=this._scopeCache;let r=n.get(e);return r&&!t||(r=new Map,n.set(e,r)),r}getOptionScopes(e,t,n){const{options:r,type:i}=this,s=this._cachedScopes(e,n),o=s.get(t);if(o)return o;const a=new Set;t.forEach((t=>{e&&(a.add(e),t.forEach((t=>Gs(a,e,t)))),t.forEach((e=>Gs(a,r,e))),t.forEach((e=>Gs(a,It[i]||{},e))),t.forEach((e=>Gs(a,Nt,e))),t.forEach((e=>Gs(a,Rt,e)))}));const l=Array.from(a);return 0===l.length&&l.push(Object.create(null)),Bs.has(t)&&s.set(t,l),l}chartOptionScopes(){const{options:e,type:t}=this;return[e,It[t]||{},Nt.datasets[t]||{},{type:t},Nt,Rt]}resolveNamedOptions(e,t,n,r=[""]){const i={$shared:!0},{resolver:s,subPrefixes:o}=zs(this._resolverCache,e,r);let a=s;if(Vs(s,t)){i.$shared=!1,n=_e(n)?n():n;const t=this.createResolver(e,n,o);a=un(s,n,t)}for(const l of t)i[l]=a[l];return i}createResolver(e,t,n=[""],r){const{resolver:i}=zs(this._resolverCache,e,n);return Q(t)?un(i,t,void 0,r):i}}function zs(e,t,n){let r=e.get(t);r||(r=new Map,e.set(t,r));const i=n.join();let s=r.get(i);if(!s){const e=cn(t,n);s={resolver:e,subPrefixes:n.filter((e=>!e.toLowerCase().includes("hover")))},r.set(i,s)}return s}const Hs=e=>Q(e)&&Object.getOwnPropertyNames(e).reduce(((t,n)=>t||_e(e[n])),!1);function Vs(e,t){const{isScriptable:n,isIndexable:r}=dn(e);for(const i of t){const t=n(i),s=r(i),o=(s||t)&&e[i];if(t&&(_e(o)||Hs(o))||s&&Z(o))return!0}return!1}var js="4.3.0";const Ws=["top","bottom","left","right","chartArea"];function qs(e,t){return"top"===e||"bottom"===e||-1===Ws.indexOf(e)&&"x"===t}function Xs(e,t){return function(n,r){return n[e]===r[e]?n[t]-r[t]:n[e]-r[e]}}function Ys(e){const t=e.chart,n=t.options.animation;t.notifyPlugins("afterRender"),re(n&&n.onComplete,[e],t)}function Ks(e){const t=e.chart,n=t.options.animation;re(n&&n.onProgress,[e],t)}function Zs(e){return Bn()&&"string"===typeof e?e=document.getElementById(e):e&&e.length&&(e=e[0]),e&&e.canvas&&(e=e.canvas),e}const Qs={},Js=e=>{const t=Zs(e);return Object.values(Qs).filter((e=>e.canvas===t)).pop()};function eo(e,t,n){const r=Object.keys(e);for(const i of r){const r=+i;if(r>=t){const s=e[i];delete e[i],(n>0||r>t)&&(e[r+n]=s)}}}function to(e,t,n,r){return n&&"mouseout"!==e.type?r?t:e:null}function no(e){const{xScale:t,yScale:n}=e;if(t&&n)return{left:t.left,right:t.right,top:n.top,bottom:n.bottom}}class ro{static defaults=Nt;static instances=Qs;static overrides=It;static registry=ys;static version=js;static getChart=Js;static register(...e){ys.add(...e),io()}static unregister(...e){ys.remove(...e),io()}constructor(e,t){const n=this.config=new $s(t),r=Zs(e),i=Js(r);if(i)throw new Error("Canvas is already in use. Chart with ID '"+i.id+"' must be destroyed before the canvas with ID '"+i.canvas.id+"' can be reused.");const s=n.createResolver(n.chartOptionScopes(),this.getContext());this.platform=new(n.platform||Wi(r)),this.platform.updateConfig(n);const o=this.platform.acquireContext(r,s.aspectRatio),a=o&&o.canvas,l=a&&a.height,c=a&&a.width;this.id=Y(),this.ctx=o,this.canvas=a,this.width=c,this.height=l,this._options=s,this._aspectRatio=this.aspectRatio,this._layers=[],this._metasets=[],this._stacks=void 0,this.boxes=[],this.currentDevicePixelRatio=void 0,this.chartArea=void 0,this._active=[],this._lastEvent=void 0,this._listeners={},this._responsiveListeners=void 0,this._sortedMetasets=[],this.scales={},this._plugins=new vs,this.$proxies={},this._hiddenIndices={},this.attached=!1,this._animationsDisabled=void 0,this.$context=void 0,this._doResize=it((e=>this.update(e)),s.resizeDelay||0),this._dataChanges=[],Qs[this.id]=this,o&&a?(Er.listen(this,"complete",Ys),Er.listen(this,"progress",Ks),this._initialize(),this.attached&&this.update()):console.error("Failed to create chart: can't acquire context from the given item")}get aspectRatio(){const{options:{aspectRatio:e,maintainAspectRatio:t},width:n,height:r,_aspectRatio:i}=this;return K(e)?t&&i?i:r?n/r:null:e}get data(){return this.config.data}set data(e){this.config.data=e}get options(){return this._options}set options(e){this.config.options=e}get registry(){return ys}_initialize(){return this.notifyPlugins("beforeInit"),this.options.responsive?this.resize():Zn(this,this.options.devicePixelRatio),this.bindEvents(),this.notifyPlugins("afterInit"),this}clear(){return Ft(this.canvas,this.ctx),this}stop(){return Er.stop(this),this}resize(e,t){Er.running(this)?this._resizeBeforeDraw={width:e,height:t}:this._resize(e,t)}_resize(e,t){const n=this.options,r=this.canvas,i=n.maintainAspectRatio&&this.aspectRatio,s=this.platform.getMaximumSize(r,e,t,i),o=n.devicePixelRatio||this.platform.getDevicePixelRatio(),a=this.width?"resize":"attach";this.width=s.width,this.height=s.height,this._aspectRatio=this.aspectRatio,Zn(this,o,!0)&&(this.notifyPlugins("resize",{size:s}),re(n.onResize,[this,s],this),this.attached&&this._doResize(a)&&this.render())}ensureScalesHaveIDs(){const e=this.options,t=e.scales||{};ie(t,((e,t)=>{e.id=t}))}buildOrUpdateScales(){const e=this.options,t=e.scales,n=this.scales,r=Object.keys(n).reduce(((e,t)=>(e[t]=!1,e)),{});let i=[];t&&(i=i.concat(Object.keys(t).map((e=>{const n=t[e],r=ks(e,n),i="r"===r,s="x"===r;return{options:n,dposition:i?"chartArea":s?"bottom":"left",dtype:i?"radialLinear":s?"category":"linear"}})))),ie(i,(t=>{const i=t.options,s=i.id,o=ks(s,i),a=te(i.type,t.dtype);void 0!==i.position&&qs(i.position,o)===qs(t.dposition)||(i.position=t.dposition),r[s]=!0;let l=null;if(s in n&&n[s].type===a)l=n[s];else{const e=ys.getScale(a);l=new e({id:s,type:a,ctx:this.ctx,chart:this}),n[l.id]=l}l.init(i,e)})),ie(r,((e,t)=>{e||delete n[t]})),ie(n,(e=>{Si.configure(this,e,e.options),Si.addBox(this,e)}))}_updateMetasets(){const e=this._metasets,t=this.data.datasets.length,n=e.length;if(e.sort(((e,t)=>e.index-t.index)),n>t){for(let e=t;et.length&&delete this._stacks,e.forEach(((e,n)=>{0===t.filter((t=>t===e._dataset)).length&&this._destroyDatasetMeta(n)}))}buildOrUpdateControllers(){const e=[],t=this.data.datasets;let n,r;for(this._removeUnreferencedMetasets(),n=0,r=t.length;n{this.getDatasetMeta(t).controller.reset()}),this)}reset(){this._resetElements(),this.notifyPlugins("reset")}update(e){const t=this.config;t.update();const n=this._options=t.createResolver(t.chartOptionScopes(),this.getContext()),r=this._animationsDisabled=!n.animation;if(this._updateScales(),this._checkEventBindings(),this._updateHiddenIndices(),this._plugins.invalidate(),!1===this.notifyPlugins("beforeUpdate",{mode:e,cancelable:!0}))return;const i=this.buildOrUpdateControllers();this.notifyPlugins("beforeElementsUpdate");let s=0;for(let l=0,c=this.data.datasets.length;l{e.reset()})),this._updateDatasets(e),this.notifyPlugins("afterUpdate",{mode:e}),this._layers.sort(Xs("z","_idx"));const{_active:o,_lastEvent:a}=this;a?this._eventHandler(a,!0):o.length&&this._updateHoverStyles(o,o,!0),this.render()}_updateScales(){ie(this.scales,(e=>{Si.removeBox(this,e)})),this.ensureScalesHaveIDs(),this.buildOrUpdateScales()}_checkEventBindings(){const e=this.options,t=new Set(Object.keys(this._listeners)),n=new Set(e.events);ye(t,n)&&!!this._responsiveListeners===e.responsive||(this.unbindEvents(),this.bindEvents())}_updateHiddenIndices(){const{_hiddenIndices:e}=this,t=this._getUniformDataChanges()||[];for(const{method:n,start:r,count:i}of t){const t="_removeElements"===n?-i:i;eo(e,r,t)}}_getUniformDataChanges(){const e=this._dataChanges;if(!e||!e.length)return;this._dataChanges=[];const t=this.data.datasets.length,n=t=>new Set(e.filter((e=>e[0]===t)).map(((e,t)=>t+","+e.splice(1).join(",")))),r=n(0);for(let i=1;ie.split(","))).map((e=>({method:e[1],start:+e[2],count:+e[3]})))}_updateLayout(e){if(!1===this.notifyPlugins("beforeLayout",{cancelable:!0}))return;Si.update(this,this.width,this.height,e);const t=this.chartArea,n=t.width<=0||t.height<=0;this._layers=[],ie(this.boxes,(e=>{n&&"chartArea"===e.position||(e.configure&&e.configure(),this._layers.push(...e._layers()))}),this),this._layers.forEach(((e,t)=>{e._idx=t})),this.notifyPlugins("afterLayout")}_updateDatasets(e){if(!1!==this.notifyPlugins("beforeDatasetsUpdate",{mode:e,cancelable:!0})){for(let e=0,t=this.data.datasets.length;e=0;--t)this._drawDataset(e[t]);this.notifyPlugins("afterDatasetsDraw")}_drawDataset(e){const t=this.ctx,n=e._clip,r=!n.disabled,i=no(e)||this.chartArea,s={meta:e,index:e.index,cancelable:!0};!1!==this.notifyPlugins("beforeDatasetDraw",s)&&(r&&$t(t,{left:!1===n.left?0:i.left-n.left,right:!1===n.right?this.width:i.right+n.right,top:!1===n.top?0:i.top-n.top,bottom:!1===n.bottom?this.height:i.bottom+n.bottom}),e.controller.draw(),r&&zt(t),s.cancelable=!1,this.notifyPlugins("afterDatasetDraw",s))}isPointInArea(e){return Gt(e,this.chartArea,this._minPadding)}getElementsAtEventForMode(e,t,n,r){const i=oi.modes[t];return"function"===typeof i?i(this,e,n,r):[]}getDatasetMeta(e){const t=this.data.datasets[e],n=this._metasets;let r=n.filter((e=>e&&e._dataset===t)).pop();return r||(r={type:null,data:[],dataset:null,controller:null,hidden:null,xAxisID:null,yAxisID:null,order:t&&t.order||0,index:e,_dataset:t,_parsed:[],_sorted:!1},n.push(r)),r}getContext(){return this.$context||(this.$context=ln(null,{chart:this,type:"chart"}))}getVisibleDatasetCount(){return this.getSortedVisibleDatasetMetas().length}isDatasetVisible(e){const t=this.data.datasets[e];if(!t)return!1;const n=this.getDatasetMeta(e);return"boolean"===typeof n.hidden?!n.hidden:!t.hidden}setDatasetVisibility(e,t){const n=this.getDatasetMeta(e);n.hidden=!t}toggleDataVisibility(e){this._hiddenIndices[e]=!this._hiddenIndices[e]}getDataVisibility(e){return!this._hiddenIndices[e]}_updateVisibility(e,t,n){const r=n?"show":"hide",i=this.getDatasetMeta(e),s=i.controller._resolveAnimations(void 0,r);be(t)?(i.data[t].hidden=!n,this.update()):(this.setDatasetVisibility(e,n),s.update(i,{visible:n}),this.update((t=>t.datasetIndex===e?r:void 0)))}hide(e,t){this._updateVisibility(e,t,!1)}show(e,t){this._updateVisibility(e,t,!0)}_destroyDatasetMeta(e){const t=this._metasets[e];t&&t.controller&&t.controller._destroy(),delete this._metasets[e]}_stop(){let e,t;for(this.stop(),Er.remove(this),e=0,t=this.data.datasets.length;e{t.addEventListener(this,n,r),e[n]=r},r=(e,t,n)=>{e.offsetX=t,e.offsetY=n,this._eventHandler(e)};ie(this.options.events,(e=>n(e,r)))}bindResponsiveEvents(){this._responsiveListeners||(this._responsiveListeners={});const e=this._responsiveListeners,t=this.platform,n=(n,r)=>{t.addEventListener(this,n,r),e[n]=r},r=(n,r)=>{e[n]&&(t.removeEventListener(this,n,r),delete e[n])},i=(e,t)=>{this.canvas&&this.resize(e,t)};let s;const o=()=>{r("attach",o),this.attached=!0,this.resize(),n("resize",i),n("detach",s)};s=()=>{this.attached=!1,r("resize",i),this._stop(),this._resize(0,0),n("attach",o)},t.isAttached(this.canvas)?o():s()}unbindEvents(){ie(this._listeners,((e,t)=>{this.platform.removeEventListener(this,t,e)})),this._listeners={},ie(this._responsiveListeners,((e,t)=>{this.platform.removeEventListener(this,t,e)})),this._responsiveListeners=void 0}updateHoverStyle(e,t,n){const r=n?"set":"remove";let i,s,o,a;for("dataset"===t&&(i=this.getDatasetMeta(e[0].datasetIndex),i.controller["_"+r+"DatasetHoverStyle"]()),o=0,a=e.length;o{const n=this.getDatasetMeta(e);if(!n)throw new Error("No dataset found at index "+e);return{datasetIndex:e,element:n.data[t],index:t}})),r=!se(n,t);r&&(this._active=n,this._lastEvent=null,this._updateHoverStyles(n,t))}notifyPlugins(e,t,n){return this._plugins.notify(this,e,t,n)}isPluginEnabled(e){return 1===this._plugins._cache.filter((t=>t.plugin.id===e)).length}_updateHoverStyles(e,t,n){const r=this.options.hover,i=(e,t)=>e.filter((e=>!t.some((t=>e.datasetIndex===t.datasetIndex&&e.index===t.index)))),s=i(t,e),o=n?e:i(e,t);s.length&&this.updateHoverStyle(s,r.mode,!1),o.length&&r.mode&&this.updateHoverStyle(o,r.mode,!0)}_eventHandler(e,t){const n={event:e,replay:t,cancelable:!0,inChartArea:this.isPointInArea(e)},r=t=>(t.options.events||this.options.events).includes(e.native.type);if(!1===this.notifyPlugins("beforeEvent",n,r))return;const i=this._handleEvent(e,t,n.inChartArea);return n.cancelable=!1,this.notifyPlugins("afterEvent",n,r),(i||n.changed)&&this.render(),this}_handleEvent(e,t,n){const{_active:r=[],options:i}=this,s=t,o=this._getActiveElements(e,r,n,s),a=ve(e),l=to(e,this._lastEvent,n,a);n&&(this._lastEvent=null,re(i.onHover,[e,o,this],this),a&&re(i.onClick,[e,o,this],this));const c=!se(o,r);return(c||t)&&(this._active=o,this._updateHoverStyles(o,r,t)),this._lastEvent=l,c}_getActiveElements(e,t,n,r){if("mouseout"===e.type)return[];if(!n)return t;const i=this.options.hover;return this.getElementsAtEventForMode(e,i.mode,i,r)}}function io(){return ie(ro.instances,(e=>e._plugins.invalidate()))}function so(e,t,n=t){e.lineCap=te(n.borderCapStyle,t.borderCapStyle),e.setLineDash(te(n.borderDash,t.borderDash)),e.lineDashOffset=te(n.borderDashOffset,t.borderDashOffset),e.lineJoin=te(n.borderJoinStyle,t.borderJoinStyle),e.lineWidth=te(n.borderWidth,t.borderWidth),e.strokeStyle=te(n.borderColor,t.borderColor)}function oo(e,t,n){e.lineTo(n.x,n.y)}function ao(e){return e.stepped?Ht:e.tension||"monotone"===e.cubicInterpolationMode?Vt:oo}function lo(e,t,n={}){const r=e.length,{start:i=0,end:s=r-1}=n,{start:o,end:a}=t,l=Math.max(i,o),c=Math.min(s,a),u=ia&&s>a;return{count:r,start:l,loop:t.loop,ilen:c(o+(c?a-e:e))%s,y=()=>{p!==f&&(e.lineTo(m,f),e.lineTo(m,p),e.lineTo(m,g))};for(l&&(d=i[_(0)],e.moveTo(d.x,d.y)),u=0;u<=a;++u){if(d=i[_(u)],d.skip)continue;const t=d.x,n=d.y,r=0|t;r===h?(nf&&(f=n),m=(b*m+t)/++b):(y(),e.lineTo(t,n),h=r,b=0,p=f=n),g=n}y()}function ho(e){const t=e.options,n=t.borderDash&&t.borderDash.length,r=!e._decimated&&!e._loop&&!t.tension&&"monotone"!==t.cubicInterpolationMode&&!t.stepped&&!n;return r?uo:co}function po(e){return e.stepped?tr:e.tension||"monotone"===e.cubicInterpolationMode?nr:er}function fo(e,t,n,r){let i=t._path;i||(i=t._path=new Path2D,t.path(i,n,r)&&i.closePath()),so(e,t.options),e.stroke(i)}function go(e,t,n,r){const{segments:i,options:s}=t,o=ho(t);for(const a of i)so(e,s,a.style),e.beginPath(),o(e,t,a,{start:n,end:n+r-1})&&e.closePath(),e.stroke()}const mo="function"===typeof Path2D;function bo(e,t,n,r){mo&&!t.options.segment?fo(e,t,n,r):go(e,t,n,r)}class _o extends qi{static id="line";static defaults={borderCapStyle:"butt",borderDash:[],borderDashOffset:0,borderJoinStyle:"miter",borderWidth:3,capBezierPoints:!0,cubicInterpolationMode:"default",fill:!1,spanGaps:!1,stepped:!1,tension:0};static defaultRoutes={backgroundColor:"backgroundColor",borderColor:"borderColor"};static descriptors={_scriptable:!0,_indexable:e=>"borderDash"!==e&&"fill"!==e};constructor(e){super(),this.animated=!0,this.options=void 0,this._chart=void 0,this._loop=void 0,this._fullLoop=void 0,this._path=void 0,this._points=void 0,this._segments=void 0,this._decimated=!1,this._pointsUpdated=!1,this._datasetIndex=void 0,e&&Object.assign(this,e)}updateControlPoints(e,t){const n=this.options;if((n.tension||"monotone"===n.cubicInterpolationMode)&&!n.stepped&&!this._pointsUpdated){const r=n.spanGaps?this._loop:this._fullLoop;Fn(this._points,n,e,r,t),this._pointsUpdated=!0}}set points(e){this._points=e,delete this._segments,delete this._path,this._pointsUpdated=!1}get points(){return this._points}get segments(){return this._segments||(this._segments=gr(this,this.options.segment))}first(){const e=this.segments,t=this.points;return e.length&&t[e[0].start]}last(){const e=this.segments,t=this.points,n=e.length;return n&&t[e[n-1].end]}interpolate(e,t){const n=this.options,r=e[t],i=this.points,s=hr(this,{property:t,start:r,end:r});if(!s.length)return;const o=[],a=po(n);let l,c;for(l=0,c=s.length;l{let{boxHeight:n=t,boxWidth:r=t}=e;return e.usePointStyle&&(n=Math.min(n,t),r=e.pointStyleWidth||Math.min(r,t)),{boxWidth:r,boxHeight:n,itemHeight:Math.max(t,n)}},xo=(e,t)=>null!==e&&null!==t&&e.datasetIndex===t.datasetIndex&&e.index===t.index;class So extends qi{constructor(e){super(),this._added=!1,this.legendHitBoxes=[],this._hoveredItem=null,this.doughnutMode=!1,this.chart=e.chart,this.options=e.options,this.ctx=e.ctx,this.legendItems=void 0,this.columnSizes=void 0,this.lineWidths=void 0,this.maxHeight=void 0,this.maxWidth=void 0,this.top=void 0,this.bottom=void 0,this.left=void 0,this.right=void 0,this.height=void 0,this.width=void 0,this._margins=void 0,this.position=void 0,this.weight=void 0,this.fullSize=void 0}update(e,t,n){this.maxWidth=e,this.maxHeight=t,this._margins=n,this.setDimensions(),this.buildLabels(),this.fit()}setDimensions(){this.isHorizontal()?(this.width=this.maxWidth,this.left=this._margins.left,this.right=this.width):(this.height=this.maxHeight,this.top=this._margins.top,this.bottom=this.height)}buildLabels(){const e=this.options.labels||{};let t=re(e.generateLabels,[this.chart],this)||[];e.filter&&(t=t.filter((t=>e.filter(t,this.chart.data)))),e.sort&&(t=t.sort(((t,n)=>e.sort(t,n,this.chart.data)))),this.options.reverse&&t.reverse(),this.legendItems=t}fit(){const{options:e,ctx:t}=this;if(!e.display)return void(this.width=this.height=0);const n=e.labels,r=sn(n.font),i=r.size,s=this._computeTitleHeight(),{boxWidth:o,itemHeight:a}=Eo(n,i);let l,c;t.font=r.string,this.isHorizontal()?(l=this.maxWidth,c=this._fitRows(s,i,o,a)+10):(c=this.maxHeight,l=this._fitCols(s,r,o,a)+10),this.width=Math.min(l,e.maxWidth||this.maxWidth),this.height=Math.min(c,e.maxHeight||this.maxHeight)}_fitRows(e,t,n,r){const{ctx:i,maxWidth:s,options:{labels:{padding:o}}}=this,a=this.legendHitBoxes=[],l=this.lineWidths=[0],c=r+o;let u=e;i.textAlign="left",i.textBaseline="middle";let d=-1,h=-c;return this.legendItems.forEach(((e,p)=>{const f=n+t/2+i.measureText(e.text).width;(0===p||l[l.length-1]+f+2*o>s)&&(u+=c,l[l.length-(p>0?0:1)]=0,h+=c,d++),a[p]={left:0,top:h,row:d,width:f,height:r},l[l.length-1]+=f+o})),u}_fitCols(e,t,n,r){const{ctx:i,maxHeight:s,options:{labels:{padding:o}}}=this,a=this.legendHitBoxes=[],l=this.columnSizes=[],c=s-e;let u=o,d=0,h=0,p=0,f=0;return this.legendItems.forEach(((e,s)=>{const{itemWidth:g,itemHeight:m}=wo(n,t,i,e,r);s>0&&h+m+2*o>c&&(u+=d+o,l.push({width:d,height:h}),p+=d+o,f++,d=h=0),a[s]={left:p,top:h,col:f,width:g,height:m},d=Math.max(d,g),h+=m+o})),u+=d,l.push({width:d,height:h}),u}adjustHitBoxes(){if(!this.options.display)return;const e=this._computeTitleHeight(),{legendHitBoxes:t,options:{align:n,labels:{padding:r},rtl:i}}=this,s=sr(i,this.left,this.width);if(this.isHorizontal()){let i=0,o=ot(n,this.left+r,this.right-this.lineWidths[i]);for(const a of t)i!==a.row&&(i=a.row,o=ot(n,this.left+r,this.right-this.lineWidths[i])),a.top+=this.top+e+r,a.left=s.leftForLtr(s.x(o),a.width),o+=a.width+r}else{let i=0,o=ot(n,this.top+e+r,this.bottom-this.columnSizes[i].height);for(const a of t)a.col!==i&&(i=a.col,o=ot(n,this.top+e+r,this.bottom-this.columnSizes[i].height)),a.top=o,a.left+=this.left+r,a.left=s.leftForLtr(s.x(a.left),a.width),o+=a.height+r}}isHorizontal(){return"top"===this.options.position||"bottom"===this.options.position}draw(){if(this.options.display){const e=this.ctx;$t(e,this),this._draw(),zt(e)}}_draw(){const{options:e,columnSizes:t,lineWidths:n,ctx:r}=this,{align:i,labels:s}=e,o=Nt.color,a=sr(e.rtl,this.left,this.width),l=sn(s.font),{padding:c}=s,u=l.size,d=u/2;let h;this.drawTitle(),r.textAlign=a.textAlign("left"),r.textBaseline="middle",r.lineWidth=.5,r.font=l.string;const{boxWidth:p,boxHeight:f,itemHeight:g}=Eo(s,u),m=function(e,t,n){if(isNaN(p)||p<=0||isNaN(f)||f<0)return;r.save();const i=te(n.lineWidth,1);if(r.fillStyle=te(n.fillStyle,o),r.lineCap=te(n.lineCap,"butt"),r.lineDashOffset=te(n.lineDashOffset,0),r.lineJoin=te(n.lineJoin,"miter"),r.lineWidth=i,r.strokeStyle=te(n.strokeStyle,o),r.setLineDash(te(n.lineDash,[])),s.usePointStyle){const o={radius:f*Math.SQRT2/2,pointStyle:n.pointStyle,rotation:n.rotation,borderWidth:i},l=a.xPlus(e,p/2),c=t+d;Ut(r,o,l,c,s.pointStyleWidth&&p)}else{const s=t+Math.max((u-f)/2,0),o=a.leftForLtr(e,p),l=nn(n.borderRadius);r.beginPath(),Object.values(l).some((e=>0!==e))?Yt(r,{x:o,y:s,w:p,h:f,radius:l}):r.rect(o,s,p,f),r.fill(),0!==i&&r.stroke()}r.restore()},b=function(e,t,n){Xt(r,n.text,e,t+g/2,l,{strikethrough:n.hidden,textAlign:a.textAlign(n.textAlign)})},_=this.isHorizontal(),y=this._computeTitleHeight();h=_?{x:ot(i,this.left+c,this.right-n[0]),y:this.top+c+y,line:0}:{x:this.left+c,y:ot(i,this.top+y+c,this.bottom-t[0].height),line:0},or(this.ctx,e.textDirection);const v=g+c;this.legendItems.forEach(((o,u)=>{r.strokeStyle=o.fontColor,r.fillStyle=o.fontColor;const f=r.measureText(o.text).width,g=a.textAlign(o.textAlign||(o.textAlign=s.textAlign)),E=p+d+f;let x=h.x,S=h.y;a.setWidth(this.width),_?u>0&&x+E+c>this.right&&(S=h.y+=v,h.line++,x=h.x=ot(i,this.left+c,this.right-n[h.line])):u>0&&S+v>this.bottom&&(x=h.x=x+t[h.line].width+c,h.line++,S=h.y=ot(i,this.top+y+c,this.bottom-t[h.line].height));const w=a.x(x);if(m(w,S,o),x=at(g,x+p+d,_?x+E:this.right,e.rtl),b(a.x(x),S,o),_)h.x+=E+c;else if("string"!==typeof o.text){const e=l.lineHeight;h.y+=Co(o,e)}else h.y+=v})),ar(this.ctx,e.textDirection)}drawTitle(){const e=this.options,t=e.title,n=sn(t.font),r=rn(t.padding);if(!t.display)return;const i=sr(e.rtl,this.left,this.width),s=this.ctx,o=t.position,a=n.size/2,l=r.top+a;let c,u=this.left,d=this.width;if(this.isHorizontal())d=Math.max(...this.lineWidths),c=this.top+l,u=ot(e.align,u,this.right-d);else{const t=this.columnSizes.reduce(((e,t)=>Math.max(e,t.height)),0);c=l+ot(e.align,this.top,this.bottom-t-e.labels.padding-this._computeTitleHeight())}const h=ot(o,u,u+d);s.textAlign=i.textAlign(st(o)),s.textBaseline="middle",s.strokeStyle=t.color,s.fillStyle=t.color,s.font=n.string,Xt(s,t.text,h,c,n)}_computeTitleHeight(){const e=this.options.title,t=sn(e.font),n=rn(e.padding);return e.display?t.lineHeight+n.height:0}_getLegendItemAt(e,t){let n,r,i;if(qe(e,this.left,this.right)&&qe(t,this.top,this.bottom))for(i=this.legendHitBoxes,n=0;ne.length>t.length?e:t))),t+n.size/2+r.measureText(i).width}function Ao(e,t,n){let r=e;return"string"!==typeof t.text&&(r=Co(t,n)),r}function Co(e,t){const n=e.text?e.text.length+.5:0;return t*n}function Io(e,t){return!("mousemove"!==e&&"mouseout"!==e||!t.onHover&&!t.onLeave)||!(!t.onClick||"click"!==e&&"mouseup"!==e)}var Ro={id:"legend",_element:So,start(e,t,n){const r=e.legend=new So({ctx:e.ctx,options:n,chart:e});Si.configure(e,r,n),Si.addBox(e,r)},stop(e){Si.removeBox(e,e.legend),delete e.legend},beforeUpdate(e,t,n){const r=e.legend;Si.configure(e,r,n),r.options=n},afterUpdate(e){const t=e.legend;t.buildLabels(),t.adjustHitBoxes()},afterEvent(e,t){t.replay||e.legend.handleEvent(t.event)},defaults:{display:!0,position:"top",align:"center",fullSize:!0,reverse:!1,weight:1e3,onClick(e,t,n){const r=t.datasetIndex,i=n.chart;i.isDatasetVisible(r)?(i.hide(r),t.hidden=!0):(i.show(r),t.hidden=!1)},onHover:null,onLeave:null,labels:{color:e=>e.chart.options.color,boxWidth:40,padding:10,generateLabels(e){const t=e.data.datasets,{labels:{usePointStyle:n,pointStyle:r,textAlign:i,color:s,useBorderRadius:o,borderRadius:a}}=e.legend.options;return e._getSortedDatasetMetas().map((e=>{const l=e.controller.getStyle(n?0:void 0),c=rn(l.borderWidth);return{text:t[e.index].label,fillStyle:l.backgroundColor,fontColor:s,hidden:!e.visible,lineCap:l.borderCapStyle,lineDash:l.borderDash,lineDashOffset:l.borderDashOffset,lineJoin:l.borderJoinStyle,lineWidth:(c.width+c.height)/4,strokeStyle:l.borderColor,pointStyle:r||l.pointStyle,rotation:l.rotation,textAlign:i||l.textAlign,borderRadius:o&&(a||l.borderRadius),datasetIndex:e.index}}),this)}},title:{color:e=>e.chart.options.color,display:!1,position:"center",text:""}},descriptors:{_scriptable:e=>!e.startsWith("on"),labels:{_scriptable:e=>!["generateLabels","filter","sort"].includes(e)}}};class ko extends qi{constructor(e){super(),this.chart=e.chart,this.options=e.options,this.ctx=e.ctx,this._padding=void 0,this.top=void 0,this.bottom=void 0,this.left=void 0,this.right=void 0,this.width=void 0,this.height=void 0,this.position=void 0,this.weight=void 0,this.fullSize=void 0}update(e,t){const n=this.options;if(this.left=0,this.top=0,!n.display)return void(this.width=this.height=this.right=this.bottom=0);this.width=this.right=e,this.height=this.bottom=t;const r=Z(n.text)?n.text.length:1;this._padding=rn(n.padding);const i=r*sn(n.font).lineHeight+this._padding.height;this.isHorizontal()?this.height=i:this.width=i}isHorizontal(){const e=this.options.position;return"top"===e||"bottom"===e}_drawArgs(e){const{top:t,left:n,bottom:r,right:i,options:s}=this,o=s.align;let a,l,c,u=0;return this.isHorizontal()?(l=ot(o,n,i),c=t+e,a=i-n):("left"===s.position?(l=n+e,c=ot(o,r,t),u=-.5*Ee):(l=i-e,c=ot(o,t,r),u=.5*Ee),a=r-t),{titleX:l,titleY:c,maxWidth:a,rotation:u}}draw(){const e=this.ctx,t=this.options;if(!t.display)return;const n=sn(t.font),r=n.lineHeight,i=r/2+this._padding.top,{titleX:s,titleY:o,maxWidth:a,rotation:l}=this._drawArgs(i);Xt(e,t.text,0,0,n,{color:t.color,maxWidth:a,rotation:l,textAlign:st(t.align),textBaseline:"middle",translation:[s,o]})}}function Po(e,t){const n=new ko({ctx:e.ctx,options:t,chart:e});Si.configure(e,n,t),Si.addBox(e,n),e.titleBlock=n}var Oo={id:"title",_element:ko,start(e,t,n){Po(e,n)},stop(e){const t=e.titleBlock;Si.removeBox(e,t),delete e.titleBlock},beforeUpdate(e,t,n){const r=e.titleBlock;Si.configure(e,r,n),r.options=n},defaults:{align:"center",display:!1,font:{weight:"bold"},fullSize:!0,padding:10,position:"top",text:"",weight:2e3},defaultRoutes:{color:"color"},descriptors:{_scriptable:!0,_indexable:!1}};new WeakMap;const No={average(e){if(!e.length)return!1;let t,n,r=0,i=0,s=0;for(t=0,n=e.length;t-1?e.split("\n"):e}function Lo(e,t){const{element:n,datasetIndex:r,index:i}=t,s=e.getDatasetMeta(r).controller,{label:o,value:a}=s.getLabelAndValue(i);return{chart:e,label:o,parsed:s.getParsed(i),raw:e.data.datasets[r].data[i],formattedValue:a,dataset:s.getDataset(),dataIndex:i,datasetIndex:r,element:n}}function Fo(e,t){const n=e.chart.ctx,{body:r,footer:i,title:s}=e,{boxWidth:o,boxHeight:a}=t,l=sn(t.bodyFont),c=sn(t.titleFont),u=sn(t.footerFont),d=s.length,h=i.length,p=r.length,f=rn(t.padding);let g=f.height,m=0,b=r.reduce(((e,t)=>e+t.before.length+t.lines.length+t.after.length),0);if(b+=e.beforeBody.length+e.afterBody.length,d&&(g+=d*c.lineHeight+(d-1)*t.titleSpacing+t.titleMarginBottom),b){const e=t.displayColors?Math.max(a,l.lineHeight):l.lineHeight;g+=p*e+(b-p)*l.lineHeight+(b-1)*t.bodySpacing}h&&(g+=t.footerMarginTop+h*u.lineHeight+(h-1)*t.footerSpacing);let _=0;const y=function(e){m=Math.max(m,n.measureText(e).width+_)};return n.save(),n.font=c.string,ie(e.title,y),n.font=l.string,ie(e.beforeBody.concat(e.afterBody),y),_=t.displayColors?o+2+t.boxPadding:0,ie(r,(e=>{ie(e.before,y),ie(e.lines,y),ie(e.after,y)})),_=0,n.font=u.string,ie(e.footer,y),n.restore(),m+=f.width,{width:m,height:g}}function Bo(e,t){const{y:n,height:r}=t;return ne.height-r/2?"bottom":"center"}function Uo(e,t,n,r){const{x:i,width:s}=r,o=n.caretSize+n.caretPadding;return"left"===e&&i+s+o>t.width||("right"===e&&i-s-o<0||void 0)}function Go(e,t,n,r){const{x:i,width:s}=n,{width:o,chartArea:{left:a,right:l}}=e;let c="center";return"center"===r?c=i<=(a+l)/2?"left":"right":i<=s/2?c="left":i>=o-s/2&&(c="right"),Uo(c,e,t,n)&&(c="center"),c}function $o(e,t,n){const r=n.yAlign||t.yAlign||Bo(e,n);return{xAlign:n.xAlign||t.xAlign||Go(e,t,n,r),yAlign:r}}function zo(e,t){let{x:n,width:r}=e;return"right"===t?n-=r:"center"===t&&(n-=r/2),n}function Ho(e,t,n){let{y:r,height:i}=e;return"top"===t?r+=n:r-="bottom"===t?i+n:i/2,r}function Vo(e,t,n,r){const{caretSize:i,caretPadding:s,cornerRadius:o}=e,{xAlign:a,yAlign:l}=n,c=i+s,{topLeft:u,topRight:d,bottomLeft:h,bottomRight:p}=nn(o);let f=zo(t,a);const g=Ho(t,l,c);return"center"===l?"left"===a?f+=c:"right"===a&&(f-=c):"left"===a?f-=Math.max(u,h)+i:"right"===a&&(f+=Math.max(d,p)+i),{x:je(f,0,r.width-t.width),y:je(g,0,r.height-t.height)}}function jo(e,t,n){const r=rn(n.padding);return"center"===t?e.x+e.width/2:"right"===t?e.x+e.width-r.right:e.x+r.left}function Wo(e){return Mo([],Do(e))}function qo(e,t,n){return ln(e,{tooltip:t,tooltipItems:n,type:"tooltip"})}function Xo(e,t){const n=t&&t.dataset&&t.dataset.tooltip&&t.dataset.tooltip.callbacks;return n?e.override(n):e}const Yo={beforeTitle:X,title(e){if(e.length>0){const t=e[0],n=t.chart.data.labels,r=n?n.length:0;if(this&&this.options&&"dataset"===this.options.mode)return t.dataset.label||"";if(t.label)return t.label;if(r>0&&t.dataIndex{const t={before:[],lines:[],after:[]},i=Xo(n,e);Mo(t.before,Do(Ko(i,"beforeLabel",this,e))),Mo(t.lines,Ko(i,"label",this,e)),Mo(t.after,Do(Ko(i,"afterLabel",this,e))),r.push(t)})),r}getAfterBody(e,t){return Wo(Ko(t.callbacks,"afterBody",this,e))}getFooter(e,t){const{callbacks:n}=t,r=Ko(n,"beforeFooter",this,e),i=Ko(n,"footer",this,e),s=Ko(n,"afterFooter",this,e);let o=[];return o=Mo(o,Do(r)),o=Mo(o,Do(i)),o=Mo(o,Do(s)),o}_createItems(e){const t=this._active,n=this.chart.data,r=[],i=[],s=[];let o,a,l=[];for(o=0,a=t.length;oe.filter(t,r,i,n)))),e.itemSort&&(l=l.sort(((t,r)=>e.itemSort(t,r,n)))),ie(l,(t=>{const n=Xo(e.callbacks,t);r.push(Ko(n,"labelColor",this,t)),i.push(Ko(n,"labelPointStyle",this,t)),s.push(Ko(n,"labelTextColor",this,t))})),this.labelColors=r,this.labelPointStyles=i,this.labelTextColors=s,this.dataPoints=l,l}update(e,t){const n=this.options.setContext(this.getContext()),r=this._active;let i,s=[];if(r.length){const e=No[n.position].call(this,r,this._eventPosition);s=this._createItems(n),this.title=this.getTitle(s,n),this.beforeBody=this.getBeforeBody(s,n),this.body=this.getBody(s,n),this.afterBody=this.getAfterBody(s,n),this.footer=this.getFooter(s,n);const t=this._size=Fo(this,n),o=Object.assign({},e,t),a=$o(this.chart,n,o),l=Vo(n,o,a,this.chart);this.xAlign=a.xAlign,this.yAlign=a.yAlign,i={opacity:1,x:l.x,y:l.y,width:t.width,height:t.height,caretX:e.x,caretY:e.y}}else 0!==this.opacity&&(i={opacity:0});this._tooltipItems=s,this.$context=void 0,i&&this._resolveAnimations().update(this,i),e&&n.external&&n.external.call(this,{chart:this.chart,tooltip:this,replay:t})}drawCaret(e,t,n,r){const i=this.getCaretPosition(e,n,r);t.lineTo(i.x1,i.y1),t.lineTo(i.x2,i.y2),t.lineTo(i.x3,i.y3)}getCaretPosition(e,t,n){const{xAlign:r,yAlign:i}=this,{caretSize:s,cornerRadius:o}=n,{topLeft:a,topRight:l,bottomLeft:c,bottomRight:u}=nn(o),{x:d,y:h}=e,{width:p,height:f}=t;let g,m,b,_,y,v;return"center"===i?(y=h+f/2,"left"===r?(g=d,m=g-s,_=y+s,v=y-s):(g=d+p,m=g+s,_=y-s,v=y+s),b=g):(m="left"===r?d+Math.max(a,c)+s:"right"===r?d+p-Math.max(l,u)-s:this.caretX,"top"===i?(_=h,y=_-s,g=m-s,b=m+s):(_=h+f,y=_+s,g=m+s,b=m-s),v=_),{x1:g,x2:m,x3:b,y1:_,y2:y,y3:v}}drawTitle(e,t,n){const r=this.title,i=r.length;let s,o,a;if(i){const l=sr(n.rtl,this.x,this.width);for(e.x=jo(this,n.titleAlign,n),t.textAlign=l.textAlign(n.titleAlign),t.textBaseline="middle",s=sn(n.titleFont),o=n.titleSpacing,t.fillStyle=n.titleColor,t.font=s.string,a=0;a0!==e))?(e.beginPath(),e.fillStyle=i.multiKeyBackground,Yt(e,{x:t,y:p,w:l,h:a,radius:o}),e.fill(),e.stroke(),e.fillStyle=s.backgroundColor,e.beginPath(),Yt(e,{x:n,y:p+1,w:l-2,h:a-2,radius:o}),e.fill()):(e.fillStyle=i.multiKeyBackground,e.fillRect(t,p,l,a),e.strokeRect(t,p,l,a),e.fillStyle=s.backgroundColor,e.fillRect(n,p+1,l-2,a-2))}e.fillStyle=this.labelTextColors[n]}drawBody(e,t,n){const{body:r}=this,{bodySpacing:i,bodyAlign:s,displayColors:o,boxHeight:a,boxWidth:l,boxPadding:c}=n,u=sn(n.bodyFont);let d=u.lineHeight,h=0;const p=sr(n.rtl,this.x,this.width),f=function(n){t.fillText(n,p.x(e.x+h),e.y+d/2),e.y+=d+i},g=p.textAlign(s);let m,b,_,y,v,E,x;for(t.textAlign=s,t.textBaseline="middle",t.font=u.string,e.x=jo(this,g,n),t.fillStyle=n.bodyColor,ie(this.beforeBody,f),h=o&&"right"!==g?"center"===s?l/2+c:l+2+c:0,y=0,E=r.length;y0&&t.stroke()}_updateAnimationTarget(e){const t=this.chart,n=this.$animations,r=n&&n.x,i=n&&n.y;if(r||i){const n=No[e.position].call(this,this._active,this._eventPosition);if(!n)return;const s=this._size=Fo(this,e),o=Object.assign({},n,this._size),a=$o(t,e,o),l=Vo(e,o,a,t);r._to===l.x&&i._to===l.y||(this.xAlign=a.xAlign,this.yAlign=a.yAlign,this.width=s.width,this.height=s.height,this.caretX=n.x,this.caretY=n.y,this._resolveAnimations().update(this,l))}}_willRender(){return!!this.opacity}draw(e){const t=this.options.setContext(this.getContext());let n=this.opacity;if(!n)return;this._updateAnimationTarget(t);const r={width:this.width,height:this.height},i={x:this.x,y:this.y};n=Math.abs(n)<.001?0:n;const s=rn(t.padding),o=this.title.length||this.beforeBody.length||this.body.length||this.afterBody.length||this.footer.length;t.enabled&&o&&(e.save(),e.globalAlpha=n,this.drawBackground(i,e,r,t),or(e,t.textDirection),i.y+=s.top,this.drawTitle(i,e,t),this.drawBody(i,e,t),this.drawFooter(i,e,t),ar(e,t.textDirection),e.restore())}getActiveElements(){return this._active||[]}setActiveElements(e,t){const n=this._active,r=e.map((({datasetIndex:e,index:t})=>{const n=this.chart.getDatasetMeta(e);if(!n)throw new Error("Cannot find a dataset at index "+e);return{datasetIndex:e,element:n.data[t],index:t}})),i=!se(n,r),s=this._positionChanged(r,t);(i||s)&&(this._active=r,this._eventPosition=t,this._ignoreReplayEvents=!0,this.update(!0))}handleEvent(e,t,n=!0){if(t&&this._ignoreReplayEvents)return!1;this._ignoreReplayEvents=!1;const r=this.options,i=this._active||[],s=this._getActiveElements(e,i,t,n),o=this._positionChanged(s,e),a=t||!se(s,i)||o;return a&&(this._active=s,(r.enabled||r.external)&&(this._eventPosition={x:e.x,y:e.y},this.update(!0,t))),a}_getActiveElements(e,t,n,r){const i=this.options;if("mouseout"===e.type)return[];if(!r)return t;const s=this.chart.getElementsAtEventForMode(e,i.mode,i,n);return i.reverse&&s.reverse(),s}_positionChanged(e,t){const{caretX:n,caretY:r,options:i}=this,s=No[i.position].call(this,e,t);return!1!==s&&(n!==s.x||r!==s.y)}}var Qo={id:"tooltip",_element:Zo,positioners:No,afterInit(e,t,n){n&&(e.tooltip=new Zo({chart:e,options:n}))},beforeUpdate(e,t,n){e.tooltip&&e.tooltip.initialize(n)},reset(e,t,n){e.tooltip&&e.tooltip.initialize(n)},afterDraw(e){const t=e.tooltip;if(t&&t._willRender()){const n={tooltip:t};if(!1===e.notifyPlugins("beforeTooltipDraw",{...n,cancelable:!0}))return;t.draw(e.ctx),e.notifyPlugins("afterTooltipDraw",n)}},afterEvent(e,t){if(e.tooltip){const n=t.replay;e.tooltip.handleEvent(t.event,n,t.inChartArea)&&(t.changed=!0)}},defaults:{enabled:!0,external:null,position:"average",backgroundColor:"rgba(0,0,0,0.8)",titleColor:"#fff",titleFont:{weight:"bold"},titleSpacing:2,titleMarginBottom:6,titleAlign:"left",bodyColor:"#fff",bodySpacing:2,bodyFont:{},bodyAlign:"left",footerColor:"#fff",footerSpacing:2,footerMarginTop:6,footerFont:{weight:"bold"},footerAlign:"left",padding:6,caretPadding:2,caretSize:5,cornerRadius:6,boxHeight:(e,t)=>t.bodyFont.size,boxWidth:(e,t)=>t.bodyFont.size,multiKeyBackground:"#fff",displayColors:!0,boxPadding:0,borderColor:"rgba(0,0,0,0)",borderWidth:0,animation:{duration:400,easing:"easeOutQuart"},animations:{numbers:{type:"number",properties:["x","y","width","height","caretX","caretY"]},opacity:{easing:"linear",duration:200}},callbacks:Yo},defaultRoutes:{bodyFont:"font",footerFont:"font",titleFont:"font"},descriptors:{_scriptable:e=>"filter"!==e&&"itemSort"!==e&&"external"!==e,_indexable:!1,callbacks:{_scriptable:!1,_indexable:!1},animation:{_fallback:!1},animations:{_fallback:"animation"}},additionalOptionScopes:["interaction"]};const Jo=(e,t,n,r)=>("string"===typeof t?(n=e.push(t)-1,r.unshift({index:n,label:t})):isNaN(t)&&(n=null),n);function ea(e,t,n,r){const i=e.indexOf(t);if(-1===i)return Jo(e,t,n,r);const s=e.lastIndexOf(t);return i!==s?n:i}const ta=(e,t)=>null===e?null:je(Math.round(e),0,t);function na(e){const t=this.getLabels();return e>=0&&et.length-1?null:this.getPixelForValue(t[e].value)}getValueForPixel(e){return Math.round(this._startValue+this.getDecimalForPixel(e)*this._valueRange)}getBasePixel(){return this.bottom}}function ia(e,t){const n=[],r=1e-14,{bounds:i,step:s,min:o,max:a,precision:l,count:c,maxTicks:u,maxDigits:d,includeBounds:h}=e,p=s||1,f=u-1,{min:g,max:m}=t,b=!K(o),_=!K(a),y=!K(c),v=(m-g)/(d+1);let E,x,S,w,T=Oe((m-g)/f/p)*p;if(Tf&&(T=Oe(w*T/f/p)*p),K(l)||(E=Math.pow(10,l),T=Math.ceil(T*E)/E),"ticks"===i?(x=Math.floor(g/T)*T,S=Math.ceil(m/T)*T):(x=g,S=m),b&&_&&s&&De((a-o)/s,T/1e3)?(w=Math.round(Math.min((a-o)/T,u)),T=(a-o)/w,x=o,S=a):y?(x=b?o:x,S=_?a:S,w=c-1,T=(S-x)/w):(w=(S-x)/T,w=Pe(w,Math.round(w),T/1e3)?Math.round(w):Math.ceil(w));const A=Math.max(Ue(T),Ue(x));E=Math.pow(10,K(l)?A:l),x=Math.round(x*E)/E,S=Math.round(S*E)/E;let C=0;for(b&&(h&&x!==o?(n.push({value:o}),xa)break;n.push({value:e})}return _&&h&&S!==a?n.length&&Pe(n[n.length-1].value,a,sa(a,v,e))?n[n.length-1].value=a:n.push({value:a}):_&&S!==a||n.push({value:S}),n}function sa(e,t,{horizontal:n,minRotation:r}){const i=Fe(r),s=(n?Math.sin(i):Math.cos(i))||.001,o=.75*t*(""+e).length;return Math.min(t/s,o)}class oa extends ps{constructor(e){super(e),this.start=void 0,this.end=void 0,this._startValue=void 0,this._endValue=void 0,this._valueRange=0}parse(e,t){return K(e)||("number"===typeof e||e instanceof Number)&&!isFinite(+e)?null:+e}handleTickRangeOptions(){const{beginAtZero:e}=this.options,{minDefined:t,maxDefined:n}=this.getUserBounds();let{min:r,max:i}=this;const s=e=>r=t?r:e,o=e=>i=n?i:e;if(e){const e=ke(r),t=ke(i);e<0&&t<0?o(0):e>0&&t>0&&s(0)}if(r===i){let t=0===i?1:Math.abs(.05*i);o(i+t),e||s(r-t)}this.min=r,this.max=i}getTickLimit(){const e=this.options.ticks;let t,{maxTicksLimit:n,stepSize:r}=e;return r?(t=Math.ceil(this.max/r)-Math.floor(this.min/r)+1,t>1e3&&(console.warn(`scales.${this.id}.ticks.stepSize: ${r} would result generating up to ${t} ticks. Limiting to 1000.`),t=1e3)):(t=this.computeTickLimit(),n=n||11),n&&(t=Math.min(n,t)),t}computeTickLimit(){return Number.POSITIVE_INFINITY}buildTicks(){const e=this.options,t=e.ticks;let n=this.getTickLimit();n=Math.max(2,n);const r={maxTicks:n,bounds:e.bounds,min:e.min,max:e.max,precision:t.precision,step:t.stepSize,count:t.count,maxDigits:this._maxDigits(),horizontal:this.isHorizontal(),minRotation:t.minRotation||0,includeBounds:!1!==t.includeBounds},i=this._range||this,s=ia(r,i);return"ticks"===e.bounds&&Le(s,this,"value"),e.reverse?(s.reverse(),this.start=this.max,this.end=this.min):(this.start=this.min,this.end=this.max),s}configure(){const e=this.ticks;let t=this.min,n=this.max;if(super.configure(),this.options.offset&&e.length){const r=(n-t)/Math.max(e.length-1,1)/2;t-=r,n+=r}this._startValue=t,this._endValue=n,this._valueRange=n-t}getLabelForValue(e){return St(e,this.chart.options.locale,this.options.ticks.format)}}class aa extends oa{static id="linear";static defaults={ticks:{callback:At.formatters.numeric}};determineDataLimits(){const{min:e,max:t}=this.getMinMax(!0);this.min=J(e)?e:0,this.max=J(t)?t:1,this.handleTickRangeOptions()}computeTickLimit(){const e=this.isHorizontal(),t=e?this.width:this.height,n=Fe(this.options.ticks.minRotation),r=(e?Math.sin(n):Math.cos(n))||.001,i=this._resolveTickFontOptions(0);return Math.ceil(t/Math.min(40,i.lineHeight/r))}getPixelForValue(e){return null===e?NaN:this.getPixelForDecimal((e-this._startValue)/this._valueRange)}getValueForPixel(e){return this._startValue+this.getDecimalForPixel(e)*this._valueRange}}class la extends ps{static id="logarithmic";static defaults={ticks:{callback:At.formatters.logarithmic,major:{enabled:!0}}};constructor(e){super(e),this.start=void 0,this.end=void 0,this._startValue=void 0,this._valueRange=0}parse(e,t){const n=oa.prototype.parse.apply(this,[e,t]);if(0!==n)return J(n)&&n>0?n:null;this._zero=!0}determineDataLimits(){const{min:e,max:t}=this.getMinMax(!0);this.min=J(e)?Math.max(0,e):null,this.max=J(t)?Math.max(0,t):null,this.options.beginAtZero&&(this._zero=!0),this._zero&&this.min!==this._suggestedMin&&!J(this._userMin)&&(this.min=e===changeExponent(this.min,0)?changeExponent(this.min,-1):changeExponent(this.min,0)),this.handleTickRangeOptions()}handleTickRangeOptions(){const{minDefined:e,maxDefined:t}=this.getUserBounds();let n=this.min,r=this.max;const i=t=>e?n:t,s=e=>t?r:e;n===r&&(n<=0?(i(1),s(10)):(i(changeExponent(n,-1)),s(changeExponent(r,1)))),n<=0&&i(changeExponent(r,-1)),r<=0&&s(changeExponent(n,1)),this.min=n,this.max=r}buildTicks(){const e=this.options,t={min:this._userMin,max:this._userMax},n=generateTicks(t,this);return"ticks"===e.bounds&&Le(n,this,"value"),e.reverse?(n.reverse(),this.start=this.max,this.end=this.min):(this.start=this.min,this.end=this.max),n}getLabelForValue(e){return void 0===e?"0":St(e,this.chart.options.locale,this.options.ticks.format)}configure(){const e=this.min;super.configure(),this._startValue=Re(e),this._valueRange=Re(this.max)-Re(e)}getPixelForValue(e){return void 0!==e&&0!==e||this.min,null===e||isNaN(e)?NaN:this.getPixelForDecimal(e===this.min?0:(Re(e)-this._startValue)/this._valueRange)}getValueForPixel(e){const t=this.getDecimalForPixel(e);return Math.pow(10,this._startValue+t*this._valueRange)}}class ca extends oa{static id="radialLinear";static defaults={display:!0,animate:!0,position:"chartArea",angleLines:{display:!0,lineWidth:1,borderDash:[],borderDashOffset:0},grid:{circular:!1},startAngle:0,ticks:{showLabelBackdrop:!0,callback:At.formatters.numeric},pointLabels:{backdropColor:void 0,backdropPadding:2,display:!0,font:{size:10},callback(e){return e},padding:5,centerPointLabels:!1}};static defaultRoutes={"angleLines.color":"borderColor","pointLabels.color":"color","ticks.color":"color"};static descriptors={angleLines:{_fallback:"grid"}};constructor(e){super(e),this.xCenter=void 0,this.yCenter=void 0,this.drawingArea=void 0,this._pointLabels=[],this._pointLabelItems=[]}setDimensions(){const e=this._padding=rn(getTickBackdropHeight(this.options)/2),t=this.width=this.maxWidth-e.width,n=this.height=this.maxHeight-e.height;this.xCenter=Math.floor(this.left+t/2+e.left),this.yCenter=Math.floor(this.top+n/2+e.top),this.drawingArea=Math.floor(Math.min(t,n)/2)}determineDataLimits(){const{min:e,max:t}=this.getMinMax(!1);this.min=J(e)&&!isNaN(e)?e:0,this.max=J(t)&&!isNaN(t)?t:0,this.handleTickRangeOptions()}computeTickLimit(){return Math.ceil(this.drawingArea/getTickBackdropHeight(this.options))}generateTickLabels(e){oa.prototype.generateTickLabels.call(this,e),this._pointLabels=this.getLabels().map(((e,t)=>{const n=re(this.options.pointLabels.callback,[e,t],this);return n||0===n?n:""})).filter(((e,t)=>this.chart.getDataVisibility(t)))}fit(){const e=this.options;e.display&&e.pointLabels.display?fitWithPointLabels(this):this.setCenterPoint(0,0,0,0)}setCenterPoint(e,t,n,r){this.xCenter+=Math.floor((e-t)/2),this.yCenter+=Math.floor((n-r)/2),this.drawingArea-=Math.min(this.drawingArea/2,Math.max(e,t,n,r))}getIndexAngle(e){const t=xe/(this._pointLabels.length||1),n=this.options.startAngle||0;return He(e*t+Fe(n))}getDistanceFromCenterForValue(e){if(K(e))return NaN;const t=this.drawingArea/(this.max-this.min);return this.options.reverse?(this.max-e)*t:(e-this.min)*t}getValueForDistanceFromCenter(e){if(K(e))return NaN;const t=e/(this.drawingArea/(this.max-this.min));return this.options.reverse?this.max-t:this.min+t}getPointLabelContext(e){const t=this._pointLabels||[];if(e>=0&&e{if(0!==t){this.getDistanceFromCenterForValue(e.value);const n=this.getContext(t),o=r.setContext(n),l=i.setContext(n);drawRadiusLine(this,o,a,s,l)}})),n.display){for(e.save(),s-1;o>=0;o--){const r=n.setContext(this.getPointLabelContext(o)),{color:i,lineWidth:s}=r;s&&i&&(e.lineWidth=s,e.strokeStyle=i,e.setLineDash(r.borderDash),e.lineDashOffset=r.borderDashOffset,this.getDistanceFromCenterForValue(t.ticks.reverse?this.min:this.max),this.getPointPosition(o,a),e.beginPath(),e.moveTo(this.xCenter,this.yCenter),e.lineTo(l.x,l.y),e.stroke())}e.restore()}}drawBorder(){}drawLabels(){const e=this.ctx,t=this.options,n=t.ticks;if(!n.display)return;const r=this.getIndexAngle(0);let i,s;e.save(),e.translate(this.xCenter,this.yCenter),e.rotate(r),e.textAlign="center",e.textBaseline="middle",this.ticks.forEach(((r,o)=>{if(0===o&&!t.reverse)return;const a=n.setContext(this.getContext(o)),l=sn(a.font);if(this.getDistanceFromCenterForValue(this.ticks[o].value),a.showLabelBackdrop){e.font=l.string,e.measureText(r.label).width,e.fillStyle=a.backdropColor;const t=rn(a.backdropPadding);e.fillRect(-s/2-t.left,-i-l.size/2-t.top,s+t.width,l.size+t.height)}Xt(e,r.label,0,-i,l,{color:a.color})})),e.restore()}drawTitle(){}}const ua={millisecond:{common:!0,size:1,steps:1e3},second:{common:!0,size:1e3,steps:60},minute:{common:!0,size:6e4,steps:60},hour:{common:!0,size:36e5,steps:24},day:{common:!0,size:864e5,steps:30},week:{common:!1,size:6048e5,steps:4},month:{common:!0,size:2628e6,steps:12},quarter:{common:!1,size:7884e6,steps:4},year:{common:!0,size:3154e7}},da=Object.keys(ua);function ha(e,t){return e-t}function pa(e,t){if(K(t))return null;const n=e._adapter,{parser:r,round:i,isoWeekday:s}=e._parseOpts;let o=t;return"function"===typeof r&&(o=r(o)),J(o)||(o="string"===typeof r?n.parse(o,r):n.parse(o)),null===o?null:(i&&(o="week"!==i||!Me(s)&&!0!==s?n.startOf(o,i):n.startOf(o,"isoWeek",s)),+o)}function fa(e,t,n,r){const i=da.length;for(let s=da.indexOf(e);s=da.indexOf(n);s--){const n=da[s];if(ua[n].common&&e._adapter.diff(i,r,n)>=t-1)return n}return da[n?da.indexOf(n):0]}function ma(e){for(let t=da.indexOf(e)+1,n=da.length;t=t?n[r]:n[i];e[s]=!0}}else e[t]=!0}function _a(e,t,n,r){const i=e._adapter,s=+i.startOf(t[0].value,r),o=t[t.length-1].value;let a,l;for(a=s;a<=o;a=+i.add(a,1,r))l=n[a],l>=0&&(t[l].major=!0);return t}function ya(e,t,n){const r=[],i={},s=t.length;let o,a;for(o=0;o+e.value)))}initOffsets(e=[]){let t,n,r=0,i=0;this.options.offset&&e.length&&(t=this.getDecimalForValue(e[0]),r=1===e.length?1-t:(this.getDecimalForValue(e[1])-t)/2,n=this.getDecimalForValue(e[e.length-1]),i=1===e.length?n:(n-this.getDecimalForValue(e[e.length-2]))/2);const s=e.length<3?.5:.25;r=je(r,0,s),i=je(i,0,s),this._offsets={start:r,end:i,factor:1/(r+1+i)}}_generate(){const e=this._adapter,t=this.min,n=this.max,r=this.options,i=r.time,s=i.unit||fa(i.minUnit,t,n,this._getLabelCapacity(t)),o=te(r.ticks.stepSize,1),a="week"===s&&i.isoWeekday,l=Me(a)||!0===a,c={};let u,d,h=t;if(l&&(h=+e.startOf(h,"isoWeek",a)),h=+e.startOf(h,l?"day":s),e.diff(n,t,s)>1e5*o)throw new Error(t+" and "+n+" are too far apart with stepSize of "+o+" "+s);const p="data"===r.ticks.source&&this.getDataTimestamps();for(u=h,d=0;ue-t)).map((e=>+e))}getLabelForValue(e){const t=this._adapter,n=this.options.time;return n.tooltipFormat?t.format(e,n.tooltipFormat):t.format(e,n.displayFormats.datetime)}format(e,t){const n=this.options,r=n.time.displayFormats,i=this._unit,s=t||r[i];return this._adapter.format(e,s)}_tickFormatFunction(e,t,n,r){const i=this.options,s=i.ticks.callback;if(s)return re(s,[e,t,n],this);const o=i.time.displayFormats,a=this._unit,l=this._majorUnit,c=a&&o[a],u=l&&o[l],d=n[t],h=l&&u&&d&&d.major;return this._adapter.format(e,r||(h?u:c))}generateTickLabels(e){let t,n,r;for(t=0,n=e.length;t0?o:1}getDataTimestamps(){let e,t,n=this._cache.data||[];if(n.length)return n;const r=this.getMatchingVisibleMetas();if(this._normalized&&r.length)return this._cache.data=r[0].controller.getAllParsedValues(this);for(e=0,t=r.length;e=t&&l<=n&&r.push(l);if(r.length<2)return[{time:t,pos:0},{time:n,pos:1}];for(0,r.length;s{let t={};return e.forEach(((e,n)=>t[e]=n)),t})(d),p=/^(?:[A-Za-z\d+\/]{4})*?(?:[A-Za-z\d+\/]{2}(?:==)?|[A-Za-z\d+\/]{3}=?)?$/,f=String.fromCharCode.bind(String),g="function"===typeof Uint8Array.from?Uint8Array.from.bind(Uint8Array):e=>new Uint8Array(Array.prototype.slice.call(e,0)),m=e=>e.replace(/=/g,"").replace(/[+\/]/g,(e=>"+"==e?"-":"_")),b=e=>e.replace(/[^A-Za-z0-9\+\/]/g,""),_=e=>{let t,n,r,i,s="";const o=e.length%3;for(let a=0;a255||(r=e.charCodeAt(a++))>255||(i=e.charCodeAt(a++))>255)throw new TypeError("invalid character found");t=n<<16|r<<8|i,s+=d[t>>18&63]+d[t>>12&63]+d[t>>6&63]+d[63&t]}return o?s.slice(0,o-3)+"===".substring(o):s},y=o?e=>btoa(e):a?e=>Buffer.from(e,"binary").toString("base64"):_,v=a?e=>Buffer.from(e).toString("base64"):e=>{const t=4096;let n=[];for(let r=0,i=e.length;rt?m(v(e)):v(e),x=e=>{if(e.length<2){var t=e.charCodeAt(0);return t<128?e:t<2048?f(192|t>>>6)+f(128|63&t):f(224|t>>>12&15)+f(128|t>>>6&63)+f(128|63&t)}t=65536+1024*(e.charCodeAt(0)-55296)+(e.charCodeAt(1)-56320);return f(240|t>>>18&7)+f(128|t>>>12&63)+f(128|t>>>6&63)+f(128|63&t)},S=/[\uD800-\uDBFF][\uDC00-\uDFFFF]|[^\x00-\x7F]/g,w=e=>e.replace(S,x),T=a?e=>Buffer.from(e,"utf8").toString("base64"):c?e=>v(c.encode(e)):e=>y(w(e)),A=(e,t=!1)=>t?m(T(e)):T(e),C=e=>A(e,!0),I=/[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF]{2}|[\xF0-\xF7][\x80-\xBF]{3}/g,R=e=>{switch(e.length){case 4:var t=(7&e.charCodeAt(0))<<18|(63&e.charCodeAt(1))<<12|(63&e.charCodeAt(2))<<6|63&e.charCodeAt(3),n=t-65536;return f(55296+(n>>>10))+f(56320+(1023&n));case 3:return f((15&e.charCodeAt(0))<<12|(63&e.charCodeAt(1))<<6|63&e.charCodeAt(2));default:return f((31&e.charCodeAt(0))<<6|63&e.charCodeAt(1))}},k=e=>e.replace(I,R),P=e=>{if(e=e.replace(/\s+/g,""),!p.test(e))throw new TypeError("malformed base64.");e+="==".slice(2-(3&e.length));let t,n,r,i="";for(let s=0;s>16&255):64===r?f(t>>16&255,t>>8&255):f(t>>16&255,t>>8&255,255&t);return i},O=s?e=>atob(b(e)):a?e=>Buffer.from(e,"base64").toString("binary"):P,N=a?e=>g(Buffer.from(e,"base64")):e=>g(O(e).split("").map((e=>e.charCodeAt(0)))),M=e=>N(L(e)),D=a?e=>Buffer.from(e,"base64").toString("utf8"):l?e=>l.decode(N(e)):e=>k(O(e)),L=e=>b(e.replace(/[-_]/g,(e=>"-"==e?"+":"/"))),F=e=>D(L(e)),B=e=>{if("string"!==typeof e)return!1;const t=e.replace(/\s+/g,"").replace(/={0,2}$/,"");return!/[^\s0-9a-zA-Z\+/]/.test(t)||!/[^\s0-9a-zA-Z\-_]/.test(t)},U=e=>({value:e,enumerable:!1,writable:!0,configurable:!0}),G=function(){const e=(e,t)=>Object.defineProperty(String.prototype,e,U(t));e("fromBase64",(function(){return F(this)})),e("toBase64",(function(e){return A(this,e)})),e("toBase64URI",(function(){return A(this,!0)})),e("toBase64URL",(function(){return A(this,!0)})),e("toUint8Array",(function(){return M(this)}))},$=function(){const e=(e,t)=>Object.defineProperty(Uint8Array.prototype,e,U(t));e("toBase64",(function(e){return E(this,e)})),e("toBase64URI",(function(){return E(this,!0)})),e("toBase64URL",(function(){return E(this,!0)}))},z=()=>{G(),$()},H={version:r,VERSION:i,atob:O,atobPolyfill:P,btoa:y,btoaPolyfill:_,fromBase64:F,toBase64:A,encode:A,encodeURI:C,encodeURL:C,utob:w,btou:k,decode:F,isValid:B,fromUint8Array:E,toUint8Array:M,extendString:G,extendUint8Array:$,extendBuiltins:z}},9428:function(e,t,n){"use strict";n.d(t,{MxU:function(){return ie},vB5:function(){return r.vB}});var r=n(2848),i=n(8276);const s=new r.E9,o=new Uint16Array([0,1,2,0,2,3]);class a extends i.W2{constructor(e){super(),this._anchor=new r.AB(this._onAnchorUpdate,this,e?e.defaultAnchor.x:0,e?e.defaultAnchor.y:0),this._texture=null,this._width=0,this._height=0,this._tintColor=new r.Il(16777215),this._tintRGB=null,this.tint=16777215,this.blendMode=r.T$.NORMAL,this._cachedTint=16777215,this.uvs=null,this.texture=e||r.xE.EMPTY,this.vertexData=new Float32Array(8),this.vertexTrimmedData=null,this._transformID=-1,this._textureID=-1,this._transformTrimmedID=-1,this._textureTrimmedID=-1,this.indices=o,this.pluginName="batch",this.isSprite=!0,this._roundPixels=r.Xd.ROUND_PIXELS}_onTextureUpdate(){this._textureID=-1,this._textureTrimmedID=-1,this._cachedTint=16777215,this._width&&(this.scale.x=r.P6.sign(this.scale.x)*this._width/this._texture.orig.width),this._height&&(this.scale.y=r.P6.sign(this.scale.y)*this._height/this._texture.orig.height)}_onAnchorUpdate(){this._transformID=-1,this._transformTrimmedID=-1}calculateVertices(){const e=this._texture;if(this._transformID===this.transform._worldID&&this._textureID===e._updateID)return;this._textureID!==e._updateID&&(this.uvs=this._texture._uvs.uvsFloat32),this._transformID=this.transform._worldID,this._textureID=e._updateID;const t=this.transform.worldTransform,n=t.a,i=t.b,s=t.c,o=t.d,a=t.tx,l=t.ty,c=this.vertexData,u=e.trim,d=e.orig,h=this._anchor;let p=0,f=0,g=0,m=0;if(u?(f=u.x-h._x*d.width,p=f+u.width,m=u.y-h._y*d.height,g=m+u.height):(f=-h._x*d.width,p=f+d.width,m=-h._y*d.height,g=m+d.height),c[0]=n*f+s*m+a,c[1]=o*m+i*f+l,c[2]=n*p+s*m+a,c[3]=o*m+i*p+l,c[4]=n*p+s*g+a,c[5]=o*g+i*p+l,c[6]=n*f+s*g+a,c[7]=o*g+i*f+l,this._roundPixels){const e=r.Xd.RESOLUTION;for(let t=0;t=r&&s.x=i&&s.y=n&&(o=e-a-1),r=r.replace("%value%",t[o].toString()),i+=r,i+="\n"}return r=r.replace("%blur%",i),r=r.replace("%size%",e.toString()),r}const g="\n attribute vec2 aVertexPosition;\n\n uniform mat3 projectionMatrix;\n\n uniform float strength;\n\n varying vec2 vBlurTexCoords[%size%];\n\n uniform vec4 inputSize;\n uniform vec4 outputFrame;\n\n vec4 filterVertexPosition( void )\n {\n vec2 position = aVertexPosition * max(outputFrame.zw, vec2(0.)) + outputFrame.xy;\n\n return vec4((projectionMatrix * vec3(position, 1.0)).xy, 0.0, 1.0);\n }\n\n vec2 filterTextureCoord( void )\n {\n return aVertexPosition * (outputFrame.zw * inputSize.zw);\n }\n\n void main(void)\n {\n gl_Position = filterVertexPosition();\n\n vec2 textureCoord = filterTextureCoord();\n %blur%\n }";function m(e,t){const n=Math.ceil(e/2);let r,i=g,s="";r=t?"vBlurTexCoords[%index%] = textureCoord + vec2(%sampleIndex% * strength, 0.0);":"vBlurTexCoords[%index%] = textureCoord + vec2(0.0, %sampleIndex% * strength);";for(let o=0;o lumaMax))\n color = vec4(rgbA, texColor.a);\n else\n color = vec4(rgbB, texColor.a);\n return color;\n}\n\nvoid main() {\n\n vec4 color;\n\n color = fxaa(uSampler, vFragCoord, inputSize.zw, v_rgbNW, v_rgbNE, v_rgbSW, v_rgbSE, v_rgbM);\n\n gl_FragColor = color;\n}\n',T="\nattribute vec2 aVertexPosition;\n\nuniform mat3 projectionMatrix;\n\nvarying vec2 v_rgbNW;\nvarying vec2 v_rgbNE;\nvarying vec2 v_rgbSW;\nvarying vec2 v_rgbSE;\nvarying vec2 v_rgbM;\n\nvarying vec2 vFragCoord;\n\nuniform vec4 inputSize;\nuniform vec4 outputFrame;\n\nvec4 filterVertexPosition( void )\n{\n vec2 position = aVertexPosition * max(outputFrame.zw, vec2(0.)) + outputFrame.xy;\n\n return vec4((projectionMatrix * vec3(position, 1.0)).xy, 0.0, 1.0);\n}\n\nvoid texcoords(vec2 fragCoord, vec2 inverseVP,\n out vec2 v_rgbNW, out vec2 v_rgbNE,\n out vec2 v_rgbSW, out vec2 v_rgbSE,\n out vec2 v_rgbM) {\n v_rgbNW = (fragCoord + vec2(-1.0, -1.0)) * inverseVP;\n v_rgbNE = (fragCoord + vec2(1.0, -1.0)) * inverseVP;\n v_rgbSW = (fragCoord + vec2(-1.0, 1.0)) * inverseVP;\n v_rgbSE = (fragCoord + vec2(1.0, 1.0)) * inverseVP;\n v_rgbM = vec2(fragCoord * inverseVP);\n}\n\nvoid main(void) {\n\n gl_Position = filterVertexPosition();\n\n vFragCoord = aVertexPosition * outputFrame.zw;\n\n texcoords(vFragCoord, inputSize.zw, v_rgbNW, v_rgbNE, v_rgbSW, v_rgbSE, v_rgbM);\n}\n";class A extends r.wn{constructor(){super(T,w)}}var C="precision highp float;\n\nvarying vec2 vTextureCoord;\nvarying vec4 vColor;\n\nuniform float uNoise;\nuniform float uSeed;\nuniform sampler2D uSampler;\n\nfloat rand(vec2 co)\n{\n return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);\n}\n\nvoid main()\n{\n vec4 color = texture2D(uSampler, vTextureCoord);\n float randomValue = rand(gl_FragCoord.xy * uSeed);\n float diff = (randomValue - 0.5) * uNoise;\n\n // Un-premultiply alpha before applying the color matrix. See issue #3539.\n if (color.a > 0.0) {\n color.rgb /= color.a;\n }\n\n color.r += diff;\n color.g += diff;\n color.b += diff;\n\n // Premultiply alpha again.\n color.rgb *= color.a;\n\n gl_FragColor = color;\n}\n";class I extends r.wn{constructor(e=.5,t=Math.random()){super(r.Y9,C,{uNoise:0,uSeed:0}),this.noise=e,this.seed=t}get noise(){return this.uniforms.uNoise}set noise(e){this.uniforms.uNoise=e}get seed(){return this.uniforms.uSeed}set seed(e){this.uniforms.uSeed=e}}const R={AlphaFilter:d,BlurFilter:_,BlurFilterPass:b,ColorMatrixFilter:v,DisplacementFilter:S,FXAAFilter:A,NoiseFilter:I};Object.entries(R).forEach((([e,t])=>{Object.defineProperty(R,e,{get(){return r.P6.deprecation("7.1.0",`filters.${e} has moved to ${e}`),t}})}));class k{constructor(){this.interactionFrequency=10,this._deltaTime=0,this._didMove=!1,this.tickerAdded=!1,this._pauseUpdate=!0}init(e){this.removeTickerListener(),this.events=e,this.interactionFrequency=10,this._deltaTime=0,this._didMove=!1,this.tickerAdded=!1,this._pauseUpdate=!0}get pauseUpdate(){return this._pauseUpdate}set pauseUpdate(e){this._pauseUpdate=e}addTickerListener(){!this.tickerAdded&&this.domElement&&(r.vB.system.add(this.tickerUpdate,this,r.uF.INTERACTION),this.tickerAdded=!0)}removeTickerListener(){this.tickerAdded&&(r.vB.system.remove(this.tickerUpdate,this),this.tickerAdded=!1)}pointerMoved(){this._didMove=!0}update(){if(!this.domElement||this._pauseUpdate)return;if(this._didMove)return void(this._didMove=!1);const e=this.events["rootPointerEvent"];this.events.supportsTouchEvents&&"touch"===e.pointerType||globalThis.document.dispatchEvent(new PointerEvent("pointermove",{clientX:e.clientX,clientY:e.clientY}))}tickerUpdate(e){this._deltaTime+=e,this._deltaTimee.priority-t.priority))}dispatchEvent(e,t){e.propagationStopped=!1,e.propagationImmediatelyStopped=!1,this.propagate(e,t),this.dispatch.emit(t||e.type,e)}mapEvent(e){if(!this.rootTarget)return;const t=this.mappingTable[e.type];if(t)for(let n=0,r=t.length;n=0;r--)if(e.currentTarget=n[r],this.notifyTarget(e,t),e.propagationStopped||e.propagationImmediatelyStopped)return}}all(e,t,n=this._allInteractiveElements){if(0===n.length)return;e.eventPhase=e.BUBBLING_PHASE;const r=Array.isArray(t)?t:[t];for(let i=n.length-1;i>=0;i--)r.forEach((t=>{e.currentTarget=n[i],this.notifyTarget(e,t)}))}propagationPath(e){const t=[e];for(let n=0;n=0;l--){const c=a[l],u=this.hitTestMoveRecursive(c,this._isInteractive(t)?t:c.eventMode,n,r,i,s||i(e,n));if(u){if(u.length>0&&!u[u.length-1].parent)continue;const t=e.isInteractive();(u.length>0||t)&&(t&&this._allInteractiveElements.push(e),u.push(e)),0===this._hitElements.length&&(this._hitElements=u),o=!0}}}const a=this._isInteractive(t),l=e.isInteractive();return l&&l&&this._allInteractiveElements.push(e),s||this._hitElements.length>0?null:o?this._hitElements:a&&!i(e,n)&&r(e,n)?l?[e]:[]:null}hitTestRecursive(e,t,n,r,i){if(this._interactivePrune(e)||i(e,n))return null;if("dynamic"!==e.eventMode&&"dynamic"!==t||(P.pauseUpdate=!1),e.interactiveChildren&&e.children){const s=e.children;for(let o=s.length-1;o>=0;o--){const a=s[o],l=this.hitTestRecursive(a,this._isInteractive(t)?t:a.eventMode,n,r,i);if(l){if(l.length>0&&!l[l.length-1].parent)continue;const t=e.isInteractive();return(l.length>0||t)&&l.push(e),l}}}const s=this._isInteractive(t),o=e.isInteractive();return s&&r(e,n)?o?[e]:[]:null}_isInteractive(e){return"static"===e||"dynamic"===e}_interactivePrune(e){return!(e&&!e.isMask&&e.visible&&e.renderable)||("none"===e.eventMode||("passive"===e.eventMode&&!e.interactiveChildren||!!e.isMask))}hitPruneFn(e,t){if(e.hitArea&&(e.worldTransform.applyInverse(t,B),!e.hitArea.contains(B.x,B.y)))return!0;if(e._mask){const n=e._mask.isMaskData?e._mask.maskObject:e._mask;if(n&&!n.containsPoint?.(t))return!0}return!1}hitTestFn(e,t){return"passive"!==e.eventMode&&(!!e.hitArea||!!e.containsPoint&&e.containsPoint(t))}notifyTarget(e,t){t=t??e.type;const n=`on${t}`;e.currentTarget[n]?.(e);const r=e.eventPhase===e.CAPTURING_PHASE||e.eventPhase===e.AT_TARGET?`${t}capture`:t;this.notifyListeners(e,r),e.eventPhase===e.AT_TARGET&&this.notifyListeners(e,t)}mapPointerDown(e){if(!(e instanceof M))return void console.warn("EventBoundary cannot map a non-pointer event as a pointer event");const t=this.createPointerEvent(e);if(this.dispatchEvent(t,"pointerdown"),"touch"===t.pointerType)this.dispatchEvent(t,"touchstart");else if("mouse"===t.pointerType||"pen"===t.pointerType){const e=2===t.button;this.dispatchEvent(t,e?"rightdown":"mousedown")}const n=this.trackingData(e.pointerId);n.pressTargetsByButton[e.button]=t.composedPath(),this.freeEvent(t)}mapPointerMove(e){if(!(e instanceof M))return void console.warn("EventBoundary cannot map a non-pointer event as a pointer event");this._allInteractiveElements.length=0,this._hitElements.length=0,this._isPointerMoveEvent=!0;const t=this.createPointerEvent(e);this._isPointerMoveEvent=!1;const n="mouse"===t.pointerType||"pen"===t.pointerType,r=this.trackingData(e.pointerId),i=this.findMountedTarget(r.overTargets);if(r.overTargets?.length>0&&i!==t.target){const r="mousemove"===e.type?"mouseout":"pointerout",s=this.createPointerEvent(e,r,i);if(this.dispatchEvent(s,"pointerout"),n&&this.dispatchEvent(s,"mouseout"),!t.composedPath().includes(i)){const r=this.createPointerEvent(e,"pointerleave",i);r.eventPhase=r.AT_TARGET;while(r.target&&!t.composedPath().includes(r.target))r.currentTarget=r.target,this.notifyTarget(r),n&&this.notifyTarget(r,"mouseleave"),r.target=r.target.parent;this.freeEvent(r)}this.freeEvent(s)}if(i!==t.target){const r="mousemove"===e.type?"mouseover":"pointerover",s=this.clonePointerEvent(t,r);this.dispatchEvent(s,"pointerover"),n&&this.dispatchEvent(s,"mouseover");let o=i?.parent;while(o&&o!==this.rootTarget.parent){if(o===t.target)break;o=o.parent}const a=!o||o===this.rootTarget.parent;if(a){const e=this.clonePointerEvent(t,"pointerenter");e.eventPhase=e.AT_TARGET;while(e.target&&e.target!==i&&e.target!==this.rootTarget.parent)e.currentTarget=e.target,this.notifyTarget(e),n&&this.notifyTarget(e,"mouseenter"),e.target=e.target.parent;this.freeEvent(e)}this.freeEvent(s)}const s=[],o=this.enableGlobalMoveEvents??!0;this.moveOnAll?s.push("pointermove"):this.dispatchEvent(t,"pointermove"),o&&s.push("globalpointermove"),"touch"===t.pointerType&&(this.moveOnAll?s.splice(1,0,"touchmove"):this.dispatchEvent(t,"touchmove"),o&&s.push("globaltouchmove")),n&&(this.moveOnAll?s.splice(1,0,"mousemove"):this.dispatchEvent(t,"mousemove"),o&&s.push("globalmousemove"),this.cursor=t.target?.cursor),s.length>0&&this.all(t,s),this._allInteractiveElements.length=0,this._hitElements.length=0,r.overTargets=t.composedPath(),this.freeEvent(t)}mapPointerOver(e){if(!(e instanceof M))return void console.warn("EventBoundary cannot map a non-pointer event as a pointer event");const t=this.trackingData(e.pointerId),n=this.createPointerEvent(e),r="mouse"===n.pointerType||"pen"===n.pointerType;this.dispatchEvent(n,"pointerover"),r&&this.dispatchEvent(n,"mouseover"),"mouse"===n.pointerType&&(this.cursor=n.target?.cursor);const i=this.clonePointerEvent(n,"pointerenter");i.eventPhase=i.AT_TARGET;while(i.target&&i.target!==this.rootTarget.parent)i.currentTarget=i.target,this.notifyTarget(i),r&&this.notifyTarget(i,"mouseenter"),i.target=i.target.parent;t.overTargets=n.composedPath(),this.freeEvent(n),this.freeEvent(i)}mapPointerOut(e){if(!(e instanceof M))return void console.warn("EventBoundary cannot map a non-pointer event as a pointer event");const t=this.trackingData(e.pointerId);if(t.overTargets){const n="mouse"===e.pointerType||"pen"===e.pointerType,r=this.findMountedTarget(t.overTargets),i=this.createPointerEvent(e,"pointerout",r);this.dispatchEvent(i),n&&this.dispatchEvent(i,"mouseout");const s=this.createPointerEvent(e,"pointerleave",r);s.eventPhase=s.AT_TARGET;while(s.target&&s.target!==this.rootTarget.parent)s.currentTarget=s.target,this.notifyTarget(s),n&&this.notifyTarget(s,"mouseleave"),s.target=s.target.parent;t.overTargets=null,this.freeEvent(i),this.freeEvent(s)}this.cursor=null}mapPointerUp(e){if(!(e instanceof M))return void console.warn("EventBoundary cannot map a non-pointer event as a pointer event");const t=performance.now(),n=this.createPointerEvent(e);if(this.dispatchEvent(n,"pointerup"),"touch"===n.pointerType)this.dispatchEvent(n,"touchend");else if("mouse"===n.pointerType||"pen"===n.pointerType){const e=2===n.button;this.dispatchEvent(n,e?"rightup":"mouseup")}const r=this.trackingData(e.pointerId),i=this.findMountedTarget(r.pressTargetsByButton[e.button]);let s=i;if(i&&!n.composedPath().includes(i)){let t=i;while(t&&!n.composedPath().includes(t)){if(n.currentTarget=t,this.notifyTarget(n,"pointerupoutside"),"touch"===n.pointerType)this.notifyTarget(n,"touchendoutside");else if("mouse"===n.pointerType||"pen"===n.pointerType){const e=2===n.button;this.notifyTarget(n,e?"rightupoutside":"mouseupoutside")}t=t.parent}delete r.pressTargetsByButton[e.button],s=t}if(s){const i=this.clonePointerEvent(n,"click");i.target=s,i.path=null,r.clicksByButton[e.button]||(r.clicksByButton[e.button]={clickCount:0,target:i.target,timeStamp:t});const o=r.clicksByButton[e.button];if(o.target===i.target&&t-o.timeStamp<200?++o.clickCount:o.clickCount=1,o.target=i.target,o.timeStamp=t,i.detail=o.clickCount,"mouse"===i.pointerType){const e=2===i.button;this.dispatchEvent(i,e?"rightclick":"click")}else"touch"===i.pointerType&&this.dispatchEvent(i,"tap");this.dispatchEvent(i,"pointertap"),this.freeEvent(i)}this.freeEvent(n)}mapPointerUpOutside(e){if(!(e instanceof M))return void console.warn("EventBoundary cannot map a non-pointer event as a pointer event");const t=this.trackingData(e.pointerId),n=this.findMountedTarget(t.pressTargetsByButton[e.button]),r=this.createPointerEvent(e);if(n){let i=n;while(i)r.currentTarget=i,this.notifyTarget(r,"pointerupoutside"),"touch"===r.pointerType?this.notifyTarget(r,"touchendoutside"):"mouse"!==r.pointerType&&"pen"!==r.pointerType||this.notifyTarget(r,2===r.button?"rightupoutside":"mouseupoutside"),i=i.parent;delete t.pressTargetsByButton[e.button]}this.freeEvent(r)}mapWheel(e){if(!(e instanceof D))return void console.warn("EventBoundary cannot map a non-wheel event as a wheel event");const t=this.createWheelEvent(e);this.dispatchEvent(t),this.freeEvent(t)}findMountedTarget(e){if(!e)return null;let t=e[0];for(let n=1;n("globalMove"===t&&(this.rootBoundary.enableGlobalMoveEvents=n),e[t]=n,!0)}),this.onPointerDown=this.onPointerDown.bind(this),this.onPointerMove=this.onPointerMove.bind(this),this.onPointerUp=this.onPointerUp.bind(this),this.onPointerOverOut=this.onPointerOverOut.bind(this),this.onWheel=this.onWheel.bind(this)}static get defaultEventMode(){return this._defaultEventMode}init(e){const{view:t,resolution:n}=this.renderer;this.setTargetElement(t),this.resolution=n,z._defaultEventMode=e.eventMode??"auto",Object.assign(this.features,e.eventFeatures??{}),this.rootBoundary.enableGlobalMoveEvents=this.features.globalMove}resolutionChange(e){this.resolution=e}destroy(){this.setTargetElement(null),this.renderer=null}setCursor(e){e=e||"default";let t=!0;if(globalThis.OffscreenCanvas&&this.domElement instanceof OffscreenCanvas&&(t=!1),this.currentCursor===e)return;this.currentCursor=e;const n=this.cursorStyles[e];if(n)switch(typeof n){case"string":t&&(this.domElement.style.cursor=n);break;case"function":n(e);break;case"object":t&&Object.assign(this.domElement.style,n);break}else t&&"string"===typeof e&&!Object.prototype.hasOwnProperty.call(this.cursorStyles,e)&&(this.domElement.style.cursor=e)}get pointer(){return this.rootPointerEvent}onPointerDown(e){if(!this.features.click)return;if(this.rootBoundary.rootTarget=this.renderer.lastObjectRendered,this.supportsTouchEvents&&"touch"===e.pointerType)return;const t=this.normalizeToPointerData(e);if(this.autoPreventDefault&&t[0].isNormalized){const t=e.cancelable||!("cancelable"in e);t&&e.preventDefault()}for(let n=0,r=t.length;n0&&(t=e.composedPath()[0]);const n=t!==this.domElement?"outside":"",r=this.normalizeToPointerData(e);for(let i=0,s=r.length;i{this._isMobileAccessibility=!0,this.activate(),this.destroyTouchHook()})),document.body.appendChild(e),this._hookDiv=e}destroyTouchHook(){this._hookDiv&&(document.body.removeChild(this._hookDiv),this._hookDiv=null)}activate(){this._isActive||(this._isActive=!0,globalThis.document.addEventListener("mousemove",this._onMouseMove,!0),globalThis.removeEventListener("keydown",this._onKeyDown,!1),this.renderer.on("postrender",this.update,this),this.renderer.view.parentNode?.appendChild(this.div))}deactivate(){this._isActive&&!this._isMobileAccessibility&&(this._isActive=!1,globalThis.document.removeEventListener("mousemove",this._onMouseMove,!0),globalThis.addEventListener("keydown",this._onKeyDown,!1),this.renderer.off("postrender",this.update),this.div.parentNode?.removeChild(this.div))}updateAccessibleObjects(e){if(!e.visible||!e.accessibleChildren)return;e.accessible&&e.isInteractive()&&(e._accessibleActive||this.addChild(e),e.renderId=this.renderId);const t=e.children;if(t)for(let n=0;n title : ${e.title}
    tabIndex: ${e.tabIndex}`}capHitArea(e){e.x<0&&(e.width+=e.x,e.x=0),e.y<0&&(e.height+=e.y,e.y=0);const{width:t,height:n}=this.renderer;e.x+e.width>t&&(e.width=t-e.x),e.y+e.height>n&&(e.height=n-e.y)}addChild(e){let t=this.pool.pop();t||(t=document.createElement("button"),t.style.width=`${X}px`,t.style.height=`${X}px`,t.style.backgroundColor=this.debug?"rgba(255,255,255,0.5)":"transparent",t.style.position="absolute",t.style.zIndex=Z.toString(),t.style.borderStyle="none",navigator.userAgent.toLowerCase().includes("chrome")?t.setAttribute("aria-live","off"):t.setAttribute("aria-live","polite"),navigator.userAgent.match(/rv:.*Gecko\//)?t.setAttribute("aria-relevant","additions"):t.setAttribute("aria-relevant","text"),t.addEventListener("click",this._onClick.bind(this)),t.addEventListener("focus",this._onFocus.bind(this)),t.addEventListener("focusout",this._onFocusOut.bind(this))),t.style.pointerEvents=e.accessiblePointerEvents,t.type=e.accessibleType,e.accessibleTitle&&null!==e.accessibleTitle?t.title=e.accessibleTitle:e.accessibleHint&&null!==e.accessibleHint||(t.title=`displayObject ${e.tabIndex}`),e.accessibleHint&&null!==e.accessibleHint&&t.setAttribute("aria-label",e.accessibleHint),this.debug&&this.updateDebugHTML(t),e._accessibleActive=!0,e._accessibleDiv=t,t.displayObject=e,this.children.push(e),this.div.appendChild(e._accessibleDiv),e._accessibleDiv.tabIndex=e.tabIndex}_dispatchEvent(e,t){const{displayObject:n}=e.target,r=this.renderer.events.rootBoundary,i=Object.assign(new O(r),{target:n});r.rootTarget=this.renderer.lastObjectRendered,t.forEach((e=>r.dispatchEvent(i,e)))}_onClick(e){this._dispatchEvent(e,["click","pointertap","tap"])}_onFocus(e){e.target.getAttribute("aria-live")||e.target.setAttribute("aria-live","assertive"),this._dispatchEvent(e,["mouseover"])}_onFocusOut(e){e.target.getAttribute("aria-live")||e.target.setAttribute("aria-live","polite"),this._dispatchEvent(e,["mouseout"])}_onKeyDown(e){e.keyCode===q&&this.activate()}_onMouseMove(e){0===e.movementX&&0===e.movementY||this.deactivate()}destroy(){this.destroyTouchHook(),this.div=null,globalThis.document.removeEventListener("mousemove",this._onMouseMove,!0),globalThis.removeEventListener("keydown",this._onKeyDown),this.pool=null,this.children=null,this.renderer=null}}ne.extension={name:"accessibility",type:[r.nw.RendererPlugin,r.nw.CanvasRendererPlugin]},r.Rw.add(ne);const re=class{constructor(e){this.stage=new i.W2,e=Object.assign({forceCanvas:!1},e),this.renderer=(0,r.e6)(e),re._plugins.forEach((t=>{t.init.call(this,e)}))}render(){this.renderer.render(this.stage)}get view(){return this.renderer.view}get screen(){return this.renderer.screen}destroy(e,t){const n=re._plugins.slice(0);n.reverse(),n.forEach((e=>{e.destroy.call(this)})),this.stage.destroy(t),this.stage=null,this.renderer.destroy(e),this.renderer=null}};let ie=re;ie._plugins=[],r.Rw.handleByList(r.nw.Application,ie._plugins);class se{static init(e){Object.defineProperty(this,"resizeTo",{set(e){globalThis.removeEventListener("resize",this.queueResize),this._resizeTo=e,e&&(globalThis.addEventListener("resize",this.queueResize),this.resize())},get(){return this._resizeTo}}),this.queueResize=()=>{this._resizeTo&&(this.cancelResize(),this._resizeId=requestAnimationFrame((()=>this.resize())))},this.cancelResize=()=>{this._resizeId&&(cancelAnimationFrame(this._resizeId),this._resizeId=null)},this.resize=()=>{if(!this._resizeTo)return;let e,t;if(this.cancelResize(),this._resizeTo===globalThis.window)e=globalThis.innerWidth,t=globalThis.innerHeight;else{const{clientWidth:n,clientHeight:r}=this._resizeTo;e=n,t=r}this.renderer.resize(e,t),this.render()},this._resizeId=null,this._resizeTo=null,this.resizeTo=e.resizeTo||null}static destroy(){globalThis.removeEventListener("resize",this.queueResize),this.cancelResize(),this.cancelResize=null,this.queueResize=null,this.resizeTo=null,this.resize=null}}se.extension=r.nw.Application,r.Rw.add(se);const oe={loader:r.nw.LoadParser,resolver:r.nw.ResolveParser,cache:r.nw.CacheParser,detection:r.nw.DetectionParser};r.Rw.handle(r.nw.Asset,(e=>{const t=e.ref;Object.entries(oe).filter((([e])=>!!t[e])).forEach((([e,n])=>r.Rw.add(Object.assign(t[e],{extension:t[e].extension??n}))))}),(e=>{const t=e.ref;Object.keys(oe).filter((e=>!!t[e])).forEach((e=>r.Rw.remove(t[e])))}));class ae{constructor(e,t=!1){this._loader=e,this._assetList=[],this._isLoading=!1,this._maxConcurrent=1,this.verbose=t}add(e){e.forEach((e=>{this._assetList.push(e)})),this.verbose&&console.log("[BackgroundLoader] assets: ",this._assetList),this._isActive&&!this._isLoading&&this._next()}async _next(){if(this._assetList.length&&this._isActive){this._isLoading=!0;const e=[],t=Math.min(this._assetList.length,this._maxConcurrent);for(let n=0;n(Array.isArray(e)||(e=[e]),t?e.map((e=>"string"===typeof e?t(e):e)):e);class ue{constructor(){this._parsers=[],this._cache=new Map,this._cacheMap=new Map}reset(){this._cacheMap.clear(),this._cache.clear()}has(e){return this._cache.has(e)}get(e){const t=this._cache.get(e);return t||console.warn(`[Assets] Asset id ${e} was not found in the Cache`),t}set(e,t){const n=ce(e);let i;for(let r=0;r{i[e]=t})));const s=Object.keys(i),o={cacheKeys:s,keys:n};if(n.forEach((e=>{this._cacheMap.set(e,o)})),s.forEach((e=>{this._cache.has(e)&&this._cache.get(e)!==t&&console.warn("[Cache] already has key:",e),this._cache.set(e,i[e])})),t instanceof r.xE){const e=t;n.forEach((t=>{e.baseTexture!==r.xE.EMPTY.baseTexture&&r.VL.addToCache(e.baseTexture,t),r.xE.addToCache(e,t)}))}}remove(e){if(this._cacheMap.get(e),!this._cacheMap.has(e))return void console.warn(`[Assets] Asset id ${e} was not found in the Cache`);const t=this._cacheMap.get(e),n=t.cacheKeys;n.forEach((e=>{this._cache.delete(e)})),t.keys.forEach((e=>{this._cacheMap.delete(e)}))}get parsers(){return this._parsers}}const de=new ue,he=e=>!Array.isArray(e);class pe{constructor(){this._parsers=[],this._parsersValidated=!1,this.parsers=new Proxy(this._parsers,{set:(e,t,n)=>(this._parsersValidated=!1,e[t]=n,!0)}),this.promiseCache={}}reset(){this._parsersValidated=!1,this.promiseCache={}}_getLoadPromiseAndParser(e,t){const n={promise:null,parser:null};return n.promise=(async()=>{let r=null,i=null;if(t.loadParser&&(i=this._parserHash[t.loadParser],i||console.warn(`[Assets] specified load parser "${t.loadParser}" not found while loading ${e}`)),!i){for(let n=0;n({src:e}))),a=o.length,l=o.map((async e=>{const s=r.P6.path.toAbsolute(e.src);if(!i[e.src])try{this.promiseCache[s]||(this.promiseCache[s]=this._getLoadPromiseAndParser(s,e)),i[e.src]=await this.promiseCache[s].promise,t&&t(++n/a)}catch(o){throw delete this.promiseCache[s],delete i[e.src],new Error(`[Loader.load] Failed to load ${s}.\n${o}`)}}));return await Promise.all(l),s?i[o[0].src]:i}async unload(e){const t=ce(e,(e=>({src:e}))),n=t.map((async e=>{const t=r.P6.path.toAbsolute(e.src),n=this.promiseCache[t];if(n){const r=await n.promise;n.parser?.unload?.(r,e,this),delete this.promiseCache[t]}}));await Promise.all(n)}_validateParsers(){this._parsersValidated=!0,this._parserHash=this._parsers.filter((e=>e.name)).reduce(((e,t)=>(e[t.name]&&console.warn(`[Assets] loadParser name conflict "${t.name}"`),{...e,[t.name]:t})),{})}}var fe=(e=>(e[e["Low"]=0]="Low",e[e["Normal"]=1]="Normal",e[e["High"]=2]="High",e))(fe||{});function ge(e,t){if(Array.isArray(t)){for(const n of t)if(e.startsWith(`data:${n}`))return!0;return!1}return e.startsWith(`data:${t}`)}const me=".json",be="application/json",_e={extension:{type:r.nw.LoadParser,priority:fe.Low},name:"loadJson",test(e){return ge(e,be)||le(e,me)},async load(e){const t=await r.Xd.ADAPTER.fetch(e),n=await t.json();return n}};r.Rw.add(_e);const ye=".txt",ve="text/plain",Ee={name:"loadTxt",extension:{type:r.nw.LoadParser,priority:fe.Low},test(e){return ge(e,ve)||le(e,ye)},async load(e){const t=await r.Xd.ADAPTER.fetch(e),n=await t.text();return n}};r.Rw.add(Ee);const xe=["normal","bold","100","200","300","400","500","600","700","800","900"],Se=[".ttf",".otf",".woff",".woff2"],we=["font/ttf","font/otf","font/woff","font/woff2"],Te=/^(--|-?[A-Z_])[0-9A-Z_-]*$/i;function Ae(e){const t=r.P6.path.extname(e),n=r.P6.path.basename(e,t),i=n.replace(/(-|_)/g," "),s=i.toLowerCase().split(" ").map((e=>e.charAt(0).toUpperCase()+e.slice(1)));let o=s.length>0;for(const r of s)if(!r.match(Te)){o=!1;break}let a=s.join(" ");return o||(a=`"${a.replace(/[\\"]/g,"\\$&")}"`),a}const Ce={extension:{type:r.nw.LoadParser,priority:fe.Low},name:"loadWebFont",test(e){return ge(e,we)||le(e,Se)},async load(e,t){const n=r.Xd.ADAPTER.getFontFaceSet();if(n){const r=[],i=t.data?.family??Ae(e),s=t.data?.weights?.filter((e=>xe.includes(e)))??["normal"],o=t.data??{};for(let t=0;tr.Xd.ADAPTER.getFontFaceSet().delete(e)))}};r.Rw.add(Ce);let Ie,Re=0;const ke="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mP8/x8AAwMCAO+ip1sAAAAASUVORK5CYII=",Pe={id:"checkImageBitmap",code:`\n async function checkImageBitmap()\n {\n try\n {\n if (typeof createImageBitmap !== 'function') return false;\n\n const response = await fetch('${ke}');\n const imageBlob = await response.blob();\n const imageBitmap = await createImageBitmap(imageBlob);\n\n return imageBitmap.width === 1 && imageBitmap.height === 1;\n }\n catch (e)\n {\n return false;\n }\n }\n checkImageBitmap().then((result) => { self.postMessage(result); });\n `},Oe={id:"loadImageBitmap",code:"\n async function loadImageBitmap(url)\n {\n const response = await fetch(url);\n\n if (!response.ok)\n {\n throw new Error(`[WorkerManager.loadImageBitmap] Failed to fetch ${url}: `\n + `${response.status} ${response.statusText}`);\n }\n\n const imageBlob = await response.blob();\n const imageBitmap = await createImageBitmap(imageBlob);\n\n return imageBitmap;\n }\n self.onmessage = async (event) =>\n {\n try\n {\n const imageBitmap = await loadImageBitmap(event.data.data[0]);\n\n self.postMessage({\n data: imageBitmap,\n uuid: event.data.uuid,\n id: event.data.id,\n }, [imageBitmap]);\n }\n catch(e)\n {\n self.postMessage({\n error: e,\n uuid: event.data.uuid,\n id: event.data.id,\n });\n }\n };"};let Ne;class Me{constructor(){this._initialized=!1,this._createdWorkers=0,this.workerPool=[],this.queue=[],this.resolveHash={}}isImageBitmapSupported(){return void 0!==this._isImageBitmapSupported||(this._isImageBitmapSupported=new Promise((e=>{const t=URL.createObjectURL(new Blob([Pe.code],{type:"application/javascript"})),n=new Worker(t);n.addEventListener("message",(r=>{n.terminate(),URL.revokeObjectURL(t),e(r.data)}))}))),this._isImageBitmapSupported}loadImageBitmap(e){return this._run("loadImageBitmap",[e])}async _initWorkers(){this._initialized||(this._initialized=!0)}getWorker(){void 0===Ie&&(Ie=navigator.hardwareConcurrency||4);let e=this.workerPool.pop();return!e&&this._createdWorkers{this.complete(e.data),this.returnWorker(e.target),this.next()}))),e}returnWorker(e){this.workerPool.push(e)}complete(e){void 0!==e.error?this.resolveHash[e.uuid].reject(e.error):this.resolveHash[e.uuid].resolve(e.data),this.resolveHash[e.uuid]=null}async _run(e,t){await this._initWorkers();const n=new Promise(((n,r)=>{this.queue.push({id:e,arguments:t,resolve:n,reject:r})}));return this.next(),n}next(){if(!this.queue.length)return;const e=this.getWorker();if(!e)return;const t=this.queue.pop(),n=t.id;this.resolveHash[Re]={resolve:t.resolve,reject:t.reject},e.postMessage({data:t.arguments,uuid:Re++,id:n})}}const De=new Me;function Le(e,t,n){const i=new r.xE(e);return i.baseTexture.on("dispose",(()=>{delete t.promiseCache[n]})),i}const Fe=[".jpeg",".jpg",".png",".webp",".avif"],Be=["image/jpeg","image/png","image/webp","image/avif"];async function Ue(e){const t=await r.Xd.ADAPTER.fetch(e);if(!t.ok)throw new Error(`[loadImageBitmap] Failed to fetch ${e}: ${t.status} ${t.statusText}`);const n=await t.blob(),i=await createImageBitmap(n);return i}const Ge={name:"loadTextures",extension:{type:r.nw.LoadParser,priority:fe.High},config:{preferWorkers:!0,preferCreateImageBitmap:!0,crossOrigin:"anonymous"},test(e){return ge(e,Be)||le(e,Fe)},async load(e,t,n){let i=null;i=globalThis.createImageBitmap&&this.config.preferCreateImageBitmap?this.config.preferWorkers&&await De.isImageBitmapSupported()?await De.loadImageBitmap(e):await Ue(e):await new Promise((t=>{i=new Image,i.crossOrigin=this.config.crossOrigin,i.src=e,i.complete?t(i):i.onload=()=>{t(i)}}));const s=new r.VL(i,{resolution:r.P6.getResolutionOfUrl(e),...t.data});return s.resource.src=e,Le(s,n,e)},unload(e){e.destroy(!0)}};r.Rw.add(Ge);const $e=".svg",ze="image/svg+xml",He={extension:{type:r.nw.LoadParser,priority:fe.High},name:"loadSVG",test(e){return ge(e,ze)||le(e,$e)},async testParse(e){return r.pX.test(e)},async parse(e,t,n){const i=new r.pX(e,t?.data?.resourceOptions);await i.load();const s=new r.VL(i,{resolution:r.P6.getResolutionOfUrl(e),...t?.data});s.resource.src=e;const o=Le(s,n,e);return o},async load(e,t){const n=await r.Xd.ADAPTER.fetch(e);return n.text()},unload:Ge.unload};function Ve(e,t,n,r,i){const s=t[n];for(let o=0;o{const n=e.substring(1,e.length-1).split(",");t.push(n)})),Ve(e,t,0,n,r)}else r.push(e);return r}r.Rw.add(He);class We{constructor(){this._defaultBundleIdentifierOptions={connector:"-",createBundleAssetId:(e,t)=>`${e}${this._bundleIdConnector}${t}`,extractAssetIdFromBundle:(e,t)=>t.replace(`${e}${this._bundleIdConnector}`,"")},this._bundleIdConnector=this._defaultBundleIdentifierOptions.connector,this._createBundleAssetId=this._defaultBundleIdentifierOptions.createBundleAssetId,this._extractAssetIdFromBundle=this._defaultBundleIdentifierOptions.extractAssetIdFromBundle,this._assetMap={},this._preferredOrder=[],this._parsers=[],this._resolverHash={},this._bundles={}}setBundleIdentifier(e){if(this._bundleIdConnector=e.connector??this._bundleIdConnector,this._createBundleAssetId=e.createBundleAssetId??this._createBundleAssetId,this._extractAssetIdFromBundle=e.extractAssetIdFromBundle??this._extractAssetIdFromBundle,"bar"!==this._extractAssetIdFromBundle("foo",this._createBundleAssetId("foo","bar")))throw new Error("[Resolver] GenerateBundleAssetId are not working correctly")}prefer(...e){e.forEach((e=>{this._preferredOrder.push(e),e.priority||(e.priority=Object.keys(e.params))})),this._resolverHash={}}set basePath(e){this._basePath=e}get basePath(){return this._basePath}set rootPath(e){this._rootPath=e}get rootPath(){return this._rootPath}get parsers(){return this._parsers}reset(){this.setBundleIdentifier(this._defaultBundleIdentifierOptions),this._assetMap={},this._preferredOrder=[],this._resolverHash={},this._rootPath=null,this._basePath=null,this._manifest=null,this._bundles={},this._defaultSearchParams=null}setDefaultSearchParams(e){if("string"===typeof e)this._defaultSearchParams=e;else{const t=e;this._defaultSearchParams=Object.keys(t).map((e=>`${encodeURIComponent(e)}=${encodeURIComponent(t[e])}`)).join("&")}}addManifest(e){this._manifest&&console.warn("[Resolver] Manifest already exists, this will be overwritten"),this._manifest=e,e.bundles.forEach((e=>{this.addBundle(e.name,e.assets)}))}addBundle(e,t){const n=[];Array.isArray(t)?t.forEach((t=>{if("string"===typeof t.name){const r=this._createBundleAssetId(e,t.name);n.push(r),this.add([t.name,r],t.srcs,t.data)}else{const r=t.name.map((t=>this._createBundleAssetId(e,t)));r.forEach((e=>{n.push(e)})),this.add([...t.name,...r],t.srcs)}})):Object.keys(t).forEach((r=>{n.push(this._createBundleAssetId(e,r)),this.add([r,this._createBundleAssetId(e,r)],t[r])})),this._bundles[e]=n}add(e,t,n){const i=ce(e);i.forEach((e=>{this.hasKey(e)&&console.warn(`[Resolver] already has key: ${e} overwriting`)})),Array.isArray(t)||(t="string"===typeof t?je(t):[t]);const s=t.map((e=>{let t=e;if("string"===typeof e){let n=!1;for(let r=0;r{this._assetMap[e]=s}))}resolveBundle(e){const t=he(e);e=ce(e);const n={};return e.forEach((e=>{const t=this._bundles[e];if(t){const r=this.resolve(t),i={};for(const t in r){const n=r[t];i[this._extractAssetIdFromBundle(e,t)]=n}n[e]=i}})),t?n[e[0]]:n}resolveUrl(e){const t=this.resolve(e);if("string"!==typeof e){const e={};for(const n in t)e[n]=t[n].src;return e}return t.src}resolve(e){const t=he(e);e=ce(e);const n={};return e.forEach((e=>{if(!this._resolverHash[e])if(this._assetMap[e]){let t=this._assetMap[e];const n=this._getPreferredOrder(t),r=t[0];n?.priority.forEach((e=>{n.params[e].forEach((n=>{const r=t.filter((t=>!!t[e]&&t[e]===n));r.length&&(t=r)}))})),this._resolverHash[e]=t[0]??r}else{let t=e;(this._basePath||this._rootPath)&&(t=r.P6.path.toAbsolute(t,this._basePath,this._rootPath)),t=this._appendDefaultSearchParams(t),this._resolverHash[e]={src:t}}n[e]=this._resolverHash[e]})),t?n[e[0]]:n}hasKey(e){return!!this._assetMap[e]}hasBundle(e){return!!this._bundles[e]}_getPreferredOrder(e){for(let t=0;te.params.format.includes(t.format)));if(n)return n}return this._preferredOrder[0]}_appendDefaultSearchParams(e){if(!this._defaultSearchParams)return e;const t=/\?/.test(e)?"&":"?";return`${e}${t}${this._defaultSearchParams}`}}class qe{constructor(){this._detections=[],this._initialized=!1,this.resolver=new We,this.loader=new pe,this.cache=de,this._backgroundLoader=new ae(this.loader),this._backgroundLoader.active=!0,this.reset()}async init(e={}){if(this._initialized)return void console.warn("[Assets]AssetManager already initialized, did you load before calling this Asset.init()?");if(this._initialized=!0,e.defaultSearchParams&&this.resolver.setDefaultSearchParams(e.defaultSearchParams),e.basePath&&(this.resolver.basePath=e.basePath),e.bundleIdentifier&&this.resolver.setBundleIdentifier(e.bundleIdentifier),e.manifest){let t=e.manifest;"string"===typeof t&&(t=await this.load(t)),this.resolver.addManifest(t)}const t=e.texturePreference?.resolution??1,n="number"===typeof t?[t]:t;let r=[];if(e.texturePreference?.format){const t=e.texturePreference?.format;r="string"===typeof t?[t]:t;for(const e of this._detections)await e.test()||(r=await e.remove(r))}else for(const i of this._detections)await i.test()&&(r=await i.add(r));this.resolver.prefer({params:{format:r,resolution:n}}),e.preferences&&this.setPreferences(e.preferences)}add(e,t,n){this.resolver.add(e,t,n)}async load(e,t){this._initialized||await this.init();const n=he(e),r=ce(e).map((e=>"string"!==typeof e?(this.resolver.add(e.src,e),e.src):(this.resolver.hasKey(e)||this.resolver.add(e,e),e))),i=this.resolver.resolve(r),s=await this._mapLoadToResolve(i,t);return n?s[r[0]]:s}addBundle(e,t){this.resolver.addBundle(e,t)}async loadBundle(e,t){this._initialized||await this.init();let n=!1;"string"===typeof e&&(n=!0,e=[e]);const r=this.resolver.resolveBundle(e),i={},s=Object.keys(r);let o=0,a=0;const l=()=>{t?.(++o/a)},c=s.map((e=>{const t=r[e];return a+=Object.keys(t).length,this._mapLoadToResolve(t,l).then((t=>{i[e]=t}))}));return await Promise.all(c),n?i[e[0]]:i}async backgroundLoad(e){this._initialized||await this.init(),"string"===typeof e&&(e=[e]);const t=this.resolver.resolve(e);this._backgroundLoader.add(Object.values(t))}async backgroundLoadBundle(e){this._initialized||await this.init(),"string"===typeof e&&(e=[e]);const t=this.resolver.resolveBundle(e);Object.values(t).forEach((e=>{this._backgroundLoader.add(Object.values(e))}))}reset(){this.resolver.reset(),this.loader.reset(),this.cache.reset(),this._initialized=!1}get(e){if("string"===typeof e)return de.get(e);const t={};for(let n=0;n{const n=i[e.src],o=[e.src];e.alias&&o.push(...e.alias),s[r[t]]=n,de.set(o,n)})),s}async unload(e){this._initialized||await this.init();const t=ce(e).map((e=>"string"!==typeof e?e.src:e)),n=this.resolver.resolve(t);await this._unloadFromResolved(n)}async unloadBundle(e){this._initialized||await this.init(),e=ce(e);const t=this.resolver.resolveBundle(e),n=Object.keys(t).map((e=>this._unloadFromResolved(t[e])));await Promise.all(n)}async _unloadFromResolved(e){const t=Object.values(e);t.forEach((e=>{de.remove(e.src)})),await this.loader.unload(t)}get detections(){return this._detections}get preferWorkers(){return Ge.config.preferWorkers}set preferWorkers(e){r.P6.deprecation("7.2.0","Assets.prefersWorkers is deprecated, use Assets.setPreferences({ preferWorkers: true }) instead."),this.setPreferences({preferWorkers:e})}setPreferences(e){this.loader.parsers.forEach((t=>{t.config&&Object.keys(t.config).filter((t=>t in e)).forEach((n=>{t.config[n]=e[n]}))}))}}const Xe=new qe;r.Rw.handleByList(r.nw.LoadParser,Xe.loader.parsers).handleByList(r.nw.ResolveParser,Xe.resolver.parsers).handleByList(r.nw.CacheParser,Xe.cache.parsers).handleByList(r.nw.DetectionParser,Xe.detections);const Ye={extension:r.nw.CacheParser,test:e=>Array.isArray(e)&&e.every((e=>e instanceof r.xE)),getCacheableAssets:(e,t)=>{const n={};return e.forEach((e=>{t.forEach(((t,r)=>{n[e+(0===r?"":r+1)]=t}))})),n}};r.Rw.add(Ye);const Ke={extension:{type:r.nw.DetectionParser,priority:1},test:async()=>{if(!globalThis.createImageBitmap)return!1;const e="data:image/avif;base64,AAAAIGZ0eXBhdmlmAAAAAGF2aWZtaWYxbWlhZk1BMUIAAADybWV0YQAAAAAAAAAoaGRscgAAAAAAAAAAcGljdAAAAAAAAAAAAAAAAGxpYmF2aWYAAAAADnBpdG0AAAAAAAEAAAAeaWxvYwAAAABEAAABAAEAAAABAAABGgAAAB0AAAAoaWluZgAAAAAAAQAAABppbmZlAgAAAAABAABhdjAxQ29sb3IAAAAAamlwcnAAAABLaXBjbwAAABRpc3BlAAAAAAAAAAIAAAACAAAAEHBpeGkAAAAAAwgICAAAAAxhdjFDgQ0MAAAAABNjb2xybmNseAACAAIAAYAAAAAXaXBtYQAAAAAAAAABAAEEAQKDBAAAACVtZGF0EgAKCBgANogQEAwgMg8f8D///8WfhwB8+ErK42A=",t=await r.Xd.ADAPTER.fetch(e).then((e=>e.blob()));return createImageBitmap(t).then((()=>!0),(()=>!1))},add:async e=>[...e,"avif"],remove:async e=>e.filter((e=>"avif"!==e))};r.Rw.add(Ke);const Ze={extension:{type:r.nw.DetectionParser,priority:0},test:async()=>{if(!globalThis.createImageBitmap)return!1;const e="data:image/webp;base64,UklGRh4AAABXRUJQVlA4TBEAAAAvAAAAAAfQ//73v/+BiOh/AAA=",t=await r.Xd.ADAPTER.fetch(e).then((e=>e.blob()));return createImageBitmap(t).then((()=>!0),(()=>!1))},add:async e=>[...e,"webp"],remove:async e=>e.filter((e=>"webp"!==e))};r.Rw.add(Ze);const Qe=["png","jpg","jpeg"],Je={extension:{type:r.nw.DetectionParser,priority:-1},test:()=>Promise.resolve(!0),add:async e=>[...e,...Qe],remove:async e=>e.filter((e=>!Qe.includes(e)))};r.Rw.add(Je);const et={extension:r.nw.ResolveParser,test:Ge.test,parse:e=>({resolution:parseFloat(r.Xd.RETINA_PREFIX.exec(e)?.[1]??"1"),format:e.split(".").pop(),src:e})};r.Rw.add(et);const tt=(e,t)=>{const n=t.split("?")[1];return n&&(e+=`?${n}`),e};var nt=(e=>(e[e["COMPRESSED_RGB_S3TC_DXT1_EXT"]=33776]="COMPRESSED_RGB_S3TC_DXT1_EXT",e[e["COMPRESSED_RGBA_S3TC_DXT1_EXT"]=33777]="COMPRESSED_RGBA_S3TC_DXT1_EXT",e[e["COMPRESSED_RGBA_S3TC_DXT3_EXT"]=33778]="COMPRESSED_RGBA_S3TC_DXT3_EXT",e[e["COMPRESSED_RGBA_S3TC_DXT5_EXT"]=33779]="COMPRESSED_RGBA_S3TC_DXT5_EXT",e[e["COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT"]=35917]="COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT",e[e["COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT"]=35918]="COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT",e[e["COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT"]=35919]="COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT",e[e["COMPRESSED_SRGB_S3TC_DXT1_EXT"]=35916]="COMPRESSED_SRGB_S3TC_DXT1_EXT",e[e["COMPRESSED_R11_EAC"]=37488]="COMPRESSED_R11_EAC",e[e["COMPRESSED_SIGNED_R11_EAC"]=37489]="COMPRESSED_SIGNED_R11_EAC",e[e["COMPRESSED_RG11_EAC"]=37490]="COMPRESSED_RG11_EAC",e[e["COMPRESSED_SIGNED_RG11_EAC"]=37491]="COMPRESSED_SIGNED_RG11_EAC",e[e["COMPRESSED_RGB8_ETC2"]=37492]="COMPRESSED_RGB8_ETC2",e[e["COMPRESSED_RGBA8_ETC2_EAC"]=37496]="COMPRESSED_RGBA8_ETC2_EAC",e[e["COMPRESSED_SRGB8_ETC2"]=37493]="COMPRESSED_SRGB8_ETC2",e[e["COMPRESSED_SRGB8_ALPHA8_ETC2_EAC"]=37497]="COMPRESSED_SRGB8_ALPHA8_ETC2_EAC",e[e["COMPRESSED_RGB8_PUNCHTHROUGH_ALPHA1_ETC2"]=37494]="COMPRESSED_RGB8_PUNCHTHROUGH_ALPHA1_ETC2",e[e["COMPRESSED_SRGB8_PUNCHTHROUGH_ALPHA1_ETC2"]=37495]="COMPRESSED_SRGB8_PUNCHTHROUGH_ALPHA1_ETC2",e[e["COMPRESSED_RGB_PVRTC_4BPPV1_IMG"]=35840]="COMPRESSED_RGB_PVRTC_4BPPV1_IMG",e[e["COMPRESSED_RGBA_PVRTC_4BPPV1_IMG"]=35842]="COMPRESSED_RGBA_PVRTC_4BPPV1_IMG",e[e["COMPRESSED_RGB_PVRTC_2BPPV1_IMG"]=35841]="COMPRESSED_RGB_PVRTC_2BPPV1_IMG",e[e["COMPRESSED_RGBA_PVRTC_2BPPV1_IMG"]=35843]="COMPRESSED_RGBA_PVRTC_2BPPV1_IMG",e[e["COMPRESSED_RGB_ETC1_WEBGL"]=36196]="COMPRESSED_RGB_ETC1_WEBGL",e[e["COMPRESSED_RGB_ATC_WEBGL"]=35986]="COMPRESSED_RGB_ATC_WEBGL",e[e["COMPRESSED_RGBA_ATC_EXPLICIT_ALPHA_WEBGL"]=35986]="COMPRESSED_RGBA_ATC_EXPLICIT_ALPHA_WEBGL",e[e["COMPRESSED_RGBA_ATC_INTERPOLATED_ALPHA_WEBGL"]=34798]="COMPRESSED_RGBA_ATC_INTERPOLATED_ALPHA_WEBGL",e[e["COMPRESSED_RGBA_ASTC_4x4_KHR"]=37808]="COMPRESSED_RGBA_ASTC_4x4_KHR",e))(nt||{});const rt={[33776]:.5,[33777]:.5,[33778]:1,[33779]:1,[35916]:.5,[35917]:.5,[35918]:1,[35919]:1,[37488]:.5,[37489]:.5,[37490]:1,[37491]:1,[37492]:.5,[37496]:1,[37493]:.5,[37497]:1,[37494]:.5,[37495]:.5,[35840]:.5,[35842]:.5,[35841]:.25,[35843]:.25,[36196]:.5,[35986]:.5,[35986]:1,[34798]:1,[37808]:1};let it,st;function ot(){st={s3tc:it.getExtension("WEBGL_compressed_texture_s3tc"),s3tc_sRGB:it.getExtension("WEBGL_compressed_texture_s3tc_srgb"),etc:it.getExtension("WEBGL_compressed_texture_etc"),etc1:it.getExtension("WEBGL_compressed_texture_etc1"),pvrtc:it.getExtension("WEBGL_compressed_texture_pvrtc")||it.getExtension("WEBKIT_WEBGL_compressed_texture_pvrtc"),atc:it.getExtension("WEBGL_compressed_texture_atc"),astc:it.getExtension("WEBGL_compressed_texture_astc")}}const at={extension:{type:r.nw.DetectionParser,priority:2},test:async()=>{const e=r.Xd.ADAPTER.createCanvas(),t=e.getContext("webgl");return t?(it=t,!0):(console.warn("WebGL not available for compressed textures."),!1)},add:async e=>{st||ot();const t=[];for(const n in st){const e=st[n];e&&t.push(n)}return[...t,...e]},remove:async e=>(st||ot(),e.filter((e=>!(e in st))))};r.Rw.add(at);class lt extends r.qm{constructor(e,t={width:1,height:1,autoLoad:!0}){let n,i;"string"===typeof e?(n=e,i=new Uint8Array):(n=null,i=e),super(i,t),this.origin=n,this.buffer=i?new r.Rv(i):null,this._load=null,this.loaded=!1,null!==this.origin&&!1!==t.autoLoad&&this.load(),null===this.origin&&this.buffer&&(this._load=Promise.resolve(this),this.loaded=!0,this.onBlobLoaded(this.buffer.rawBinaryData))}onBlobLoaded(e){}load(){return this._load||(this._load=fetch(this.origin).then((e=>e.blob())).then((e=>e.arrayBuffer())).then((e=>(this.data=new Uint32Array(e),this.buffer=new r.Rv(e),this.loaded=!0,this.onBlobLoaded(e),this.update(),this)))),this._load}}class ct extends lt{constructor(e,t){super(e,t),this.format=t.format,this.levels=t.levels||1,this._width=t.width,this._height=t.height,this._extension=ct._formatToExtension(this.format),(t.levelBuffers||this.buffer)&&(this._levelBuffers=t.levelBuffers||ct._createLevelBuffers(e instanceof Uint8Array?e:this.buffer.uint8View,this.format,this.levels,4,4,this.width,this.height))}upload(e,t,n){const r=e.gl,i=e.context.extensions[this._extension];if(!i)throw new Error(`${this._extension} textures are not supported on the current machine`);if(!this._levelBuffers)return!1;for(let s=0,o=this.levels;s=33776&&e<=33779)return"s3tc";if(e>=37488&&e<=37497)return"etc";if(e>=35840&&e<=35843)return"pvrtc";if(e>=36196)return"etc1";if(e>=35986&&e<=34798)return"atc";throw new Error("Invalid (compressed) texture format given!")}static _createLevelBuffers(e,t,n,r,i,s,o){const a=new Array(n);let l=e.byteOffset,c=s,u=o,d=c+r-1&~(r-1),h=u+i-1&~(i-1),p=d*h*rt[t];for(let f=0;f1?c:d,levelHeight:n>1?u:h,levelBuffer:new Uint8Array(e.buffer,l,p)},l+=p,c=c>>1||1,u=u>>1||1,d=c+r-1&~(r-1),h=u+i-1&~(i-1),p=d*h*rt[t];return a}}const ut=4,dt=124,ht=32,pt=20,ft=542327876,gt={SIZE:1,FLAGS:2,HEIGHT:3,WIDTH:4,MIPMAP_COUNT:7,PIXEL_FORMAT:19},mt={SIZE:0,FLAGS:1,FOURCC:2,RGB_BITCOUNT:3,R_BIT_MASK:4,G_BIT_MASK:5,B_BIT_MASK:6,A_BIT_MASK:7},bt={DXGI_FORMAT:0,RESOURCE_DIMENSION:1,MISC_FLAG:2,ARRAY_SIZE:3,MISC_FLAGS2:4};const _t=1,yt=2,vt=4,Et=64,xt=512,St=131072,wt=827611204,Tt=861165636,At=894720068,Ct=808540228,It=4,Rt={[wt]:nt.COMPRESSED_RGBA_S3TC_DXT1_EXT,[Tt]:nt.COMPRESSED_RGBA_S3TC_DXT3_EXT,[At]:nt.COMPRESSED_RGBA_S3TC_DXT5_EXT},kt={[70]:nt.COMPRESSED_RGBA_S3TC_DXT1_EXT,[71]:nt.COMPRESSED_RGBA_S3TC_DXT1_EXT,[73]:nt.COMPRESSED_RGBA_S3TC_DXT3_EXT,[74]:nt.COMPRESSED_RGBA_S3TC_DXT3_EXT,[76]:nt.COMPRESSED_RGBA_S3TC_DXT5_EXT,[77]:nt.COMPRESSED_RGBA_S3TC_DXT5_EXT,[72]:nt.COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT,[75]:nt.COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT,[78]:nt.COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT};function Pt(e){const t=new Uint32Array(e),n=t[0];if(n!==ft)throw new Error("Invalid DDS file magic word");const r=new Uint32Array(e,0,dt/Uint32Array.BYTES_PER_ELEMENT),i=r[gt.HEIGHT],s=r[gt.WIDTH],o=r[gt.MIPMAP_COUNT],a=new Uint32Array(e,gt.PIXEL_FORMAT*Uint32Array.BYTES_PER_ELEMENT,ht/Uint32Array.BYTES_PER_ELEMENT),l=a[_t];if(l&vt){const n=a[mt.FOURCC];if(n!==Ct){const t=Rt[n],r=ut+dt,a=new Uint8Array(e,r),l=new ct(a,{format:t,width:s,height:i,levels:o});return[l]}const r=ut+dt,l=new Uint32Array(t.buffer,r,pt/Uint32Array.BYTES_PER_ELEMENT),c=l[bt.DXGI_FORMAT],u=l[bt.RESOURCE_DIMENSION],d=l[bt.MISC_FLAG],h=l[bt.ARRAY_SIZE],p=kt[c];if(void 0===p)throw new Error(`DDSParser cannot parse texture data with DXGI format ${c}`);if(d===It)throw new Error("DDSParser does not support cubemap textures");if(6===u)throw new Error("DDSParser does not supported 3D texture data");const f=new Array,g=ut+dt+pt;if(1===h)f.push(new Uint8Array(e,g));else{const t=rt[p];let n=0,r=s,a=i;for(let e=0;e>>=1,a>>>=1}let l=g;for(let i=0;inew ct(e,{format:p,width:s,height:i,levels:o})))}if(l&Et)throw new Error("DDSParser does not support uncompressed texture data.");if(l&xt)throw new Error("DDSParser does not supported YUV uncompressed texture data.");if(l&St)throw new Error("DDSParser does not support single-channel (lumninance) texture data!");if(l&yt)throw new Error("DDSParser does not support single-channel (alpha) texture data!");throw new Error("DDSParser failed to load a texture file due to an unknown reason!")}const Ot=[171,75,84,88,32,49,49,187,13,10,26,10],Nt=67305985,Mt={FILE_IDENTIFIER:0,ENDIANNESS:12,GL_TYPE:16,GL_TYPE_SIZE:20,GL_FORMAT:24,GL_INTERNAL_FORMAT:28,GL_BASE_INTERNAL_FORMAT:32,PIXEL_WIDTH:36,PIXEL_HEIGHT:40,PIXEL_DEPTH:44,NUMBER_OF_ARRAY_ELEMENTS:48,NUMBER_OF_FACES:52,NUMBER_OF_MIPMAP_LEVELS:56,BYTES_OF_KEY_VALUE_DATA:60},Dt=64,Lt={[r.vK.UNSIGNED_BYTE]:1,[r.vK.UNSIGNED_SHORT]:2,[r.vK.INT]:4,[r.vK.UNSIGNED_INT]:4,[r.vK.FLOAT]:4,[r.vK.HALF_FLOAT]:8},Ft={[r.I2.RGBA]:4,[r.I2.RGB]:3,[r.I2.RG]:2,[r.I2.RED]:1,[r.I2.LUMINANCE]:1,[r.I2.LUMINANCE_ALPHA]:2,[r.I2.ALPHA]:1},Bt={[r.vK.UNSIGNED_SHORT_4_4_4_4]:2,[r.vK.UNSIGNED_SHORT_5_5_5_1]:2,[r.vK.UNSIGNED_SHORT_5_6_5]:2};function Ut(e,t,n=!1){const i=new DataView(t);if(!Gt(e,i))return null;const s=i.getUint32(Mt.ENDIANNESS,!0)===Nt,o=i.getUint32(Mt.GL_TYPE,s),a=i.getUint32(Mt.GL_FORMAT,s),l=i.getUint32(Mt.GL_INTERNAL_FORMAT,s),c=i.getUint32(Mt.PIXEL_WIDTH,s),u=i.getUint32(Mt.PIXEL_HEIGHT,s)||1,d=i.getUint32(Mt.PIXEL_DEPTH,s)||1,h=i.getUint32(Mt.NUMBER_OF_ARRAY_ELEMENTS,s)||1,p=i.getUint32(Mt.NUMBER_OF_FACES,s),f=i.getUint32(Mt.NUMBER_OF_MIPMAP_LEVELS,s),g=i.getUint32(Mt.BYTES_OF_KEY_VALUE_DATA,s);if(0===u||1!==d)throw new Error("Only 2D textures are supported");if(1!==p)throw new Error("CubeTextures are not supported by KTXLoader yet!");if(1!==h)throw new Error("WebGL does not support array textures");const m=4,b=4,_=c+3&-4,y=u+3&-4,v=new Array(h);let E,x=c*u;if(0===o&&(x=_*y),E=0!==o?Lt[o]?Lt[o]*Ft[a]:Bt[o]:rt[l],void 0===E)throw new Error("Unable to resolve the pixel format stored in the *.ktx file!");const S=n?zt(i,g,s):null,w=x*E;let T=w,A=c,C=u,I=_,R=y,k=Dt+g;for(let r=0;r1||0!==o?A:I,levelHeight:f>1||0!==o?C:R,levelBuffer:new Uint8Array(t,n,T)},n+=T}k+=e+4,k=k%4!==0?k+4-k%4:k,A=A>>1||1,C=C>>1||1,I=A+m-1&~(m-1),R=C+b-1&~(b-1),T=I*R*E}return 0!==o?{uncompressed:v.map((e=>{let t=e[0].levelBuffer,n=!1;return o===r.vK.FLOAT?t=new Float32Array(e[0].levelBuffer.buffer,e[0].levelBuffer.byteOffset,e[0].levelBuffer.byteLength/4):o===r.vK.UNSIGNED_INT?(n=!0,t=new Uint32Array(e[0].levelBuffer.buffer,e[0].levelBuffer.byteOffset,e[0].levelBuffer.byteLength/4)):o===r.vK.INT&&(n=!0,t=new Int32Array(e[0].levelBuffer.buffer,e[0].levelBuffer.byteOffset,e[0].levelBuffer.byteLength/4)),{resource:new r.qm(t,{width:e[0].levelWidth,height:e[0].levelHeight}),type:o,format:n?$t(a):a}})),kvData:S}:{compressed:v.map((e=>new ct(null,{format:l,width:c,height:u,levels:f,levelBuffers:e}))),kvData:S}}function Gt(e,t){for(let n=0;nt-i){console.error("KTXLoader: keyAndValueByteSize out of bounds");break}let l=0;for(;l{const s=new r.VL(i,{mipmap:r.KI.OFF,alphaMode:r.iw.NO_PREMULTIPLIED_ALPHA,resolution:r.P6.getResolutionOfUrl(e),...t.data});return Le(s,n,e)}));return 1===a.length?a[0]:a},unload(e){Array.isArray(e)?e.forEach((e=>e.destroy(!0))):e.destroy(!0)}};r.Rw.add(Ht);const Vt={extension:{type:r.nw.LoadParser,priority:fe.High},name:"loadKTX",test(e){return le(e,".ktx")},async load(e,t,n){const i=await r.Xd.ADAPTER.fetch(e),s=await i.arrayBuffer(),{compressed:o,uncompressed:a,kvData:l}=Ut(e,s),c=o??a,u={mipmap:r.KI.OFF,alphaMode:r.iw.NO_PREMULTIPLIED_ALPHA,resolution:r.P6.getResolutionOfUrl(e),...t.data},d=c.map((t=>{c===a&&Object.assign(u,{type:t.type,format:t.format});const i=new r.VL(t,u);return i.ktxKeyValueData=l,Le(i,n,e)}));return 1===d.length?d[0]:d},unload(e){Array.isArray(e)?e.forEach((e=>e.destroy(!0))):e.destroy(!0)}};r.Rw.add(Vt);const jt={extension:r.nw.ResolveParser,test:e=>{const t=e.split("?")[0],n=t.split(".").pop();return["basis","ktx","dds"].includes(n)},parse:e=>{const t=e.split("?")[0],n=t.split(".").pop();if("ktx"===n){const t=[".s3tc.ktx",".s3tc_sRGB.ktx",".etc.ktx",".etc1.ktx",".pvrt.ktx",".atc.ktx",".astc.ktx"];if(t.some((t=>e.endsWith(t))))return{resolution:parseFloat(r.Xd.RETINA_PREFIX.exec(e)?.[1]??"1"),format:t.find((t=>e.endsWith(t))),src:e}}return{resolution:parseFloat(r.Xd.RETINA_PREFIX.exec(e)?.[1]??"1"),format:e.split(".").pop(),src:e}}};r.Rw.add(jt);const Wt=new r.Ae,qt=4,Xt=class{constructor(e){this.renderer=e}async image(e,t,n){const r=new Image;return r.src=await this.base64(e,t,n),r}async base64(e,t,n){const r=this.canvas(e);if(void 0!==r.toBlob)return new Promise(((e,i)=>{r.toBlob((t=>{if(!t)return void i(new Error("ICanvas.toBlob failed!"));const n=new FileReader;n.onload=()=>e(n.result),n.onerror=i,n.readAsDataURL(t)}),t,n)}));if(void 0!==r.toDataURL)return r.toDataURL(t,n);if(void 0!==r.convertToBlob){const e=await r.convertToBlob({type:t,quality:n});return new Promise(((t,n)=>{const r=new FileReader;r.onload=()=>t(r.result),r.onerror=n,r.readAsDataURL(e)}))}throw new Error("Extract.base64() requires ICanvas.toDataURL, ICanvas.toBlob, or ICanvas.convertToBlob to be implemented")}canvas(e,t){const{pixels:n,width:i,height:s,flipY:o}=this._rawPixels(e,t);o&&Xt._flipY(n,i,s),Xt._unpremultiplyAlpha(n);const a=new r.P6.CanvasRenderTarget(i,s,1),l=new ImageData(new Uint8ClampedArray(n.buffer),i,s);return a.context.putImageData(l,0,0),a.canvas}pixels(e,t){const{pixels:n,width:r,height:i,flipY:s}=this._rawPixels(e,t);return s&&Xt._flipY(n,r,i),Xt._unpremultiplyAlpha(n),n}_rawPixels(e,t){const n=this.renderer;if(!n)throw new Error("The Extract has already been destroyed");let i,s,o=!1,a=!1;if(e&&(e instanceof r.TI?s=e:(s=n.generateTexture(e,{resolution:n.resolution,multisample:n.multisample}),a=!0)),s){if(i=s.baseTexture.resolution,t=t??s.frame,o=!1,!a){n.renderTexture.bind(s);const e=s.framebuffer.glFramebuffers[n.CONTEXT_UID];e.blitFramebuffer&&n.framebuffer.bind(e.blitFramebuffer)}}else i=n.resolution,t||(t=Wt,t.width=n.width/i,t.height=n.height/i),o=!0,n.renderTexture.bind();const l=Math.round(t.width*i),c=Math.round(t.height*i),u=new Uint8Array(qt*l*c),d=n.gl;return d.readPixels(Math.round(t.x*i),Math.round(t.y*i),l,c,d.RGBA,d.UNSIGNED_BYTE,u),a&&s?.destroy(!0),{pixels:u,width:l,height:c,flipY:o}}destroy(){this.renderer=null}static _flipY(e,t,n){const r=t<<2,i=n>>1,s=new Uint8Array(r);for(let o=0;o=0&&l>=0&&s>=0&&o>=0))return void(t.length=0);const c=Math.ceil(2.3*Math.sqrt(a+l)),u=8*c+(s?4:0)+(o?4:0);if(t.length=u,0===u)return;if(0===c)return t.length=8,t[0]=t[6]=n+s,t[1]=t[3]=i+o,t[2]=t[4]=n-s,void(t[5]=t[7]=i-o);let d=0,h=4*c+(s?2:0)+2,p=h,f=u;{const e=s+a,r=o,l=n+e,c=n-e,u=i+r;if(t[d++]=l,t[d++]=u,t[--h]=u,t[--h]=c,o){const e=i-r;t[p++]=c,t[p++]=e,t[--f]=e,t[--f]=l}}for(let r=1;r0||t&&r<=0){const t=n/2;for(let r=t+t%2;r=6){Zt(n,!1);const e=[];for(let r=0;r=0&&s>=0&&o.push(n,r,n+i,r,n+i,r+s,n,r+s)},triangulate(e,t){const n=e.points,r=t.points;if(0===n.length)return;const i=r.length/2;r.push(n[0],n[1],n[2],n[3],n[6],n[7],n[4],n[5]),t.indices.push(i,i+1,i+2,i+1,i+2,i+3)}},en={build(e){Kt.build(e)},triangulate(e,t){Kt.triangulate(e,t)}};var tn=(e=>(e["MITER"]="miter",e["BEVEL"]="bevel",e["ROUND"]="round",e))(tn||{}),nn=(e=>(e["BUTT"]="butt",e["ROUND"]="round",e["SQUARE"]="square",e))(nn||{});const rn={adaptive:!0,maxLength:10,minSegments:8,maxSegments:2048,epsilon:1e-4,_segmentsCount(e,t=20){if(!this.adaptive||!e||isNaN(e))return t;let n=Math.ceil(e/this.maxLength);return nthis.maxSegments&&(n=this.maxSegments),n}};class sn{static curveTo(e,t,n,r,i,s){const o=s[s.length-2],a=s[s.length-1],l=a-t,c=o-e,u=r-t,d=n-e,h=Math.abs(l*d-c*u);if(h<1e-8||0===i)return s[s.length-2]===e&&s[s.length-1]===t||s.push(e,t),null;const p=l*l+c*c,f=u*u+d*d,g=l*u+c*d,m=i*Math.sqrt(p)/h,b=i*Math.sqrt(f)/h,_=m*g/p,y=b*g/f,v=m*d+b*c,E=m*u+b*l,x=c*(b+_),S=l*(b+_),w=d*(m+y),T=u*(m+y),A=Math.atan2(S-E,x-v),C=Math.atan2(T-E,w-v);return{cx:v+e,cy:E+t,radius:i,startAngle:A,endAngle:C,anticlockwise:c*u>d*l}}static arc(e,t,n,i,s,o,a,l,c){const u=a-o,d=rn._segmentsCount(Math.abs(u)*s,40*Math.ceil(Math.abs(u)/r._b)),h=u/(2*d),p=2*h,f=Math.cos(h),g=Math.sin(h),m=d-1,b=m%1/m;for(let r=0;r<=m;++r){const e=r+b*r,t=h+o+p*e,a=Math.cos(t),l=-Math.sin(t);c.push((f*a+g*l)*s+n,(f*-l+g*a)*s+i)}}}class on{static curveLength(e,t,n,r,i,s,o,a){const l=10;let c=0,u=0,d=0,h=0,p=0,f=0,g=0,m=0,b=0,_=0,y=0,v=e,E=t;for(let x=1;x<=l;++x)u=x/l,d=u*u,h=d*u,p=1-u,f=p*p,g=f*p,m=g*e+3*f*u*n+3*p*d*i+h*o,b=g*t+3*f*u*r+3*p*d*s+h*a,_=v-m,y=E-b,v=m,E=b,c+=Math.sqrt(_*_+y*y);return c}static curveTo(e,t,n,r,i,s,o){const a=o[o.length-2],l=o[o.length-1];o.length-=2;const c=rn._segmentsCount(on.curveLength(a,l,e,t,n,r,i,s));let u=0,d=0,h=0,p=0,f=0;o.push(a,l);for(let g=1,m=0;g<=c;++g)m=g/c,u=1-m,d=u*u,h=d*u,p=m*m,f=p*m,o.push(h*a+3*d*m*e+3*u*p*n+f*i,h*l+3*d*m*t+3*u*p*r+f*s)}}function an(e,t,n,r,i,s,o,a){const l=e-n*i,c=t-r*i,u=e+n*s,d=t+r*s;let h,p;o?(h=r,p=-n):(h=-r,p=n);const f=l+h,g=c+p,m=u+h,b=d+p;return a.push(f,g,m,b),2}function ln(e,t,n,r,i,s,o,a){const l=n-e,c=r-t;let u=Math.atan2(l,c),d=Math.atan2(i-e,s-t);a&&ud&&(d+=2*Math.PI);let h=u;const p=d-u,f=Math.abs(p),g=Math.sqrt(l*l+c*c),m=1+(15*f*Math.sqrt(g)/Math.PI>>0),b=p/m;if(h+=b,a){o.push(e,t,n,r);for(let n=1,r=h;n=0&&(o.join===tn.ROUND?p+=ln(v,E,v-w*k,E-T*k,v-A*k,E-C*k,d,!1)+4:p+=2,d.push(v-A*P,E-C*P,v+A*k,E+C*k));continue}const u=(-w+_)*(-T+E)-(-w+v)*(-T+y),h=(-A+x)*(-C+E)-(-A+v)*(-C+S),f=(e*h-n*u)/l,R=(s*u-t*h)/l,O=(f-v)*(f-v)+(R-E)*(R-E),N=v+(f-v)*k,M=E+(R-E)*k,D=v-(f-v)*P,L=E-(R-E)*P,F=Math.min(e*e+t*t,n*n+s*s),B=c?k:P,U=F+B*B*m,G=O<=U;let $=o.join;if($===tn.MITER&&O/m>b&&($=tn.BEVEL),G)switch($){case tn.MITER:d.push(N,M,D,L);break;case tn.BEVEL:c?d.push(N,M,v+w*P,E+T*P,N,M,v+A*P,E+C*P):d.push(v-w*k,E-T*k,D,L,v-A*k,E-C*k,D,L),p+=2;break;case tn.ROUND:c?(d.push(N,M,v+w*P,E+T*P),p+=ln(v,E,v+w*P,E+T*P,v+A*P,E+C*P,d,!0)+4,d.push(N,M,v+A*P,E+C*P)):(d.push(v-w*k,E-T*k,D,L),p+=ln(v,E,v-w*k,E-T*k,v-A*k,E-C*k,d,!1)+4,d.push(v-A*k,E-C*k,D,L));break}else{switch(d.push(v-w*k,E-T*k,v+w*P,E+T*P),$){case tn.MITER:c?d.push(D,L,D,L):d.push(N,M,N,M),p+=2;break;case tn.ROUND:p+=c?ln(v,E,v+w*P,E+T*P,v+A*P,E+C*P,d,!0)+2:ln(v,E,v-w*k,E-T*k,v-A*k,E-C*k,d,!1)+2;break}d.push(v-A*k,E-C*k,v+A*P,E+C*P),p+=2}}_=i[2*(h-2)],y=i[2*(h-2)+1],v=i[2*(h-1)],E=i[2*(h-1)+1],w=-(y-E),T=_-v,I=Math.sqrt(w*w+T*T),w/=I,T/=I,w*=g,T*=g,d.push(v-w*k,E-T*k,v+w*P,E+T*P),c||(o.cap===nn.ROUND?p+=ln(v-w*(k-P)*.5,E-T*(k-P)*.5,v-w*k,E-T*k,v+w*P,E+T*P,d,!1)+2:o.cap===nn.SQUARE&&(p+=an(v,E,w,T,k,P,!1,d)));const O=t.indices,N=rn.epsilon*rn.epsilon;for(let r=f;r0&&(this.invalidate(),this.clearDirty++,this.graphicsData.length=0),this}drawShape(e,t=null,n=null,r=null){const i=new mn(e,t,n,r);return this.graphicsData.push(i),this.dirty++,this}drawHole(e,t=null){if(!this.graphicsData.length)return null;const n=new mn(e,null,null,t),r=this.graphicsData[this.graphicsData.length-1];return n.lineStyle=r.lineStyle,r.holes.push(n),this.dirty++,this}destroy(){super.destroy();for(let e=0;e0&&(n=this.batches[this.batches.length-1],i=n.style);for(let l=this.shapeIndex;l65535;this.indicesUint16&&this.indices.length===this.indicesUint16.length&&a===this.indicesUint16.BYTES_PER_ELEMENT>2?this.indicesUint16.set(this.indices):this.indicesUint16=a?new Uint32Array(this.indices):new Uint16Array(this.indices),this.batchable=this.isBatchable(),this.batchable?this.packBatches():this.buildDrawCalls()}_compareStyles(e,t){return!(!e||!t)&&(e.texture.baseTexture===t.texture.baseTexture&&(e.color+e.alpha===t.color+t.alpha&&!!e.native===!!t.native))}validateBatching(){if(this.dirty===this.cacheDirty||!this.graphicsData.length)return!1;for(let e=0,t=this.graphicsData.length;e131070)return!1;const e=this.batches;for(let t=0;t0&&(i=gn.pop(),i||(i=new r.a$,i.texArray=new r.Ie),this.drawCalls.push(i)),i.start=u,i.size=0,i.texArray.count=0,i.type=c),g.touched=1,g._batchEnabled=e,g._batchLocation=s,g.wrapMode=r.Nt.REPEAT,i.texArray.elements[i.texArray.count++]=g,s++)),i.size+=h.size,u+=h.size,a=g._batchLocation,this.addColors(t,f.color,f.alpha,h.attribSize,h.attribStart),this.addTextureIds(n,a,h.attribSize,h.attribStart)}r.VL._globalBatch=e,this.packAttributes()}packAttributes(){const e=this.points,t=this.uvs,n=this.colors,r=this.textureIds,i=new ArrayBuffer(3*e.length*4),s=new Float32Array(i),o=new Uint32Array(i);let a=0;for(let l=0;l0&&e.alpha>0;return n?(e.matrix&&(e.matrix=e.matrix.clone(),e.matrix.invert()),Object.assign(this._lineStyle,{visible:n},e)):this._lineStyle.reset(),this}startPoly(){if(this.currentPath){const e=this.currentPath.points,t=this.currentPath.points.length;t>2&&(this.drawShape(this.currentPath),this.currentPath=new r.mg,this.currentPath.closeStroke=!1,this.currentPath.points.push(e[t-2],e[t-1]))}else this.currentPath=new r.mg,this.currentPath.closeStroke=!1}finishPoly(){this.currentPath&&(this.currentPath.points.length>2?(this.drawShape(this.currentPath),this.currentPath=null):this.currentPath.points.length=0)}moveTo(e,t){return this.startPoly(),this.currentPath.points[0]=e,this.currentPath.points[1]=t,this}lineTo(e,t){this.currentPath||this.moveTo(0,0);const n=this.currentPath.points,r=n[n.length-2],i=n[n.length-1];return r===e&&i===t||n.push(e,t),this}_initCurve(e=0,t=0){this.currentPath?0===this.currentPath.points.length&&(this.currentPath.points=[e,t]):this.moveTo(e,t)}quadraticCurveTo(e,t,n,r){this._initCurve();const i=this.currentPath.points;return 0===i.length&&this.moveTo(0,0),hn.curveTo(e,t,n,r,i),this}bezierCurveTo(e,t,n,r,i,s){return this._initCurve(),on.curveTo(e,t,n,r,i,s,this.currentPath.points),this}arcTo(e,t,n,r,i){this._initCurve(e,t);const s=this.currentPath.points,o=sn.curveTo(e,t,n,r,i,s);if(o){const{cx:e,cy:t,radius:n,startAngle:r,endAngle:i,anticlockwise:s}=o;this.arc(e,t,n,r,i,s)}return this}arc(e,t,n,i,s,o=!1){if(i===s)return this;!o&&s<=i?s+=r._b:o&&i<=s&&(i+=r._b);const a=s-i;if(0===a)return this;const l=e+Math.cos(i)*n,c=t+Math.sin(i)*n,u=this._geometry.closePointEps;let d=this.currentPath?this.currentPath.points:null;if(d){const e=Math.abs(d[d.length-2]-l),t=Math.abs(d[d.length-1]-c);e0;return n?(e.matrix&&(e.matrix=e.matrix.clone(),e.matrix.invert()),Object.assign(this._fillStyle,{visible:n},e)):this._fillStyle.reset(),this}endFill(){return this.finishPoly(),this._fillStyle.reset(),this}drawRect(e,t,n,i){return this.drawShape(new r.Ae(e,t,n,i))}drawRoundedRect(e,t,n,i,s){return this.drawShape(new r.c9(e,t,n,i,s))}drawCircle(e,t,n){return this.drawShape(new r.Cd(e,t,n))}drawEllipse(e,t,n,i){return this.drawShape(new r.Pj(e,t,n,i))}drawPolygon(...e){let t,n=!0;const i=e[0];i.points?(n=i.closeStroke,t=i.points):t=Array.isArray(e[0])?e[0]:e;const s=new r.mg(t);return s.closeStroke=n,this.drawShape(s),this}drawShape(e){return this._holeMode?this._geometry.drawHole(e,this._matrix):this._geometry.drawShape(e,this._fillStyle.clone(),this._lineStyle.clone(),this._matrix),this}clear(){return this._geometry.clear(),this._lineStyle.reset(),this._fillStyle.reset(),this._boundsID++,this._matrix=null,this._holeMode=!1,this.currentPath=null,this}isFastRect(){const e=this._geometry.graphicsData;return 1===e.length&&e[0].shape.type===r.HS.RECT&&!e[0].matrix&&!e[0].holes.length&&!(e[0].lineStyle.visible&&e[0].lineStyle.width)}_render(e){this.finishPoly();const t=this._geometry;t.updateBatches(),t.batchable?(this.batchDirty!==t.batchDirty&&this._populateBatches(),this._renderBatched(e)):(e.batch.flush(),this._renderDirect(e))}_populateBatches(){const e=this._geometry,t=this.blendMode,n=e.batches.length;this.batchTint=-1,this._transformID=-1,this.batchDirty=e.batchDirty,this.batches.length=n,this.vertexData=new Float32Array(e.points);for(let i=0;in&&!e.autoResize&&(o=n);let a=e._buffers;a||(a=e._buffers=this.generateBuffers(e));const l=t[0]._texture.baseTexture,c=l.alphaMode>0;this.state.blendMode=r.P6.correctBlendMode(e.blendMode,c),s.state.set(this.state);const u=s.gl,d=e.worldTransform.copyTo(this.tempMatrix);d.prepend(s.globalUniforms.uniforms.projectionMatrix),this.shader.uniforms.translationMatrix=d.toArray(!0),this.shader.uniforms.uColor=r.Il.shared.setValue(e.tintRgb).premultiply(e.worldAlpha,c).toArray(this.shader.uniforms.uColor),this.shader.uniforms.uSampler=l,this.renderer.shader.bind(this.shader);let h=!1;for(let r=0,p=0;ri&&(n=i),p>=a.length&&a.push(this._generateOneMoreBuffer(e));const l=a[p];l.uploadDynamic(t,r,n);const c=e._bufferUpdateIDs[p]||0;h=h||l._updateID0);i[o]=l,i[o+s]=l,i[o+2*s]=l,i[o+3*s]=l,o+=4*s}}destroy(){super.destroy(),this.shader&&(this.shader.destroy(),this.shader=null),this.tempMatrix=null}}Bn.extension={name:"particle",type:r.nw.RendererPlugin},r.Rw.add(Bn);var Un=(e=>(e[e["LINEAR_VERTICAL"]=0]="LINEAR_VERTICAL",e[e["LINEAR_HORIZONTAL"]=1]="LINEAR_HORIZONTAL",e))(Un||{});const Gn={willReadFrequently:!0},$n=class{static get experimentalLetterSpacingSupported(){let e=$n._experimentalLetterSpacingSupported;if(void 0!==e){const t=r.Xd.ADAPTER.getCanvasRenderingContext2D().prototype;e=$n._experimentalLetterSpacingSupported="letterSpacing"in t||"textLetterSpacing"in t}return e}constructor(e,t,n,r,i,s,o,a,l){this.text=e,this.style=t,this.width=n,this.height=r,this.lines=i,this.lineWidths=s,this.lineHeight=o,this.maxLineWidth=a,this.fontProperties=l}static measureText(e,t,n,r=$n._canvas){n=void 0===n||null===n?t.wordWrap:n;const i=t.toFontString(),s=$n.measureFont(i);0===s.fontSize&&(s.fontSize=t.fontSize,s.ascent=t.fontSize);const o=r.getContext("2d",Gn);o.font=i;const a=n?$n.wordWrap(e,t,r):e,l=a.split(/(?:\r\n|\r|\n)/),c=new Array(l.length);let u=0;for(let f=0;f0&&(r?i-=t:i+=($n.graphemeSegmenter(e).length-1)*t),i}static wordWrap(e,t,n=$n._canvas){const r=n.getContext("2d",Gn);let i=0,s="",o="";const a=Object.create(null),{letterSpacing:l,whiteSpace:c}=t,u=$n.collapseSpaces(c),d=$n.collapseNewlines(c);let h=!u;const p=t.wordWrapWidth+l,f=$n.tokenize(e);for(let g=0;gp)if(""!==s&&(o+=$n.addLine(s),s="",i=0),$n.canBreakWords(e,t.breakWords)){const n=$n.wordWrapSplit(e);for(let c=0;cp&&(o+=$n.addLine(s),h=!1,s="",i=0),s+=u,i+=g}}else{s.length>0&&(o+=$n.addLine(s),s="",i=0);const t=g===f.length-1;o+=$n.addLine(e,!t),h=!1,s="",i=0}else n+i>p&&(h=!1,o+=$n.addLine(s),s="",i=0),(s.length>0||!$n.isBreakingSpace(e)||h)&&(s+=e,i+=n)}return o+=$n.addLine(s,!1),o}static addLine(e,t=!0){return e=$n.trimRight(e),e=t?`${e}\n`:e,e}static getFromCache(e,t,n,r){let i=n[e];return"number"!==typeof i&&(i=$n._measureText(e,t,r)+t,n[e]=i),i}static collapseSpaces(e){return"normal"===e||"pre-line"===e}static collapseNewlines(e){return"normal"===e}static trimRight(e){if("string"!==typeof e)return"";for(let t=e.length-1;t>=0;t--){const n=e[t];if(!$n.isBreakingSpace(n))break;e=e.slice(0,-1)}return e}static isNewline(e){return"string"===typeof e&&$n._newlines.includes(e.charCodeAt(0))}static isBreakingSpace(e,t){return"string"===typeof e&&$n._breakingSpaces.includes(e.charCodeAt(0))}static tokenize(e){const t=[];let n="";if("string"!==typeof e)return t;for(let r=0;ro;--d){for(let e=0;e{if("function"===typeof Intl?.Segmenter){const e=new Intl.Segmenter;return t=>[...e.segment(t)].map((e=>e.segment))}return e=>[...e]})(),zn.experimentalLetterSpacing=!1,zn._fonts={},zn._newlines=[10,13],zn._breakingSpaces=[9,32,8192,8193,8194,8195,8196,8197,8198,8200,8201,8202,8287,12288];const Hn=["serif","sans-serif","monospace","cursive","fantasy","system-ui"],Vn=class{constructor(e){this.styleID=0,this.reset(),Xn(this,e,e)}clone(){const e={};return Xn(e,this,Vn.defaultStyle),new Vn(e)}reset(){Xn(this,Vn.defaultStyle,Vn.defaultStyle)}get align(){return this._align}set align(e){this._align!==e&&(this._align=e,this.styleID++)}get breakWords(){return this._breakWords}set breakWords(e){this._breakWords!==e&&(this._breakWords=e,this.styleID++)}get dropShadow(){return this._dropShadow}set dropShadow(e){this._dropShadow!==e&&(this._dropShadow=e,this.styleID++)}get dropShadowAlpha(){return this._dropShadowAlpha}set dropShadowAlpha(e){this._dropShadowAlpha!==e&&(this._dropShadowAlpha=e,this.styleID++)}get dropShadowAngle(){return this._dropShadowAngle}set dropShadowAngle(e){this._dropShadowAngle!==e&&(this._dropShadowAngle=e,this.styleID++)}get dropShadowBlur(){return this._dropShadowBlur}set dropShadowBlur(e){this._dropShadowBlur!==e&&(this._dropShadowBlur=e,this.styleID++)}get dropShadowColor(){return this._dropShadowColor}set dropShadowColor(e){const t=Wn(e);this._dropShadowColor!==t&&(this._dropShadowColor=t,this.styleID++)}get dropShadowDistance(){return this._dropShadowDistance}set dropShadowDistance(e){this._dropShadowDistance!==e&&(this._dropShadowDistance=e,this.styleID++)}get fill(){return this._fill}set fill(e){const t=Wn(e);this._fill!==t&&(this._fill=t,this.styleID++)}get fillGradientType(){return this._fillGradientType}set fillGradientType(e){this._fillGradientType!==e&&(this._fillGradientType=e,this.styleID++)}get fillGradientStops(){return this._fillGradientStops}set fillGradientStops(e){qn(this._fillGradientStops,e)||(this._fillGradientStops=e,this.styleID++)}get fontFamily(){return this._fontFamily}set fontFamily(e){this.fontFamily!==e&&(this._fontFamily=e,this.styleID++)}get fontSize(){return this._fontSize}set fontSize(e){this._fontSize!==e&&(this._fontSize=e,this.styleID++)}get fontStyle(){return this._fontStyle}set fontStyle(e){this._fontStyle!==e&&(this._fontStyle=e,this.styleID++)}get fontVariant(){return this._fontVariant}set fontVariant(e){this._fontVariant!==e&&(this._fontVariant=e,this.styleID++)}get fontWeight(){return this._fontWeight}set fontWeight(e){this._fontWeight!==e&&(this._fontWeight=e,this.styleID++)}get letterSpacing(){return this._letterSpacing}set letterSpacing(e){this._letterSpacing!==e&&(this._letterSpacing=e,this.styleID++)}get lineHeight(){return this._lineHeight}set lineHeight(e){this._lineHeight!==e&&(this._lineHeight=e,this.styleID++)}get leading(){return this._leading}set leading(e){this._leading!==e&&(this._leading=e,this.styleID++)}get lineJoin(){return this._lineJoin}set lineJoin(e){this._lineJoin!==e&&(this._lineJoin=e,this.styleID++)}get miterLimit(){return this._miterLimit}set miterLimit(e){this._miterLimit!==e&&(this._miterLimit=e,this.styleID++)}get padding(){return this._padding}set padding(e){this._padding!==e&&(this._padding=e,this.styleID++)}get stroke(){return this._stroke}set stroke(e){const t=Wn(e);this._stroke!==t&&(this._stroke=t,this.styleID++)}get strokeThickness(){return this._strokeThickness}set strokeThickness(e){this._strokeThickness!==e&&(this._strokeThickness=e,this.styleID++)}get textBaseline(){return this._textBaseline}set textBaseline(e){this._textBaseline!==e&&(this._textBaseline=e,this.styleID++)}get trim(){return this._trim}set trim(e){this._trim!==e&&(this._trim=e,this.styleID++)}get whiteSpace(){return this._whiteSpace}set whiteSpace(e){this._whiteSpace!==e&&(this._whiteSpace=e,this.styleID++)}get wordWrap(){return this._wordWrap}set wordWrap(e){this._wordWrap!==e&&(this._wordWrap=e,this.styleID++)}get wordWrapWidth(){return this._wordWrapWidth}set wordWrapWidth(e){this._wordWrapWidth!==e&&(this._wordWrapWidth=e,this.styleID++)}toFontString(){const e="number"===typeof this.fontSize?`${this.fontSize}px`:this.fontSize;let t=this.fontFamily;Array.isArray(this.fontFamily)||(t=this.fontFamily.split(","));for(let n=t.length-1;n>=0;n--){let e=t[n].trim();/([\"\'])[^\'\"]+\1/.test(e)||Hn.includes(e)||(e=`"${e}"`),t[n]=e}return`${this.fontStyle} ${this.fontVariant} ${this.fontWeight} ${e} ${t.join(",")}`}};let jn=Vn;function Wn(e){const t=r.Il.shared;return Array.isArray(e)?e.map((e=>t.setValue(e).toHex())):t.setValue(e).toHex()}function qn(e,t){if(!Array.isArray(e)||!Array.isArray(t))return!1;if(e.length!==t.length)return!1;for(let n=0;n0&&s>o&&(a=(o+s)/2);const d=o+r,h=n.lineHeight*(e+1);let p=d;e+10}}function Jn(e,t){let n=!1;if(e?._textures?.length)for(let i=0;i{this.queue&&this.prepareItems()},this.registerFindHook(ir),this.registerFindHook(sr),this.registerFindHook(Jn),this.registerFindHook(er),this.registerFindHook(tr),this.registerUploadHook(nr),this.registerUploadHook(rr)}upload(e){return new Promise((t=>{e&&this.add(e),this.queue.length?(this.completes.push(t),this.ticking||(this.ticking=!0,r.vB.system.addOnce(this.tick,this,r.uF.UTILITY))):t()}))}tick(){setTimeout(this.delayedTick,0)}prepareItems(){this.limiter.beginFrame();while(this.queue.length&&this.limiter.allowedToUpload()){const e=this.queue[0];let t=!1;if(e&&!e._destroyed)for(let n=0,r=this.uploadHooks.length;n=0;t--)this.add(e.children[t]);return this}destroy(){this.ticking&&r.vB.system.remove(this.tick,this),this.ticking=!1,this.addHooks=null,this.uploadHooks=null,this.renderer=null,this.completes=null,this.queue=null,this.limiter=null,this.uploadHookHelper=null}};let ar=or;function lr(e,t){return t instanceof r.VL&&(t._glTextures[e.CONTEXT_UID]||e.texture.bind(t),!0)}function cr(e,t){if(!(t instanceof Tn))return!1;const{geometry:n}=t;t.finishPoly(),n.updateBatches();const{batches:r}=n;for(let i=0;i1?r.ex.from(pr,hr,t):r.ex.from(gr,fr,t)}render(e){const t=this.renderer,n=this.quad;let i=n.vertices;i[0]=i[6]=e._width*-e.anchor.x,i[1]=i[3]=e._height*-e.anchor.y,i[2]=i[4]=e._width*(1-e.anchor.x),i[5]=i[7]=e._height*(1-e.anchor.y);const s=e.uvRespectAnchor?e.anchor.x:0,o=e.uvRespectAnchor?e.anchor.y:0;i=n.uvs,i[0]=i[6]=-s,i[1]=i[3]=-o,i[2]=i[4]=1-s,i[5]=i[7]=1-o,n.invalidate();const a=e._texture,l=a.baseTexture,c=l.alphaMode>0,u=e.tileTransform.localTransform,d=e.uvMatrix;let h=l.isPowerOfTwo&&a.frame.width===l.width&&a.frame.height===l.height;h&&(l._glTextures[t.CONTEXT_UID]?h=l.wrapMode!==r.Nt.CLAMP:l.wrapMode===r.Nt.CLAMP&&(l.wrapMode=r.Nt.REPEAT));const p=h?this.simpleShader:this.shader,f=a.width,g=a.height,m=e._width,b=e._height;br.set(u.a*f/m,u.b*f/b,u.c*g/m,u.d*g/b,u.tx/m,u.ty/b),br.invert(),h?br.prepend(d.mapCoord):(p.uniforms.uMapCoord=d.mapCoord.toArray(!0),p.uniforms.uClampFrame=d.uClampFrame,p.uniforms.uClampOffset=d.uClampOffset),p.uniforms.uTransform=br.toArray(!0),p.uniforms.uColor=r.Il.shared.setValue(e.tint).premultiply(e.worldAlpha,c).toArray(p.uniforms.uColor),p.uniforms.translationMatrix=e.transform.worldTransform.toArray(!0),p.uniforms.uSampler=a,t.shader.bind(p),t.geometry.bind(n),this.state.blendMode=r.P6.correctBlendMode(e.blendMode,c),t.state.set(this.state),t.geometry.draw(this.renderer.gl.TRIANGLES,6,0)}}_r.extension={name:"tilingSprite",type:r.nw.RendererPlugin},r.Rw.add(_r);const yr=class{constructor(e,t,n=null){this.linkedSheets=[],this._texture=e instanceof r.xE?e:null,this.baseTexture=e instanceof r.VL?e:this._texture.baseTexture,this.textures={},this.animations={},this.data=t;const i=this.baseTexture.resource;this.resolution=this._updateResolution(n||(i?i.url:null)),this._frames=this.data.frames,this._frameKeys=Object.keys(this._frames),this._batchIndex=0,this._callback=null}_updateResolution(e=null){const{scale:t}=this.data.meta;let n=r.P6.getResolutionOfUrl(e,null);return null===n&&(n=parseFloat(t??"1")),1!==n&&this.baseTexture.setResolution(n),n}parse(){return new Promise((e=>{this._callback=e,this._batchIndex=0,this._frameKeys.length<=yr.BATCH_SIZE?(this._processFrames(0),this._processAnimations(),this._parseComplete()):this._nextBatch()}))}_processFrames(e){let t=e;const n=yr.BATCH_SIZE;while(t-e{this._batchIndex*yr.BATCH_SIZE{i[e]=t})),Object.keys(t.textures).forEach((e=>{i[e]=t.textures[e]})),!n){const n=r.P6.path.dirname(e[0]);t.linkedSheets.forEach(((e,r)=>{const s=xr([`${n}/${t.data.meta.related_multi_packs[r]}`],e,!0);Object.assign(i,s)}))}return i}const Sr={extension:r.nw.Asset,cache:{test:e=>e instanceof vr,getCacheableAssets:(e,t)=>xr(e,t,!1)},resolver:{test:e=>{const t=e.split("?")[0],n=t.split("."),r=n.pop(),i=n.pop();return"json"===r&&Er.includes(i)},parse:e=>{const t=e.split(".");return{resolution:parseFloat(r.Xd.RETINA_PREFIX.exec(e)?.[1]??"1"),format:t[t.length-2],src:e}}},loader:{name:"spritesheetLoader",extension:{type:r.nw.LoadParser,priority:fe.Normal},async testParse(e,t){return".json"===r.P6.path.extname(t.src).toLowerCase()&&!!e.frames},async parse(e,t,n){let i=r.P6.path.dirname(t.src);i&&i.lastIndexOf("/")!==i.length-1&&(i+="/");let s=i+e.meta.image;s=tt(s,t.src);const o=await n.load([s]),a=o[s],l=new vr(a.baseTexture,e,t.src);await l.parse();const c=e?.meta?.related_multi_packs;if(Array.isArray(c)){const e=[];for(const s of c){if("string"!==typeof s)continue;let r=i+s;t.data?.ignoreMultiPack||(r=tt(r,t.src),e.push(n.load({src:r,data:{ignoreMultiPack:!0}})))}const r=await Promise.all(e);l.linkedSheets=r,r.forEach((e=>{e.linkedSheets=[l].concat(l.linkedSheets.filter((t=>t!==e)))}))}return l},unload(e){e.destroy(!0)}}};r.Rw.add(Sr);class wr{constructor(){this.info=[],this.common=[],this.page=[],this.char=[],this.kerning=[],this.distanceField=[]}}class Tr{static test(e){return"string"===typeof e&&e.startsWith("info face=")}static parse(e){const t=e.match(/^[a-z]+\s+.+$/gm),n={info:[],common:[],page:[],char:[],chars:[],kerning:[],kernings:[],distanceField:[]};for(const i in t){const e=t[i].match(/^[a-z]+/gm)[0],r=t[i].match(/[a-zA-Z]+=([^\s"']+|"([^"]*)")/gm),s={};for(const t in r){const e=r[t].split("="),n=e[0],i=e[1].replace(/"/gm,""),o=parseFloat(i),a=isNaN(o)?i:o;s[n]=a}n[e].push(s)}const r=new wr;return n.info.forEach((e=>r.info.push({face:e.face,size:parseInt(e.size,10)}))),n.common.forEach((e=>r.common.push({lineHeight:parseInt(e.lineHeight,10)}))),n.page.forEach((e=>r.page.push({id:parseInt(e.id,10),file:e.file}))),n.char.forEach((e=>r.char.push({id:parseInt(e.id,10),page:parseInt(e.page,10),x:parseInt(e.x,10),y:parseInt(e.y,10),width:parseInt(e.width,10),height:parseInt(e.height,10),xoffset:parseInt(e.xoffset,10),yoffset:parseInt(e.yoffset,10),xadvance:parseInt(e.xadvance,10)}))),n.kerning.forEach((e=>r.kerning.push({first:parseInt(e.first,10),second:parseInt(e.second,10),amount:parseInt(e.amount,10)}))),n.distanceField.forEach((e=>r.distanceField.push({distanceRange:parseInt(e.distanceRange,10),fieldType:e.fieldType}))),r}}class Ar{static test(e){const t=e;return"getElementsByTagName"in t&&t.getElementsByTagName("page").length&&null!==t.getElementsByTagName("info")[0].getAttribute("face")}static parse(e){const t=new wr,n=e.getElementsByTagName("info"),r=e.getElementsByTagName("common"),i=e.getElementsByTagName("page"),s=e.getElementsByTagName("char"),o=e.getElementsByTagName("kerning"),a=e.getElementsByTagName("distanceField");for(let l=0;l"))&&Ar.test(r.Xd.ADAPTER.parseXML(e))}static parse(e){return Ar.parse(r.Xd.ADAPTER.parseXML(e))}}const Ir=[Tr,Ar,Cr];function Rr(e){for(let t=0;t=l-i*o){if(0===_)throw new Error(`[BitmapFont] textureHeight ${l}px is too small (fontFamily: '${d.fontFamily}', fontSize: ${d.fontSize}px, char: '${e}')`);--S,f=null,g=null,m=null,_=0,b=0,y=0;continue}if(y=Math.max(i+t.fontProperties.descent,y),x*o+b>=h){if(0===b)throw new Error(`[BitmapFont] textureWidth ${a}px is too small (fontFamily: '${d.fontFamily}', fontSize: ${d.fontSize}px, char: '${e}')`);--S,_+=y*o,_=Math.ceil(_),b=0,y=0;continue}Pr(f,g,t,b,_,o,d);const w=Mr(t.text);p.char.push({id:w,page:E.length-1,x:b/o,y:_/o,width:x,height:i,xoffset:0,yoffset:0,xadvance:n-(d.dropShadow?d.dropShadowDistance:0)-(d.stroke?d.strokeThickness:0)}),b+=(x+2*s)*o,b=Math.ceil(b)}for(let r=0,S=u.length;r{this.dirty=!0}),this,0,0),this._roundPixels=r.Xd.ROUND_PIXELS,this.dirty=!0,this._resolution=r.Xd.RESOLUTION,this._autoResolution=!0,this._textureCache={}}updateText(){const e=Lr.available[this._fontName],t=this.fontSize,n=t/e.size,i=new r.E9,s=[],o=[],a=[],l=this._text.replace(/(?:\r\n|\r)/g,"\n")||" ",c=Or(l),u=this._maxWidth*e.size/t,d="none"===e.distanceFieldType?Ur:Gr;let h=null,p=0,f=0,g=0,m=-1,b=0,_=0,y=0,v=0;for(let C=0;C0&&i.x>u&&(++_,r.P6.removeItems(s,1+m-_,1+C-m),C=m,m=-1,o.push(b),a.push(s.length>0?s[s.length-1].prevSpaces:0),f=Math.max(f,b),g++,i.x=0,i.y+=e.lineHeight,h=null,v=0)}const E=c[c.length-1];"\r"!==E&&"\n"!==E&&(/(?:\s)/.test(E)&&(p=b),o.push(p),f=Math.max(f,p),a.push(-1));const x=[];for(let r=0;r<=g;r++){let e=0;"right"===this._align?e=f-o[r]:"center"===this._align?e=(f-o[r])/2:"justify"===this._align&&(e=a[r]<0?0:(f-o[r])/a[r]),x.push(e)}const S=s.length,w={},T=[],A=this._activePagesMeshData;d.push(...A);for(let C=0;C6*t)||e.vertices.length<2*kn.BATCHABLE_SIZE)e.vertices=new Float32Array(8*t),e.uvs=new Float32Array(8*t),e.indices=new Uint16Array(6*t);else{const t=e.total,n=e.vertices;for(let e=4*t*2;et[e.mesh.texture.baseTexture.uid])).forEach((e=>{e.mesh.texture=r.xE.EMPTY}));for(const r in t){const e=t[r];e.destroy(),delete t[r]}this._font=null,this._tintColor=null,this._textureCache=null,super.destroy(e)}};let Hr=zr;Hr.styleDefaults={align:"left",tint:16777215,maxWidth:0,letterSpacing:0};const Vr=[".xml",".fnt"],jr={extension:{type:r.nw.LoadParser,priority:fe.Normal},name:"loadBitmapFont",test(e){return Vr.includes(r.P6.path.extname(e).toLowerCase())},async testParse(e){return Tr.test(e)||Cr.test(e)},async parse(e,t,n){const i=Tr.test(e)?Tr.parse(e):Cr.parse(e),{src:s}=t,{page:o}=i,a=[];for(let u=0;ul[e]));return Lr.install(i,c,!0)},async load(e,t){const n=await r.Xd.ADAPTER.fetch(e);return n.text()},unload(e){e.destroy()}};r.Rw.add(jr);const Wr=class extends jn{constructor(){super(...arguments),this._fonts=[],this._overrides=[],this._stylesheet="",this.fontsDirty=!1}static from(e){return new Wr(Object.keys(Wr.defaultOptions).reduce(((t,n)=>({...t,[n]:e[n]})),{}))}cleanFonts(){this._fonts.length>0&&(this._fonts.forEach((e=>{URL.revokeObjectURL(e.src),e.refs--,0===e.refs&&(e.fontFace&&document.fonts.delete(e.fontFace),delete Wr.availableFonts[e.originalUrl])})),this.fontFamily="Arial",this._fonts.length=0,this.styleID++,this.fontsDirty=!0)}loadFont(e,t={}){const{availableFonts:n}=Wr;if(n[e]){const t=n[e];return this._fonts.push(t),t.refs++,this.styleID++,this.fontsDirty=!0,Promise.resolve()}return r.Xd.ADAPTER.fetch(e).then((e=>e.blob())).then((async e=>new Promise(((t,n)=>{const r=URL.createObjectURL(e),i=new FileReader;i.onload=()=>t([r,i.result]),i.onerror=n,i.readAsDataURL(e)})))).then((async([i,s])=>{const o=Object.assign({family:r.P6.path.basename(e,r.P6.path.extname(e)),weight:"normal",style:"normal",src:i,dataSrc:s,refs:1,originalUrl:e,fontFace:null},t);n[e]=o,this._fonts.push(o),this.styleID++;const a=new FontFace(o.family,`url(${o.src})`,{weight:o.weight,style:o.style});o.fontFace=a,await a.load(),document.fonts.add(a),await document.fonts.ready,this.styleID++,this.fontsDirty=!0}))}addOverride(...e){const t=e.filter((e=>!this._overrides.includes(e)));t.length>0&&(this._overrides.push(...t),this.styleID++)}removeOverride(...e){const t=e.filter((e=>this._overrides.includes(e)));t.length>0&&(this._overrides=this._overrides.filter((e=>!t.includes(e))),this.styleID++)}toCSS(e){return[`transform: scale(${e})`,"transform-origin: top left","display: inline-block",`color: ${this.normalizeColor(this.fill)}`,`font-size: ${this.fontSize}px`,`font-family: ${this.fontFamily}`,`font-weight: ${this.fontWeight}`,`font-style: ${this.fontStyle}`,`font-variant: ${this.fontVariant}`,`letter-spacing: ${this.letterSpacing}px`,`text-align: ${this.align}`,`padding: ${this.padding}px`,`white-space: ${this.whiteSpace}`,...this.lineHeight?[`line-height: ${this.lineHeight}px`]:[],...this.wordWrap?["word-wrap: "+(this.breakWords?"break-all":"break-word"),`max-width: ${this.wordWrapWidth}px`]:[],...this.strokeThickness?[`-webkit-text-stroke-width: ${this.strokeThickness}px`,`-webkit-text-stroke-color: ${this.normalizeColor(this.stroke)}`,`text-stroke-width: ${this.strokeThickness}px`,`text-stroke-color: ${this.normalizeColor(this.stroke)}`,"paint-order: stroke"]:[],...this.dropShadow?[this.dropShadowToCSS()]:[],...this._overrides].join(";")}toGlobalCSS(){return this._fonts.reduce(((e,t)=>`${e}\n @font-face {\n font-family: "${t.family}";\n src: url('${t.dataSrc}');\n font-weight: ${t.weight};\n font-style: ${t.style}; \n }`),this._stylesheet)}get stylesheet(){return this._stylesheet}set stylesheet(e){this._stylesheet!==e&&(this._stylesheet=e,this.styleID++)}normalizeColor(e){return Array.isArray(e)&&(e=r.P6.rgb2hex(e)),"number"===typeof e?r.P6.hex2string(e):e}dropShadowToCSS(){let e=this.normalizeColor(this.dropShadowColor);const t=this.dropShadowAlpha,n=Math.round(Math.cos(this.dropShadowAngle)*this.dropShadowDistance),r=Math.round(Math.sin(this.dropShadowAngle)*this.dropShadowDistance);e.startsWith("#")&&t<1&&(e+=(255*t|0).toString(16).padStart(2,"0"));const i=`${n}px ${r}px`;return this.dropShadowBlur>0?`text-shadow: ${i} ${this.dropShadowBlur}px ${e}`:`text-shadow: ${i} ${e}`}reset(){Object.assign(this,Wr.defaultOptions)}onBeforeDraw(){const{fontsDirty:e}=this;return this.fontsDirty=!1,this.isSafari&&this._fonts.length>0&&e?new Promise((e=>setTimeout(e,100))):Promise.resolve()}get isSafari(){const{userAgent:e}=r.Xd.ADAPTER.getNavigator();return/^((?!chrome|android).)*safari/i.test(e)}set fillGradientStops(e){console.warn("[HTMLTextStyle] fillGradientStops is not supported by HTMLText")}get fillGradientStops(){return super.fillGradientStops}set fillGradientType(e){console.warn("[HTMLTextStyle] fillGradientType is not supported by HTMLText")}get fillGradientType(){return super.fillGradientType}set miterLimit(e){console.warn("[HTMLTextStyle] miterLimit is not supported by HTMLText")}get miterLimit(){return super.miterLimit}set trim(e){console.warn("[HTMLTextStyle] trim is not supported by HTMLText")}get trim(){return super.trim}set textBaseline(e){console.warn("[HTMLTextStyle] textBaseline is not supported by HTMLText")}get textBaseline(){return super.textBaseline}set leading(e){console.warn("[HTMLTextStyle] leading is not supported by HTMLText")}get leading(){return super.leading}set lineJoin(e){console.warn("[HTMLTextStyle] lineJoin is not supported by HTMLText")}get lineJoin(){return super.lineJoin}};let qr=Wr;qr.availableFonts={},qr.defaultOptions={align:"left",breakWords:!1,dropShadow:!1,dropShadowAlpha:1,dropShadowAngle:Math.PI/6,dropShadowBlur:0,dropShadowColor:"black",dropShadowDistance:5,fill:"black",fontFamily:"Arial",fontSize:26,fontStyle:"normal",fontVariant:"normal",fontWeight:"normal",letterSpacing:0,lineHeight:0,padding:0,stroke:"black",strokeThickness:0,whiteSpace:"normal",wordWrap:!1,wordWrapWidth:100};const Xr=class extends a{constructor(e="",t={}){super(r.xE.EMPTY),this._text=null,this._style=null,this._autoResolution=!0,this._loading=!1,this.localStyleID=-1,this.dirty=!1,this.ownsStyle=!1;const n=new Image,i=r.xE.from(n,{scaleMode:r.Xd.SCALE_MODE,resourceOptions:{autoLoad:!1}});i.orig=new r.Ae,i.trim=new r.Ae,this.texture=i;const s="http://www.w3.org/2000/svg",o="http://www.w3.org/1999/xhtml",a=document.createElementNS(s,"svg"),l=document.createElementNS(s,"foreignObject"),c=document.createElementNS(o,"div"),u=document.createElementNS(o,"style");l.setAttribute("width","10000"),l.setAttribute("height","10000"),l.style.overflow="hidden",a.appendChild(l),this.maxWidth=Xr.defaultMaxWidth,this.maxHeight=Xr.defaultMaxHeight,this._domElement=c,this._styleElement=u,this._svgRoot=a,this._foreignObject=l,this._foreignObject.appendChild(u),this._foreignObject.appendChild(c),this._image=n,this._loadImage=new Image,this._autoResolution=Xr.defaultAutoResolution,this._resolution=Xr.defaultResolution??r.Xd.RESOLUTION,this.text=e,this.style=t}measureText(e){const{text:t,style:n,resolution:r}=Object.assign({text:this._text,style:this._style,resolution:this._resolution},e);Object.assign(this._domElement,{innerHTML:t,style:n.toCSS(r)}),this._styleElement.textContent=n.toGlobalCSS(),document.body.appendChild(this._svgRoot);const i=this._domElement.getBoundingClientRect();this._svgRoot.remove();const s=Math.min(this.maxWidth,Math.ceil(i.width)),o=Math.min(this.maxHeight,Math.ceil(i.height));return this._svgRoot.setAttribute("width",s.toString()),this._svgRoot.setAttribute("height",o.toString()),t!==this._text&&(this._domElement.innerHTML=this._text),n!==this._style&&(Object.assign(this._domElement,{style:this._style?.toCSS(r)}),this._styleElement.textContent=this._style?.toGlobalCSS()),{width:s+2*n.padding,height:o+2*n.padding}}async updateText(e=!0){const{style:t,_image:n,_loadImage:r}=this;if(this.localStyleID!==t.styleID&&(this.dirty=!0,this.localStyleID=t.styleID),!this.dirty&&e)return;const{width:i,height:s}=this.measureText();n.width=r.width=Math.ceil(Math.max(1,i)),n.height=r.height=Math.ceil(Math.max(1,s)),this._loading||(this._loading=!0,await new Promise((e=>{r.onload=async()=>{await t.onBeforeDraw(),this._loading=!1,n.src=r.src,r.onload=null,r.src="",this.updateTexture(),e()};const i=(new XMLSerializer).serializeToString(this._svgRoot);r.src=`data:image/svg+xml;charset=utf8,${encodeURIComponent(i)}`})))}get source(){return this._image}updateTexture(){const{style:e,texture:t,_image:n,resolution:r}=this,{padding:i}=e,{baseTexture:s}=t;t.trim.width=t._frame.width=n.width/r,t.trim.height=t._frame.height=n.height/r,t.trim.x=-i,t.trim.y=-i,t.orig.width=t._frame.width-2*i,t.orig.height=t._frame.height-2*i,this._onTextureUpdate(),s.setRealSize(n.width,n.height,r),this.dirty=!1}_render(e){this._autoResolution&&this._resolution!==e.resolution&&(this._resolution=e.resolution,this.dirty=!0),this.updateText(!0),super._render(e)}_renderCanvas(e){this._autoResolution&&this._resolution!==e.resolution&&(this._resolution=e.resolution,this.dirty=!0),this.updateText(!0),super._renderCanvas(e)}getLocalBounds(e){return this.updateText(!0),super.getLocalBounds(e)}_calculateBounds(){this.updateText(!0),this.calculateVertices(),this._bounds.addQuad(this.vertexData)}_onStyleChange(){this.dirty=!0}destroy(e){"boolean"===typeof e&&(e={children:e}),e=Object.assign({},Xr.defaultDestroyOptions,e),super.destroy(e);const t=null;this.ownsStyle&&this._style?.cleanFonts(),this._style=t,this._svgRoot?.remove(),this._svgRoot=t,this._domElement?.remove(),this._domElement=t,this._foreignObject?.remove(),this._foreignObject=t,this._styleElement?.remove(),this._styleElement=t,this._loadImage.src="",this._loadImage.onload=null,this._loadImage=t,this._image.src="",this._image=t}get width(){return this.updateText(!0),Math.abs(this.scale.x)*this._image.width/this.resolution}set width(e){this.updateText(!0);const t=r.P6.sign(this.scale.x)||1;this.scale.x=t*e/this._image.width/this.resolution,this._width=e}get height(){return this.updateText(!0),Math.abs(this.scale.y)*this._image.height/this.resolution}set height(e){this.updateText(!0);const t=r.P6.sign(this.scale.y)||1;this.scale.y=t*e/this._image.height/this.resolution,this._height=e}get style(){return this._style}set style(e){this._style!==e&&(e=e||{},e instanceof qr?(this.ownsStyle=!1,this._style=e):e instanceof jn?(console.warn("[HTMLText] Cloning TextStyle, if this is not what you want, use HTMLTextStyle"),this.ownsStyle=!0,this._style=qr.from(e)):(this.ownsStyle=!0,this._style=new qr(e)),this.localStyleID=-1,this.dirty=!0)}get text(){return this._text}set text(e){e=String(""===e||null===e||void 0===e?" ":e),e=this.sanitiseText(e),this._text!==e&&(this._text=e,this.dirty=!0)}get resolution(){return this._resolution}set resolution(e){this._autoResolution=!1,this._resolution!==e&&(this._resolution=e,this.dirty=!0)}sanitiseText(e){return e.replace(/

    /gi,"
    ").replace(/
    /gi,"
    ").replace(/ /gi," ")}};let Yr=Xr;Yr.defaultDestroyOptions={texture:!0,children:!1,baseTexture:!0},Yr.defaultMaxWidth=2024,Yr.defaultMaxHeight=2024,Yr.defaultAutoResolution=!0},7929:function(e,t,n){"use strict";n.d(t,{Z:function(){return m}});var r,i=n(821),s=function(){return s=Object.assign||function(e){for(var t,n=1,r=arguments.length;nt.MAX_VERSION)throw new RangeError("Version value out of range");if(s<-1||s>7)throw new RangeError("Mask value out of range");this.size=4*e+17;for(var o=[],a=0;a7)throw new RangeError("Invalid value");var u,d;for(u=o;;u++){var h=8*t.getNumDataCodewords(u,r),p=s.getTotalBits(e,u);if(p<=h){d=p;break}if(u>=a)throw new RangeError("Data too long")}for(var f=0,g=[t.Ecc.MEDIUM,t.Ecc.QUARTILE,t.Ecc.HIGH];f>>3]|=e<<7-(7&t)})),new t(u,r,A,l)},t.prototype.getModule=function(e,t){return 0<=e&&e>>9);var o=21522^(t<<10|n);i(o>>>15==0);for(s=0;s<=5;s++)this.setFunctionModule(8,s,r(o,s));this.setFunctionModule(8,7,r(o,6)),this.setFunctionModule(8,8,r(o,7)),this.setFunctionModule(7,8,r(o,8));for(s=9;s<15;s++)this.setFunctionModule(14-s,8,r(o,s));for(s=0;s<8;s++)this.setFunctionModule(this.size-1-s,8,r(o,s));for(s=8;s<15;s++)this.setFunctionModule(8,this.size-15+s,r(o,s));this.setFunctionModule(8,this.size-8,!0)},t.prototype.drawVersion=function(){if(!(this.version<7)){for(var e=this.version,t=0;t<12;t++)e=e<<1^7973*(e>>>11);var n=this.version<<12|e;i(n>>>18==0);for(t=0;t<18;t++){var s=r(n,t),o=this.size-11+t%3,a=Math.floor(t/3);this.setFunctionModule(o,a,s),this.setFunctionModule(a,o,s)}}},t.prototype.drawFinderPattern=function(e,t){for(var n=-4;n<=4;n++)for(var r=-4;r<=4;r++){var i=Math.max(Math.abs(r),Math.abs(n)),s=e+r,o=t+n;0<=s&&s=l)&&m.push(t[e])}))};for(h=0;h=1;s-=2){6==s&&(s=5);for(var o=0;o>>3],7-(7&n)),n++)}}i(n==8*e.length)},t.prototype.applyMask=function(e){if(e<0||e>7)throw new RangeError("Mask value out of range");for(var t=0;t5&&e++):(this.finderPenaltyAddHistory(s,o),r||(e+=this.finderPenaltyCountPatterns(o)*t.PENALTY_N3),r=this.modules[n][a],s=1);e+=this.finderPenaltyTerminateAndCount(r,s,o)*t.PENALTY_N3}for(a=0;a5&&e++):(this.finderPenaltyAddHistory(l,o),r||(e+=this.finderPenaltyCountPatterns(o)*t.PENALTY_N3),r=this.modules[n][a],l=1);e+=this.finderPenaltyTerminateAndCount(r,l,o)*t.PENALTY_N3}for(n=0;nt.MAX_VERSION)throw new RangeError("Version number out of range");var n=(16*e+128)*e+64;if(e>=2){var r=Math.floor(e/7)+2;n-=(25*r-10)*r-55,e>=7&&(n-=36)}return i(208<=n&&n<=29648),n},t.getNumDataCodewords=function(e,n){return Math.floor(t.getNumRawDataModules(e)/8)-t.ECC_CODEWORDS_PER_BLOCK[n.ordinal][e]*t.NUM_ERROR_CORRECTION_BLOCKS[n.ordinal][e]},t.reedSolomonComputeDivisor=function(e){if(e<1||e>255)throw new RangeError("Degree out of range");for(var n=[],r=0;r>>8!=0||t>>>8!=0)throw new RangeError("Byte out of range");for(var n=0,r=7;r>=0;r--)n=n<<1^285*(n>>>7),n^=(t>>>r&1)*e;return i(n>>>8==0),n},t.prototype.finderPenaltyCountPatterns=function(e){var t=e[1];i(t<=3*this.size);var n=t>0&&e[2]==t&&e[3]==3*t&&e[4]==t&&e[5]==t;return(n&&e[0]>=4*t&&e[6]>=t?1:0)+(n&&e[6]>=4*t&&e[0]>=t?1:0)},t.prototype.finderPenaltyTerminateAndCount=function(e,t,n){return e&&(this.finderPenaltyAddHistory(t,n),t=0),t+=this.size,this.finderPenaltyAddHistory(t,n),this.finderPenaltyCountPatterns(n)},t.prototype.finderPenaltyAddHistory=function(e,t){0==t[0]&&(e+=this.size),t.pop(),t.unshift(e)},t.MIN_VERSION=1,t.MAX_VERSION=40,t.PENALTY_N1=3,t.PENALTY_N2=3,t.PENALTY_N3=40,t.PENALTY_N4=10,t.ECC_CODEWORDS_PER_BLOCK=[[-1,7,10,15,20,26,18,20,24,30,18,20,24,26,30,22,24,28,30,28,28,28,28,30,30,26,28,30,30,30,30,30,30,30,30,30,30,30,30,30,30],[-1,10,16,26,18,24,16,18,22,22,26,30,22,22,24,24,28,28,26,26,26,26,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28],[-1,13,22,18,26,18,24,18,22,20,24,28,26,24,20,30,24,28,28,26,30,28,30,30,30,30,28,30,30,30,30,30,30,30,30,30,30,30,30,30,30],[-1,17,28,22,16,22,28,26,26,24,28,24,28,22,24,24,30,28,28,26,28,30,24,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30]],t.NUM_ERROR_CORRECTION_BLOCKS=[[-1,1,1,1,1,1,2,2,2,2,4,4,4,4,4,6,6,6,6,7,8,8,9,9,10,12,12,12,13,14,15,16,17,18,19,19,20,21,22,24,25],[-1,1,1,1,2,2,4,4,4,5,5,5,8,9,9,10,10,11,13,14,16,17,17,18,20,21,23,25,26,28,29,31,33,35,37,38,40,43,45,47,49],[-1,1,1,2,2,4,4,6,6,8,8,8,10,12,16,12,17,16,18,21,20,23,23,25,27,29,34,34,35,38,40,43,45,48,51,53,56,59,62,65,68],[-1,1,1,2,4,4,4,5,6,8,8,11,11,16,16,18,16,19,21,25,25,25,34,30,32,35,37,40,42,45,48,51,54,57,60,63,66,70,74,77,81]],t}();function n(e,t,n){if(t<0||t>31||e>>>t!=0)throw new RangeError("Value out of range");for(var r=t-1;r>=0;r--)n.push(e>>>r&1)}function r(e,t){return 0!=(e>>>t&1)}function i(e){if(!e)throw new Error("Assertion error")}e.QrCode=t;var s=function(){function e(e,t,n){if(this.mode=e,this.numChars=t,this.bitData=n,t<0)throw new RangeError("Invalid argument");this.bitData=n.slice()}return e.makeBytes=function(t){for(var r=[],i=0,s=t;i=1<-1}}}),f=(0,i.defineComponent)({name:"QRCodeSvg",props:h,setup:function(e){var t=(0,i.ref)(0),n=(0,i.ref)(""),r=function(){var r=e.value,i=e.level,s=e.margin,a=o.QrCode.encodeText(r,l[i]).getModules();t.value=a.length+2*s,n.value=d(a,s)};return r(),(0,i.onUpdated)(r),function(){return(0,i.h)("svg",{width:e.size,height:e.size,"shape-rendering":"crispEdges",xmlns:"http://www.w3.org/2000/svg",viewBox:"0 0 ".concat(t.value," ").concat(t.value)},[(0,i.h)("path",{fill:e.background,d:"M0,0 h".concat(t.value,"v").concat(t.value,"H0z")}),(0,i.h)("path",{fill:e.foreground,d:n.value})])}}}),g=(0,i.defineComponent)({name:"QRCodeCanvas",props:h,setup:function(e){var t=(0,i.ref)(null),n=function(){var n=e.value,r=e.level,i=e.size,s=e.margin,a=e.background,u=e.foreground,h=t.value;if(h){var p=h.getContext("2d");if(p){var f=o.QrCode.encodeText(n,l[r]).getModules(),g=f.length+2*s,m=window.devicePixelRatio||1,b=i/g*m;h.height=h.width=i*m,p.scale(b,b),p.fillStyle=a,p.fillRect(0,0,g,g),p.fillStyle=u,c?p.fill(new Path2D(d(f,s))):f.forEach((function(e,t){e.forEach((function(e,n){e&&p.fillRect(n+s,t+s,1,1)}))}))}}};return(0,i.onMounted)(n),(0,i.onUpdated)(n),function(){return(0,i.h)("canvas",{ref:t,style:{width:"".concat(e.size,"px"),height:"".concat(e.size,"px")}})}}}),m=(0,i.defineComponent)({name:"Qrcode",render:function(){var e=this.$props,t=e.renderAs,n=e.value,r=e.size,s=e.margin,o=e.level,l=e.background,c=e.foreground,d=r>>>0,h=s>>>0,p=u(o)?o:a;return(0,i.h)("svg"===t?f:g,{value:n,size:d,margin:h,level:p,background:l,foreground:c})},props:p})},2005:function(e,t,n){"use strict";n.d(t,{x1:function(){return m}});var r=n(821),i=n(5750);const s={data:{type:Object,required:!0},options:{type:Object,default:()=>({})},plugins:{type:Array,default:()=>[]},datasetIdKey:{type:String,default:"label"},updateMode:{type:String,default:void 0}},o={type:{type:String,required:!0},...s},a="2"===r.version[0]?(e,t)=>Object.assign(e,{attrs:t}):(e,t)=>Object.assign(e,t);function l(e){return(0,r.isProxy)(e)?(0,r.toRaw)(e):e}function c(e){let t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:e;return(0,r.isProxy)(t)?new Proxy(e,{}):e}function u(e,t){const n=e.options;n&&t&&Object.assign(n,t)}function d(e,t){e.labels=t}function h(e,t,n){const r=[];e.datasets=t.map((t=>{const i=e.datasets.find((e=>e[n]===t[n]));return i&&t.data&&!r.includes(i)?(r.push(i),Object.assign(i,t),i):{...t}}))}function p(e,t){const n={labels:[],datasets:[]};return d(n,e.labels),h(n,e.datasets,t),n}const f=(0,r.defineComponent)({props:o,setup(e,t){let{expose:n}=t;const s=(0,r.ref)(null),o=(0,r.shallowRef)(null);n({chart:o});const a=()=>{if(!s.value)return;const{type:t,data:n,options:r,plugins:a,datasetIdKey:l}=e,u=p(n,l),d=c(u,n);o.value=new i.kL(s.value,{type:t,data:d,options:{...r},plugins:a})},f=()=>{const e=(0,r.toRaw)(o.value);e&&(e.destroy(),o.value=null)},g=t=>{t.update(e.updateMode)};return(0,r.onMounted)(a),(0,r.onBeforeUnmount)(f),(0,r.watch)([()=>e.options,()=>e.data],((t,n)=>{let[i,s]=t,[a,c]=n;const p=(0,r.toRaw)(o.value);if(!p)return;let f=!1;if(i){const e=l(i),t=l(a);e&&e!==t&&(u(p,e),f=!0)}if(s){const t=l(s.labels),n=l(c.labels),r=l(s.datasets),i=l(c.datasets);t!==n&&(d(p.config.data,t),f=!0),r&&r!==i&&(h(p.config.data,r,e.datasetIdKey),f=!0)}f&&g(p)}),{deep:!0}),()=>(0,r.h)("canvas",{ref:s})}});function g(e,t){return i.kL.register(t),(0,r.defineComponent)({props:s,setup(t,n){let{expose:i}=n;const s=(0,r.shallowRef)(null),o=e=>{s.value=e?.chart};return i({chart:s}),()=>(0,r.h)(f,a({ref:o},{type:e,...t}))}})}const m=g("line",i.ST)},2201:function(e,t,n){"use strict";n.d(t,{PO:function(){return B},p7:function(){return et}});var r=n(821); -/*! - * vue-router v4.2.1 - * (c) 2023 Eduardo San Martin Morote - * @license MIT - */const i="undefined"!==typeof window;function s(e){return e.__esModule||"Module"===e[Symbol.toStringTag]}const o=Object.assign;function a(e,t){const n={};for(const r in t){const i=t[r];n[r]=c(i)?i.map(e):e(i)}return n}const l=()=>{},c=Array.isArray;const u=/\/$/,d=e=>e.replace(u,"");function h(e,t,n="/"){let r,i={},s="",o="";const a=t.indexOf("#");let l=t.indexOf("?");return a=0&&(l=-1),l>-1&&(r=t.slice(0,l),s=t.slice(l+1,a>-1?a:t.length),i=e(s)),a>-1&&(r=r||t.slice(0,a),o=t.slice(a,t.length)),r=v(null!=r?r:t,n),{fullPath:r+(s&&"?")+s+o,path:r,query:i,hash:o}}function p(e,t){const n=t.query?e(t.query):"";return t.path+(n&&"?")+n+(t.hash||"")}function f(e,t){return t&&e.toLowerCase().startsWith(t.toLowerCase())?e.slice(t.length)||"/":e}function g(e,t,n){const r=t.matched.length-1,i=n.matched.length-1;return r>-1&&r===i&&m(t.matched[r],n.matched[i])&&b(t.params,n.params)&&e(t.query)===e(n.query)&&t.hash===n.hash}function m(e,t){return(e.aliasOf||e)===(t.aliasOf||t)}function b(e,t){if(Object.keys(e).length!==Object.keys(t).length)return!1;for(const n in e)if(!_(e[n],t[n]))return!1;return!0}function _(e,t){return c(e)?y(e,t):c(t)?y(t,e):e===t}function y(e,t){return c(t)?e.length===t.length&&e.every(((e,n)=>e===t[n])):1===e.length&&e[0]===t}function v(e,t){if(e.startsWith("/"))return e;if(!e)return t;const n=t.split("/"),r=e.split("/"),i=r[r.length-1];".."!==i&&"."!==i||r.push("");let s,o,a=n.length-1;for(s=0;s1&&a--}return n.slice(0,a).join("/")+"/"+r.slice(s-(s===r.length?1:0)).join("/")}var E,x;(function(e){e["pop"]="pop",e["push"]="push"})(E||(E={})),function(e){e["back"]="back",e["forward"]="forward",e["unknown"]=""}(x||(x={}));function S(e){if(!e)if(i){const t=document.querySelector("base");e=t&&t.getAttribute("href")||"/",e=e.replace(/^\w+:\/\/[^\/]+/,"")}else e="/";return"/"!==e[0]&&"#"!==e[0]&&(e="/"+e),d(e)}const w=/^[^#]+#/;function T(e,t){return e.replace(w,"#")+t}function A(e,t){const n=document.documentElement.getBoundingClientRect(),r=e.getBoundingClientRect();return{behavior:t.behavior,left:r.left-n.left-(t.left||0),top:r.top-n.top-(t.top||0)}}const C=()=>({left:window.pageXOffset,top:window.pageYOffset});function I(e){let t;if("el"in e){const n=e.el,r="string"===typeof n&&n.startsWith("#");0;const i="string"===typeof n?r?document.getElementById(n.slice(1)):document.querySelector(n):n;if(!i)return;t=A(i,e)}else t=e;"scrollBehavior"in document.documentElement.style?window.scrollTo(t):window.scrollTo(null!=t.left?t.left:window.pageXOffset,null!=t.top?t.top:window.pageYOffset)}function R(e,t){const n=history.state?history.state.position-t:-1;return n+e}const k=new Map;function P(e,t){k.set(e,t)}function O(e){const t=k.get(e);return k.delete(e),t}let N=()=>location.protocol+"//"+location.host;function M(e,t){const{pathname:n,search:r,hash:i}=t,s=e.indexOf("#");if(s>-1){let t=i.includes(e.slice(s))?e.slice(s).length:1,n=i.slice(t);return"/"!==n[0]&&(n="/"+n),f(n,"")}const o=f(n,e);return o+r+i}function D(e,t,n,r){let i=[],s=[],a=null;const l=({state:s})=>{const o=M(e,location),l=n.value,c=t.value;let u=0;if(s){if(n.value=o,t.value=s,a&&a===l)return void(a=null);u=c?s.position-c.position:0}else r(o);i.forEach((e=>{e(n.value,l,{delta:u,type:E.pop,direction:u?u>0?x.forward:x.back:x.unknown})}))};function c(){a=n.value}function u(e){i.push(e);const t=()=>{const t=i.indexOf(e);t>-1&&i.splice(t,1)};return s.push(t),t}function d(){const{history:e}=window;e.state&&e.replaceState(o({},e.state,{scroll:C()}),"")}function h(){for(const e of s)e();s=[],window.removeEventListener("popstate",l),window.removeEventListener("beforeunload",d)}return window.addEventListener("popstate",l),window.addEventListener("beforeunload",d,{passive:!0}),{pauseListeners:c,listen:u,destroy:h}}function L(e,t,n,r=!1,i=!1){return{back:e,current:t,forward:n,replaced:r,position:window.history.length,scroll:i?C():null}}function F(e){const{history:t,location:n}=window,r={value:M(e,n)},i={value:t.state};function s(r,s,o){const a=e.indexOf("#"),l=a>-1?(n.host&&document.querySelector("base")?e:e.slice(a))+r:N()+e+r;try{t[o?"replaceState":"pushState"](s,"",l),i.value=s}catch(c){console.error(c),n[o?"replace":"assign"](l)}}function a(e,n){const a=o({},t.state,L(i.value.back,e,i.value.forward,!0),n,{position:i.value.position});s(e,a,!0),r.value=e}function l(e,n){const a=o({},i.value,t.state,{forward:e,scroll:C()});s(a.current,a,!0);const l=o({},L(r.value,e,null),{position:a.position+1},n);s(e,l,!1),r.value=e}return i.value||s(r.value,{back:null,current:r.value,forward:null,position:t.length-1,replaced:!0,scroll:null},!0),{location:r,state:i,push:l,replace:a}}function B(e){e=S(e);const t=F(e),n=D(e,t.state,t.location,t.replace);function r(e,t=!0){t||n.pauseListeners(),history.go(e)}const i=o({location:"",base:e,go:r,createHref:T.bind(null,e)},t,n);return Object.defineProperty(i,"location",{enumerable:!0,get:()=>t.location.value}),Object.defineProperty(i,"state",{enumerable:!0,get:()=>t.state.value}),i}function U(e){return"string"===typeof e||e&&"object"===typeof e}function G(e){return"string"===typeof e||"symbol"===typeof e}const $={path:"/",name:void 0,params:{},query:{},hash:"",fullPath:"/",matched:[],meta:{},redirectedFrom:void 0},z=Symbol("");var H;(function(e){e[e["aborted"]=4]="aborted",e[e["cancelled"]=8]="cancelled",e[e["duplicated"]=16]="duplicated"})(H||(H={}));function V(e,t){return o(new Error,{type:e,[z]:!0},t)}function j(e,t){return e instanceof Error&&z in e&&(null==t||!!(e.type&t))}const W="[^/]+?",q={sensitive:!1,strict:!1,start:!0,end:!0},X=/[.+*?^${}()[\]/\\]/g;function Y(e,t){const n=o({},q,t),r=[];let i=n.start?"^":"";const s=[];for(const o of e){const e=o.length?[]:[90];n.strict&&!o.length&&(i+="/");for(let t=0;tt.length?1===t.length&&80===t[0]?1:-1:0}function Z(e,t){let n=0;const r=e.score,i=t.score;while(n0&&t[t.length-1]<0}const J={type:0,value:""},ee=/[a-zA-Z0-9_]/;function te(e){if(!e)return[[]];if("/"===e)return[[J]];if(!e.startsWith("/"))throw new Error(`Invalid path "${e}"`);function t(e){throw new Error(`ERR (${n})/"${c}": ${e}`)}let n=0,r=n;const i=[];let s;function o(){s&&i.push(s),s=[]}let a,l=0,c="",u="";function d(){c&&(0===n?s.push({type:0,value:c}):1===n||2===n||3===n?(s.length>1&&("*"===a||"+"===a)&&t(`A repeatable param (${c}) must be alone in its segment. eg: '/:ids+.`),s.push({type:1,value:c,regexp:u,repeatable:"*"===a||"+"===a,optional:"*"===a||"?"===a})):t("Invalid state to consume buffer"),c="")}function h(){c+=a}while(l{a(f)}:l}function a(e){if(G(e)){const t=r.get(e);t&&(r.delete(e),n.splice(n.indexOf(t),1),t.children.forEach(a),t.alias.forEach(a))}else{const t=n.indexOf(e);t>-1&&(n.splice(t,1),e.record.name&&r.delete(e.record.name),e.children.forEach(a),e.alias.forEach(a))}}function c(){return n}function u(e){let t=0;while(t=0&&(e.record.path!==n[t].record.path||!ue(e,n[t])))t++;n.splice(t,0,e),e.record.name&&!ae(e)&&r.set(e.record.name,e)}function d(e,t){let i,s,a,l={};if("name"in e&&e.name){if(i=r.get(e.name),!i)throw V(1,{location:e});0,a=i.record.name,l=o(ie(t.params,i.keys.filter((e=>!e.optional)).map((e=>e.name))),e.params&&ie(e.params,i.keys.map((e=>e.name)))),s=i.stringify(l)}else if("path"in e)s=e.path,i=n.find((e=>e.re.test(s))),i&&(l=i.parse(s),a=i.record.name);else{if(i=t.name?r.get(t.name):n.find((e=>e.re.test(t.path))),!i)throw V(1,{location:e,currentLocation:t});a=i.record.name,l=o({},t.params,e.params),s=i.stringify(l)}const c=[];let u=i;while(u)c.unshift(u.record),u=u.parent;return{name:a,path:s,params:l,matched:c,meta:le(c)}}return t=ce({strict:!1,end:!0,sensitive:!1},t),e.forEach((e=>s(e))),{addRoute:s,resolve:d,removeRoute:a,getRoutes:c,getRecordMatcher:i}}function ie(e,t){const n={};for(const r of t)r in e&&(n[r]=e[r]);return n}function se(e){return{path:e.path,redirect:e.redirect,name:e.name,meta:e.meta||{},aliasOf:void 0,beforeEnter:e.beforeEnter,props:oe(e),children:e.children||[],instances:{},leaveGuards:new Set,updateGuards:new Set,enterCallbacks:{},components:"components"in e?e.components||null:e.component&&{default:e.component}}}function oe(e){const t={},n=e.props||!1;if("component"in e)t.default=n;else for(const r in e.components)t[r]="boolean"===typeof n?n:n[r];return t}function ae(e){while(e){if(e.record.aliasOf)return!0;e=e.parent}return!1}function le(e){return e.reduce(((e,t)=>o(e,t.meta)),{})}function ce(e,t){const n={};for(const r in e)n[r]=r in t?t[r]:e[r];return n}function ue(e,t){return t.children.some((t=>t===e||ue(e,t)))}const de=/#/g,he=/&/g,pe=/\//g,fe=/=/g,ge=/\?/g,me=/\+/g,be=/%5B/g,_e=/%5D/g,ye=/%5E/g,ve=/%60/g,Ee=/%7B/g,xe=/%7C/g,Se=/%7D/g,we=/%20/g;function Te(e){return encodeURI(""+e).replace(xe,"|").replace(be,"[").replace(_e,"]")}function Ae(e){return Te(e).replace(Ee,"{").replace(Se,"}").replace(ye,"^")}function Ce(e){return Te(e).replace(me,"%2B").replace(we,"+").replace(de,"%23").replace(he,"%26").replace(ve,"`").replace(Ee,"{").replace(Se,"}").replace(ye,"^")}function Ie(e){return Ce(e).replace(fe,"%3D")}function Re(e){return Te(e).replace(de,"%23").replace(ge,"%3F")}function ke(e){return null==e?"":Re(e).replace(pe,"%2F")}function Pe(e){try{return decodeURIComponent(""+e)}catch(t){}return""+e}function Oe(e){const t={};if(""===e||"?"===e)return t;const n="?"===e[0],r=(n?e.slice(1):e).split("&");for(let i=0;ie&&Ce(e))):[r&&Ce(r)];i.forEach((e=>{void 0!==e&&(t+=(t.length?"&":"")+n,null!=e&&(t+="="+e))}))}return t}function Me(e){const t={};for(const n in e){const r=e[n];void 0!==r&&(t[n]=c(r)?r.map((e=>null==e?null:""+e)):null==r?r:""+r)}return t}const De=Symbol(""),Le=Symbol(""),Fe=Symbol(""),Be=Symbol(""),Ue=Symbol("");function Ge(){let e=[];function t(t){return e.push(t),()=>{const n=e.indexOf(t);n>-1&&e.splice(n,1)}}function n(){e=[]}return{add:t,list:()=>e,reset:n}}function $e(e,t,n,r,i){const s=r&&(r.enterCallbacks[i]=r.enterCallbacks[i]||[]);return()=>new Promise(((o,a)=>{const l=e=>{!1===e?a(V(4,{from:n,to:t})):e instanceof Error?a(e):U(e)?a(V(2,{from:t,to:e})):(s&&r.enterCallbacks[i]===s&&"function"===typeof e&&s.push(e),o())},c=e.call(r&&r.instances[i],t,n,l);let u=Promise.resolve(c);e.length<3&&(u=u.then(l)),u.catch((e=>a(e)))}))}function ze(e,t,n,r){const i=[];for(const o of e){0;for(const e in o.components){let a=o.components[e];if("beforeRouteEnter"===t||o.instances[e])if(He(a)){const s=a.__vccOpts||a,l=s[t];l&&i.push($e(l,n,r,o,e))}else{let l=a();0,i.push((()=>l.then((i=>{if(!i)return Promise.reject(new Error(`Couldn't resolve component "${e}" at "${o.path}"`));const a=s(i)?i.default:i;o.components[e]=a;const l=a.__vccOpts||a,c=l[t];return c&&$e(c,n,r,o,e)()}))))}}}return i}function He(e){return"object"===typeof e||"displayName"in e||"props"in e||"__vccOpts"in e}function Ve(e){const t=(0,r.inject)(Fe),n=(0,r.inject)(Be),i=(0,r.computed)((()=>t.resolve((0,r.unref)(e.to)))),s=(0,r.computed)((()=>{const{matched:e}=i.value,{length:t}=e,r=e[t-1],s=n.matched;if(!r||!s.length)return-1;const o=s.findIndex(m.bind(null,r));if(o>-1)return o;const a=Ye(e[t-2]);return t>1&&Ye(r)===a&&s[s.length-1].path!==a?s.findIndex(m.bind(null,e[t-2])):o})),o=(0,r.computed)((()=>s.value>-1&&Xe(n.params,i.value.params))),a=(0,r.computed)((()=>s.value>-1&&s.value===n.matched.length-1&&b(n.params,i.value.params)));function c(n={}){return qe(n)?t[(0,r.unref)(e.replace)?"replace":"push"]((0,r.unref)(e.to)).catch(l):Promise.resolve()}return{route:i,href:(0,r.computed)((()=>i.value.href)),isActive:o,isExactActive:a,navigate:c}}const je=(0,r.defineComponent)({name:"RouterLink",compatConfig:{MODE:3},props:{to:{type:[String,Object],required:!0},replace:Boolean,activeClass:String,exactActiveClass:String,custom:Boolean,ariaCurrentValue:{type:String,default:"page"}},useLink:Ve,setup(e,{slots:t}){const n=(0,r.reactive)(Ve(e)),{options:i}=(0,r.inject)(Fe),s=(0,r.computed)((()=>({[Ke(e.activeClass,i.linkActiveClass,"router-link-active")]:n.isActive,[Ke(e.exactActiveClass,i.linkExactActiveClass,"router-link-exact-active")]:n.isExactActive})));return()=>{const i=t.default&&t.default(n);return e.custom?i:(0,r.h)("a",{"aria-current":n.isExactActive?e.ariaCurrentValue:null,href:n.href,onClick:n.navigate,class:s.value},i)}}}),We=je;function qe(e){if(!(e.metaKey||e.altKey||e.ctrlKey||e.shiftKey)&&!e.defaultPrevented&&(void 0===e.button||0===e.button)){if(e.currentTarget&&e.currentTarget.getAttribute){const t=e.currentTarget.getAttribute("target");if(/\b_blank\b/i.test(t))return}return e.preventDefault&&e.preventDefault(),!0}}function Xe(e,t){for(const n in t){const r=t[n],i=e[n];if("string"===typeof r){if(r!==i)return!1}else if(!c(i)||i.length!==r.length||r.some(((e,t)=>e!==i[t])))return!1}return!0}function Ye(e){return e?e.aliasOf?e.aliasOf.path:e.path:""}const Ke=(e,t,n)=>null!=e?e:null!=t?t:n,Ze=(0,r.defineComponent)({name:"RouterView",inheritAttrs:!1,props:{name:{type:String,default:"default"},route:Object},compatConfig:{MODE:3},setup(e,{attrs:t,slots:n}){const i=(0,r.inject)(Ue),s=(0,r.computed)((()=>e.route||i.value)),a=(0,r.inject)(Le,0),l=(0,r.computed)((()=>{let e=(0,r.unref)(a);const{matched:t}=s.value;let n;while((n=t[e])&&!n.components)e++;return e})),c=(0,r.computed)((()=>s.value.matched[l.value]));(0,r.provide)(Le,(0,r.computed)((()=>l.value+1))),(0,r.provide)(De,c),(0,r.provide)(Ue,s);const u=(0,r.ref)();return(0,r.watch)((()=>[u.value,c.value,e.name]),(([e,t,n],[r,i,s])=>{t&&(t.instances[n]=e,i&&i!==t&&e&&e===r&&(t.leaveGuards.size||(t.leaveGuards=i.leaveGuards),t.updateGuards.size||(t.updateGuards=i.updateGuards))),!e||!t||i&&m(t,i)&&r||(t.enterCallbacks[n]||[]).forEach((t=>t(e)))}),{flush:"post"}),()=>{const i=s.value,a=e.name,l=c.value,d=l&&l.components[a];if(!d)return Qe(n.default,{Component:d,route:i});const h=l.props[a],p=h?!0===h?i.params:"function"===typeof h?h(i):h:null,f=e=>{e.component.isUnmounted&&(l.instances[a]=null)},g=(0,r.h)(d,o({},p,t,{onVnodeUnmounted:f,ref:u}));return Qe(n.default,{Component:g,route:i})||g}}});function Qe(e,t){if(!e)return null;const n=e(t);return 1===n.length?n[0]:n}const Je=Ze;function et(e){const t=re(e.routes,e),n=e.parseQuery||Oe,s=e.stringifyQuery||Ne,u=e.history;const d=Ge(),f=Ge(),m=Ge(),b=(0,r.shallowRef)($);let _=$;i&&e.scrollBehavior&&"scrollRestoration"in history&&(history.scrollRestoration="manual");const y=a.bind(null,(e=>""+e)),v=a.bind(null,ke),x=a.bind(null,Pe);function S(e,n){let r,i;return G(e)?(r=t.getRecordMatcher(e),i=n):i=e,t.addRoute(i,r)}function w(e){const n=t.getRecordMatcher(e);n&&t.removeRoute(n)}function T(){return t.getRoutes().map((e=>e.record))}function A(e){return!!t.getRecordMatcher(e)}function k(e,r){if(r=o({},r||b.value),"string"===typeof e){const i=h(n,e,r.path),s=t.resolve({path:i.path},r),a=u.createHref(i.fullPath);return o(i,s,{params:x(s.params),hash:Pe(i.hash),redirectedFrom:void 0,href:a})}let i;if("path"in e)i=o({},e,{path:h(n,e.path,r.path).path});else{const t=o({},e.params);for(const e in t)null==t[e]&&delete t[e];i=o({},e,{params:v(t)}),r.params=v(r.params)}const a=t.resolve(i,r),l=e.hash||"";a.params=y(x(a.params));const c=p(s,o({},e,{hash:Ae(l),path:a.path})),d=u.createHref(c);return o({fullPath:c,hash:l,query:s===Ne?Me(e.query):e.query||{}},a,{redirectedFrom:void 0,href:d})}function N(e){return"string"===typeof e?h(n,e,b.value.path):o({},e)}function M(e,t){if(_!==e)return V(8,{from:t,to:e})}function D(e){return B(e)}function L(e){return D(o(N(e),{replace:!0}))}function F(e){const t=e.matched[e.matched.length-1];if(t&&t.redirect){const{redirect:n}=t;let r="function"===typeof n?n(e):n;return"string"===typeof r&&(r=r.includes("?")||r.includes("#")?r=N(r):{path:r},r.params={}),o({query:e.query,hash:e.hash,params:"path"in r?{}:e.params},r)}}function B(e,t){const n=_=k(e),r=b.value,i=e.state,a=e.force,l=!0===e.replace,c=F(n);if(c)return B(o(N(c),{state:"object"===typeof c?o({},i,c.state):i,force:a,replace:l}),t||n);const u=n;let d;return u.redirectedFrom=t,!a&&g(s,r,n)&&(d=V(16,{to:u,from:r}),ne(r,r,!0,!1)),(d?Promise.resolve(d):H(u,r)).catch((e=>j(e)?j(e,2)?e:te(e):J(e,u,r))).then((e=>{if(e){if(j(e,2))return B(o({replace:l},N(e.to),{state:"object"===typeof e.to?o({},i,e.to.state):i,force:a}),t||u)}else e=q(u,r,!0,l,i);return W(u,r,e),e}))}function U(e,t){const n=M(e,t);return n?Promise.reject(n):Promise.resolve()}function z(e){const t=oe.values().next().value;return t&&"function"===typeof t.runWithContext?t.runWithContext(e):e()}function H(e,t){let n;const[r,i,s]=tt(e,t);n=ze(r.reverse(),"beforeRouteLeave",e,t);for(const a of r)a.leaveGuards.forEach((r=>{n.push($e(r,e,t))}));const o=U.bind(null,e,t);return n.push(o),le(n).then((()=>{n=[];for(const r of d.list())n.push($e(r,e,t));return n.push(o),le(n)})).then((()=>{n=ze(i,"beforeRouteUpdate",e,t);for(const r of i)r.updateGuards.forEach((r=>{n.push($e(r,e,t))}));return n.push(o),le(n)})).then((()=>{n=[];for(const r of e.matched)if(r.beforeEnter&&!t.matched.includes(r))if(c(r.beforeEnter))for(const i of r.beforeEnter)n.push($e(i,e,t));else n.push($e(r.beforeEnter,e,t));return n.push(o),le(n)})).then((()=>(e.matched.forEach((e=>e.enterCallbacks={})),n=ze(s,"beforeRouteEnter",e,t),n.push(o),le(n)))).then((()=>{n=[];for(const r of f.list())n.push($e(r,e,t));return n.push(o),le(n)})).catch((e=>j(e,8)?e:Promise.reject(e)))}function W(e,t,n){for(const r of m.list())z((()=>r(e,t,n)))}function q(e,t,n,r,s){const a=M(e,t);if(a)return a;const l=t===$,c=i?history.state:{};n&&(r||l?u.replace(e.fullPath,o({scroll:l&&c&&c.scroll},s)):u.push(e.fullPath,s)),b.value=e,ne(e,t,n,l),te()}let X;function Y(){X||(X=u.listen(((e,t,n)=>{if(!ae.listening)return;const r=k(e),s=F(r);if(s)return void B(o(s,{replace:!0}),r).catch(l);_=r;const a=b.value;i&&P(R(a.fullPath,n.delta),C()),H(r,a).catch((e=>j(e,12)?e:j(e,2)?(B(e.to,r).then((e=>{j(e,20)&&!n.delta&&n.type===E.pop&&u.go(-1,!1)})).catch(l),Promise.reject()):(n.delta&&u.go(-n.delta,!1),J(e,r,a)))).then((e=>{e=e||q(r,a,!1),e&&(n.delta&&!j(e,8)?u.go(-n.delta,!1):n.type===E.pop&&j(e,20)&&u.go(-1,!1)),W(r,a,e)})).catch(l)})))}let K,Z=Ge(),Q=Ge();function J(e,t,n){te(e);const r=Q.list();return r.length?r.forEach((r=>r(e,t,n))):console.error(e),Promise.reject(e)}function ee(){return K&&b.value!==$?Promise.resolve():new Promise(((e,t)=>{Z.add([e,t])}))}function te(e){return K||(K=!e,Y(),Z.list().forEach((([t,n])=>e?n(e):t())),Z.reset()),e}function ne(t,n,s,o){const{scrollBehavior:a}=e;if(!i||!a)return Promise.resolve();const l=!s&&O(R(t.fullPath,0))||(o||!s)&&history.state&&history.state.scroll||null;return(0,r.nextTick)().then((()=>a(t,n,l))).then((e=>e&&I(e))).catch((e=>J(e,t,n)))}const ie=e=>u.go(e);let se;const oe=new Set,ae={currentRoute:b,listening:!0,addRoute:S,removeRoute:w,hasRoute:A,getRoutes:T,resolve:k,options:e,push:D,replace:L,go:ie,back:()=>ie(-1),forward:()=>ie(1),beforeEach:d.add,beforeResolve:f.add,afterEach:m.add,onError:Q.add,isReady:ee,install(e){const t=this;e.component("RouterLink",We),e.component("RouterView",Je),e.config.globalProperties.$router=t,Object.defineProperty(e.config.globalProperties,"$route",{enumerable:!0,get:()=>(0,r.unref)(b)}),i&&!se&&b.value===$&&(se=!0,D(u.location).catch((e=>{0})));const n={};for(const i in $)n[i]=(0,r.computed)((()=>b.value[i]));e.provide(Fe,t),e.provide(Be,(0,r.reactive)(n)),e.provide(Ue,b);const s=e.unmount;oe.add(e),e.unmount=function(){oe.delete(e),oe.size<1&&(_=$,X&&X(),X=null,b.value=$,se=!1,K=!1),s()}}};function le(e){return e.reduce(((e,t)=>e.then((()=>z(t)))),Promise.resolve())}return ae}function tt(e,t){const n=[],r=[],i=[],s=Math.max(t.matched.length,e.matched.length);for(let o=0;om(e,s)))?r.push(s):n.push(s));const a=e.matched[o];a&&(t.matched.find((e=>m(e,a)))||i.push(a))}return[n,r,i]}},2676:function(e){"use strict";e.exports=JSON.parse('{"grinning":"😀","smiley":"😃","smile":"😄","grin":"😁","laughing":"😆","satisfied":"😆","sweat_smile":"😅","joy":"😂","blush":"😊","innocent":"😇","wink":"😉","relieved":"😌","heart_eyes":"😍","kissing_heart":"😘","kissing":"😗","kissing_smiling_eyes":"😙","kissing_closed_eyes":"😚","yum":"😋","stuck_out_tongue_winking_eye":"😜","stuck_out_tongue_closed_eyes":"😝","stuck_out_tongue":"😛","sunglasses":"😎","smirk":"😏","unamused":"😒","disappointed":"😞","pensive":"😔","worried":"😟","confused":"😕","persevere":"😣","confounded":"😖","tired_face":"😫","weary":"😩","angry":"😠","rage":"😡","pout":"😡","no_mouth":"😶","neutral_face":"😐","expressionless":"😑","hushed":"😯","frowning":"😦","anguished":"😧","open_mouth":"😮","astonished":"😲","dizzy_face":"😵","flushed":"😳","scream":"😱","fearful":"😨","cold_sweat":"😰","cry":"😢","disappointed_relieved":"😥","sob":"😭","sweat":"😓","sleepy":"😪","sleeping":"😴","mask":"😷","smiling_imp":"😈","smiley_cat":"😺","smile_cat":"😸","joy_cat":"😹","heart_eyes_cat":"😻","smirk_cat":"😼","kissing_cat":"😽","scream_cat":"🙀","crying_cat_face":"😿","pouting_cat":"😾","fist_raised":"✊","fist":"✊","v":"✌️","point_up":"☝️","hand":"✋","raised_hand":"✋","cat":"🐱","mouse":"🐭","cow":"🐮","monkey_face":"🐵","star":"⭐️","sparkles":"✨","zap":"⚡️","sunny":"☀️","cloud":"☁️","snowflake":"❄️","umbrella":"☔️","coffee":"☕️","airplane":"✈️","anchor":"⚓️","watch":"⌚️","phone":"☎️","telephone":"☎️","hourglass":"⌛️","email":"✉️","envelope":"✉️","scissors":"✂️","black_nib":"✒️","pencil2":"✏️","heart":"❤️","aries":"♈️","taurus":"♉️","gemini":"♊️","cancer":"♋️","leo":"♌️","virgo":"♍️","libra":"♎️","scorpius":"♏️","sagittarius":"♐️","capricorn":"♑️","aquarius":"♒️","pisces":"♓️","eight_pointed_black_star":"✴️","x":"❌","hotsprings":"♨️","exclamation":"❗️","heavy_exclamation_mark":"❗️","grey_exclamation":"❕","question":"❓","grey_question":"❔","bangbang":"‼️","interrobang":"⁉️","part_alternation_mark":"〽️","warning":"⚠️","recycle":"♻️","white_check_mark":"✅","sparkle":"❇️","eight_spoked_asterisk":"✳️","negative_squared_cross_mark":"❎","m":"Ⓜ️","wheelchair":"♿️","information_source":"ℹ️","heavy_plus_sign":"➕","heavy_minus_sign":"➖","heavy_division_sign":"➗","heavy_multiplication_x":"✖️","tm":"™️","copyright":"©️","registered":"®️","wavy_dash":"〰️","curly_loop":"➰","loop":"➿","heavy_check_mark":"✔️","ballot_box_with_check":"☑️","white_circle":"⚪️","black_circle":"⚫️","black_small_square":"▪️","white_small_square":"▫️","black_medium_small_square":"◾️","white_medium_small_square":"◽️","black_medium_square":"◼️","white_medium_square":"◻️","black_large_square":"⬛️","white_large_square":"⬜️","black_joker":"🃏","mahjong":"🀄️"}')}}]); -//# sourceMappingURL=chunk-vendors.cd7b5e68.js.map \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/activations.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/activations.py deleted file mode 100644 index 64759b706e2f108803e51ccd50f9dff67ad49722..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/activations.py +++ /dev/null @@ -1,12 +0,0 @@ -from torch import nn - - -def get_activation(act_fn): - if act_fn in ["swish", "silu"]: - return nn.SiLU() - elif act_fn == "mish": - return nn.Mish() - elif act_fn == "gelu": - return nn.GELU() - else: - raise ValueError(f"Unsupported activation function: {act_fn}") diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/attention_processor.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/attention_processor.py deleted file mode 100644 index 43497c2284acf1fc49ef52798d07d0889a1c31d1..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/attention_processor.py +++ /dev/null @@ -1,1680 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import Callable, Optional, Union - -import torch -import torch.nn.functional as F -from torch import nn - -from ..utils import deprecate, logging, maybe_allow_in_graph -from ..utils.import_utils import is_xformers_available -from .lora import LoRALinearLayer - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -if is_xformers_available(): - import xformers - import xformers.ops -else: - xformers = None - - -@maybe_allow_in_graph -class Attention(nn.Module): - r""" - A cross attention layer. - - Parameters: - query_dim (`int`): The number of channels in the query. - cross_attention_dim (`int`, *optional*): - The number of channels in the encoder_hidden_states. If not given, defaults to `query_dim`. - heads (`int`, *optional*, defaults to 8): The number of heads to use for multi-head attention. - dim_head (`int`, *optional*, defaults to 64): The number of channels in each head. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - bias (`bool`, *optional*, defaults to False): - Set to `True` for the query, key, and value linear layers to contain a bias parameter. - """ - - def __init__( - self, - query_dim: int, - cross_attention_dim: Optional[int] = None, - heads: int = 8, - dim_head: int = 64, - dropout: float = 0.0, - bias=False, - upcast_attention: bool = False, - upcast_softmax: bool = False, - cross_attention_norm: Optional[str] = None, - cross_attention_norm_num_groups: int = 32, - added_kv_proj_dim: Optional[int] = None, - norm_num_groups: Optional[int] = None, - spatial_norm_dim: Optional[int] = None, - out_bias: bool = True, - scale_qk: bool = True, - only_cross_attention: bool = False, - eps: float = 1e-5, - rescale_output_factor: float = 1.0, - residual_connection: bool = False, - _from_deprecated_attn_block=False, - processor: Optional["AttnProcessor"] = None, - ): - super().__init__() - inner_dim = dim_head * heads - cross_attention_dim = cross_attention_dim if cross_attention_dim is not None else query_dim - self.upcast_attention = upcast_attention - self.upcast_softmax = upcast_softmax - self.rescale_output_factor = rescale_output_factor - self.residual_connection = residual_connection - self.dropout = dropout - - # we make use of this private variable to know whether this class is loaded - # with an deprecated state dict so that we can convert it on the fly - self._from_deprecated_attn_block = _from_deprecated_attn_block - - self.scale_qk = scale_qk - self.scale = dim_head**-0.5 if self.scale_qk else 1.0 - - self.heads = heads - # for slice_size > 0 the attention score computation - # is split across the batch axis to save memory - # You can set slice_size with `set_attention_slice` - self.sliceable_head_dim = heads - - self.added_kv_proj_dim = added_kv_proj_dim - self.only_cross_attention = only_cross_attention - - if self.added_kv_proj_dim is None and self.only_cross_attention: - raise ValueError( - "`only_cross_attention` can only be set to True if `added_kv_proj_dim` is not None. Make sure to set either `only_cross_attention=False` or define `added_kv_proj_dim`." - ) - - if norm_num_groups is not None: - self.group_norm = nn.GroupNorm(num_channels=query_dim, num_groups=norm_num_groups, eps=eps, affine=True) - else: - self.group_norm = None - - if spatial_norm_dim is not None: - self.spatial_norm = SpatialNorm(f_channels=query_dim, zq_channels=spatial_norm_dim) - else: - self.spatial_norm = None - - if cross_attention_norm is None: - self.norm_cross = None - elif cross_attention_norm == "layer_norm": - self.norm_cross = nn.LayerNorm(cross_attention_dim) - elif cross_attention_norm == "group_norm": - if self.added_kv_proj_dim is not None: - # The given `encoder_hidden_states` are initially of shape - # (batch_size, seq_len, added_kv_proj_dim) before being projected - # to (batch_size, seq_len, cross_attention_dim). The norm is applied - # before the projection, so we need to use `added_kv_proj_dim` as - # the number of channels for the group norm. - norm_cross_num_channels = added_kv_proj_dim - else: - norm_cross_num_channels = cross_attention_dim - - self.norm_cross = nn.GroupNorm( - num_channels=norm_cross_num_channels, num_groups=cross_attention_norm_num_groups, eps=1e-5, affine=True - ) - else: - raise ValueError( - f"unknown cross_attention_norm: {cross_attention_norm}. Should be None, 'layer_norm' or 'group_norm'" - ) - - self.to_q = nn.Linear(query_dim, inner_dim, bias=bias) - - if not self.only_cross_attention: - # only relevant for the `AddedKVProcessor` classes - self.to_k = nn.Linear(cross_attention_dim, inner_dim, bias=bias) - self.to_v = nn.Linear(cross_attention_dim, inner_dim, bias=bias) - else: - self.to_k = None - self.to_v = None - - if self.added_kv_proj_dim is not None: - self.add_k_proj = nn.Linear(added_kv_proj_dim, inner_dim) - self.add_v_proj = nn.Linear(added_kv_proj_dim, inner_dim) - - self.to_out = nn.ModuleList([]) - self.to_out.append(nn.Linear(inner_dim, query_dim, bias=out_bias)) - self.to_out.append(nn.Dropout(dropout)) - - # set attention processor - # We use the AttnProcessor2_0 by default when torch 2.x is used which uses - # torch.nn.functional.scaled_dot_product_attention for native Flash/memory_efficient_attention - # but only if it has the default `scale` argument. TODO remove scale_qk check when we move to torch 2.1 - if processor is None: - processor = ( - AttnProcessor2_0() if hasattr(F, "scaled_dot_product_attention") and self.scale_qk else AttnProcessor() - ) - self.set_processor(processor) - - def set_use_memory_efficient_attention_xformers( - self, use_memory_efficient_attention_xformers: bool, attention_op: Optional[Callable] = None - ): - is_lora = hasattr(self, "processor") and isinstance( - self.processor, - LORA_ATTENTION_PROCESSORS, - ) - is_custom_diffusion = hasattr(self, "processor") and isinstance( - self.processor, (CustomDiffusionAttnProcessor, CustomDiffusionXFormersAttnProcessor) - ) - is_added_kv_processor = hasattr(self, "processor") and isinstance( - self.processor, - ( - AttnAddedKVProcessor, - AttnAddedKVProcessor2_0, - SlicedAttnAddedKVProcessor, - XFormersAttnAddedKVProcessor, - LoRAAttnAddedKVProcessor, - ), - ) - - if use_memory_efficient_attention_xformers: - if is_added_kv_processor and (is_lora or is_custom_diffusion): - raise NotImplementedError( - f"Memory efficient attention is currently not supported for LoRA or custom diffuson for attention processor type {self.processor}" - ) - if not is_xformers_available(): - raise ModuleNotFoundError( - ( - "Refer to https://github.com/facebookresearch/xformers for more information on how to install" - " xformers" - ), - name="xformers", - ) - elif not torch.cuda.is_available(): - raise ValueError( - "torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is" - " only available for GPU " - ) - else: - try: - # Make sure we can run the memory efficient attention - _ = xformers.ops.memory_efficient_attention( - torch.randn((1, 2, 40), device="cuda"), - torch.randn((1, 2, 40), device="cuda"), - torch.randn((1, 2, 40), device="cuda"), - ) - except Exception as e: - raise e - - if is_lora: - # TODO (sayakpaul): should we throw a warning if someone wants to use the xformers - # variant when using PT 2.0 now that we have LoRAAttnProcessor2_0? - processor = LoRAXFormersAttnProcessor( - hidden_size=self.processor.hidden_size, - cross_attention_dim=self.processor.cross_attention_dim, - rank=self.processor.rank, - attention_op=attention_op, - ) - processor.load_state_dict(self.processor.state_dict()) - processor.to(self.processor.to_q_lora.up.weight.device) - elif is_custom_diffusion: - processor = CustomDiffusionXFormersAttnProcessor( - train_kv=self.processor.train_kv, - train_q_out=self.processor.train_q_out, - hidden_size=self.processor.hidden_size, - cross_attention_dim=self.processor.cross_attention_dim, - attention_op=attention_op, - ) - processor.load_state_dict(self.processor.state_dict()) - if hasattr(self.processor, "to_k_custom_diffusion"): - processor.to(self.processor.to_k_custom_diffusion.weight.device) - elif is_added_kv_processor: - # TODO(Patrick, Suraj, William) - currently xformers doesn't work for UnCLIP - # which uses this type of cross attention ONLY because the attention mask of format - # [0, ..., -10.000, ..., 0, ...,] is not supported - # throw warning - logger.info( - "Memory efficient attention with `xformers` might currently not work correctly if an attention mask is required for the attention operation." - ) - processor = XFormersAttnAddedKVProcessor(attention_op=attention_op) - else: - processor = XFormersAttnProcessor(attention_op=attention_op) - else: - if is_lora: - attn_processor_class = ( - LoRAAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else LoRAAttnProcessor - ) - processor = attn_processor_class( - hidden_size=self.processor.hidden_size, - cross_attention_dim=self.processor.cross_attention_dim, - rank=self.processor.rank, - ) - processor.load_state_dict(self.processor.state_dict()) - processor.to(self.processor.to_q_lora.up.weight.device) - elif is_custom_diffusion: - processor = CustomDiffusionAttnProcessor( - train_kv=self.processor.train_kv, - train_q_out=self.processor.train_q_out, - hidden_size=self.processor.hidden_size, - cross_attention_dim=self.processor.cross_attention_dim, - ) - processor.load_state_dict(self.processor.state_dict()) - if hasattr(self.processor, "to_k_custom_diffusion"): - processor.to(self.processor.to_k_custom_diffusion.weight.device) - else: - # set attention processor - # We use the AttnProcessor2_0 by default when torch 2.x is used which uses - # torch.nn.functional.scaled_dot_product_attention for native Flash/memory_efficient_attention - # but only if it has the default `scale` argument. TODO remove scale_qk check when we move to torch 2.1 - processor = ( - AttnProcessor2_0() - if hasattr(F, "scaled_dot_product_attention") and self.scale_qk - else AttnProcessor() - ) - - self.set_processor(processor) - - def set_attention_slice(self, slice_size): - if slice_size is not None and slice_size > self.sliceable_head_dim: - raise ValueError(f"slice_size {slice_size} has to be smaller or equal to {self.sliceable_head_dim}.") - - if slice_size is not None and self.added_kv_proj_dim is not None: - processor = SlicedAttnAddedKVProcessor(slice_size) - elif slice_size is not None: - processor = SlicedAttnProcessor(slice_size) - elif self.added_kv_proj_dim is not None: - processor = AttnAddedKVProcessor() - else: - # set attention processor - # We use the AttnProcessor2_0 by default when torch 2.x is used which uses - # torch.nn.functional.scaled_dot_product_attention for native Flash/memory_efficient_attention - # but only if it has the default `scale` argument. TODO remove scale_qk check when we move to torch 2.1 - processor = ( - AttnProcessor2_0() if hasattr(F, "scaled_dot_product_attention") and self.scale_qk else AttnProcessor() - ) - - self.set_processor(processor) - - def set_processor(self, processor: "AttnProcessor"): - # if current processor is in `self._modules` and if passed `processor` is not, we need to - # pop `processor` from `self._modules` - if ( - hasattr(self, "processor") - and isinstance(self.processor, torch.nn.Module) - and not isinstance(processor, torch.nn.Module) - ): - logger.info(f"You are removing possibly trained weights of {self.processor} with {processor}") - self._modules.pop("processor") - - self.processor = processor - - def forward(self, hidden_states, encoder_hidden_states=None, attention_mask=None, **cross_attention_kwargs): - # The `Attention` class can call different attention processors / attention functions - # here we simply pass along all tensors to the selected processor class - # For standard processors that are defined here, `**cross_attention_kwargs` is empty - return self.processor( - self, - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - - def batch_to_head_dim(self, tensor): - head_size = self.heads - batch_size, seq_len, dim = tensor.shape - tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size // head_size, seq_len, dim * head_size) - return tensor - - def head_to_batch_dim(self, tensor, out_dim=3): - head_size = self.heads - batch_size, seq_len, dim = tensor.shape - tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size) - tensor = tensor.permute(0, 2, 1, 3) - - if out_dim == 3: - tensor = tensor.reshape(batch_size * head_size, seq_len, dim // head_size) - - return tensor - - def get_attention_scores(self, query, key, attention_mask=None): - dtype = query.dtype - if self.upcast_attention: - query = query.float() - key = key.float() - - if attention_mask is None: - baddbmm_input = torch.empty( - query.shape[0], query.shape[1], key.shape[1], dtype=query.dtype, device=query.device - ) - beta = 0 - else: - baddbmm_input = attention_mask - beta = 1 - - attention_scores = torch.baddbmm( - baddbmm_input, - query, - key.transpose(-1, -2), - beta=beta, - alpha=self.scale, - ) - del baddbmm_input - - if self.upcast_softmax: - attention_scores = attention_scores.float() - - attention_probs = attention_scores.softmax(dim=-1) - del attention_scores - - attention_probs = attention_probs.to(dtype) - - return attention_probs - - def prepare_attention_mask(self, attention_mask, target_length, batch_size=None, out_dim=3): - if batch_size is None: - deprecate( - "batch_size=None", - "0.0.15", - ( - "Not passing the `batch_size` parameter to `prepare_attention_mask` can lead to incorrect" - " attention mask preparation and is deprecated behavior. Please make sure to pass `batch_size` to" - " `prepare_attention_mask` when preparing the attention_mask." - ), - ) - batch_size = 1 - - head_size = self.heads - if attention_mask is None: - return attention_mask - - current_length: int = attention_mask.shape[-1] - if current_length != target_length: - if attention_mask.device.type == "mps": - # HACK: MPS: Does not support padding by greater than dimension of input tensor. - # Instead, we can manually construct the padding tensor. - padding_shape = (attention_mask.shape[0], attention_mask.shape[1], target_length) - padding = torch.zeros(padding_shape, dtype=attention_mask.dtype, device=attention_mask.device) - attention_mask = torch.cat([attention_mask, padding], dim=2) - else: - # TODO: for pipelines such as stable-diffusion, padding cross-attn mask: - # we want to instead pad by (0, remaining_length), where remaining_length is: - # remaining_length: int = target_length - current_length - # TODO: re-enable tests/models/test_models_unet_2d_condition.py#test_model_xattn_padding - attention_mask = F.pad(attention_mask, (0, target_length), value=0.0) - - if out_dim == 3: - if attention_mask.shape[0] < batch_size * head_size: - attention_mask = attention_mask.repeat_interleave(head_size, dim=0) - elif out_dim == 4: - attention_mask = attention_mask.unsqueeze(1) - attention_mask = attention_mask.repeat_interleave(head_size, dim=1) - - return attention_mask - - def norm_encoder_hidden_states(self, encoder_hidden_states): - assert self.norm_cross is not None, "self.norm_cross must be defined to call self.norm_encoder_hidden_states" - - if isinstance(self.norm_cross, nn.LayerNorm): - encoder_hidden_states = self.norm_cross(encoder_hidden_states) - elif isinstance(self.norm_cross, nn.GroupNorm): - # Group norm norms along the channels dimension and expects - # input to be in the shape of (N, C, *). In this case, we want - # to norm along the hidden dimension, so we need to move - # (batch_size, sequence_length, hidden_size) -> - # (batch_size, hidden_size, sequence_length) - encoder_hidden_states = encoder_hidden_states.transpose(1, 2) - encoder_hidden_states = self.norm_cross(encoder_hidden_states) - encoder_hidden_states = encoder_hidden_states.transpose(1, 2) - else: - assert False - - return encoder_hidden_states - - -class AttnProcessor: - r""" - Default processor for performing attention-related computations. - """ - - def __call__( - self, - attn: Attention, - hidden_states, - encoder_hidden_states=None, - attention_mask=None, - temb=None, - ): - residual = hidden_states - - if attn.spatial_norm is not None: - hidden_states = attn.spatial_norm(hidden_states, temb) - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) - - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - query = attn.head_to_batch_dim(query) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) - - if attn.residual_connection: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class LoRAAttnProcessor(nn.Module): - r""" - Processor for implementing the LoRA attention mechanism. - - Args: - hidden_size (`int`, *optional*): - The hidden size of the attention layer. - cross_attention_dim (`int`, *optional*): - The number of channels in the `encoder_hidden_states`. - rank (`int`, defaults to 4): - The dimension of the LoRA update matrices. - network_alpha (`int`, *optional*): - Equivalent to `alpha` but it's usage is specific to Kohya (A1111) style LoRAs. - """ - - def __init__(self, hidden_size, cross_attention_dim=None, rank=4, network_alpha=None, **kwargs): - super().__init__() - - self.hidden_size = hidden_size - self.cross_attention_dim = cross_attention_dim - self.rank = rank - - q_rank = kwargs.pop("q_rank", None) - q_hidden_size = kwargs.pop("q_hidden_size", None) - q_rank = q_rank if q_rank is not None else rank - q_hidden_size = q_hidden_size if q_hidden_size is not None else hidden_size - - v_rank = kwargs.pop("v_rank", None) - v_hidden_size = kwargs.pop("v_hidden_size", None) - v_rank = v_rank if v_rank is not None else rank - v_hidden_size = v_hidden_size if v_hidden_size is not None else hidden_size - - out_rank = kwargs.pop("out_rank", None) - out_hidden_size = kwargs.pop("out_hidden_size", None) - out_rank = out_rank if out_rank is not None else rank - out_hidden_size = out_hidden_size if out_hidden_size is not None else hidden_size - - self.to_q_lora = LoRALinearLayer(q_hidden_size, q_hidden_size, q_rank, network_alpha) - self.to_k_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank, network_alpha) - self.to_v_lora = LoRALinearLayer(cross_attention_dim or v_hidden_size, v_hidden_size, v_rank, network_alpha) - self.to_out_lora = LoRALinearLayer(out_hidden_size, out_hidden_size, out_rank, network_alpha) - - def __call__( - self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None, scale=1.0, temb=None - ): - residual = hidden_states - - if attn.spatial_norm is not None: - hidden_states = attn.spatial_norm(hidden_states, temb) - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) - - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) + scale * self.to_q_lora(hidden_states) - query = attn.head_to_batch_dim(query) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) + scale * self.to_k_lora(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) + scale * self.to_v_lora(encoder_hidden_states) - - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) + scale * self.to_out_lora(hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) - - if attn.residual_connection: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class CustomDiffusionAttnProcessor(nn.Module): - r""" - Processor for implementing attention for the Custom Diffusion method. - - Args: - train_kv (`bool`, defaults to `True`): - Whether to newly train the key and value matrices corresponding to the text features. - train_q_out (`bool`, defaults to `True`): - Whether to newly train query matrices corresponding to the latent image features. - hidden_size (`int`, *optional*, defaults to `None`): - The hidden size of the attention layer. - cross_attention_dim (`int`, *optional*, defaults to `None`): - The number of channels in the `encoder_hidden_states`. - out_bias (`bool`, defaults to `True`): - Whether to include the bias parameter in `train_q_out`. - dropout (`float`, *optional*, defaults to 0.0): - The dropout probability to use. - """ - - def __init__( - self, - train_kv=True, - train_q_out=True, - hidden_size=None, - cross_attention_dim=None, - out_bias=True, - dropout=0.0, - ): - super().__init__() - self.train_kv = train_kv - self.train_q_out = train_q_out - - self.hidden_size = hidden_size - self.cross_attention_dim = cross_attention_dim - - # `_custom_diffusion` id for easy serialization and loading. - if self.train_kv: - self.to_k_custom_diffusion = nn.Linear(cross_attention_dim or hidden_size, hidden_size, bias=False) - self.to_v_custom_diffusion = nn.Linear(cross_attention_dim or hidden_size, hidden_size, bias=False) - if self.train_q_out: - self.to_q_custom_diffusion = nn.Linear(hidden_size, hidden_size, bias=False) - self.to_out_custom_diffusion = nn.ModuleList([]) - self.to_out_custom_diffusion.append(nn.Linear(hidden_size, hidden_size, bias=out_bias)) - self.to_out_custom_diffusion.append(nn.Dropout(dropout)) - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None): - batch_size, sequence_length, _ = hidden_states.shape - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - if self.train_q_out: - query = self.to_q_custom_diffusion(hidden_states).to(attn.to_q.weight.dtype) - else: - query = attn.to_q(hidden_states.to(attn.to_q.weight.dtype)) - - if encoder_hidden_states is None: - crossattn = False - encoder_hidden_states = hidden_states - else: - crossattn = True - if attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - if self.train_kv: - key = self.to_k_custom_diffusion(encoder_hidden_states.to(self.to_k_custom_diffusion.weight.dtype)) - value = self.to_v_custom_diffusion(encoder_hidden_states.to(self.to_v_custom_diffusion.weight.dtype)) - key = key.to(attn.to_q.weight.dtype) - value = value.to(attn.to_q.weight.dtype) - else: - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - if crossattn: - detach = torch.ones_like(key) - detach[:, :1, :] = detach[:, :1, :] * 0.0 - key = detach * key + (1 - detach) * key.detach() - value = detach * value + (1 - detach) * value.detach() - - query = attn.head_to_batch_dim(query) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - if self.train_q_out: - # linear proj - hidden_states = self.to_out_custom_diffusion[0](hidden_states) - # dropout - hidden_states = self.to_out_custom_diffusion[1](hidden_states) - else: - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - return hidden_states - - -class AttnAddedKVProcessor: - r""" - Processor for performing attention-related computations with extra learnable key and value matrices for the text - encoder. - """ - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None): - residual = hidden_states - hidden_states = hidden_states.view(hidden_states.shape[0], hidden_states.shape[1], -1).transpose(1, 2) - batch_size, sequence_length, _ = hidden_states.shape - - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) - query = attn.head_to_batch_dim(query) - - encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states) - encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states) - encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj) - encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj) - - if not attn.only_cross_attention: - key = attn.to_k(hidden_states) - value = attn.to_v(hidden_states) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - key = torch.cat([encoder_hidden_states_key_proj, key], dim=1) - value = torch.cat([encoder_hidden_states_value_proj, value], dim=1) - else: - key = encoder_hidden_states_key_proj - value = encoder_hidden_states_value_proj - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - hidden_states = hidden_states.transpose(-1, -2).reshape(residual.shape) - hidden_states = hidden_states + residual - - return hidden_states - - -class AttnAddedKVProcessor2_0: - r""" - Processor for performing scaled dot-product attention (enabled by default if you're using PyTorch 2.0), with extra - learnable key and value matrices for the text encoder. - """ - - def __init__(self): - if not hasattr(F, "scaled_dot_product_attention"): - raise ImportError( - "AttnAddedKVProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0." - ) - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None): - residual = hidden_states - hidden_states = hidden_states.view(hidden_states.shape[0], hidden_states.shape[1], -1).transpose(1, 2) - batch_size, sequence_length, _ = hidden_states.shape - - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size, out_dim=4) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) - query = attn.head_to_batch_dim(query, out_dim=4) - - encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states) - encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states) - encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj, out_dim=4) - encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj, out_dim=4) - - if not attn.only_cross_attention: - key = attn.to_k(hidden_states) - value = attn.to_v(hidden_states) - key = attn.head_to_batch_dim(key, out_dim=4) - value = attn.head_to_batch_dim(value, out_dim=4) - key = torch.cat([encoder_hidden_states_key_proj, key], dim=2) - value = torch.cat([encoder_hidden_states_value_proj, value], dim=2) - else: - key = encoder_hidden_states_key_proj - value = encoder_hidden_states_value_proj - - # the output of sdp = (batch, num_heads, seq_len, head_dim) - # TODO: add support for attn.scale when we move to Torch 2.1 - hidden_states = F.scaled_dot_product_attention( - query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False - ) - hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, residual.shape[1]) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - hidden_states = hidden_states.transpose(-1, -2).reshape(residual.shape) - hidden_states = hidden_states + residual - - return hidden_states - - -class LoRAAttnAddedKVProcessor(nn.Module): - r""" - Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text - encoder. - - Args: - hidden_size (`int`, *optional*): - The hidden size of the attention layer. - cross_attention_dim (`int`, *optional*, defaults to `None`): - The number of channels in the `encoder_hidden_states`. - rank (`int`, defaults to 4): - The dimension of the LoRA update matrices. - - """ - - def __init__(self, hidden_size, cross_attention_dim=None, rank=4, network_alpha=None): - super().__init__() - - self.hidden_size = hidden_size - self.cross_attention_dim = cross_attention_dim - self.rank = rank - - self.to_q_lora = LoRALinearLayer(hidden_size, hidden_size, rank, network_alpha) - self.add_k_proj_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank, network_alpha) - self.add_v_proj_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank, network_alpha) - self.to_k_lora = LoRALinearLayer(hidden_size, hidden_size, rank, network_alpha) - self.to_v_lora = LoRALinearLayer(hidden_size, hidden_size, rank, network_alpha) - self.to_out_lora = LoRALinearLayer(hidden_size, hidden_size, rank, network_alpha) - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None, scale=1.0): - residual = hidden_states - hidden_states = hidden_states.view(hidden_states.shape[0], hidden_states.shape[1], -1).transpose(1, 2) - batch_size, sequence_length, _ = hidden_states.shape - - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) + scale * self.to_q_lora(hidden_states) - query = attn.head_to_batch_dim(query) - - encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states) + scale * self.add_k_proj_lora( - encoder_hidden_states - ) - encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states) + scale * self.add_v_proj_lora( - encoder_hidden_states - ) - encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj) - encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj) - - if not attn.only_cross_attention: - key = attn.to_k(hidden_states) + scale * self.to_k_lora(hidden_states) - value = attn.to_v(hidden_states) + scale * self.to_v_lora(hidden_states) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - key = torch.cat([encoder_hidden_states_key_proj, key], dim=1) - value = torch.cat([encoder_hidden_states_value_proj, value], dim=1) - else: - key = encoder_hidden_states_key_proj - value = encoder_hidden_states_value_proj - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) + scale * self.to_out_lora(hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - hidden_states = hidden_states.transpose(-1, -2).reshape(residual.shape) - hidden_states = hidden_states + residual - - return hidden_states - - -class XFormersAttnAddedKVProcessor: - r""" - Processor for implementing memory efficient attention using xFormers. - - Args: - attention_op (`Callable`, *optional*, defaults to `None`): - The base - [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to - use as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best - operator. - """ - - def __init__(self, attention_op: Optional[Callable] = None): - self.attention_op = attention_op - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None): - residual = hidden_states - hidden_states = hidden_states.view(hidden_states.shape[0], hidden_states.shape[1], -1).transpose(1, 2) - batch_size, sequence_length, _ = hidden_states.shape - - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) - query = attn.head_to_batch_dim(query) - - encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states) - encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states) - encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj) - encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj) - - if not attn.only_cross_attention: - key = attn.to_k(hidden_states) - value = attn.to_v(hidden_states) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - key = torch.cat([encoder_hidden_states_key_proj, key], dim=1) - value = torch.cat([encoder_hidden_states_value_proj, value], dim=1) - else: - key = encoder_hidden_states_key_proj - value = encoder_hidden_states_value_proj - - hidden_states = xformers.ops.memory_efficient_attention( - query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale - ) - hidden_states = hidden_states.to(query.dtype) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - hidden_states = hidden_states.transpose(-1, -2).reshape(residual.shape) - hidden_states = hidden_states + residual - - return hidden_states - - -class XFormersAttnProcessor: - r""" - Processor for implementing memory efficient attention using xFormers. - - Args: - attention_op (`Callable`, *optional*, defaults to `None`): - The base - [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to - use as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best - operator. - """ - - def __init__(self, attention_op: Optional[Callable] = None): - self.attention_op = attention_op - - def __call__( - self, - attn: Attention, - hidden_states: torch.FloatTensor, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - temb: Optional[torch.FloatTensor] = None, - ): - residual = hidden_states - - if attn.spatial_norm is not None: - hidden_states = attn.spatial_norm(hidden_states, temb) - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) - - batch_size, key_tokens, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - - attention_mask = attn.prepare_attention_mask(attention_mask, key_tokens, batch_size) - if attention_mask is not None: - # expand our mask's singleton query_tokens dimension: - # [batch*heads, 1, key_tokens] -> - # [batch*heads, query_tokens, key_tokens] - # so that it can be added as a bias onto the attention scores that xformers computes: - # [batch*heads, query_tokens, key_tokens] - # we do this explicitly because xformers doesn't broadcast the singleton dimension for us. - _, query_tokens, _ = hidden_states.shape - attention_mask = attention_mask.expand(-1, query_tokens, -1) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - query = attn.head_to_batch_dim(query).contiguous() - key = attn.head_to_batch_dim(key).contiguous() - value = attn.head_to_batch_dim(value).contiguous() - - hidden_states = xformers.ops.memory_efficient_attention( - query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale - ) - hidden_states = hidden_states.to(query.dtype) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) - - if attn.residual_connection: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class AttnProcessor2_0: - r""" - Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). - """ - - def __init__(self): - if not hasattr(F, "scaled_dot_product_attention"): - raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.") - - def __call__( - self, - attn: Attention, - hidden_states, - encoder_hidden_states=None, - attention_mask=None, - temb=None, - ): - residual = hidden_states - - if attn.spatial_norm is not None: - hidden_states = attn.spatial_norm(hidden_states, temb) - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) - - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - - if attention_mask is not None: - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - # scaled_dot_product_attention expects attention_mask shape to be - # (batch, heads, source_length, target_length) - attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1]) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - inner_dim = key.shape[-1] - head_dim = inner_dim // attn.heads - - query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - - key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - - # the output of sdp = (batch, num_heads, seq_len, head_dim) - # TODO: add support for attn.scale when we move to Torch 2.1 - hidden_states = F.scaled_dot_product_attention( - query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False - ) - - hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim) - hidden_states = hidden_states.to(query.dtype) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) - - if attn.residual_connection: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class LoRAXFormersAttnProcessor(nn.Module): - r""" - Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. - - Args: - hidden_size (`int`, *optional*): - The hidden size of the attention layer. - cross_attention_dim (`int`, *optional*): - The number of channels in the `encoder_hidden_states`. - rank (`int`, defaults to 4): - The dimension of the LoRA update matrices. - attention_op (`Callable`, *optional*, defaults to `None`): - The base - [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to - use as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best - operator. - network_alpha (`int`, *optional*): - Equivalent to `alpha` but it's usage is specific to Kohya (A1111) style LoRAs. - - """ - - def __init__( - self, - hidden_size, - cross_attention_dim, - rank=4, - attention_op: Optional[Callable] = None, - network_alpha=None, - **kwargs, - ): - super().__init__() - - self.hidden_size = hidden_size - self.cross_attention_dim = cross_attention_dim - self.rank = rank - self.attention_op = attention_op - - q_rank = kwargs.pop("q_rank", None) - q_hidden_size = kwargs.pop("q_hidden_size", None) - q_rank = q_rank if q_rank is not None else rank - q_hidden_size = q_hidden_size if q_hidden_size is not None else hidden_size - - v_rank = kwargs.pop("v_rank", None) - v_hidden_size = kwargs.pop("v_hidden_size", None) - v_rank = v_rank if v_rank is not None else rank - v_hidden_size = v_hidden_size if v_hidden_size is not None else hidden_size - - out_rank = kwargs.pop("out_rank", None) - out_hidden_size = kwargs.pop("out_hidden_size", None) - out_rank = out_rank if out_rank is not None else rank - out_hidden_size = out_hidden_size if out_hidden_size is not None else hidden_size - - self.to_q_lora = LoRALinearLayer(q_hidden_size, q_hidden_size, q_rank, network_alpha) - self.to_k_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank, network_alpha) - self.to_v_lora = LoRALinearLayer(cross_attention_dim or v_hidden_size, v_hidden_size, v_rank, network_alpha) - self.to_out_lora = LoRALinearLayer(out_hidden_size, out_hidden_size, out_rank, network_alpha) - - def __call__( - self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None, scale=1.0, temb=None - ): - residual = hidden_states - - if attn.spatial_norm is not None: - hidden_states = attn.spatial_norm(hidden_states, temb) - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) - - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) + scale * self.to_q_lora(hidden_states) - query = attn.head_to_batch_dim(query).contiguous() - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) + scale * self.to_k_lora(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) + scale * self.to_v_lora(encoder_hidden_states) - - key = attn.head_to_batch_dim(key).contiguous() - value = attn.head_to_batch_dim(value).contiguous() - - hidden_states = xformers.ops.memory_efficient_attention( - query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale - ) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) + scale * self.to_out_lora(hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) - - if attn.residual_connection: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class LoRAAttnProcessor2_0(nn.Module): - r""" - Processor for implementing the LoRA attention mechanism using PyTorch 2.0's memory-efficient scaled dot-product - attention. - - Args: - hidden_size (`int`): - The hidden size of the attention layer. - cross_attention_dim (`int`, *optional*): - The number of channels in the `encoder_hidden_states`. - rank (`int`, defaults to 4): - The dimension of the LoRA update matrices. - network_alpha (`int`, *optional*): - Equivalent to `alpha` but it's usage is specific to Kohya (A1111) style LoRAs. - """ - - def __init__(self, hidden_size, cross_attention_dim=None, rank=4, network_alpha=None, **kwargs): - super().__init__() - if not hasattr(F, "scaled_dot_product_attention"): - raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.") - - self.hidden_size = hidden_size - self.cross_attention_dim = cross_attention_dim - self.rank = rank - - q_rank = kwargs.pop("q_rank", None) - q_hidden_size = kwargs.pop("q_hidden_size", None) - q_rank = q_rank if q_rank is not None else rank - q_hidden_size = q_hidden_size if q_hidden_size is not None else hidden_size - - v_rank = kwargs.pop("v_rank", None) - v_hidden_size = kwargs.pop("v_hidden_size", None) - v_rank = v_rank if v_rank is not None else rank - v_hidden_size = v_hidden_size if v_hidden_size is not None else hidden_size - - out_rank = kwargs.pop("out_rank", None) - out_hidden_size = kwargs.pop("out_hidden_size", None) - out_rank = out_rank if out_rank is not None else rank - out_hidden_size = out_hidden_size if out_hidden_size is not None else hidden_size - - self.to_q_lora = LoRALinearLayer(q_hidden_size, q_hidden_size, q_rank, network_alpha) - self.to_k_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank, network_alpha) - self.to_v_lora = LoRALinearLayer(cross_attention_dim or v_hidden_size, v_hidden_size, v_rank, network_alpha) - self.to_out_lora = LoRALinearLayer(out_hidden_size, out_hidden_size, out_rank, network_alpha) - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None, scale=1.0): - residual = hidden_states - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) - - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - inner_dim = hidden_states.shape[-1] - - if attention_mask is not None: - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - # scaled_dot_product_attention expects attention_mask shape to be - # (batch, heads, source_length, target_length) - attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1]) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) + scale * self.to_q_lora(hidden_states) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) + scale * self.to_k_lora(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) + scale * self.to_v_lora(encoder_hidden_states) - - head_dim = inner_dim // attn.heads - query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - - # TODO: add support for attn.scale when we move to Torch 2.1 - hidden_states = F.scaled_dot_product_attention( - query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False - ) - hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim) - hidden_states = hidden_states.to(query.dtype) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) + scale * self.to_out_lora(hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) - - if attn.residual_connection: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class CustomDiffusionXFormersAttnProcessor(nn.Module): - r""" - Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. - - Args: - train_kv (`bool`, defaults to `True`): - Whether to newly train the key and value matrices corresponding to the text features. - train_q_out (`bool`, defaults to `True`): - Whether to newly train query matrices corresponding to the latent image features. - hidden_size (`int`, *optional*, defaults to `None`): - The hidden size of the attention layer. - cross_attention_dim (`int`, *optional*, defaults to `None`): - The number of channels in the `encoder_hidden_states`. - out_bias (`bool`, defaults to `True`): - Whether to include the bias parameter in `train_q_out`. - dropout (`float`, *optional*, defaults to 0.0): - The dropout probability to use. - attention_op (`Callable`, *optional*, defaults to `None`): - The base - [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to use - as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best operator. - """ - - def __init__( - self, - train_kv=True, - train_q_out=False, - hidden_size=None, - cross_attention_dim=None, - out_bias=True, - dropout=0.0, - attention_op: Optional[Callable] = None, - ): - super().__init__() - self.train_kv = train_kv - self.train_q_out = train_q_out - - self.hidden_size = hidden_size - self.cross_attention_dim = cross_attention_dim - self.attention_op = attention_op - - # `_custom_diffusion` id for easy serialization and loading. - if self.train_kv: - self.to_k_custom_diffusion = nn.Linear(cross_attention_dim or hidden_size, hidden_size, bias=False) - self.to_v_custom_diffusion = nn.Linear(cross_attention_dim or hidden_size, hidden_size, bias=False) - if self.train_q_out: - self.to_q_custom_diffusion = nn.Linear(hidden_size, hidden_size, bias=False) - self.to_out_custom_diffusion = nn.ModuleList([]) - self.to_out_custom_diffusion.append(nn.Linear(hidden_size, hidden_size, bias=out_bias)) - self.to_out_custom_diffusion.append(nn.Dropout(dropout)) - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None): - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if self.train_q_out: - query = self.to_q_custom_diffusion(hidden_states).to(attn.to_q.weight.dtype) - else: - query = attn.to_q(hidden_states.to(attn.to_q.weight.dtype)) - - if encoder_hidden_states is None: - crossattn = False - encoder_hidden_states = hidden_states - else: - crossattn = True - if attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - if self.train_kv: - key = self.to_k_custom_diffusion(encoder_hidden_states.to(self.to_k_custom_diffusion.weight.dtype)) - value = self.to_v_custom_diffusion(encoder_hidden_states.to(self.to_v_custom_diffusion.weight.dtype)) - key = key.to(attn.to_q.weight.dtype) - value = value.to(attn.to_q.weight.dtype) - else: - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - if crossattn: - detach = torch.ones_like(key) - detach[:, :1, :] = detach[:, :1, :] * 0.0 - key = detach * key + (1 - detach) * key.detach() - value = detach * value + (1 - detach) * value.detach() - - query = attn.head_to_batch_dim(query).contiguous() - key = attn.head_to_batch_dim(key).contiguous() - value = attn.head_to_batch_dim(value).contiguous() - - hidden_states = xformers.ops.memory_efficient_attention( - query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale - ) - hidden_states = hidden_states.to(query.dtype) - hidden_states = attn.batch_to_head_dim(hidden_states) - - if self.train_q_out: - # linear proj - hidden_states = self.to_out_custom_diffusion[0](hidden_states) - # dropout - hidden_states = self.to_out_custom_diffusion[1](hidden_states) - else: - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - return hidden_states - - -class SlicedAttnProcessor: - r""" - Processor for implementing sliced attention. - - Args: - slice_size (`int`, *optional*): - The number of steps to compute attention. Uses as many slices as `attention_head_dim // slice_size`, and - `attention_head_dim` must be a multiple of the `slice_size`. - """ - - def __init__(self, slice_size): - self.slice_size = slice_size - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None): - residual = hidden_states - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) - - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) - dim = query.shape[-1] - query = attn.head_to_batch_dim(query) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - batch_size_attention, query_tokens, _ = query.shape - hidden_states = torch.zeros( - (batch_size_attention, query_tokens, dim // attn.heads), device=query.device, dtype=query.dtype - ) - - for i in range(batch_size_attention // self.slice_size): - start_idx = i * self.slice_size - end_idx = (i + 1) * self.slice_size - - query_slice = query[start_idx:end_idx] - key_slice = key[start_idx:end_idx] - attn_mask_slice = attention_mask[start_idx:end_idx] if attention_mask is not None else None - - attn_slice = attn.get_attention_scores(query_slice, key_slice, attn_mask_slice) - - attn_slice = torch.bmm(attn_slice, value[start_idx:end_idx]) - - hidden_states[start_idx:end_idx] = attn_slice - - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) - - if attn.residual_connection: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class SlicedAttnAddedKVProcessor: - r""" - Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. - - Args: - slice_size (`int`, *optional*): - The number of steps to compute attention. Uses as many slices as `attention_head_dim // slice_size`, and - `attention_head_dim` must be a multiple of the `slice_size`. - """ - - def __init__(self, slice_size): - self.slice_size = slice_size - - def __call__(self, attn: "Attention", hidden_states, encoder_hidden_states=None, attention_mask=None, temb=None): - residual = hidden_states - - if attn.spatial_norm is not None: - hidden_states = attn.spatial_norm(hidden_states, temb) - - hidden_states = hidden_states.view(hidden_states.shape[0], hidden_states.shape[1], -1).transpose(1, 2) - - batch_size, sequence_length, _ = hidden_states.shape - - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = attn.to_q(hidden_states) - dim = query.shape[-1] - query = attn.head_to_batch_dim(query) - - encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states) - encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states) - - encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj) - encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj) - - if not attn.only_cross_attention: - key = attn.to_k(hidden_states) - value = attn.to_v(hidden_states) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - key = torch.cat([encoder_hidden_states_key_proj, key], dim=1) - value = torch.cat([encoder_hidden_states_value_proj, value], dim=1) - else: - key = encoder_hidden_states_key_proj - value = encoder_hidden_states_value_proj - - batch_size_attention, query_tokens, _ = query.shape - hidden_states = torch.zeros( - (batch_size_attention, query_tokens, dim // attn.heads), device=query.device, dtype=query.dtype - ) - - for i in range(batch_size_attention // self.slice_size): - start_idx = i * self.slice_size - end_idx = (i + 1) * self.slice_size - - query_slice = query[start_idx:end_idx] - key_slice = key[start_idx:end_idx] - attn_mask_slice = attention_mask[start_idx:end_idx] if attention_mask is not None else None - - attn_slice = attn.get_attention_scores(query_slice, key_slice, attn_mask_slice) - - attn_slice = torch.bmm(attn_slice, value[start_idx:end_idx]) - - hidden_states[start_idx:end_idx] = attn_slice - - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - hidden_states = hidden_states.transpose(-1, -2).reshape(residual.shape) - hidden_states = hidden_states + residual - - return hidden_states - - -AttentionProcessor = Union[ - AttnProcessor, - AttnProcessor2_0, - XFormersAttnProcessor, - SlicedAttnProcessor, - AttnAddedKVProcessor, - SlicedAttnAddedKVProcessor, - AttnAddedKVProcessor2_0, - XFormersAttnAddedKVProcessor, - LoRAAttnProcessor, - LoRAXFormersAttnProcessor, - LoRAAttnProcessor2_0, - LoRAAttnAddedKVProcessor, - CustomDiffusionAttnProcessor, - CustomDiffusionXFormersAttnProcessor, -] - -LORA_ATTENTION_PROCESSORS = ( - LoRAAttnProcessor, - LoRAAttnProcessor2_0, - LoRAXFormersAttnProcessor, - LoRAAttnAddedKVProcessor, -) - - -class SpatialNorm(nn.Module): - """ - Spatially conditioned normalization as defined in https://arxiv.org/abs/2209.09002 - """ - - def __init__( - self, - f_channels, - zq_channels, - ): - super().__init__() - self.norm_layer = nn.GroupNorm(num_channels=f_channels, num_groups=32, eps=1e-6, affine=True) - self.conv_y = nn.Conv2d(zq_channels, f_channels, kernel_size=1, stride=1, padding=0) - self.conv_b = nn.Conv2d(zq_channels, f_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, f, zq): - f_size = f.shape[-2:] - zq = F.interpolate(zq, size=f_size, mode="nearest") - norm_f = self.norm_layer(f) - new_f = norm_f * self.conv_y(zq) + self.conv_b(zq) - return new_f diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/__init__.py deleted file mode 100644 index fb54c151b21aa9dbcd6633ac0cd169dcd487fd26..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/__init__.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import os - -from packaging import version - -from .. import __version__ -from .accelerate_utils import apply_forward_hook -from .constants import ( - CONFIG_NAME, - DEPRECATED_REVISION_ARGS, - DIFFUSERS_CACHE, - DIFFUSERS_DYNAMIC_MODULE_NAME, - FLAX_WEIGHTS_NAME, - HF_MODULES_CACHE, - HUGGINGFACE_CO_RESOLVE_ENDPOINT, - ONNX_EXTERNAL_WEIGHTS_NAME, - ONNX_WEIGHTS_NAME, - SAFETENSORS_WEIGHTS_NAME, - WEIGHTS_NAME, -) -from .deprecation_utils import deprecate -from .doc_utils import replace_example_docstring -from .dynamic_modules_utils import get_class_from_dynamic_module -from .hub_utils import ( - HF_HUB_OFFLINE, - _add_variant, - _get_model_file, - extract_commit_hash, - http_user_agent, -) -from .import_utils import ( - BACKENDS_MAPPING, - ENV_VARS_TRUE_AND_AUTO_VALUES, - ENV_VARS_TRUE_VALUES, - USE_JAX, - USE_TF, - USE_TORCH, - DummyObject, - OptionalDependencyNotAvailable, - is_accelerate_available, - is_accelerate_version, - is_bs4_available, - is_flax_available, - is_ftfy_available, - is_inflect_available, - is_invisible_watermark_available, - is_k_diffusion_available, - is_k_diffusion_version, - is_librosa_available, - is_note_seq_available, - is_omegaconf_available, - is_onnx_available, - is_safetensors_available, - is_scipy_available, - is_tensorboard_available, - is_tf_available, - is_torch_available, - is_torch_version, - is_torchsde_available, - is_transformers_available, - is_transformers_version, - is_unidecode_available, - is_wandb_available, - is_xformers_available, - requires_backends, -) -from .logging import get_logger -from .outputs import BaseOutput -from .pil_utils import PIL_INTERPOLATION, numpy_to_pil, pt_to_pil -from .torch_utils import is_compiled_module, randn_tensor - - -if is_torch_available(): - from .testing_utils import ( - floats_tensor, - load_hf_numpy, - load_image, - load_numpy, - load_pt, - nightly, - parse_flag_from_env, - print_tensor_test, - require_torch_2, - require_torch_gpu, - skip_mps, - slow, - torch_all_close, - torch_device, - ) - from .torch_utils import maybe_allow_in_graph - -from .testing_utils import export_to_gif, export_to_obj, export_to_ply, export_to_video - - -logger = get_logger(__name__) - - -def check_min_version(min_version): - if version.parse(__version__) < version.parse(min_version): - if "dev" in min_version: - error_message = ( - "This example requires a source install from HuggingFace diffusers (see " - "`https://huggingface.co/docs/diffusers/installation#install-from-source`)," - ) - else: - error_message = f"This example requires a minimum version of {min_version}," - error_message += f" but the version found is {__version__}.\n" - raise ImportError(error_message) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_kdpm2_discrete.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_kdpm2_discrete.py deleted file mode 100644 index 4f1bd1f8aeb78a9266a319fe1f097e7c4a5d0e2a..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_kdpm2_discrete.py +++ /dev/null @@ -1,132 +0,0 @@ -import torch - -from diffusers import KDPM2DiscreteScheduler -from diffusers.utils import torch_device - -from .test_schedulers import SchedulerCommonTest - - -class KDPM2DiscreteSchedulerTest(SchedulerCommonTest): - scheduler_classes = (KDPM2DiscreteScheduler,) - num_inference_steps = 10 - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1100, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - } - - config.update(**kwargs) - return config - - def test_timesteps(self): - for timesteps in [10, 50, 100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_betas(self): - for beta_start, beta_end in zip([0.00001, 0.0001, 0.001], [0.0002, 0.002, 0.02]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "scaled_linear"]: - self.check_over_configs(beta_schedule=schedule) - - def test_prediction_type(self): - for prediction_type in ["epsilon", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - def test_full_loop_with_v_prediction(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(prediction_type="v_prediction") - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma - sample = sample.to(torch_device) - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - if torch_device in ["cpu", "mps"]: - assert abs(result_sum.item() - 4.6934e-07) < 1e-2 - assert abs(result_mean.item() - 6.1112e-10) < 1e-3 - else: - # CUDA - assert abs(result_sum.item() - 4.693428650170972e-07) < 1e-2 - assert abs(result_mean.item() - 0.0002) < 1e-3 - - def test_full_loop_no_noise(self): - if torch_device == "mps": - return - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma - sample = sample.to(torch_device) - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - if torch_device in ["cpu", "mps"]: - assert abs(result_sum.item() - 20.4125) < 1e-2 - assert abs(result_mean.item() - 0.0266) < 1e-3 - else: - # CUDA - assert abs(result_sum.item() - 20.4125) < 1e-2 - assert abs(result_mean.item() - 0.0266) < 1e-3 - - def test_full_loop_device(self): - if torch_device == "mps": - return - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps, device=torch_device) - - model = self.dummy_model() - sample = self.dummy_sample_deter.to(torch_device) * scheduler.init_noise_sigma - - for t in scheduler.timesteps: - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - if str(torch_device).startswith("cpu"): - # The following sum varies between 148 and 156 on mps. Why? - assert abs(result_sum.item() - 20.4125) < 1e-2 - assert abs(result_mean.item() - 0.0266) < 1e-3 - else: - # CUDA - assert abs(result_sum.item() - 20.4125) < 1e-2 - assert abs(result_mean.item() - 0.0266) < 1e-3 diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_inits.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_inits.py deleted file mode 100644 index 6b1cdb6fcefd9475bc6bb94a79200913c3601f95..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_inits.py +++ /dev/null @@ -1,299 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import collections -import importlib.util -import os -import re -from pathlib import Path - - -PATH_TO_TRANSFORMERS = "src/transformers" - - -# Matches is_xxx_available() -_re_backend = re.compile(r"is\_([a-z_]*)_available()") -# Catches a one-line _import_struct = {xxx} -_re_one_line_import_struct = re.compile(r"^_import_structure\s+=\s+\{([^\}]+)\}") -# Catches a line with a key-values pattern: "bla": ["foo", "bar"] -_re_import_struct_key_value = re.compile(r'\s+"\S*":\s+\[([^\]]*)\]') -# Catches a line if not is_foo_available -_re_test_backend = re.compile(r"^\s*if\s+not\s+is\_[a-z_]*\_available\(\)") -# Catches a line _import_struct["bla"].append("foo") -_re_import_struct_add_one = re.compile(r'^\s*_import_structure\["\S*"\]\.append\("(\S*)"\)') -# Catches a line _import_struct["bla"].extend(["foo", "bar"]) or _import_struct["bla"] = ["foo", "bar"] -_re_import_struct_add_many = re.compile(r"^\s*_import_structure\[\S*\](?:\.extend\(|\s*=\s+)\[([^\]]*)\]") -# Catches a line with an object between quotes and a comma: "MyModel", -_re_quote_object = re.compile('^\s+"([^"]+)",') -# Catches a line with objects between brackets only: ["foo", "bar"], -_re_between_brackets = re.compile("^\s+\[([^\]]+)\]") -# Catches a line with from foo import bar, bla, boo -_re_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n") -# Catches a line with try: -_re_try = re.compile(r"^\s*try:") -# Catches a line with else: -_re_else = re.compile(r"^\s*else:") - - -def find_backend(line): - """Find one (or multiple) backend in a code line of the init.""" - if _re_test_backend.search(line) is None: - return None - backends = [b[0] for b in _re_backend.findall(line)] - backends.sort() - return "_and_".join(backends) - - -def parse_init(init_file): - """ - Read an init_file and parse (per backend) the _import_structure objects defined and the TYPE_CHECKING objects - defined - """ - with open(init_file, "r", encoding="utf-8", newline="\n") as f: - lines = f.readlines() - - line_index = 0 - while line_index < len(lines) and not lines[line_index].startswith("_import_structure = {"): - line_index += 1 - - # If this is a traditional init, just return. - if line_index >= len(lines): - return None - - # First grab the objects without a specific backend in _import_structure - objects = [] - while not lines[line_index].startswith("if TYPE_CHECKING") and find_backend(lines[line_index]) is None: - line = lines[line_index] - # If we have everything on a single line, let's deal with it. - if _re_one_line_import_struct.search(line): - content = _re_one_line_import_struct.search(line).groups()[0] - imports = re.findall("\[([^\]]+)\]", content) - for imp in imports: - objects.extend([obj[1:-1] for obj in imp.split(", ")]) - line_index += 1 - continue - single_line_import_search = _re_import_struct_key_value.search(line) - if single_line_import_search is not None: - imports = [obj[1:-1] for obj in single_line_import_search.groups()[0].split(", ") if len(obj) > 0] - objects.extend(imports) - elif line.startswith(" " * 8 + '"'): - objects.append(line[9:-3]) - line_index += 1 - - import_dict_objects = {"none": objects} - # Let's continue with backend-specific objects in _import_structure - while not lines[line_index].startswith("if TYPE_CHECKING"): - # If the line is an if not is_backend_available, we grab all objects associated. - backend = find_backend(lines[line_index]) - # Check if the backend declaration is inside a try block: - if _re_try.search(lines[line_index - 1]) is None: - backend = None - - if backend is not None: - line_index += 1 - - # Scroll until we hit the else block of try-except-else - while _re_else.search(lines[line_index]) is None: - line_index += 1 - - line_index += 1 - - objects = [] - # Until we unindent, add backend objects to the list - while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 4): - line = lines[line_index] - if _re_import_struct_add_one.search(line) is not None: - objects.append(_re_import_struct_add_one.search(line).groups()[0]) - elif _re_import_struct_add_many.search(line) is not None: - imports = _re_import_struct_add_many.search(line).groups()[0].split(", ") - imports = [obj[1:-1] for obj in imports if len(obj) > 0] - objects.extend(imports) - elif _re_between_brackets.search(line) is not None: - imports = _re_between_brackets.search(line).groups()[0].split(", ") - imports = [obj[1:-1] for obj in imports if len(obj) > 0] - objects.extend(imports) - elif _re_quote_object.search(line) is not None: - objects.append(_re_quote_object.search(line).groups()[0]) - elif line.startswith(" " * 8 + '"'): - objects.append(line[9:-3]) - elif line.startswith(" " * 12 + '"'): - objects.append(line[13:-3]) - line_index += 1 - - import_dict_objects[backend] = objects - else: - line_index += 1 - - # At this stage we are in the TYPE_CHECKING part, first grab the objects without a specific backend - objects = [] - while ( - line_index < len(lines) - and find_backend(lines[line_index]) is None - and not lines[line_index].startswith("else") - ): - line = lines[line_index] - single_line_import_search = _re_import.search(line) - if single_line_import_search is not None: - objects.extend(single_line_import_search.groups()[0].split(", ")) - elif line.startswith(" " * 8): - objects.append(line[8:-2]) - line_index += 1 - - type_hint_objects = {"none": objects} - # Let's continue with backend-specific objects - while line_index < len(lines): - # If the line is an if is_backend_available, we grab all objects associated. - backend = find_backend(lines[line_index]) - # Check if the backend declaration is inside a try block: - if _re_try.search(lines[line_index - 1]) is None: - backend = None - - if backend is not None: - line_index += 1 - - # Scroll until we hit the else block of try-except-else - while _re_else.search(lines[line_index]) is None: - line_index += 1 - - line_index += 1 - - objects = [] - # Until we unindent, add backend objects to the list - while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 8): - line = lines[line_index] - single_line_import_search = _re_import.search(line) - if single_line_import_search is not None: - objects.extend(single_line_import_search.groups()[0].split(", ")) - elif line.startswith(" " * 12): - objects.append(line[12:-2]) - line_index += 1 - - type_hint_objects[backend] = objects - else: - line_index += 1 - - return import_dict_objects, type_hint_objects - - -def analyze_results(import_dict_objects, type_hint_objects): - """ - Analyze the differences between _import_structure objects and TYPE_CHECKING objects found in an init. - """ - - def find_duplicates(seq): - return [k for k, v in collections.Counter(seq).items() if v > 1] - - if list(import_dict_objects.keys()) != list(type_hint_objects.keys()): - return ["Both sides of the init do not have the same backends!"] - - errors = [] - for key in import_dict_objects.keys(): - duplicate_imports = find_duplicates(import_dict_objects[key]) - if duplicate_imports: - errors.append(f"Duplicate _import_structure definitions for: {duplicate_imports}") - duplicate_type_hints = find_duplicates(type_hint_objects[key]) - if duplicate_type_hints: - errors.append(f"Duplicate TYPE_CHECKING objects for: {duplicate_type_hints}") - - if sorted(set(import_dict_objects[key])) != sorted(set(type_hint_objects[key])): - name = "base imports" if key == "none" else f"{key} backend" - errors.append(f"Differences for {name}:") - for a in type_hint_objects[key]: - if a not in import_dict_objects[key]: - errors.append(f" {a} in TYPE_HINT but not in _import_structure.") - for a in import_dict_objects[key]: - if a not in type_hint_objects[key]: - errors.append(f" {a} in _import_structure but not in TYPE_HINT.") - return errors - - -def check_all_inits(): - """ - Check all inits in the transformers repo and raise an error if at least one does not define the same objects in - both halves. - """ - failures = [] - for root, _, files in os.walk(PATH_TO_TRANSFORMERS): - if "__init__.py" in files: - fname = os.path.join(root, "__init__.py") - objects = parse_init(fname) - if objects is not None: - errors = analyze_results(*objects) - if len(errors) > 0: - errors[0] = f"Problem in {fname}, both halves do not define the same objects.\n{errors[0]}" - failures.append("\n".join(errors)) - if len(failures) > 0: - raise ValueError("\n\n".join(failures)) - - -def get_transformers_submodules(): - """ - Returns the list of Transformers submodules. - """ - submodules = [] - for path, directories, files in os.walk(PATH_TO_TRANSFORMERS): - for folder in directories: - # Ignore private modules - if folder.startswith("_"): - directories.remove(folder) - continue - # Ignore leftovers from branches (empty folders apart from pycache) - if len(list((Path(path) / folder).glob("*.py"))) == 0: - continue - short_path = str((Path(path) / folder).relative_to(PATH_TO_TRANSFORMERS)) - submodule = short_path.replace(os.path.sep, ".") - submodules.append(submodule) - for fname in files: - if fname == "__init__.py": - continue - short_path = str((Path(path) / fname).relative_to(PATH_TO_TRANSFORMERS)) - submodule = short_path.replace(".py", "").replace(os.path.sep, ".") - if len(submodule.split(".")) == 1: - submodules.append(submodule) - return submodules - - -IGNORE_SUBMODULES = [ - "convert_pytorch_checkpoint_to_tf2", - "modeling_flax_pytorch_utils", -] - - -def check_submodules(): - # This is to make sure the transformers module imported is the one in the repo. - spec = importlib.util.spec_from_file_location( - "transformers", - os.path.join(PATH_TO_TRANSFORMERS, "__init__.py"), - submodule_search_locations=[PATH_TO_TRANSFORMERS], - ) - transformers = spec.loader.load_module() - - module_not_registered = [ - module - for module in get_transformers_submodules() - if module not in IGNORE_SUBMODULES and module not in transformers._import_structure.keys() - ] - if len(module_not_registered) > 0: - list_of_modules = "\n".join(f"- {module}" for module in module_not_registered) - raise ValueError( - "The following submodules are not properly registered in the main init of Transformers:\n" - f"{list_of_modules}\n" - "Make sure they appear somewhere in the keys of `_import_structure` with an empty list as value." - ) - - -if __name__ == "__main__": - check_all_inits() - check_submodules() diff --git a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco.py deleted file mode 100644 index 55ca62b7bc6c9cdc97018bcfbe5b109038470dd3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco.py +++ /dev/null @@ -1,28 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='InstaBoost', - action_candidate=('normal', 'horizontal', 'skip'), - action_prob=(1, 0, 0), - scale=(0.8, 1.2), - dx=15, - dy=15, - theta=(-1, 1), - color_prob=0.5, - hflag=False, - aug_ratio=0.5), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(pipeline=train_pipeline)) -# learning policy -lr_config = dict(step=[32, 44]) -runner = dict(type='EpochBasedRunner', max_epochs=48) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/scnet/scnet_r50_fpn_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/scnet/scnet_r50_fpn_20e_coco.py deleted file mode 100644 index 3b121a6a2836ac7626f7b383ada9508f8b9d972d..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/scnet/scnet_r50_fpn_20e_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './scnet_r50_fpn_1x_coco.py' -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(type='EpochBasedRunner', max_epochs=20) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/README.md deleted file mode 100644 index be46e329b6b602f2f6fe77eb1af161b072c92534..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/README.md +++ /dev/null @@ -1,75 +0,0 @@ -# Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation - -## Introduction - - - -```latex -@inproceedings{deeplabv3plus2018, - title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation}, - author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam}, - booktitle={ECCV}, - year={2018} -} -``` - -## Results and models - -Note: -`D-8`/`D-16` here corresponding to the output stride 8/16 setting for DeepLab series. -`MG-124` stands for multi-grid dilation in the last stage of ResNet. - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ---------- | --------------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| DeepLabV3+ | R-50-D8 | 512x1024 | 40000 | 7.5 | 3.94 | 79.61 | 81.01 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes/deeplabv3plus_r50-d8_512x1024_40k_cityscapes_20200605_094610-d222ffcd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_40k_cityscapes/deeplabv3plus_r50-d8_512x1024_40k_cityscapes_20200605_094610.log.json) | -| DeepLabV3+ | R-101-D8 | 512x1024 | 40000 | 11 | 2.60 | 80.21 | 81.82 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_40k_cityscapes/deeplabv3plus_r101-d8_512x1024_40k_cityscapes_20200605_094614-3769eecf.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_40k_cityscapes/deeplabv3plus_r101-d8_512x1024_40k_cityscapes_20200605_094614.log.json) | -| DeepLabV3+ | R-50-D8 | 769x769 | 40000 | 8.5 | 1.72 | 78.97 | 80.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_769x769_40k_cityscapes/deeplabv3plus_r50-d8_769x769_40k_cityscapes_20200606_114143-1dcb0e3c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_769x769_40k_cityscapes/deeplabv3plus_r50-d8_769x769_40k_cityscapes_20200606_114143.log.json) | -| DeepLabV3+ | R-101-D8 | 769x769 | 40000 | 12.5 | 1.15 | 79.46 | 80.50 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes/deeplabv3plus_r101-d8_769x769_40k_cityscapes_20200606_114304-ff414b9e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes/deeplabv3plus_r101-d8_769x769_40k_cityscapes_20200606_114304.log.json) | -| DeepLabV3+ | R-18-D8 | 512x1024 | 80000 | 2.2 | 14.27 | 76.89 | 78.76 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes/deeplabv3plus_r18-d8_512x1024_80k_cityscapes_20201226_080942-cff257fe.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes/deeplabv3plus_r18-d8_512x1024_80k_cityscapes-20201226_080942.log.json) | -| DeepLabV3+ | R-50-D8 | 512x1024 | 80000 | - | - | 80.09 | 81.13 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_80k_cityscapes/deeplabv3plus_r50-d8_512x1024_80k_cityscapes_20200606_114049-f9fb496d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_80k_cityscapes/deeplabv3plus_r50-d8_512x1024_80k_cityscapes_20200606_114049.log.json) | -| DeepLabV3+ | R-101-D8 | 512x1024 | 80000 | - | - | 80.97 | 82.03 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes/deeplabv3plus_r101-d8_512x1024_80k_cityscapes_20200606_114143-068fcfe9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes/deeplabv3plus_r101-d8_512x1024_80k_cityscapes_20200606_114143.log.json) | -| DeepLabV3+ | R-18-D8 | 769x769 | 80000 | 2.5 | 5.74 | 76.26 | 77.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r18-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_769x769_80k_cityscapes/deeplabv3plus_r18-d8_769x769_80k_cityscapes_20201226_083346-f326e06a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_769x769_80k_cityscapes/deeplabv3plus_r18-d8_769x769_80k_cityscapes-20201226_083346.log.json) | -| DeepLabV3+ | R-50-D8 | 769x769 | 80000 | - | - | 79.83 | 81.48 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_769x769_80k_cityscapes/deeplabv3plus_r50-d8_769x769_80k_cityscapes_20200606_210233-0e9dfdc4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_769x769_80k_cityscapes/deeplabv3plus_r50-d8_769x769_80k_cityscapes_20200606_210233.log.json) | -| DeepLabV3+ | R-101-D8 | 769x769 | 80000 | - | - | 80.98 | 82.18 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_769x769_80k_cityscapes/deeplabv3plus_r101-d8_769x769_80k_cityscapes_20200607_000405-a7573d20.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_769x769_80k_cityscapes/deeplabv3plus_r101-d8_769x769_80k_cityscapes_20200607_000405.log.json) | -| DeepLabV3+ | R-101-D16-MG124 | 512x1024 | 40000 | 5.8 | 7.48 | 79.09 | 80.36 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes_20200908_005644-cf9ce186.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes-20200908_005644.log.json) | -| DeepLabV3+ | R-101-D16-MG124 | 512x1024 | 80000 | 9.9 | - | 79.90 | 81.33 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes_20200908_005644-ee6158e0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes-20200908_005644.log.json) | -| DeepLabV3+ | R-18b-D8 | 512x1024 | 80000 | 2.1 | 14.95 | 75.87 | 77.52 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes_20201226_090828-e451abd9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes/deeplabv3plus_r18b-d8_512x1024_80k_cityscapes-20201226_090828.log.json) | -| DeepLabV3+ | R-50b-D8 | 512x1024 | 80000 | 7.4 | 3.94 | 80.28 | 81.44 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes_20201225_213645-a97e4e43.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes/deeplabv3plus_r50b-d8_512x1024_80k_cityscapes-20201225_213645.log.json) | -| DeepLabV3+ | R-101b-D8 | 512x1024 | 80000 | 10.9 | 2.60 | 80.16 | 81.41 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes_20201226_190843-9c3c93a4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes/deeplabv3plus_r101b-d8_512x1024_80k_cityscapes-20201226_190843.log.json) | -| DeepLabV3+ | R-18b-D8 | 769x769 | 80000 | 2.4 | 5.96 | 76.36 | 78.24 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes/deeplabv3plus_r18b-d8_769x769_80k_cityscapes_20201226_151312-2c868aff.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes/deeplabv3plus_r18b-d8_769x769_80k_cityscapes-20201226_151312.log.json) | -| DeepLabV3+ | R-50b-D8 | 769x769 | 80000 | 8.4 | 1.72 | 79.41 | 80.56 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50b-d8_769x769_80k_cityscapes/deeplabv3plus_r50b-d8_769x769_80k_cityscapes_20201225_224655-8b596d1c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50b-d8_769x769_80k_cityscapes/deeplabv3plus_r50b-d8_769x769_80k_cityscapes-20201225_224655.log.json) | -| DeepLabV3+ | R-101b-D8 | 769x769 | 80000 | 12.3 | 1.10 | 79.88 | 81.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes/deeplabv3plus_r101b-d8_769x769_80k_cityscapes_20201226_205041-227cdf7c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes/deeplabv3plus_r101b-d8_769x769_80k_cityscapes-20201226_205041.log.json) | - -### ADE20K - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ---------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| DeepLabV3+ | R-50-D8 | 512x512 | 80000 | 10.6 | 21.01 | 42.72 | 43.75 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_80k_ade20k/deeplabv3plus_r50-d8_512x512_80k_ade20k_20200614_185028-bf1400d8.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_80k_ade20k/deeplabv3plus_r50-d8_512x512_80k_ade20k_20200614_185028.log.json) | -| DeepLabV3+ | R-101-D8 | 512x512 | 80000 | 14.1 | 14.16 | 44.60 | 46.06 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k/deeplabv3plus_r101-d8_512x512_80k_ade20k_20200615_014139-d5730af7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k/deeplabv3plus_r101-d8_512x512_80k_ade20k_20200615_014139.log.json) | -| DeepLabV3+ | R-50-D8 | 512x512 | 160000 | - | - | 43.95 | 44.93 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k/deeplabv3plus_r50-d8_512x512_160k_ade20k_20200615_124504-6135c7e0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k/deeplabv3plus_r50-d8_512x512_160k_ade20k_20200615_124504.log.json) | -| DeepLabV3+ | R-101-D8 | 512x512 | 160000 | - | - | 45.47 | 46.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k/deeplabv3plus_r101-d8_512x512_160k_ade20k_20200615_123232-38ed86bb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k/deeplabv3plus_r101-d8_512x512_160k_ade20k_20200615_123232.log.json) | - -#### Pascal VOC 2012 + Aug - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ---------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| DeepLabV3+ | R-50-D8 | 512x512 | 20000 | 7.6 | 21 | 75.93 | 77.50 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug/deeplabv3plus_r50-d8_512x512_20k_voc12aug_20200617_102323-aad58ef1.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_20k_voc12aug/deeplabv3plus_r50-d8_512x512_20k_voc12aug_20200617_102323.log.json) | -| DeepLabV3+ | R-101-D8 | 512x512 | 20000 | 11 | 13.88 | 77.22 | 78.59 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_20k_voc12aug/deeplabv3plus_r101-d8_512x512_20k_voc12aug_20200617_102345-c7ff3d56.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_20k_voc12aug/deeplabv3plus_r101-d8_512x512_20k_voc12aug_20200617_102345.log.json) | -| DeepLabV3+ | R-50-D8 | 512x512 | 40000 | - | - | 76.81 | 77.57 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug/deeplabv3plus_r50-d8_512x512_40k_voc12aug_20200613_161759-e1b43aa9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug/deeplabv3plus_r50-d8_512x512_40k_voc12aug_20200613_161759.log.json) | -| DeepLabV3+ | R-101-D8 | 512x512 | 40000 | - | - | 78.62 | 79.53 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_40k_voc12aug/deeplabv3plus_r101-d8_512x512_40k_voc12aug_20200613_205333-faf03387.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_512x512_40k_voc12aug/deeplabv3plus_r101-d8_512x512_40k_voc12aug_20200613_205333.log.json) | - -#### Pascal Context - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ---------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| DeepLabV3+ | R-101-D8 | 480x480 | 40000 | - | 9.09 | 47.30 | 48.47 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context/deeplabv3plus_r101-d8_480x480_40k_pascal_context_20200911_165459-d3c8a29e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context/deeplabv3plus_r101-d8_480x480_40k_pascal_context-20200911_165459.log.json) | -| DeepLabV3+ | R-101-D8 | 480x480 | 80000 | - | - | 47.23 | 48.26 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context/deeplabv3plus_r101-d8_480x480_80k_pascal_context_20200911_155322-145d3ee8.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context/deeplabv3plus_r101-d8_480x480_80k_pascal_context-20200911_155322.log.json) | - -#### Pascal Context 59 - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ---------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| DeepLabV3+ | R-101-D8 | 480x480 | 40000 | - | - | 52.86 | 54.54 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59_20210416_111233-ed937f15.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59-20210416_111233.log.json) | -| DeepLabV3+ | R-101-D8 | 480x480 | 80000 | - | - | 53.2 | 54.67 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59_20210416_111127-7ca0331d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59-20210416_111127.log.json) | diff --git a/spaces/Annelisseishere/Streamlit_GPT/app.py b/spaces/Annelisseishere/Streamlit_GPT/app.py deleted file mode 100644 index eeaec923e8702ad23ca62325837ccaf2592ce573..0000000000000000000000000000000000000000 --- a/spaces/Annelisseishere/Streamlit_GPT/app.py +++ /dev/null @@ -1,142 +0,0 @@ -from dotenv import load_dotenv -import os -import streamlit as st -from PyPDF2 import PdfFileReader -from langchain.text_splitter import CharacterTextSplitter -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.vectorstores import FAISS -from langchain.chains.question_answering import load_qa_chain -from langchain.llms import OpenAI as LLMSOpenAI -from langchain.llms import AzureOpenAI -from langchain.callbacks import get_openai_callback -from langchain.chat_models import ChatOpenAI -from docx import Document -from openpyxl import load_workbook -import pdfplumber - - -def extract_text_from_pdf(pdf_file): - with pdfplumber.open(pdf_file) as pdf: - text = "" - for page in pdf.pages: - text += page.extract_text() - return text - - -def extract_text_from_docx(docx_file): - doc = Document(docx_file) - paragraphs = [paragraph.text for paragraph in doc.paragraphs] - return "\n".join(paragraphs) - - -def extract_text_from_excel(excel_file): - workbook = load_workbook(excel_file) - text = "" - for sheet in workbook.sheetnames: - worksheet = workbook[sheet] - for row in worksheet.iter_rows(): - for cell in row: - if cell.value: - text += str(cell.value) + "\n" - return text - - -def split_text_into_chunks(text): - text_splitter = CharacterTextSplitter( - separator="\n", - chunk_size=1000, - chunk_overlap=200, - length_function=len - ) - return text_splitter.split_text(text) - - -def create_knowledge_base(chunks, api_key=None): - embeddings = OpenAIEmbeddings(openai_api_key=api_key) - knowledge_base = FAISS.from_texts(chunks, embeddings) - return knowledge_base - - -def answer_question(question, knowledge_base, model): - docs = knowledge_base.similarity_search(question) - llm = model(model_name="gpt-3.5-turbo", openai_api_key=st.session_state.api_key) - chain = load_qa_chain(llm, chain_type="stuff") - with get_openai_callback() as cb: - response = chain.run(input_documents=docs, question=question) - return response - - -def save_api_key(api_key): - st.session_state.api_key = api_key - - -def main(): - load_dotenv() - st.set_page_config(page_title="Ask Your PDF", layout="wide") - - # Sidebar - st.sidebar.title("Settings") - - # API Key input - st.sidebar.subheader("API Key") - api_key = st.sidebar.text_input("Insert your API Key", type="password") - st.sidebar.button("Save API Key", on_click=save_api_key, args=(api_key,)) - - model_type = st.sidebar.selectbox("Select Language Model", ["OpenAI", "AzureOpenAI"]) - if model_type == "AzureOpenAI": - model = AzureOpenAI - else: - model = ChatOpenAI - - chunk_size = st.sidebar.slider("Chunk Size", min_value=500, max_value=2000, value=1000, step=100) - chunk_overlap = st.sidebar.slider("Chunk Overlap", min_value=100, max_value=500, value=200, step=50) - show_content = st.sidebar.checkbox("Show Document Content") - show_answers = st.sidebar.checkbox("Show Previous Answers") - - # Main content - st.title("Ask Your Document 💭") - file_format = st.selectbox("Select File Format", ["PDF", "docx", "xlsx"]) - document = st.file_uploader("Upload Document", type=[file_format.lower()]) - - if not hasattr(st.session_state, "api_key") or not st.session_state.api_key: - st.warning("You need to insert your API Key first.") - elif document is not None: - if file_format == "PDF": - text = extract_text_from_pdf(document) - elif file_format == "docx": - text = extract_text_from_docx(document) - elif file_format == "xlsx": - text = extract_text_from_excel(document) - else: - text = "" - - if show_content: - st.subheader("Document Text:") - st.text_area("Content", value=text, height=300) - - chunks = split_text_into_chunks(text) - knowledge_base = create_knowledge_base(chunks, api_key=st.session_state.api_key) - - user_question = st.text_input("Ask a question based on the document content:") - - if user_question: - response = answer_question(user_question, knowledge_base, model) - st.subheader("Answer:") - st.write(response) - - # Store and display previous answers - if "answers" not in st.session_state: - st.session_state.answers = [] - st.session_state.answers.append((user_question, response)) - - if show_answers: - st.subheader("Previous Answers:") - for question, answer in st.session_state.answers: - st.write(f"Question: {question}") - st.write(f"Answer: {answer}") - st.write("------") - - -if __name__ == '__main__': - main() - diff --git a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/dataset.py b/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/dataset.py deleted file mode 100644 index d5f04f5f67ac4174334acaba608782950d0f7c25..0000000000000000000000000000000000000000 --- a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/dataset.py +++ /dev/null @@ -1,48 +0,0 @@ -import fastai -from fastai import * -from fastai.core import * -from fastai.vision.transform import get_transforms -from fastai.vision.data import ImageImageList, ImageDataBunch, imagenet_stats -from .augs import noisify - - -def get_colorize_data( - sz: int, - bs: int, - crappy_path: Path, - good_path: Path, - random_seed: int = None, - keep_pct: float = 1.0, - num_workers: int = 8, - stats: tuple = imagenet_stats, - xtra_tfms=[], -) -> ImageDataBunch: - - src = ( - ImageImageList.from_folder(crappy_path, convert_mode='RGB') - .use_partial_data(sample_pct=keep_pct, seed=random_seed) - .split_by_rand_pct(0.1, seed=random_seed) - ) - - data = ( - src.label_from_func(lambda x: good_path / x.relative_to(crappy_path)) - .transform( - get_transforms( - max_zoom=1.2, max_lighting=0.5, max_warp=0.25, xtra_tfms=xtra_tfms - ), - size=sz, - tfm_y=True, - ) - .databunch(bs=bs, num_workers=num_workers, no_check=True) - .normalize(stats, do_y=True) - ) - - data.c = 3 - return data - - -def get_dummy_databunch() -> ImageDataBunch: - path = Path('./assets/dummy/') - return get_colorize_data( - sz=1, bs=1, crappy_path=path, good_path=path, keep_pct=0.001 - ) diff --git a/spaces/Arijit-hazra/my-image-captioner/README.md b/spaces/Arijit-hazra/my-image-captioner/README.md deleted file mode 100644 index b4daf8172747e6f8b620707eee16746ef8a5fa63..0000000000000000000000000000000000000000 --- a/spaces/Arijit-hazra/my-image-captioner/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: My Image Captioner -emoji: 👀 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ashwanthram/myGenVoiceBot/app.py b/spaces/Ashwanthram/myGenVoiceBot/app.py deleted file mode 100644 index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000 --- a/spaces/Ashwanthram/myGenVoiceBot/app.py +++ /dev/null @@ -1,164 +0,0 @@ -import os -import re -import requests -import json -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') -PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY') -PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID') - -PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID') -play_ht_api_get_audio_url = "https://play.ht/api/v2/tts" - - -template = """You are a helpful assistant to answer user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -headers = { - "accept": "text/event-stream", - "content-type": "application/json", - "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY, - "X-USER-ID": PLAY_HT_USER_ID -} - - -def get_payload(text): - return { - "text": text, - "voice": PLAY_HT_VOICE_ID, - "quality": "medium", - "output_format": "mp3", - "speed": 1, - "sample_rate": 24000, - "seed": None, - "temperature": None - } - -def get_generated_audio(text): - payload = get_payload(text) - generated_response = {} - try: - response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers) - response.raise_for_status() - generated_response["type"]= 'SUCCESS' - generated_response["response"] = response.text - except requests.exceptions.RequestException as e: - generated_response["type"]= 'ERROR' - try: - response_text = json.loads(response.text) - if response_text['error_message']: - generated_response["response"] = response_text['error_message'] - else: - generated_response["response"] = response.text - except Exception as e: - generated_response["response"] = response.text - except Exception as e: - generated_response["type"]= 'ERROR' - generated_response["response"] = response.text - return generated_response - -def extract_urls(text): - # Define the regex pattern for URLs - url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*' - - # Find all occurrences of URLs in the text - urls = re.findall(url_pattern, text) - - return urls - -def get_audio_reply_for_question(text): - generated_audio_event = get_generated_audio(text) - #From get_generated_audio, you will get events in a string format, from that we need to extract the url - final_response = { - "audio_url": '', - "message": '' - } - if generated_audio_event["type"] == 'SUCCESS': - audio_urls = extract_urls(generated_audio_event["response"]) - if len(audio_urls) == 0: - final_response['message'] = "No audio file link found in generated event" - else: - final_response['audio_url'] = audio_urls[-1] - else: - final_response['message'] = generated_audio_event['response'] - return final_response - -def download_url(url): - try: - # Send a GET request to the URL to fetch the content - final_response = { - 'content':'', - 'error':'' - } - response = requests.get(url) - # Check if the request was successful (status code 200) - if response.status_code == 200: - final_response['content'] = response.content - else: - final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}" - except Exception as e: - final_response['error'] = f"Failed to download the URL. Error: {e}" - return final_response - -def get_filename_from_url(url): - # Use os.path.basename() to extract the file name from the URL - file_name = os.path.basename(url) - return file_name - -def get_text_response(user_message): - response = llm_chain.predict(user_message = user_message) - return response - -def get_text_response_and_audio_response(user_message): - response = get_text_response(user_message) # Getting the reply from Open AI - audio_reply_for_question_response = get_audio_reply_for_question(response) - final_response = { - 'output_file_path': '', - 'message':'' - } - audio_url = audio_reply_for_question_response['audio_url'] - if audio_url: - output_file_path=get_filename_from_url(audio_url) - download_url_response = download_url(audio_url) - audio_content = download_url_response['content'] - if audio_content: - with open(output_file_path, "wb") as audio_file: - audio_file.write(audio_content) - final_response['output_file_path'] = output_file_path - else: - final_response['message'] = download_url_response['error'] - else: - final_response['message'] = audio_reply_for_question_response['message'] - return final_response - -def chat_bot_response(message, history): - text_and_audio_response = get_text_response_and_audio_response(message) - output_file_path = text_and_audio_response['output_file_path'] - if output_file_path: - return (text_and_audio_response['output_file_path'],) - else: - return text_and_audio_response['message'] - -demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"]) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/langhebrewmodel.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/langhebrewmodel.py deleted file mode 100644 index 56d2975877f092ac62ad403803f6456858affcba..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/langhebrewmodel.py +++ /dev/null @@ -1,4380 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -HEBREW_LANG_MODEL = { - 50: { # 'a' - 50: 0, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 2, # 'l' - 54: 2, # 'n' - 49: 0, # 'o' - 51: 2, # 'r' - 43: 1, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 1, # 'ק' - 7: 0, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 60: { # 'c' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 0, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 0, # 'n' - 49: 1, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 61: { # 'd' - 50: 1, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 2, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 0, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 1, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 42: { # 'e' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 2, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 2, # 'l' - 54: 2, # 'n' - 49: 1, # 'o' - 51: 2, # 'r' - 43: 2, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 1, # '–' - 52: 2, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 53: { # 'i' - 50: 1, # 'a' - 60: 2, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 0, # 'i' - 56: 1, # 'l' - 54: 2, # 'n' - 49: 2, # 'o' - 51: 1, # 'r' - 43: 2, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 56: { # 'l' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 2, # 'e' - 53: 2, # 'i' - 56: 2, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 0, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 54: { # 'n' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 0, # 'r' - 43: 1, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 2, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 49: { # 'o' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 2, # 'n' - 49: 1, # 'o' - 51: 2, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 51: { # 'r' - 50: 2, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 2, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 2, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 2, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 43: { # 's' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 0, # 'd' - 42: 2, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 2, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 44: { # 't' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 0, # 'd' - 42: 2, # 'e' - 53: 2, # 'i' - 56: 1, # 'l' - 54: 0, # 'n' - 49: 1, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 2, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 63: { # 'u' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 0, # 'o' - 51: 1, # 'r' - 43: 2, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 34: { # '\xa0' - 50: 1, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 0, # 'e' - 53: 1, # 'i' - 56: 0, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 0, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 2, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 2, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 2, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 55: { # '´' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 1, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 2, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 1, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 48: { # '¼' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 39: { # '½' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 57: { # '¾' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 30: { # 'ְ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 1, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 2, # 'ו' - 24: 2, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 2, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 1, # 'ם' - 6: 2, # 'מ' - 23: 0, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 2, # 'ע' - 26: 0, # 'ף' - 18: 2, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 59: { # 'ֱ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 1, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 0, # 'ם' - 6: 2, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 41: { # 'ֲ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 1, # 'י' - 25: 1, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 0, # 'ם' - 6: 2, # 'מ' - 23: 0, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 1, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 33: { # 'ִ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 1, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 0, # 'ַ' - 29: 1, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 1, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 2, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 37: { # 'ֵ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 1, # 'ַ' - 29: 1, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 2, # 'ח' - 22: 1, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 1, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 1, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 36: { # 'ֶ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 1, # 'ַ' - 29: 1, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 2, # 'ח' - 22: 1, # 'ט' - 1: 2, # 'י' - 25: 2, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 1, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 2, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 31: { # 'ַ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 0, # 'ַ' - 29: 2, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 1, # 'ו' - 24: 2, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 2, # 'ע' - 26: 2, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 29: { # 'ָ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 1, # 'ַ' - 29: 2, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 1, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 3, # 'ה' - 2: 2, # 'ו' - 24: 2, # 'ז' - 14: 2, # 'ח' - 22: 1, # 'ט' - 1: 2, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 35: { # 'ֹ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 1, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 1, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 62: { # 'ֻ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 1, # 'ם' - 6: 1, # 'מ' - 23: 1, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 28: { # 'ּ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 3, # 'ְ' - 59: 0, # 'ֱ' - 41: 1, # 'ֲ' - 33: 3, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 3, # 'ַ' - 29: 3, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 2, # 'ׁ' - 45: 1, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 1, # 'ה' - 2: 2, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 2, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 1, # 'ם' - 6: 2, # 'מ' - 23: 1, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 1, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 38: { # 'ׁ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 2, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 45: { # 'ׂ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 2, # 'ֶ' - 31: 1, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 1, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 2, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 1, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 0, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 9: { # 'א' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 2, # 'ֱ' - 41: 2, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 2, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 8: { # 'ב' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 3, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 1, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 20: { # 'ג' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 2, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 1, # 'ִ' - 37: 1, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 0, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 1, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 16: { # 'ד' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 1, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 3: { # 'ה' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 1, # 'ֱ' - 41: 2, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 3, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 0, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 2: { # 'ו' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 3, # 'ֹ' - 62: 0, # 'ֻ' - 28: 3, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 3, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 24: { # 'ז' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 1, # 'ֲ' - 33: 1, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 2, # 'ח' - 22: 1, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 1, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 14: { # 'ח' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 1, # 'ֱ' - 41: 2, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 1, # 'ע' - 26: 2, # 'ף' - 18: 2, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 22: { # 'ט' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 1, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 1, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 1, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 3, # 'ר' - 10: 2, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 1: { # 'י' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 3, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 25: { # 'ך' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 2, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 1, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 15: { # 'כ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 3, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 2, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 4: { # 'ל' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 3, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 11: { # 'ם' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 1, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 1, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 6: { # 'מ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 0, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 23: { # 'ן' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 1, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 12: { # 'נ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 19: { # 'ס' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 1, # 'ָ' - 35: 1, # 'ֹ' - 62: 2, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 1, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 1, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 13: { # 'ע' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 1, # 'ֱ' - 41: 2, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 1, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 2, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 26: { # 'ף' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 1, # 'ס' - 13: 0, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 18: { # 'פ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 2, # 'ֶ' - 31: 1, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 2, # 'ב' - 20: 3, # 'ג' - 16: 2, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 2, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 27: { # 'ץ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 1, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 0, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 21: { # 'צ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 1, # 'ז' - 14: 3, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 1, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 1, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 0, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 17: { # 'ק' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 1, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 2, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 7: { # 'ר' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 2, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 1, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 3, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 10: { # 'ש' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 1, # 'ִ' - 37: 1, # 'ֵ' - 36: 1, # 'ֶ' - 31: 1, # 'ַ' - 29: 1, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 3, # 'ׁ' - 45: 2, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 5: { # 'ת' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 1, # '¼' - 39: 1, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 2, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 32: { # '–' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 1, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 52: { # '’' - 50: 1, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 1, # 'r' - 43: 2, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 47: { # '“' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 46: { # '”' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 58: { # '†' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 2, # '†' - 40: 0, # '…' - }, - 40: { # '…' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 0, # 'l' - 54: 1, # 'n' - 49: 0, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -WINDOWS_1255_HEBREW_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 69, # 'A' - 66: 91, # 'B' - 67: 79, # 'C' - 68: 80, # 'D' - 69: 92, # 'E' - 70: 89, # 'F' - 71: 97, # 'G' - 72: 90, # 'H' - 73: 68, # 'I' - 74: 111, # 'J' - 75: 112, # 'K' - 76: 82, # 'L' - 77: 73, # 'M' - 78: 95, # 'N' - 79: 85, # 'O' - 80: 78, # 'P' - 81: 121, # 'Q' - 82: 86, # 'R' - 83: 71, # 'S' - 84: 67, # 'T' - 85: 102, # 'U' - 86: 107, # 'V' - 87: 84, # 'W' - 88: 114, # 'X' - 89: 103, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 50, # 'a' - 98: 74, # 'b' - 99: 60, # 'c' - 100: 61, # 'd' - 101: 42, # 'e' - 102: 76, # 'f' - 103: 70, # 'g' - 104: 64, # 'h' - 105: 53, # 'i' - 106: 105, # 'j' - 107: 93, # 'k' - 108: 56, # 'l' - 109: 65, # 'm' - 110: 54, # 'n' - 111: 49, # 'o' - 112: 66, # 'p' - 113: 110, # 'q' - 114: 51, # 'r' - 115: 43, # 's' - 116: 44, # 't' - 117: 63, # 'u' - 118: 81, # 'v' - 119: 77, # 'w' - 120: 98, # 'x' - 121: 75, # 'y' - 122: 108, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 124, # '€' - 129: 202, # None - 130: 203, # '‚' - 131: 204, # 'ƒ' - 132: 205, # '„' - 133: 40, # '…' - 134: 58, # '†' - 135: 206, # '‡' - 136: 207, # 'ˆ' - 137: 208, # '‰' - 138: 209, # None - 139: 210, # '‹' - 140: 211, # None - 141: 212, # None - 142: 213, # None - 143: 214, # None - 144: 215, # None - 145: 83, # '‘' - 146: 52, # '’' - 147: 47, # '“' - 148: 46, # '”' - 149: 72, # '•' - 150: 32, # '–' - 151: 94, # '—' - 152: 216, # '˜' - 153: 113, # '™' - 154: 217, # None - 155: 109, # '›' - 156: 218, # None - 157: 219, # None - 158: 220, # None - 159: 221, # None - 160: 34, # '\xa0' - 161: 116, # '¡' - 162: 222, # '¢' - 163: 118, # '£' - 164: 100, # '₪' - 165: 223, # '¥' - 166: 224, # '¦' - 167: 117, # '§' - 168: 119, # '¨' - 169: 104, # '©' - 170: 125, # '×' - 171: 225, # '«' - 172: 226, # '¬' - 173: 87, # '\xad' - 174: 99, # '®' - 175: 227, # '¯' - 176: 106, # '°' - 177: 122, # '±' - 178: 123, # '²' - 179: 228, # '³' - 180: 55, # '´' - 181: 229, # 'µ' - 182: 230, # '¶' - 183: 101, # '·' - 184: 231, # '¸' - 185: 232, # '¹' - 186: 120, # '÷' - 187: 233, # '»' - 188: 48, # '¼' - 189: 39, # '½' - 190: 57, # '¾' - 191: 234, # '¿' - 192: 30, # 'ְ' - 193: 59, # 'ֱ' - 194: 41, # 'ֲ' - 195: 88, # 'ֳ' - 196: 33, # 'ִ' - 197: 37, # 'ֵ' - 198: 36, # 'ֶ' - 199: 31, # 'ַ' - 200: 29, # 'ָ' - 201: 35, # 'ֹ' - 202: 235, # None - 203: 62, # 'ֻ' - 204: 28, # 'ּ' - 205: 236, # 'ֽ' - 206: 126, # '־' - 207: 237, # 'ֿ' - 208: 238, # '׀' - 209: 38, # 'ׁ' - 210: 45, # 'ׂ' - 211: 239, # '׃' - 212: 240, # 'װ' - 213: 241, # 'ױ' - 214: 242, # 'ײ' - 215: 243, # '׳' - 216: 127, # '״' - 217: 244, # None - 218: 245, # None - 219: 246, # None - 220: 247, # None - 221: 248, # None - 222: 249, # None - 223: 250, # None - 224: 9, # 'א' - 225: 8, # 'ב' - 226: 20, # 'ג' - 227: 16, # 'ד' - 228: 3, # 'ה' - 229: 2, # 'ו' - 230: 24, # 'ז' - 231: 14, # 'ח' - 232: 22, # 'ט' - 233: 1, # 'י' - 234: 25, # 'ך' - 235: 15, # 'כ' - 236: 4, # 'ל' - 237: 11, # 'ם' - 238: 6, # 'מ' - 239: 23, # 'ן' - 240: 12, # 'נ' - 241: 19, # 'ס' - 242: 13, # 'ע' - 243: 26, # 'ף' - 244: 18, # 'פ' - 245: 27, # 'ץ' - 246: 21, # 'צ' - 247: 17, # 'ק' - 248: 7, # 'ר' - 249: 10, # 'ש' - 250: 5, # 'ת' - 251: 251, # None - 252: 252, # None - 253: 128, # '\u200e' - 254: 96, # '\u200f' - 255: 253, # None -} - -WINDOWS_1255_HEBREW_MODEL = SingleByteCharSetModel( - charset_name="windows-1255", - language="Hebrew", - char_to_order_map=WINDOWS_1255_HEBREW_CHAR_TO_ORDER, - language_model=HEBREW_LANG_MODEL, - typical_positive_ratio=0.984004, - keep_ascii_letters=False, - alphabet="אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ", -) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/unicode.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/unicode.py deleted file mode 100644 index 06526203911de55da3c2a8c5ae73f48024c3f018..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/unicode.py +++ /dev/null @@ -1,352 +0,0 @@ -# unicode.py - -import sys -from itertools import filterfalse -from typing import List, Tuple, Union - - -class _lazyclassproperty: - def __init__(self, fn): - self.fn = fn - self.__doc__ = fn.__doc__ - self.__name__ = fn.__name__ - - def __get__(self, obj, cls): - if cls is None: - cls = type(obj) - if not hasattr(cls, "_intern") or any( - cls._intern is getattr(superclass, "_intern", []) - for superclass in cls.__mro__[1:] - ): - cls._intern = {} - attrname = self.fn.__name__ - if attrname not in cls._intern: - cls._intern[attrname] = self.fn(cls) - return cls._intern[attrname] - - -UnicodeRangeList = List[Union[Tuple[int, int], Tuple[int]]] - - -class unicode_set: - """ - A set of Unicode characters, for language-specific strings for - ``alphas``, ``nums``, ``alphanums``, and ``printables``. - A unicode_set is defined by a list of ranges in the Unicode character - set, in a class attribute ``_ranges``. Ranges can be specified using - 2-tuples or a 1-tuple, such as:: - - _ranges = [ - (0x0020, 0x007e), - (0x00a0, 0x00ff), - (0x0100,), - ] - - Ranges are left- and right-inclusive. A 1-tuple of (x,) is treated as (x, x). - - A unicode set can also be defined using multiple inheritance of other unicode sets:: - - class CJK(Chinese, Japanese, Korean): - pass - """ - - _ranges: UnicodeRangeList = [] - - @_lazyclassproperty - def _chars_for_ranges(cls): - ret = [] - for cc in cls.__mro__: - if cc is unicode_set: - break - for rr in getattr(cc, "_ranges", ()): - ret.extend(range(rr[0], rr[-1] + 1)) - return [chr(c) for c in sorted(set(ret))] - - @_lazyclassproperty - def printables(cls): - "all non-whitespace characters in this range" - return "".join(filterfalse(str.isspace, cls._chars_for_ranges)) - - @_lazyclassproperty - def alphas(cls): - "all alphabetic characters in this range" - return "".join(filter(str.isalpha, cls._chars_for_ranges)) - - @_lazyclassproperty - def nums(cls): - "all numeric digit characters in this range" - return "".join(filter(str.isdigit, cls._chars_for_ranges)) - - @_lazyclassproperty - def alphanums(cls): - "all alphanumeric characters in this range" - return cls.alphas + cls.nums - - @_lazyclassproperty - def identchars(cls): - "all characters in this range that are valid identifier characters, plus underscore '_'" - return "".join( - sorted( - set( - "".join(filter(str.isidentifier, cls._chars_for_ranges)) - + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµº" - + "ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ" - + "_" - ) - ) - ) - - @_lazyclassproperty - def identbodychars(cls): - """ - all characters in this range that are valid identifier body characters, - plus the digits 0-9 - """ - return "".join( - sorted( - set( - cls.identchars - + "0123456789" - + "".join( - [c for c in cls._chars_for_ranges if ("_" + c).isidentifier()] - ) - ) - ) - ) - - -class pyparsing_unicode(unicode_set): - """ - A namespace class for defining common language unicode_sets. - """ - - # fmt: off - - # define ranges in language character sets - _ranges: UnicodeRangeList = [ - (0x0020, sys.maxunicode), - ] - - class BasicMultilingualPlane(unicode_set): - "Unicode set for the Basic Multilingual Plane" - _ranges: UnicodeRangeList = [ - (0x0020, 0xFFFF), - ] - - class Latin1(unicode_set): - "Unicode set for Latin-1 Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0020, 0x007E), - (0x00A0, 0x00FF), - ] - - class LatinA(unicode_set): - "Unicode set for Latin-A Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0100, 0x017F), - ] - - class LatinB(unicode_set): - "Unicode set for Latin-B Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0180, 0x024F), - ] - - class Greek(unicode_set): - "Unicode set for Greek Unicode Character Ranges" - _ranges: UnicodeRangeList = [ - (0x0342, 0x0345), - (0x0370, 0x0377), - (0x037A, 0x037F), - (0x0384, 0x038A), - (0x038C,), - (0x038E, 0x03A1), - (0x03A3, 0x03E1), - (0x03F0, 0x03FF), - (0x1D26, 0x1D2A), - (0x1D5E,), - (0x1D60,), - (0x1D66, 0x1D6A), - (0x1F00, 0x1F15), - (0x1F18, 0x1F1D), - (0x1F20, 0x1F45), - (0x1F48, 0x1F4D), - (0x1F50, 0x1F57), - (0x1F59,), - (0x1F5B,), - (0x1F5D,), - (0x1F5F, 0x1F7D), - (0x1F80, 0x1FB4), - (0x1FB6, 0x1FC4), - (0x1FC6, 0x1FD3), - (0x1FD6, 0x1FDB), - (0x1FDD, 0x1FEF), - (0x1FF2, 0x1FF4), - (0x1FF6, 0x1FFE), - (0x2129,), - (0x2719, 0x271A), - (0xAB65,), - (0x10140, 0x1018D), - (0x101A0,), - (0x1D200, 0x1D245), - (0x1F7A1, 0x1F7A7), - ] - - class Cyrillic(unicode_set): - "Unicode set for Cyrillic Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0400, 0x052F), - (0x1C80, 0x1C88), - (0x1D2B,), - (0x1D78,), - (0x2DE0, 0x2DFF), - (0xA640, 0xA672), - (0xA674, 0xA69F), - (0xFE2E, 0xFE2F), - ] - - class Chinese(unicode_set): - "Unicode set for Chinese Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x2E80, 0x2E99), - (0x2E9B, 0x2EF3), - (0x31C0, 0x31E3), - (0x3400, 0x4DB5), - (0x4E00, 0x9FEF), - (0xA700, 0xA707), - (0xF900, 0xFA6D), - (0xFA70, 0xFAD9), - (0x16FE2, 0x16FE3), - (0x1F210, 0x1F212), - (0x1F214, 0x1F23B), - (0x1F240, 0x1F248), - (0x20000, 0x2A6D6), - (0x2A700, 0x2B734), - (0x2B740, 0x2B81D), - (0x2B820, 0x2CEA1), - (0x2CEB0, 0x2EBE0), - (0x2F800, 0x2FA1D), - ] - - class Japanese(unicode_set): - "Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges" - _ranges: UnicodeRangeList = [] - - class Kanji(unicode_set): - "Unicode set for Kanji Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x4E00, 0x9FBF), - (0x3000, 0x303F), - ] - - class Hiragana(unicode_set): - "Unicode set for Hiragana Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x3041, 0x3096), - (0x3099, 0x30A0), - (0x30FC,), - (0xFF70,), - (0x1B001,), - (0x1B150, 0x1B152), - (0x1F200,), - ] - - class Katakana(unicode_set): - "Unicode set for Katakana Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x3099, 0x309C), - (0x30A0, 0x30FF), - (0x31F0, 0x31FF), - (0x32D0, 0x32FE), - (0xFF65, 0xFF9F), - (0x1B000,), - (0x1B164, 0x1B167), - (0x1F201, 0x1F202), - (0x1F213,), - ] - - class Hangul(unicode_set): - "Unicode set for Hangul (Korean) Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x1100, 0x11FF), - (0x302E, 0x302F), - (0x3131, 0x318E), - (0x3200, 0x321C), - (0x3260, 0x327B), - (0x327E,), - (0xA960, 0xA97C), - (0xAC00, 0xD7A3), - (0xD7B0, 0xD7C6), - (0xD7CB, 0xD7FB), - (0xFFA0, 0xFFBE), - (0xFFC2, 0xFFC7), - (0xFFCA, 0xFFCF), - (0xFFD2, 0xFFD7), - (0xFFDA, 0xFFDC), - ] - - Korean = Hangul - - class CJK(Chinese, Japanese, Hangul): - "Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range" - - class Thai(unicode_set): - "Unicode set for Thai Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0E01, 0x0E3A), - (0x0E3F, 0x0E5B) - ] - - class Arabic(unicode_set): - "Unicode set for Arabic Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0600, 0x061B), - (0x061E, 0x06FF), - (0x0700, 0x077F), - ] - - class Hebrew(unicode_set): - "Unicode set for Hebrew Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0591, 0x05C7), - (0x05D0, 0x05EA), - (0x05EF, 0x05F4), - (0xFB1D, 0xFB36), - (0xFB38, 0xFB3C), - (0xFB3E,), - (0xFB40, 0xFB41), - (0xFB43, 0xFB44), - (0xFB46, 0xFB4F), - ] - - class Devanagari(unicode_set): - "Unicode set for Devanagari Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0900, 0x097F), - (0xA8E0, 0xA8FF) - ] - - # fmt: on - - -pyparsing_unicode.Japanese._ranges = ( - pyparsing_unicode.Japanese.Kanji._ranges - + pyparsing_unicode.Japanese.Hiragana._ranges - + pyparsing_unicode.Japanese.Katakana._ranges -) - -pyparsing_unicode.BMP = pyparsing_unicode.BasicMultilingualPlane - -# add language identifiers using language Unicode -pyparsing_unicode.العربية = pyparsing_unicode.Arabic -pyparsing_unicode.中文 = pyparsing_unicode.Chinese -pyparsing_unicode.кириллица = pyparsing_unicode.Cyrillic -pyparsing_unicode.Ελληνικά = pyparsing_unicode.Greek -pyparsing_unicode.עִברִית = pyparsing_unicode.Hebrew -pyparsing_unicode.日本語 = pyparsing_unicode.Japanese -pyparsing_unicode.Japanese.漢字 = pyparsing_unicode.Japanese.Kanji -pyparsing_unicode.Japanese.カタカナ = pyparsing_unicode.Japanese.Katakana -pyparsing_unicode.Japanese.ひらがな = pyparsing_unicode.Japanese.Hiragana -pyparsing_unicode.한국어 = pyparsing_unicode.Korean -pyparsing_unicode.ไทย = pyparsing_unicode.Thai -pyparsing_unicode.देवनागरी = pyparsing_unicode.Devanagari diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/unicode_utils.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/unicode_utils.py deleted file mode 100644 index e84e65e3e14152a2ba6e6e05d914f0e1bbef187b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/unicode_utils.py +++ /dev/null @@ -1,42 +0,0 @@ -import unicodedata -import sys - - -# HFS Plus uses decomposed UTF-8 -def decompose(path): - if isinstance(path, str): - return unicodedata.normalize('NFD', path) - try: - path = path.decode('utf-8') - path = unicodedata.normalize('NFD', path) - path = path.encode('utf-8') - except UnicodeError: - pass # Not UTF-8 - return path - - -def filesys_decode(path): - """ - Ensure that the given path is decoded, - NONE when no expected encoding works - """ - - if isinstance(path, str): - return path - - fs_enc = sys.getfilesystemencoding() or 'utf-8' - candidates = fs_enc, 'utf-8' - - for enc in candidates: - try: - return path.decode(enc) - except UnicodeDecodeError: - continue - - -def try_encode(string, enc): - "turn unicode encoding into a functional routine" - try: - return string.encode(enc) - except UnicodeEncodeError: - return None diff --git a/spaces/Audio-AGI/AudioSep/gradio_examples.py b/spaces/Audio-AGI/AudioSep/gradio_examples.py deleted file mode 100644 index 45610ecf58cc1941fb056b589ef09ae9d5f13458..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/gradio_examples.py +++ /dev/null @@ -1,16 +0,0 @@ -from pathlib import Path - -CURR_DIR = Path(__file__).resolve().parent - -EXAMPLES_DIR = CURR_DIR / "examples" - -EXAMPLES = [ - [EXAMPLES_DIR / "acoustic_guitar.wav", "acoustic guitar"], - [EXAMPLES_DIR / "laughing.wav", "laughing"], - [ - EXAMPLES_DIR / "ticktok_piano.wav", - "A ticktock sound playing at the same rhythm with piano.", - ], - [EXAMPLES_DIR / "water_drops.wav", "water drops"], - [EXAMPLES_DIR / "noisy_speech.wav", "speech"], -] diff --git a/spaces/Awesimo/jojogan/op/fused_act.py b/spaces/Awesimo/jojogan/op/fused_act.py deleted file mode 100644 index 4f39941f2ce76c474e3914ad1149741b02f24f65..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/op/fused_act.py +++ /dev/null @@ -1,127 +0,0 @@ -import os - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -fused = load( - "fused", - sources=[ - os.path.join(module_path, "fused_bias_act.cpp"), - os.path.join(module_path, "fused_bias_act_kernel.cu"), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, bias, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output.contiguous(), empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - if bias: - grad_bias = grad_input.sum(dim).detach() - - else: - grad_bias = empty - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input.contiguous(), - gradgrad_bias, - out, - 3, - 1, - ctx.negative_slope, - ctx.scale, - ) - - return gradgrad_out, None, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - - ctx.bias = bias is not None - - if bias is None: - bias = empty - - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.bias, ctx.negative_slope, ctx.scale - ) - - if not ctx.bias: - grad_bias = None - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, bias=True, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - if bias: - self.bias = nn.Parameter(torch.zeros(channel)) - - else: - self.bias = None - - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias=None, negative_slope=0.2, scale=2 ** 0.5): - if input.device.type == "cpu": - if bias is not None: - rest_dim = [1] * (input.ndim - bias.ndim - 1) - return ( - F.leaky_relu( - input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=0.2 - ) - * scale - ) - - else: - return F.leaky_relu(input, negative_slope=0.2) * scale - - else: - return FusedLeakyReLUFunction.apply( - input.contiguous(), bias, negative_slope, scale - ) diff --git a/spaces/BaddaAshok0265/AshokGenAI/app.py b/spaces/BaddaAshok0265/AshokGenAI/app.py deleted file mode 100644 index db4682e73d755e0f62038a4e8a406f7f2bb653fe..0000000000000000000000000000000000000000 --- a/spaces/BaddaAshok0265/AshokGenAI/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """ Hello, meet Ashok Badda, your youthful and witty personal assistant! At 20 years old, He's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_537238KB.py b/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_537238KB.py deleted file mode 100644 index 823b44fb64898e8dcbb12180ba45d1718f9b03f7..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_537238KB.py +++ /dev/null @@ -1,123 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - -from . import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Benson/text-generation/Examples/Candy Crush Soda Saga No Download.md b/spaces/Benson/text-generation/Examples/Candy Crush Soda Saga No Download.md deleted file mode 100644 index 6c537c2de87de3145780d8d3966830f0c0892764..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Candy Crush Soda Saga No Download.md +++ /dev/null @@ -1,145 +0,0 @@ - -

    Candy Crush Soda Saga: Un juego de puzzle dulce y efervescente

    -

    Si eres un fan de los juegos de puzzle de match-3, probablemente hayas escuchado o jugado a Candy Crush Saga, uno de los juegos más populares y adictivos de este género. Pero ¿sabías que hay una secuela de este juego que ofrece aún más diversión y desafíos? Se llama Candy Crush Soda Saga, y es un juego que puedes jugar online gratis en tu PC o dispositivo móvil, sin descargar nada. En este artículo, te contaremos todo lo que necesitas saber sobre Candy Crush Soda Saga, cómo jugarlo online y cuáles son algunas de las alternativas a este juego.

    -

    ¿Qué es Candy Crush Soda Saga?

    -

    Candy Crush Soda Saga es un juego de puzzle desarrollado por King, la misma compañía que creó Candy Crush Saga, Diamond Digger Saga, Farm Heroes Saga y muchos otros juegos populares. Fue lanzado en 2014 como un spin-off de Candy Crush Saga, y desde entonces se ha convertido en uno de los juegos más jugados en Facebook, Android, iOS, Windows Phone y Windows 10.

    -

    candy crush soda saga no download


    Download File –––––>>> https://bltlly.com/2v6JBZ



    -

    La secuela de la popular Candy Crush Saga

    -

    Candy Crush Soda Saga es una secuela de Candy Crush Saga, lo que significa que sigue el mismo juego básico de combinar tres o más caramelos del mismo color para eliminarlos del tablero y completar varios objetivos. Sin embargo, Candy Crush Soda Saga también introduce algunos nuevos elementos y giros que lo hacen diferente de su predecesor. Por ejemplo, en Candy Crush Soda Saga, encontrarás nuevos tipos de dulces, como botellas de refresco, caramelos de pescado, caramelos para colorear, caramelos de panal y dulces de mermelada. También explorarás nuevos mundos y niveles con diferentes temas y orígenes, como lagos de soda, nubes de algodón de azúcar, islas de glaseado, jardines de miel y fábricas de mermelada. También conocerás nuevos personajes y amigos a lo largo de tu viaje, como Kimmy, la hermana de Tiffi que está buscando a su hermano perdido.

    -

    El juego y las características de Candy Crush Soda Saga

    - -
      -
    • Más de 10000 niveles de Sodalicious que pondrán a prueba tus habilidades y estrategia.
    • -
    • Temporadas mensuales durante todo el año, llenas de misiones desafiantes y un pase de temporada alimentado por recompensas.
    • -
    • Modos de juego burbujeando con diversión y dulces únicos:
        -
      • Soda - Cambiar las botellas y hacer coincidir los caramelos para liberar la soda púrpura y guardar los Candy Bears.
      • -
      • Glaseado - Partido de dulces para romper el hielo y establecer los osos de caramelo libre.
      • -
      • Honeycomb - Coincidir con los dulces al lado de nido de abeja para liberar los osos de caramelo atrapados.
      • -
      • Jam - Difundir la mermelada en todo el tablero.
      • -
      -
    • -
    • Dulces únicos y deliciosas nuevas combinaciones a juego:
        -
      • Partido 4 dulces en un cuadrado para hacer un pescado sueco!
      • -
      • Partido 7 caramelos para el todo nuevo caramelo para colorear!
      • -
      -
    • -
    • ¡Explora mundos y niveles jugosos con aún más personajes!
    • -

      Los beneficios de jugar Candy Crush Soda Saga en línea

      -

      Una de las mejores cosas acerca de Candy Crush Soda Saga es que se puede jugar en línea de forma gratuita en su PC o dispositivo móvil, sin descargar nada. Esto significa que puede disfrutar del juego en cualquier momento y en cualquier lugar, siempre y cuando tenga una conexión a Internet. Jugar a Candy Crush Soda Saga en línea también tiene algunos otros beneficios, como:

      -
        -
      • Puedes sincronizar el progreso del juego en todos tus dispositivos, para que puedas continuar donde lo dejaste.
      • -
      • Puedes conectarte con tus amigos de Facebook y ver sus puntajes y progreso en las tablas de clasificación.
      • -
      • Puedes enviar y recibir vidas y refuerzos de tus amigos para ayudarse mutuamente.
      • -
      • Puedes unirte a un equipo o crear el tuyo propio y chatear con otros jugadores.
      • -
      • Puedes participar en eventos especiales y desafíos y ganar recompensas exclusivas.
      • -
      -

      Cómo jugar Candy Crush Soda Saga en línea gratis en PC y móvil

      - -

      El sitio web oficial de King.com

      -

      El sitio web oficial de King.com es el mejor lugar para jugar Candy Crush Soda Saga en línea, ya que es la fuente oficial del juego. Puede acceder al sitio web desde cualquier navegador de su PC o dispositivo móvil, y puede jugar el juego en modo de pantalla completa. También puedes iniciar sesión con tu cuenta de Facebook o crear una cuenta de King para sincronizar tu progreso y acceder a todas las funciones del juego. Para jugar Candy Crush Soda Saga online en King.com, sigue estos pasos:

      -
        -
      1. Visite -
      2. Si desea iniciar sesión con su cuenta de Facebook o crear una cuenta de King, haga clic en el botón "Conectar" en la esquina superior derecha de la pantalla.
      3. -
      4. Disfruta jugando Candy Crush Soda Saga en línea!
      5. -
      -

      Las plataformas de juego en línea de Y8.com, ahora.gg, y Games.lol

      -

      Si quieres jugar Candy Crush Soda Saga en línea en otros sitios web, también puedes probar algunas de las plataformas de juegos en línea que ofrecen el juego, como Y8.com, now.gg y Games.lol. Estos sitios web le permiten jugar Candy Crush Soda Saga en línea sin iniciar sesión o crear una cuenta, pero pueden no tener todas las características y actualizaciones del sitio web oficial. Para jugar Candy Crush Soda Saga en línea en estos sitios web, siga estos pasos:

      -
        -
      1. Visite uno de estos sitios web desde su navegador: -
      2. -
      3. Haga clic en el botón "Play" para iniciar el juego.
      4. -
      5. Disfruta jugando Candy Crush Soda Saga en línea!
      6. -
      -

      Los consejos y trucos para dominar Candy Crush Soda Saga

      - -
        -
      • Presta atención al objetivo de cada nivel y planifica tus movimientos en consecuencia.
      • -
      • Combina dulces cerca de la parte inferior del tablero para crear cascadas y limpiar más dulces.
      • -
      • Usa dulces especiales y potenciadores sabiamente y guárdalos para niveles difíciles.
      • -
      • Aprende a hacer diferentes combinaciones de dulces especiales, como rayas + envuelto, rayas + pescado, envuelto + pescado, colorante + pescado, etc.
      • -
      • Sepa cómo tratar con diferentes tipos de bloqueadores, como chocolate, regaliz, hielo, panal, etc.
      • -
      • Mantén un ojo en el nivel de soda y trata de llenarlo o bajarlo dependiendo del modo.
      • -
      • No malgastes movimientos y trata de obtener tantas estrellas como sea posible.
      • -arriba. Siempre puedes reproducir los niveles o pedir ayuda a tus amigos. -
      -

      ¿Cuáles son las alternativas a Candy Crush Soda Saga?

      -

      Candy Crush Soda Saga es un gran juego, pero no es el único de su tipo. Si quieres probar otros juegos similares a Candy Crush Soda Saga, tienes muchas opciones para elegir. Estas son algunas de las alternativas a Candy Crush Soda Saga que te pueden gustar:

      -

      Los otros juegos de la franquicia Candy Crush

      -

      Si te gusta Candy Crush Soda Saga, también te pueden encantar los otros juegos de la misma franquicia, como:

      -

      -
        -
      • Candy Crush Saga: El original y clásico juego de puzzle match-3 que comenzó todo.
      • -
      • Candy Crush Jelly Saga: La tercera entrega de la franquicia, donde tienes que untar jalea y competir con la Jelly Queen.
      • -
      • Candy Crush Friends Saga: La cuarta y última entrega de la franquicia, donde tienes que combinar dulces y recoger a tus amigos.
      • -
      -

      Todos estos juegos son gratis para jugar en línea en King.com o en sus dispositivos móviles, y tienen un juego similar y características como Candy Crush Soda Saga, pero con diferentes giros y desafíos.

      -

      Los juegos de rompecabezas de match-3 similares de otros desarrolladores

      - -
        -
      • Bejeweled: El clásico y original juego de puzzle match-3 que inspiró a muchos otros.
      • -
      • Cookie Jam: Un juego de puzzle de partido 3 delicioso y colorido donde tienes que hornear galletas y pasteles.
      • -
      • Gummy Drop: Un dulce y aventurero juego de puzzle match-3 donde tienes que viajar alrededor del mundo y reconstruir puntos de referencia.
      • -
      • Homescapes: Un relajante y divertido juego de puzzle match-3 donde tienes que renovar una mansión y ayudar a una familia.
      • -
      • Toon Blast: Un juego de rompecabezas de dibujos animados y explosivos match-3 donde tienes que destruir cubos y crear combos.
      • -
      -

      Todos estos juegos son gratis para jugar en línea en varios sitios web o en sus dispositivos móviles, y tienen un juego similar y características como Candy Crush Soda Saga, pero con diferentes temas e historias.

      -

      Los pros y los contras de jugar alternativas a Candy Crush Soda Saga

      -

      Jugar alternativas a Candy Crush Soda Saga puede ser una buena manera de darle vida a tu experiencia de juego y probar algo nuevo. Sin embargo, también hay algunos pros y contras de jugar alternativas a Candy Crush Soda Saga, como:

      - -ProsContras -Puedes descubrir nuevos juegos y géneros que puedes disfrutar. Puedes confundirte o sentirte abrumado por demasiadas opciones. -Puedes comparar y contrastar diferentes juegos y encontrar tu favorito. Usted puede perder interés o motivación en jugar Candy Crush Soda Saga. -Puede desafiarse a sí mismo con diferentes niveles y modos. Puede encontrar algunos juegos demasiado fáciles o demasiado difíciles para su gusto. -Puedes tener más diversión y variedad en tu tiempo de juego. Puedes gastar demasiado tiempo o dinero en juegos. - -

      Conclusión

      - -

      Preguntas frecuentes

      -

      Aquí están algunas de las preguntas más frecuentes sobre Candy Crush Soda Saga:

      -
        -
      1. ¿Cómo puedo obtener más vidas en Candy Crush Soda Saga?
      2. -

        Puedes obtener más vidas en Candy Crush Soda Saga haciendo una de las siguientes:

          -
        • Espera 30 minutos para que cada vida se regenere automáticamente.
        • -
        • Comprar más vidas con barras de oro, la moneda premium del juego.
        • -
        • Cambiar la configuración de fecha y hora en su dispositivo para engañar al juego para que le dé más vidas.
        • -
        -

        -
      3. ¿Cómo puedo obtener más barras de oro en Candy Crush Soda Saga?
      4. -

        Puede obtener más barras de oro en Candy Crush Soda Saga haciendo una de las siguientes:

          -
        • Completa las misiones y desafíos diarios y gana recompensas.
        • -
        • Subir de nivel su pase de temporada y desbloquear barras de oro y otras ventajas.
        • -
        • Únete a un equipo o crea tu propio equipo y gana eventos y competiciones.
        • -
        • Conecte su juego a su cuenta de Facebook y obtener barras de oro gratis.
        • -
        • Comprar más barras de oro con dinero real a través de compras en la aplicación.
        • -
        -

        -
      5. ¿Cómo puedo obtener más potenciadores en Candy Crush Soda Saga?
      6. -

        Puede obtener más potenciadores en Candy Crush Soda Saga haciendo uno de los siguientes:

          -
        • Girar la rueda de refuerzo diario y ganar un refuerzo al azar todos los días.
        • -
        • Juega el evento de Bubblegum Hill y gana boosters y otros premios.
        • -
        • Recoger estrellas y llenar el medidor Star Chaser para obtener boosters gratis.
        • -
        • Ver anuncios de vídeo y obtener refuerzos gratis.
        • -
        • Comprar más potenciadores con barras de oro o dinero real a través de compras en la aplicación.
        • -
        -

        -
      7. ¿Cómo puedo desbloquear nuevos episodios en Candy Crush Soda Saga?
      8. -

        Puede desbloquear nuevos episodios en Candy Crush Soda Saga haciendo uno de los siguientes:

          -
        • Completa todos los niveles en el episodio anterior.
        • -
        • Pídele a tus amigos o miembros del equipo de Facebook que te envíen entradas.
        • - -
        -

        -
      9. ¿Cómo puedo contactar al equipo de soporte de Candy Crush Soda Saga?
      10. -

        Puede ponerse en contacto con el equipo de soporte de Candy Crush Soda Saga haciendo una de las siguientes:

          -
        • Visite el sitio web oficial de King.com y haga clic en el botón "Contáctenos" en la parte inferior de la página.
        • -
        • Visita la página oficial de Facebook de Candy Crush Soda Saga y envía un mensaje a la página.
        • -
        • Visite el foro oficial de King.com y publique su pregunta o problema en la sección correspondiente.
        • -
        -

        64aa2da5cf
        -
        -
        \ No newline at end of file diff --git a/spaces/BertChristiaens/blip-diffusion/README.md b/spaces/BertChristiaens/blip-diffusion/README.md deleted file mode 100644 index 13e45d37136127d61f67a6c2eab2246facbf7a76..0000000000000000000000000000000000000000 --- a/spaces/BertChristiaens/blip-diffusion/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -license: openrail -title: Blip Diffusion -sdk: streamlit -emoji: 🚀 -colorFrom: yellow -colorTo: green ---- \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/__init__.py deleted file mode 100644 index b687f69da2144383f320cfab8015c402160af33c..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -__version__ = '0.16.0' diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/resolvers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/resolvers.py deleted file mode 100644 index 2c3d0e306f91f9dfac1843b40babd223766bbf50..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/resolvers.py +++ /dev/null @@ -1,547 +0,0 @@ -import collections -import itertools -import operator - -from .providers import AbstractResolver -from .structs import DirectedGraph, IteratorMapping, build_iter_view - -RequirementInformation = collections.namedtuple( - "RequirementInformation", ["requirement", "parent"] -) - - -class ResolverException(Exception): - """A base class for all exceptions raised by this module. - - Exceptions derived by this class should all be handled in this module. Any - bubbling pass the resolver should be treated as a bug. - """ - - -class RequirementsConflicted(ResolverException): - def __init__(self, criterion): - super(RequirementsConflicted, self).__init__(criterion) - self.criterion = criterion - - def __str__(self): - return "Requirements conflict: {}".format( - ", ".join(repr(r) for r in self.criterion.iter_requirement()), - ) - - -class InconsistentCandidate(ResolverException): - def __init__(self, candidate, criterion): - super(InconsistentCandidate, self).__init__(candidate, criterion) - self.candidate = candidate - self.criterion = criterion - - def __str__(self): - return "Provided candidate {!r} does not satisfy {}".format( - self.candidate, - ", ".join(repr(r) for r in self.criterion.iter_requirement()), - ) - - -class Criterion(object): - """Representation of possible resolution results of a package. - - This holds three attributes: - - * `information` is a collection of `RequirementInformation` pairs. - Each pair is a requirement contributing to this criterion, and the - candidate that provides the requirement. - * `incompatibilities` is a collection of all known not-to-work candidates - to exclude from consideration. - * `candidates` is a collection containing all possible candidates deducted - from the union of contributing requirements and known incompatibilities. - It should never be empty, except when the criterion is an attribute of a - raised `RequirementsConflicted` (in which case it is always empty). - - .. note:: - This class is intended to be externally immutable. **Do not** mutate - any of its attribute containers. - """ - - def __init__(self, candidates, information, incompatibilities): - self.candidates = candidates - self.information = information - self.incompatibilities = incompatibilities - - def __repr__(self): - requirements = ", ".join( - "({!r}, via={!r})".format(req, parent) - for req, parent in self.information - ) - return "Criterion({})".format(requirements) - - def iter_requirement(self): - return (i.requirement for i in self.information) - - def iter_parent(self): - return (i.parent for i in self.information) - - -class ResolutionError(ResolverException): - pass - - -class ResolutionImpossible(ResolutionError): - def __init__(self, causes): - super(ResolutionImpossible, self).__init__(causes) - # causes is a list of RequirementInformation objects - self.causes = causes - - -class ResolutionTooDeep(ResolutionError): - def __init__(self, round_count): - super(ResolutionTooDeep, self).__init__(round_count) - self.round_count = round_count - - -# Resolution state in a round. -State = collections.namedtuple("State", "mapping criteria backtrack_causes") - - -class Resolution(object): - """Stateful resolution object. - - This is designed as a one-off object that holds information to kick start - the resolution process, and holds the results afterwards. - """ - - def __init__(self, provider, reporter): - self._p = provider - self._r = reporter - self._states = [] - - @property - def state(self): - try: - return self._states[-1] - except IndexError: - raise AttributeError("state") - - def _push_new_state(self): - """Push a new state into history. - - This new state will be used to hold resolution results of the next - coming round. - """ - base = self._states[-1] - state = State( - mapping=base.mapping.copy(), - criteria=base.criteria.copy(), - backtrack_causes=base.backtrack_causes[:], - ) - self._states.append(state) - - def _add_to_criteria(self, criteria, requirement, parent): - self._r.adding_requirement(requirement=requirement, parent=parent) - - identifier = self._p.identify(requirement_or_candidate=requirement) - criterion = criteria.get(identifier) - if criterion: - incompatibilities = list(criterion.incompatibilities) - else: - incompatibilities = [] - - matches = self._p.find_matches( - identifier=identifier, - requirements=IteratorMapping( - criteria, - operator.methodcaller("iter_requirement"), - {identifier: [requirement]}, - ), - incompatibilities=IteratorMapping( - criteria, - operator.attrgetter("incompatibilities"), - {identifier: incompatibilities}, - ), - ) - - if criterion: - information = list(criterion.information) - information.append(RequirementInformation(requirement, parent)) - else: - information = [RequirementInformation(requirement, parent)] - - criterion = Criterion( - candidates=build_iter_view(matches), - information=information, - incompatibilities=incompatibilities, - ) - if not criterion.candidates: - raise RequirementsConflicted(criterion) - criteria[identifier] = criterion - - def _remove_information_from_criteria(self, criteria, parents): - """Remove information from parents of criteria. - - Concretely, removes all values from each criterion's ``information`` - field that have one of ``parents`` as provider of the requirement. - - :param criteria: The criteria to update. - :param parents: Identifiers for which to remove information from all criteria. - """ - if not parents: - return - for key, criterion in criteria.items(): - criteria[key] = Criterion( - criterion.candidates, - [ - information - for information in criterion.information - if ( - information.parent is None - or self._p.identify(information.parent) not in parents - ) - ], - criterion.incompatibilities, - ) - - def _get_preference(self, name): - return self._p.get_preference( - identifier=name, - resolutions=self.state.mapping, - candidates=IteratorMapping( - self.state.criteria, - operator.attrgetter("candidates"), - ), - information=IteratorMapping( - self.state.criteria, - operator.attrgetter("information"), - ), - backtrack_causes=self.state.backtrack_causes, - ) - - def _is_current_pin_satisfying(self, name, criterion): - try: - current_pin = self.state.mapping[name] - except KeyError: - return False - return all( - self._p.is_satisfied_by(requirement=r, candidate=current_pin) - for r in criterion.iter_requirement() - ) - - def _get_updated_criteria(self, candidate): - criteria = self.state.criteria.copy() - for requirement in self._p.get_dependencies(candidate=candidate): - self._add_to_criteria(criteria, requirement, parent=candidate) - return criteria - - def _attempt_to_pin_criterion(self, name): - criterion = self.state.criteria[name] - - causes = [] - for candidate in criterion.candidates: - try: - criteria = self._get_updated_criteria(candidate) - except RequirementsConflicted as e: - self._r.rejecting_candidate(e.criterion, candidate) - causes.append(e.criterion) - continue - - # Check the newly-pinned candidate actually works. This should - # always pass under normal circumstances, but in the case of a - # faulty provider, we will raise an error to notify the implementer - # to fix find_matches() and/or is_satisfied_by(). - satisfied = all( - self._p.is_satisfied_by(requirement=r, candidate=candidate) - for r in criterion.iter_requirement() - ) - if not satisfied: - raise InconsistentCandidate(candidate, criterion) - - self._r.pinning(candidate=candidate) - self.state.criteria.update(criteria) - - # Put newly-pinned candidate at the end. This is essential because - # backtracking looks at this mapping to get the last pin. - self.state.mapping.pop(name, None) - self.state.mapping[name] = candidate - - return [] - - # All candidates tried, nothing works. This criterion is a dead - # end, signal for backtracking. - return causes - - def _backjump(self, causes): - """Perform backjumping. - - When we enter here, the stack is like this:: - - [ state Z ] - [ state Y ] - [ state X ] - .... earlier states are irrelevant. - - 1. No pins worked for Z, so it does not have a pin. - 2. We want to reset state Y to unpinned, and pin another candidate. - 3. State X holds what state Y was before the pin, but does not - have the incompatibility information gathered in state Y. - - Each iteration of the loop will: - - 1. Identify Z. The incompatibility is not always caused by the latest - state. For example, given three requirements A, B and C, with - dependencies A1, B1 and C1, where A1 and B1 are incompatible: the - last state might be related to C, so we want to discard the - previous state. - 2. Discard Z. - 3. Discard Y but remember its incompatibility information gathered - previously, and the failure we're dealing with right now. - 4. Push a new state Y' based on X, and apply the incompatibility - information from Y to Y'. - 5a. If this causes Y' to conflict, we need to backtrack again. Make Y' - the new Z and go back to step 2. - 5b. If the incompatibilities apply cleanly, end backtracking. - """ - incompatible_reqs = itertools.chain( - (c.parent for c in causes if c.parent is not None), - (c.requirement for c in causes), - ) - incompatible_deps = {self._p.identify(r) for r in incompatible_reqs} - while len(self._states) >= 3: - # Remove the state that triggered backtracking. - del self._states[-1] - - # Ensure to backtrack to a state that caused the incompatibility - incompatible_state = False - while not incompatible_state: - # Retrieve the last candidate pin and known incompatibilities. - try: - broken_state = self._states.pop() - name, candidate = broken_state.mapping.popitem() - except (IndexError, KeyError): - raise ResolutionImpossible(causes) - current_dependencies = { - self._p.identify(d) - for d in self._p.get_dependencies(candidate) - } - incompatible_state = not current_dependencies.isdisjoint( - incompatible_deps - ) - - incompatibilities_from_broken = [ - (k, list(v.incompatibilities)) - for k, v in broken_state.criteria.items() - ] - - # Also mark the newly known incompatibility. - incompatibilities_from_broken.append((name, [candidate])) - - # Create a new state from the last known-to-work one, and apply - # the previously gathered incompatibility information. - def _patch_criteria(): - for k, incompatibilities in incompatibilities_from_broken: - if not incompatibilities: - continue - try: - criterion = self.state.criteria[k] - except KeyError: - continue - matches = self._p.find_matches( - identifier=k, - requirements=IteratorMapping( - self.state.criteria, - operator.methodcaller("iter_requirement"), - ), - incompatibilities=IteratorMapping( - self.state.criteria, - operator.attrgetter("incompatibilities"), - {k: incompatibilities}, - ), - ) - candidates = build_iter_view(matches) - if not candidates: - return False - incompatibilities.extend(criterion.incompatibilities) - self.state.criteria[k] = Criterion( - candidates=candidates, - information=list(criterion.information), - incompatibilities=incompatibilities, - ) - return True - - self._push_new_state() - success = _patch_criteria() - - # It works! Let's work on this new state. - if success: - return True - - # State does not work after applying known incompatibilities. - # Try the still previous state. - - # No way to backtrack anymore. - return False - - def resolve(self, requirements, max_rounds): - if self._states: - raise RuntimeError("already resolved") - - self._r.starting() - - # Initialize the root state. - self._states = [ - State( - mapping=collections.OrderedDict(), - criteria={}, - backtrack_causes=[], - ) - ] - for r in requirements: - try: - self._add_to_criteria(self.state.criteria, r, parent=None) - except RequirementsConflicted as e: - raise ResolutionImpossible(e.criterion.information) - - # The root state is saved as a sentinel so the first ever pin can have - # something to backtrack to if it fails. The root state is basically - # pinning the virtual "root" package in the graph. - self._push_new_state() - - for round_index in range(max_rounds): - self._r.starting_round(index=round_index) - - unsatisfied_names = [ - key - for key, criterion in self.state.criteria.items() - if not self._is_current_pin_satisfying(key, criterion) - ] - - # All criteria are accounted for. Nothing more to pin, we are done! - if not unsatisfied_names: - self._r.ending(state=self.state) - return self.state - - # keep track of satisfied names to calculate diff after pinning - satisfied_names = set(self.state.criteria.keys()) - set( - unsatisfied_names - ) - - # Choose the most preferred unpinned criterion to try. - name = min(unsatisfied_names, key=self._get_preference) - failure_causes = self._attempt_to_pin_criterion(name) - - if failure_causes: - causes = [i for c in failure_causes for i in c.information] - # Backjump if pinning fails. The backjump process puts us in - # an unpinned state, so we can work on it in the next round. - self._r.resolving_conflicts(causes=causes) - success = self._backjump(causes) - self.state.backtrack_causes[:] = causes - - # Dead ends everywhere. Give up. - if not success: - raise ResolutionImpossible(self.state.backtrack_causes) - else: - # discard as information sources any invalidated names - # (unsatisfied names that were previously satisfied) - newly_unsatisfied_names = { - key - for key, criterion in self.state.criteria.items() - if key in satisfied_names - and not self._is_current_pin_satisfying(key, criterion) - } - self._remove_information_from_criteria( - self.state.criteria, newly_unsatisfied_names - ) - # Pinning was successful. Push a new state to do another pin. - self._push_new_state() - - self._r.ending_round(index=round_index, state=self.state) - - raise ResolutionTooDeep(max_rounds) - - -def _has_route_to_root(criteria, key, all_keys, connected): - if key in connected: - return True - if key not in criteria: - return False - for p in criteria[key].iter_parent(): - try: - pkey = all_keys[id(p)] - except KeyError: - continue - if pkey in connected: - connected.add(key) - return True - if _has_route_to_root(criteria, pkey, all_keys, connected): - connected.add(key) - return True - return False - - -Result = collections.namedtuple("Result", "mapping graph criteria") - - -def _build_result(state): - mapping = state.mapping - all_keys = {id(v): k for k, v in mapping.items()} - all_keys[id(None)] = None - - graph = DirectedGraph() - graph.add(None) # Sentinel as root dependencies' parent. - - connected = {None} - for key, criterion in state.criteria.items(): - if not _has_route_to_root(state.criteria, key, all_keys, connected): - continue - if key not in graph: - graph.add(key) - for p in criterion.iter_parent(): - try: - pkey = all_keys[id(p)] - except KeyError: - continue - if pkey not in graph: - graph.add(pkey) - graph.connect(pkey, key) - - return Result( - mapping={k: v for k, v in mapping.items() if k in connected}, - graph=graph, - criteria=state.criteria, - ) - - -class Resolver(AbstractResolver): - """The thing that performs the actual resolution work.""" - - base_exception = ResolverException - - def resolve(self, requirements, max_rounds=100): - """Take a collection of constraints, spit out the resolution result. - - The return value is a representation to the final resolution result. It - is a tuple subclass with three public members: - - * `mapping`: A dict of resolved candidates. Each key is an identifier - of a requirement (as returned by the provider's `identify` method), - and the value is the resolved candidate. - * `graph`: A `DirectedGraph` instance representing the dependency tree. - The vertices are keys of `mapping`, and each edge represents *why* - a particular package is included. A special vertex `None` is - included to represent parents of user-supplied requirements. - * `criteria`: A dict of "criteria" that hold detailed information on - how edges in the graph are derived. Each key is an identifier of a - requirement, and the value is a `Criterion` instance. - - The following exceptions may be raised if a resolution cannot be found: - - * `ResolutionImpossible`: A resolution cannot be found for the given - combination of requirements. The `causes` attribute of the - exception is a list of (requirement, parent), giving the - requirements that could not be satisfied. - * `ResolutionTooDeep`: The dependency tree is too deeply nested and - the resolver gave up. This is usually caused by a circular - dependency, but you can try to resolve this by increasing the - `max_rounds` argument. - """ - resolution = Resolution(self.provider, self.reporter) - state = resolution.resolve(requirements, max_rounds=max_rounds) - return _build_result(state) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/zipp.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/zipp.py deleted file mode 100644 index 26b723c1fd3e25740e0268b8c9b50905c58c3d4a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/zipp.py +++ /dev/null @@ -1,329 +0,0 @@ -import io -import posixpath -import zipfile -import itertools -import contextlib -import sys -import pathlib - -if sys.version_info < (3, 7): - from collections import OrderedDict -else: - OrderedDict = dict - - -__all__ = ['Path'] - - -def _parents(path): - """ - Given a path with elements separated by - posixpath.sep, generate all parents of that path. - - >>> list(_parents('b/d')) - ['b'] - >>> list(_parents('/b/d/')) - ['/b'] - >>> list(_parents('b/d/f/')) - ['b/d', 'b'] - >>> list(_parents('b')) - [] - >>> list(_parents('')) - [] - """ - return itertools.islice(_ancestry(path), 1, None) - - -def _ancestry(path): - """ - Given a path with elements separated by - posixpath.sep, generate all elements of that path - - >>> list(_ancestry('b/d')) - ['b/d', 'b'] - >>> list(_ancestry('/b/d/')) - ['/b/d', '/b'] - >>> list(_ancestry('b/d/f/')) - ['b/d/f', 'b/d', 'b'] - >>> list(_ancestry('b')) - ['b'] - >>> list(_ancestry('')) - [] - """ - path = path.rstrip(posixpath.sep) - while path and path != posixpath.sep: - yield path - path, tail = posixpath.split(path) - - -_dedupe = OrderedDict.fromkeys -"""Deduplicate an iterable in original order""" - - -def _difference(minuend, subtrahend): - """ - Return items in minuend not in subtrahend, retaining order - with O(1) lookup. - """ - return itertools.filterfalse(set(subtrahend).__contains__, minuend) - - -class CompleteDirs(zipfile.ZipFile): - """ - A ZipFile subclass that ensures that implied directories - are always included in the namelist. - """ - - @staticmethod - def _implied_dirs(names): - parents = itertools.chain.from_iterable(map(_parents, names)) - as_dirs = (p + posixpath.sep for p in parents) - return _dedupe(_difference(as_dirs, names)) - - def namelist(self): - names = super(CompleteDirs, self).namelist() - return names + list(self._implied_dirs(names)) - - def _name_set(self): - return set(self.namelist()) - - def resolve_dir(self, name): - """ - If the name represents a directory, return that name - as a directory (with the trailing slash). - """ - names = self._name_set() - dirname = name + '/' - dir_match = name not in names and dirname in names - return dirname if dir_match else name - - @classmethod - def make(cls, source): - """ - Given a source (filename or zipfile), return an - appropriate CompleteDirs subclass. - """ - if isinstance(source, CompleteDirs): - return source - - if not isinstance(source, zipfile.ZipFile): - return cls(_pathlib_compat(source)) - - # Only allow for FastLookup when supplied zipfile is read-only - if 'r' not in source.mode: - cls = CompleteDirs - - source.__class__ = cls - return source - - -class FastLookup(CompleteDirs): - """ - ZipFile subclass to ensure implicit - dirs exist and are resolved rapidly. - """ - - def namelist(self): - with contextlib.suppress(AttributeError): - return self.__names - self.__names = super(FastLookup, self).namelist() - return self.__names - - def _name_set(self): - with contextlib.suppress(AttributeError): - return self.__lookup - self.__lookup = super(FastLookup, self)._name_set() - return self.__lookup - - -def _pathlib_compat(path): - """ - For path-like objects, convert to a filename for compatibility - on Python 3.6.1 and earlier. - """ - try: - return path.__fspath__() - except AttributeError: - return str(path) - - -class Path: - """ - A pathlib-compatible interface for zip files. - - Consider a zip file with this structure:: - - . - ├── a.txt - └── b - ├── c.txt - └── d - └── e.txt - - >>> data = io.BytesIO() - >>> zf = zipfile.ZipFile(data, 'w') - >>> zf.writestr('a.txt', 'content of a') - >>> zf.writestr('b/c.txt', 'content of c') - >>> zf.writestr('b/d/e.txt', 'content of e') - >>> zf.filename = 'mem/abcde.zip' - - Path accepts the zipfile object itself or a filename - - >>> root = Path(zf) - - From there, several path operations are available. - - Directory iteration (including the zip file itself): - - >>> a, b = root.iterdir() - >>> a - Path('mem/abcde.zip', 'a.txt') - >>> b - Path('mem/abcde.zip', 'b/') - - name property: - - >>> b.name - 'b' - - join with divide operator: - - >>> c = b / 'c.txt' - >>> c - Path('mem/abcde.zip', 'b/c.txt') - >>> c.name - 'c.txt' - - Read text: - - >>> c.read_text() - 'content of c' - - existence: - - >>> c.exists() - True - >>> (b / 'missing.txt').exists() - False - - Coercion to string: - - >>> import os - >>> str(c).replace(os.sep, posixpath.sep) - 'mem/abcde.zip/b/c.txt' - - At the root, ``name``, ``filename``, and ``parent`` - resolve to the zipfile. Note these attributes are not - valid and will raise a ``ValueError`` if the zipfile - has no filename. - - >>> root.name - 'abcde.zip' - >>> str(root.filename).replace(os.sep, posixpath.sep) - 'mem/abcde.zip' - >>> str(root.parent) - 'mem' - """ - - __repr = "{self.__class__.__name__}({self.root.filename!r}, {self.at!r})" - - def __init__(self, root, at=""): - """ - Construct a Path from a ZipFile or filename. - - Note: When the source is an existing ZipFile object, - its type (__class__) will be mutated to a - specialized type. If the caller wishes to retain the - original type, the caller should either create a - separate ZipFile object or pass a filename. - """ - self.root = FastLookup.make(root) - self.at = at - - def open(self, mode='r', *args, pwd=None, **kwargs): - """ - Open this entry as text or binary following the semantics - of ``pathlib.Path.open()`` by passing arguments through - to io.TextIOWrapper(). - """ - if self.is_dir(): - raise IsADirectoryError(self) - zip_mode = mode[0] - if not self.exists() and zip_mode == 'r': - raise FileNotFoundError(self) - stream = self.root.open(self.at, zip_mode, pwd=pwd) - if 'b' in mode: - if args or kwargs: - raise ValueError("encoding args invalid for binary operation") - return stream - return io.TextIOWrapper(stream, *args, **kwargs) - - @property - def name(self): - return pathlib.Path(self.at).name or self.filename.name - - @property - def suffix(self): - return pathlib.Path(self.at).suffix or self.filename.suffix - - @property - def suffixes(self): - return pathlib.Path(self.at).suffixes or self.filename.suffixes - - @property - def stem(self): - return pathlib.Path(self.at).stem or self.filename.stem - - @property - def filename(self): - return pathlib.Path(self.root.filename).joinpath(self.at) - - def read_text(self, *args, **kwargs): - with self.open('r', *args, **kwargs) as strm: - return strm.read() - - def read_bytes(self): - with self.open('rb') as strm: - return strm.read() - - def _is_child(self, path): - return posixpath.dirname(path.at.rstrip("/")) == self.at.rstrip("/") - - def _next(self, at): - return self.__class__(self.root, at) - - def is_dir(self): - return not self.at or self.at.endswith("/") - - def is_file(self): - return self.exists() and not self.is_dir() - - def exists(self): - return self.at in self.root._name_set() - - def iterdir(self): - if not self.is_dir(): - raise ValueError("Can't listdir a file") - subs = map(self._next, self.root.namelist()) - return filter(self._is_child, subs) - - def __str__(self): - return posixpath.join(self.root.filename, self.at) - - def __repr__(self): - return self.__repr.format(self=self) - - def joinpath(self, *other): - next = posixpath.join(self.at, *map(_pathlib_compat, other)) - return self._next(self.root.resolve_dir(next)) - - __truediv__ = joinpath - - @property - def parent(self): - if not self.at: - return self.filename.parent - parent_at = posixpath.dirname(self.at.rstrip('/')) - if parent_at: - parent_at += '/' - return self._next(parent_at) diff --git a/spaces/BillBojangeles2000/bart-large-cnn-samsum/README.md b/spaces/BillBojangeles2000/bart-large-cnn-samsum/README.md deleted file mode 100644 index af53f7a60e82a3cb3796265213f7675f9c000ca0..0000000000000000000000000000000000000000 --- a/spaces/BillBojangeles2000/bart-large-cnn-samsum/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bart Large Cnn Samsum -emoji: 🏢 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: bigcode-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Brasd99/TTS-Voice-Cloner/app.py b/spaces/Brasd99/TTS-Voice-Cloner/app.py deleted file mode 100644 index 6d0c6a170ac624bcb02f4f620a1e05ccee8880ec..0000000000000000000000000000000000000000 --- a/spaces/Brasd99/TTS-Voice-Cloner/app.py +++ /dev/null @@ -1,101 +0,0 @@ -from TTS.api import TTS -from bs4 import BeautifulSoup -import requests -import streamlit as st -import tempfile -import os -import json -import datetime - -with open('config.json', 'r') as f: - config = json.load(f) - -APP_NAME = config['APP_NAME'] -APP_LOGO = config['APP_LOGO'] -APP_DESCRIPTION = config['APP_DESCRIPTION'] -LANGUAGES_URL = config['LANGUAGES_URL'] - -def contains_only_ascii(input_string): - return all(ord(char) < 128 for char in input_string) - -def get_iso_languages(): - response = requests.get(LANGUAGES_URL) - soup = BeautifulSoup(response.text, 'html.parser') - - p_tags = soup.find_all('p') - - iso_language_dict = {} - - for p_tag in p_tags[1:]: # Skipping the first

        which contains the header - parts = p_tag.get_text().split() - if len(parts) == 2: - iso_code, language_name = parts - if contains_only_ascii(language_name): - iso_language_dict[language_name] = iso_code - - return iso_language_dict - -def create_temp_file(input_wav): - temp_file = tempfile.NamedTemporaryFile(delete=False) - temp_file.write(input_wav.read()) - return temp_file - -def remove_temp_file(temp_file): - temp_file.close() - os.remove(temp_file.name) - -def update_progress(percent, text): - progress_bar.progress(percent) - status_text.text(text) - -iso_languages = get_iso_languages() -languages = list(iso_languages.keys()) - -st.set_page_config(page_title=APP_NAME) -st.title(APP_NAME) -st.image(APP_LOGO, use_column_width=True) -st.markdown(APP_DESCRIPTION) - -language = st.selectbox('Select a language', languages) -prompt = st.text_input('Enter your prompt') -input_wav = st.file_uploader("Upload a WAV file", type=["wav"]) - -if input_wav: - if not input_wav or input_wav is None: - st.error('Please upload wav input audio') - elif not prompt: - st.error('Please write prompt') - else: - progress_bar = st.progress(0) - status_text = st.empty() - - current_datetime = datetime.datetime.now() - formatted_datetime = current_datetime.strftime("%Y-%m-%d_%H%M%S") - output_filename = f"recording_{formatted_datetime}.wav" - - temp_file = create_temp_file(input_wav) - - iso_code = iso_languages[language] - - print(f'Language: {language}, prompt: {prompt}') - - update_progress(0, 'Loading TTS model...') - api = TTS(f"tts_models/{iso_code}/fairseq/vits") - - update_progress(50, 'Generating audio...') - api.tts_with_vc_to_file( - prompt, - speaker_wav=temp_file.name, - file_path=output_filename - ) - - remove_temp_file(temp_file) - - audio_file = open(output_filename, 'rb') - audio_bytes = audio_file.read() - - update_progress(100, 'Audio generated successfully!') - - st.audio(audio_bytes, format='audio/wav') - - st.download_button('Download WAV', data=audio_bytes, file_name='output.wav') \ No newline at end of file diff --git a/spaces/CMU-80100/80-100-Pre-Writing-Chatbot-Section-H/hf_streaming_chatbot.py b/spaces/CMU-80100/80-100-Pre-Writing-Chatbot-Section-H/hf_streaming_chatbot.py deleted file mode 100644 index 11a2ac882638cfaf9a928c6fb7db1dd6e22eebb0..0000000000000000000000000000000000000000 --- a/spaces/CMU-80100/80-100-Pre-Writing-Chatbot-Section-H/hf_streaming_chatbot.py +++ /dev/null @@ -1,112 +0,0 @@ -import openai -import os -import os.path -import gradio -from datetime import date -from datetime import datetime -import _thread - -# import the prompts here: -from prompts import debate_prompt_1 - - -######################################### - -openai.api_key = os.getenv("OPENAI_API_KEY") - -print("OPENAI_API_KEY Working...\n") - -users = {(os.getenv("user1"), os.getenv("PASSWORD")),(os.getenv("user2"), os.getenv("PASSWORD")), - (os.getenv("user3"), os.getenv("PASSWORD")),(os.getenv("user4"), os.getenv("PASSWORD")), - (os.getenv("user5"), os.getenv("PASSWORD")),(os.getenv("user6"), os.getenv("PASSWORD")), - (os.getenv("user7"), os.getenv("PASSWORD")),(os.getenv("user8"), os.getenv("PASSWORD")), - (os.getenv("user9"), os.getenv("PASSWORD")),(os.getenv("user10"), os.getenv("PASSWORD")), - (os.getenv("user11"), os.getenv("PASSWORD")),(os.getenv("user12"), os.getenv("PASSWORD")), - (os.getenv("user13"), os.getenv("PASSWORD")),(os.getenv("user14"), os.getenv("PASSWORD")), - (os.getenv("user15"), os.getenv("PASSWORD")),(os.getenv("user16"), os.getenv("PASSWORD")), - (os.getenv("user17"), os.getenv("PASSWORD")),(os.getenv("user18"), os.getenv("PASSWORD"))} - -currentUsers = [] -user_num = -1 - -def authorization(username, password): - if (username, password) in users: - currentUsers.append(username) - global user_num - user_num += 1 - print(currentUsers, user_num) - return True - else: - return False - - -# now = datetime.now() -today = date.today() -# start_time = now.strftime("%H:%M:%S") - -output = [] - -############## STREAMING VERSION W/O FLAGGING ################################## - -def predict(message, history): - history_openai_format = [{"role": "system", "content": debate_prompt_1}] - - for human, assistant in history: - history_openai_format.append({"role": "user", "content": human }) - history_openai_format.append({"role": "assistant", "content":assistant}) - output.append(f"{currentUsers[0]}: {human}\n\n") - output.append(f"gpt-4: {assistant}\n\n") - history_openai_format.append({"role": "user", "content": message}) - - # print(currentUsers[user_num]) - # with open(f'activity/{currentUsers[user_num]}_({today}).txt', 'w') as f: - # if (len(output) > 2): - # f.write(f"{output[-2]}\n\n") - # f.write(f"{output[-1]}\n\n") - - response = openai.ChatCompletion.create( - model='gpt-4', - messages= history_openai_format, - temperature=0.8, - max_tokens=512, - top_p=1, - stream=True - ) - - partial_message = "" - for chunk in response: - if len(chunk['choices'][0]['delta']) != 0: - partial_message = partial_message + chunk['choices'][0]['delta']['content'] - yield partial_message - - # if message == 'exit': - # _thread.interrupt_main() - -gradio.ChatInterface(fn = predict, - title = "80-100 Pre-Writing AI Assistant Chatbot", - description = "Welcome to the 80-100 Pre-Writing AI Chatbot.\n This bot is designed to discuss the readings, create outlines, and a variety of pre-writing tasks.\nRemember to copy and paste your interaction to a document. Conversations are not saved.\n Please start the discussion by asking: What is your job?", - -).queue().launch(auth = authorization) - -################################################################################ - -# today = date.today() -# now2 = datetime.now() -# end_time = now2.strftime("%H:%M:%S") - -# addition = "" - -# if (os.path.isfile(f'activity/{currentUsers[0]}_({today}).txt')): -# counter = 1 -# addition = f"-{counter}" -# while(os.path.isfile(f'activity/{currentUsers[0]}_({today}){addition}.txt')): -# counter += 1 -# addition = f"-{counter}" - -# with open(f'activity/{currentUsers[0]}_({today}){addition}.txt', 'w') as f: -# f.write(f"Start of Session: {start_time} \n") -# f.write(f"End of Session: {end_time} \n\n") -# f.writelines(output) -# f.write('------End of Session------') - -# print("Activity has been logged in the history folder. Have a nice day!") diff --git a/spaces/CVPR/LIVE/thrust/testing/cuda/stream_per_thread.cmake b/spaces/CVPR/LIVE/thrust/testing/cuda/stream_per_thread.cmake deleted file mode 100644 index 265f4fdc30b5b89daa0886a6cc5ab6765d9a5202..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/testing/cuda/stream_per_thread.cmake +++ /dev/null @@ -1,11 +0,0 @@ -# This test should always use per-thread streams on NVCC. -set_target_properties(${test_target} PROPERTIES - COMPILE_OPTIONS - $<$,$>:--default-stream=per-thread> -) - -# NVC++ does not have an equivalent option, and will always -# use the global stream by default. -if (CMAKE_CUDA_COMPILER_ID STREQUAL "Feta") - set_tests_properties(${test_target} PROPERTIES WILL_FAIL ON) -endif() diff --git a/spaces/CVPR/regionclip-demo/detectron2/evaluation/fast_eval_api.py b/spaces/CVPR/regionclip-demo/detectron2/evaluation/fast_eval_api.py deleted file mode 100644 index 2eb202bd5efa3ec3d366027b1debffc269ae8b17..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/evaluation/fast_eval_api.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging -import numpy as np -import time -from pycocotools.cocoeval import COCOeval - -from detectron2 import _C - -logger = logging.getLogger(__name__) - - -class COCOeval_opt(COCOeval): - """ - This is a slightly modified version of the original COCO API, where the functions evaluateImg() - and accumulate() are implemented in C++ to speedup evaluation - """ - - def evaluate(self): - """ - Run per image evaluation on given images and store results in self.evalImgs_cpp, a - datastructure that isn't readable from Python but is used by a c++ implementation of - accumulate(). Unlike the original COCO PythonAPI, we don't populate the datastructure - self.evalImgs because this datastructure is a computational bottleneck. - :return: None - """ - tic = time.time() - - p = self.params - # add backward compatibility if useSegm is specified in params - if p.useSegm is not None: - p.iouType = "segm" if p.useSegm == 1 else "bbox" - logger.info("Evaluate annotation type *{}*".format(p.iouType)) - p.imgIds = list(np.unique(p.imgIds)) - if p.useCats: - p.catIds = list(np.unique(p.catIds)) - p.maxDets = sorted(p.maxDets) - self.params = p - - self._prepare() # bottleneck - - # loop through images, area range, max detection number - catIds = p.catIds if p.useCats else [-1] - - if p.iouType == "segm" or p.iouType == "bbox": - computeIoU = self.computeIoU - elif p.iouType == "keypoints": - computeIoU = self.computeOks - self.ious = { - (imgId, catId): computeIoU(imgId, catId) for imgId in p.imgIds for catId in catIds - } # bottleneck - - maxDet = p.maxDets[-1] - - # <<<< Beginning of code differences with original COCO API - def convert_instances_to_cpp(instances, is_det=False): - # Convert annotations for a list of instances in an image to a format that's fast - # to access in C++ - instances_cpp = [] - for instance in instances: - instance_cpp = _C.InstanceAnnotation( - int(instance["id"]), - instance["score"] if is_det else instance.get("score", 0.0), - instance["area"], - bool(instance.get("iscrowd", 0)), - bool(instance.get("ignore", 0)), - ) - instances_cpp.append(instance_cpp) - return instances_cpp - - # Convert GT annotations, detections, and IOUs to a format that's fast to access in C++ - ground_truth_instances = [ - [convert_instances_to_cpp(self._gts[imgId, catId]) for catId in p.catIds] - for imgId in p.imgIds - ] - detected_instances = [ - [convert_instances_to_cpp(self._dts[imgId, catId], is_det=True) for catId in p.catIds] - for imgId in p.imgIds - ] - ious = [[self.ious[imgId, catId] for catId in catIds] for imgId in p.imgIds] - - if not p.useCats: - # For each image, flatten per-category lists into a single list - ground_truth_instances = [[[o for c in i for o in c]] for i in ground_truth_instances] - detected_instances = [[[o for c in i for o in c]] for i in detected_instances] - - # Call C++ implementation of self.evaluateImgs() - self._evalImgs_cpp = _C.COCOevalEvaluateImages( - p.areaRng, maxDet, p.iouThrs, ious, ground_truth_instances, detected_instances - ) - self._evalImgs = None - - self._paramsEval = copy.deepcopy(self.params) - toc = time.time() - logger.info("COCOeval_opt.evaluate() finished in {:0.2f} seconds.".format(toc - tic)) - # >>>> End of code differences with original COCO API - - def accumulate(self): - """ - Accumulate per image evaluation results and store the result in self.eval. Does not - support changing parameter settings from those used by self.evaluate() - """ - logger.info("Accumulating evaluation results...") - tic = time.time() - assert hasattr( - self, "_evalImgs_cpp" - ), "evaluate() must be called before accmulate() is called." - - self.eval = _C.COCOevalAccumulate(self._paramsEval, self._evalImgs_cpp) - - # recall is num_iou_thresholds X num_categories X num_area_ranges X num_max_detections - self.eval["recall"] = np.array(self.eval["recall"]).reshape( - self.eval["counts"][:1] + self.eval["counts"][2:] - ) - - # precision and scores are num_iou_thresholds X num_recall_thresholds X num_categories X - # num_area_ranges X num_max_detections - self.eval["precision"] = np.array(self.eval["precision"]).reshape(self.eval["counts"]) - self.eval["scores"] = np.array(self.eval["scores"]).reshape(self.eval["counts"]) - toc = time.time() - logger.info("COCOeval_opt.accumulate() finished in {:0.2f} seconds.".format(toc - tic)) diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/fast_rcnn.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/fast_rcnn.py deleted file mode 100644 index 5fe09c42b06a766e20d96cfcd0ce6356ff90bdfc..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/fast_rcnn.py +++ /dev/null @@ -1,1086 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -from typing import Dict, List, Tuple, Union -import torch -from fvcore.nn import giou_loss, smooth_l1_loss -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, batched_nms, cat, cross_entropy, nonzero_tuple -from detectron2.layers.soft_nms import batched_soft_nms -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.structures import Boxes, Instances -from detectron2.utils.events import get_event_storage - -__all__ = ["fast_rcnn_inference", "FastRCNNOutputLayers", "CLIPFastRCNNOutputLayers"] - - -logger = logging.getLogger(__name__) - -""" -Shape shorthand in this module: - - N: number of images in the minibatch - R: number of ROIs, combined over all images, in the minibatch - Ri: number of ROIs in image i - K: number of foreground classes. E.g.,there are 80 foreground classes in COCO. - -Naming convention: - - deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box - transform (see :class:`box_regression.Box2BoxTransform`). - - pred_class_logits: predicted class scores in [-inf, +inf]; use - softmax(pred_class_logits) to estimate P(class). - - gt_classes: ground-truth classification labels in [0, K], where [0, K) represent - foreground object classes and K represents the background class. - - pred_proposal_deltas: predicted box2box transform deltas for transforming proposals - to detection box predictions. - - gt_proposal_deltas: ground-truth box2box transform deltas -""" - - -def fast_rcnn_inference( - boxes: List[torch.Tensor], - scores: List[torch.Tensor], - image_shapes: List[Tuple[int, int]], - score_thresh: float, - nms_thresh: float, - soft_nms_enabled, - soft_nms_method, - soft_nms_sigma, - soft_nms_prune, - topk_per_image: int, - scores_bf_multiply, -): - """ - Call `fast_rcnn_inference_single_image` for all images. - - Args: - boxes (list[Tensor]): A list of Tensors of predicted class-specific or class-agnostic - boxes for each image. Element i has shape (Ri, K * 4) if doing - class-specific regression, or (Ri, 4) if doing class-agnostic - regression, where Ri is the number of predicted objects for image i. - This is compatible with the output of :meth:`FastRCNNOutputLayers.predict_boxes`. - scores (list[Tensor]): A list of Tensors of predicted class scores for each image. - Element i has shape (Ri, K + 1), where Ri is the number of predicted objects - for image i. Compatible with the output of :meth:`FastRCNNOutputLayers.predict_probs`. - image_shapes (list[tuple]): A list of (width, height) tuples for each image in the batch. - score_thresh (float): Only return detections with a confidence score exceeding this - threshold. - nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1]. - soft_nms_enabled (bool): Indicate to use soft non-maximum suppression. - soft_nms_method: (str): One of ['gaussian', 'linear', 'hard'] - soft_nms_sigma: (float): Sigma for gaussian soft nms. Value in (0, inf) - soft_nms_prune: (float): Threshold for pruning during soft nms. Value in [0, 1] - topk_per_image (int): The number of top scoring detections to return. Set < 0 to return - all detections. - - Returns: - instances: (list[Instances]): A list of N instances, one for each image in the batch, - that stores the topk most confidence detections. - kept_indices: (list[Tensor]): A list of 1D tensor of length of N, each element indicates - the corresponding boxes/scores index in [0, Ri) from the input, for image i. - """ - result_per_image = [ - fast_rcnn_inference_single_image( - boxes_per_image, scores_per_image, image_shape, score_thresh, nms_thresh, - soft_nms_enabled, soft_nms_method, soft_nms_sigma, soft_nms_prune, topk_per_image, s_bf_per_img - ) - for scores_per_image, boxes_per_image, image_shape, s_bf_per_img in zip(scores, boxes, image_shapes, scores_bf_multiply) - ] - return [x[0] for x in result_per_image], [x[1] for x in result_per_image] - - -def _log_classification_stats(pred_logits, gt_classes, prefix="fast_rcnn"): - """ - Log the classification metrics to EventStorage. - - Args: - pred_logits: Rx(K+1) logits. The last column is for background class. - gt_classes: R labels - """ - num_instances = gt_classes.numel() - if num_instances == 0: - return - pred_classes = pred_logits.argmax(dim=1) - bg_class_ind = pred_logits.shape[1] - 1 - - fg_inds = (gt_classes >= 0) & (gt_classes < bg_class_ind) - num_fg = fg_inds.nonzero().numel() - fg_gt_classes = gt_classes[fg_inds] - fg_pred_classes = pred_classes[fg_inds] - - num_false_negative = (fg_pred_classes == bg_class_ind).nonzero().numel() - num_accurate = (pred_classes == gt_classes).nonzero().numel() - fg_num_accurate = (fg_pred_classes == fg_gt_classes).nonzero().numel() - - storage = get_event_storage() - storage.put_scalar(f"{prefix}/cls_accuracy", num_accurate / num_instances) - if num_fg > 0: - storage.put_scalar(f"{prefix}/fg_cls_accuracy", fg_num_accurate / num_fg) - storage.put_scalar(f"{prefix}/false_negative", num_false_negative / num_fg) - #print("cls_accuracy {:.2f}; fg_cls_accuracy {:.2f}; false_negative {:.2f}".format(num_accurate / num_instances, fg_num_accurate / num_fg, num_false_negative / num_fg)) - - -def fast_rcnn_inference_single_image( - boxes, - scores, - image_shape: Tuple[int, int], - score_thresh: float, - nms_thresh: float, - soft_nms_enabled, - soft_nms_method, - soft_nms_sigma, - soft_nms_prune, - topk_per_image: int, - scores_bf_multiply: None, -): - """ - Single-image inference. Return bounding-box detection results by thresholding - on scores and applying non-maximum suppression (NMS). - - Args: - Same as `fast_rcnn_inference`, but with boxes, scores, and image shapes - per image. - - Returns: - Same as `fast_rcnn_inference`, but for only one image. - """ - valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1) - if not valid_mask.all(): - boxes = boxes[valid_mask] - scores = scores[valid_mask] - scores_bf_multiply = scores_bf_multiply[valid_mask] - - # scores = scores[:, :-1] - # scores_bf_multiply = scores_bf_multiply[:, :-1] - num_bbox_reg_classes = boxes.shape[1] // 4 - # Convert to Boxes to use the `clip` function ... - boxes = Boxes(boxes.reshape(-1, 4)) - boxes.clip(image_shape) - boxes = boxes.tensor.view(-1, num_bbox_reg_classes, 4) # R x C x 4 - - # 1. Filter results based on detection scores. It can make NMS more efficient - # by filtering out low-confidence detections. - filter_mask = scores > score_thresh # R x K - # R' x 2. First column contains indices of the R predictions; - # Second column contains indices of classes. - filter_inds = filter_mask.nonzero() - if num_bbox_reg_classes == 1: - boxes = boxes[filter_inds[:, 0], 0] - else: - boxes = boxes[filter_mask] - scores = scores[filter_mask] - scores_bf_multiply = scores_bf_multiply[filter_mask] - - # 2. Apply NMS for each class independently. - if not soft_nms_enabled: - keep = batched_nms(boxes, scores, filter_inds[:, 1], nms_thresh) - else: - keep, soft_nms_scores = batched_soft_nms( - boxes, - scores, - filter_inds[:, 1], - soft_nms_method, - soft_nms_sigma, - nms_thresh, - soft_nms_prune, - ) - scores[keep] = soft_nms_scores - # scores_bf_multiply? (TBD) - scores_bf_multiply = scores - if topk_per_image >= 0: - keep = keep[:topk_per_image] - boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep] - scores_bf_multiply = scores_bf_multiply[keep] - - result = Instances(image_shape) - result.pred_boxes = Boxes(boxes) - result.scores = scores - result.scores = scores_bf_multiply # convert to the original scores before multiplying RPN scores - result.pred_classes = filter_inds[:, 1] - return result, filter_inds[:, 0] - - -class FastRCNNOutputs: - """ - An internal implementation that stores information about outputs of a Fast R-CNN head, - and provides methods that are used to decode the outputs of a Fast R-CNN head. - """ - - def __init__( - self, - box2box_transform, - pred_class_logits, - pred_proposal_deltas, - proposals, - smooth_l1_beta=0.0, - box_reg_loss_type="smooth_l1", - ): - """ - Args: - box2box_transform (Box2BoxTransform/Box2BoxTransformRotated): - box2box transform instance for proposal-to-detection transformations. - pred_class_logits (Tensor): A tensor of shape (R, K + 1) storing the predicted class - logits for all R predicted object instances. - Each row corresponds to a predicted object instance. - pred_proposal_deltas (Tensor): A tensor of shape (R, K * B) or (R, B) for - class-specific or class-agnostic regression. It stores the predicted deltas that - transform proposals into final box detections. - B is the box dimension (4 or 5). - When B is 4, each row is [dx, dy, dw, dh (, ....)]. - When B is 5, each row is [dx, dy, dw, dh, da (, ....)]. - proposals (list[Instances]): A list of N Instances, where Instances i stores the - proposals for image i, in the field "proposal_boxes". - When training, each Instances must have ground-truth labels - stored in the field "gt_classes" and "gt_boxes". - The total number of all instances must be equal to R. - smooth_l1_beta (float): The transition point between L1 and L2 loss in - the smooth L1 loss function. When set to 0, the loss becomes L1. When - set to +inf, the loss becomes constant 0. - box_reg_loss_type (str): Box regression loss type. One of: "smooth_l1", "giou" - """ - self.box2box_transform = box2box_transform - self.num_preds_per_image = [len(p) for p in proposals] - self.pred_class_logits = pred_class_logits - self.pred_proposal_deltas = pred_proposal_deltas - self.smooth_l1_beta = smooth_l1_beta - self.box_reg_loss_type = box_reg_loss_type - - self.image_shapes = [x.image_size for x in proposals] - - if len(proposals): - box_type = type(proposals[0].proposal_boxes) - # cat(..., dim=0) concatenates over all images in the batch - self.proposals = box_type.cat([p.proposal_boxes for p in proposals]) - assert ( - not self.proposals.tensor.requires_grad - ), "Proposals should not require gradients!" - - # "gt_classes" exists if and only if training. But other gt fields may - # not necessarily exist in training for images that have no groundtruth. - if proposals[0].has("gt_classes"): - self.gt_classes = cat([p.gt_classes for p in proposals], dim=0) - - # If "gt_boxes" does not exist, the proposals must be all negative and - # should not be included in regression loss computation. - # Here we just use proposal_boxes as an arbitrary placeholder because its - # value won't be used in self.box_reg_loss(). - gt_boxes = [ - p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes for p in proposals - ] - self.gt_boxes = box_type.cat(gt_boxes) - else: - self.proposals = Boxes(torch.zeros(0, 4, device=self.pred_proposal_deltas.device)) - self._no_instances = len(self.proposals) == 0 # no instances found - - def softmax_cross_entropy_loss(self): - """ - Deprecated - """ - _log_classification_stats(self.pred_class_logits, self.gt_classes) - return cross_entropy(self.pred_class_logits, self.gt_classes, reduction="mean") - - def box_reg_loss(self): - """ - Deprecated - """ - if self._no_instances: - return 0.0 * self.pred_proposal_deltas.sum() - - box_dim = self.proposals.tensor.size(1) # 4 or 5 - cls_agnostic_bbox_reg = self.pred_proposal_deltas.size(1) == box_dim - device = self.pred_proposal_deltas.device - - bg_class_ind = self.pred_class_logits.shape[1] - 1 - # Box delta loss is only computed between the prediction for the gt class k - # (if 0 <= k < bg_class_ind) and the target; there is no loss defined on predictions - # for non-gt classes and background. - # Empty fg_inds should produce a valid loss of zero because reduction=sum. - fg_inds = nonzero_tuple((self.gt_classes >= 0) & (self.gt_classes < bg_class_ind))[0] - - if cls_agnostic_bbox_reg: - # pred_proposal_deltas only corresponds to foreground class for agnostic - gt_class_cols = torch.arange(box_dim, device=device) - else: - # pred_proposal_deltas for class k are located in columns [b * k : b * k + b], - # where b is the dimension of box representation (4 or 5) - # Note that compared to Detectron1, - # we do not perform bounding box regression for background classes. - gt_class_cols = box_dim * self.gt_classes[fg_inds, None] + torch.arange( - box_dim, device=device - ) - - if self.box_reg_loss_type == "smooth_l1": - gt_proposal_deltas = self.box2box_transform.get_deltas( - self.proposals.tensor, self.gt_boxes.tensor - ) - loss_box_reg = smooth_l1_loss( - self.pred_proposal_deltas[fg_inds[:, None], gt_class_cols], - gt_proposal_deltas[fg_inds], - self.smooth_l1_beta, - reduction="sum", - ) - elif self.box_reg_loss_type == "giou": - fg_pred_boxes = self.box2box_transform.apply_deltas( - self.pred_proposal_deltas[fg_inds[:, None], gt_class_cols], - self.proposals.tensor[fg_inds], - ) - loss_box_reg = giou_loss( - fg_pred_boxes, - self.gt_boxes.tensor[fg_inds], - reduction="sum", - ) - else: - raise ValueError(f"Invalid bbox reg loss type '{self.box_reg_loss_type}'") - - loss_box_reg = loss_box_reg / self.gt_classes.numel() - return loss_box_reg - - def losses(self): - """ - Deprecated - """ - return {"loss_cls": self.softmax_cross_entropy_loss(), "loss_box_reg": self.box_reg_loss()} - - def predict_boxes(self): - """ - Deprecated - """ - pred = self.box2box_transform.apply_deltas(self.pred_proposal_deltas, self.proposals.tensor) - return pred.split(self.num_preds_per_image, dim=0) - - def predict_probs(self): - """ - Deprecated - """ - probs = F.softmax(self.pred_class_logits, dim=-1) - return probs.split(self.num_preds_per_image, dim=0) - - -class FastRCNNOutputLayers(nn.Module): - """ - Two linear layers for predicting Fast R-CNN outputs: - - 1. proposal-to-detection box regression deltas - 2. classification scores - """ - - @configurable - def __init__( - self, - input_shape: ShapeSpec, - *, - box2box_transform, - num_classes: int, - test_score_thresh: float = 0.0, - test_nms_thresh: float = 0.5, - soft_nms_enabled=False, - soft_nms_method="gaussian", - soft_nms_sigma=0.5, - soft_nms_prune=0.001, - test_topk_per_image: int = 100, - cls_agnostic_bbox_reg: bool = False, - smooth_l1_beta: float = 0.0, - box_reg_loss_type: str = "smooth_l1", - loss_weight: Union[float, Dict[str, float]] = 1.0, - clip_cls_emb: tuple = (False, None), - no_box_delta: bool = False, - bg_cls_loss_weight: None, - multiply_rpn_score: False, - openset_test: None, - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature to this module - box2box_transform (Box2BoxTransform or Box2BoxTransformRotated): - num_classes (int): number of foreground classes - test_score_thresh (float): threshold to filter predictions results. - test_nms_thresh (float): NMS threshold for prediction results. - test_topk_per_image (int): number of top predictions to produce per image. - cls_agnostic_bbox_reg (bool): whether to use class agnostic for bbox regression - smooth_l1_beta (float): transition point from L1 to L2 loss. Only used if - `box_reg_loss_type` is "smooth_l1" - box_reg_loss_type (str): Box regression loss type. One of: "smooth_l1", "giou" - loss_weight (float|dict): weights to use for losses. Can be single float for weighting - all losses, or a dict of individual weightings. Valid dict keys are: - * "loss_cls": applied to classification loss - * "loss_box_reg": applied to box regression loss - """ - super().__init__() - if isinstance(input_shape, int): # some backward compatibility - input_shape = ShapeSpec(channels=input_shape) - self.num_classes = num_classes - input_size = input_shape.channels * (input_shape.width or 1) * (input_shape.height or 1) - if clip_cls_emb[0]: # if combine {C4, text emb as classifier}, then has to use att_pool to match dimension - input_size = clip_cls_emb[3] if clip_cls_emb[2] in ['CLIPRes5ROIHeads', 'CLIPStandardROIHeads'] else input_size - # prediction layer for num_classes foreground classes and one background class (hence + 1) - self.cls_score = nn.Linear(input_size, num_classes + 1) - num_bbox_reg_classes = 1 if cls_agnostic_bbox_reg else num_classes - box_dim = len(box2box_transform.weights) - self.bbox_pred = nn.Linear(input_size, num_bbox_reg_classes * box_dim) - - nn.init.normal_(self.cls_score.weight, std=0.01) - nn.init.normal_(self.bbox_pred.weight, std=0.001) - for l in [self.cls_score, self.bbox_pred]: - nn.init.constant_(l.bias, 0) - - self.box2box_transform = box2box_transform - self.smooth_l1_beta = smooth_l1_beta - self.test_score_thresh = test_score_thresh - self.test_nms_thresh = test_nms_thresh - self.soft_nms_enabled = soft_nms_enabled - self.soft_nms_method = soft_nms_method - self.soft_nms_sigma = soft_nms_sigma - self.soft_nms_prune = soft_nms_prune - self.test_topk_per_image = test_topk_per_image - self.box_reg_loss_type = box_reg_loss_type - if isinstance(loss_weight, float): - loss_weight = {"loss_cls": loss_weight, "loss_box_reg": loss_weight} - self.loss_weight = loss_weight - - # use clip text embeddings as classifier's weights - self.use_clip_cls_emb = clip_cls_emb[0] - if self.use_clip_cls_emb: - ######### V2L projection layer in CVPR OVR model ######### - if openset_test[3]: # run CVPR model - self.emb_pred = nn.Linear(input_size, 768) - self.emb_pred.weight.requires_grad = False - self.emb_pred.bias.requires_grad = False - input_size = 768 - else: - self.emb_pred = None - ######### V2L projection layer in CVPR OVR model ######### - text_emb_require_grad = False - self.use_bias = False - self.tempurature = openset_test[2] # 0.01 # the smaller, the bigger difference among probs after softmax - self.no_box_delta = no_box_delta - if bg_cls_loss_weight is not None: # loss weigh for bg regions - self.cls_loss_weight = torch.ones(num_classes + 1) - self.cls_loss_weight[-1] = bg_cls_loss_weight - else: - self.cls_loss_weight = None - self.multiply_rpn_score = multiply_rpn_score - self.focal_scaled_loss = openset_test[4] - - @classmethod - def from_config(cls, cfg, input_shape): - # if cfg.MODEL.CLIP.CROP_REGION_TYPE == "RPN": - # assert cfg.MODEL.CLIP.NO_BOX_DELTA is False - return { - "input_shape": input_shape, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS), - # fmt: off - "num_classes" : cfg.MODEL.ROI_HEADS.NUM_CLASSES, - "cls_agnostic_bbox_reg" : cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, - "smooth_l1_beta" : cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA, - "test_score_thresh" : cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST, - "test_nms_thresh" : cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST, - "soft_nms_enabled" : cfg.MODEL.ROI_HEADS.SOFT_NMS_ENABLED, - "soft_nms_method" : cfg.MODEL.ROI_HEADS.SOFT_NMS_METHOD, - "soft_nms_sigma" : cfg.MODEL.ROI_HEADS.SOFT_NMS_SIGMA, - "soft_nms_prune" : cfg.MODEL.ROI_HEADS.SOFT_NMS_PRUNE, - "test_topk_per_image" : cfg.TEST.DETECTIONS_PER_IMAGE, - "box_reg_loss_type" : cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE, - "loss_weight" : {"loss_box_reg": cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT}, - "clip_cls_emb" : (cfg.MODEL.CLIP.USE_TEXT_EMB_CLASSIFIER, cfg.MODEL.CLIP.TEXT_EMB_PATH, cfg.MODEL.ROI_HEADS.NAME, cfg.MODEL.CLIP.TEXT_EMB_DIM), - "no_box_delta" : cfg.MODEL.CLIP.NO_BOX_DELTA or cfg.MODEL.CLIP.CROP_REGION_TYPE == 'GT', - "bg_cls_loss_weight" : cfg.MODEL.CLIP.BG_CLS_LOSS_WEIGHT, - "multiply_rpn_score" : cfg.MODEL.CLIP.MULTIPLY_RPN_SCORE, - "openset_test" : (cfg.MODEL.CLIP.OPENSET_TEST_NUM_CLASSES, cfg.MODEL.CLIP.OPENSET_TEST_TEXT_EMB_PATH, \ - cfg.MODEL.CLIP.CLSS_TEMP, cfg.MODEL.CLIP.RUN_CVPR_OVR, cfg.MODEL.CLIP.FOCAL_SCALED_LOSS) - # fmt: on - } - - def forward(self, x, queries): - """ - Args: - x: per-region features of shape (N, ...) for N bounding boxes to predict. - - Returns: - (Tensor, Tensor): - First tensor: shape (N,K+1), scores for each of the N box. Each row contains the - scores for K object categories and 1 background class. - - Second tensor: bounding box regression deltas for each box. Shape is shape (N,Kx4), - or (N,4) for class-agnostic regression. - """ - if x.dim() > 2: - x = torch.flatten(x, start_dim=1) - if self.use_clip_cls_emb: # use clip text embeddings as classifier's weights - normalized_x = F.normalize(x, p=2.0, dim=1) - cls_scores = normalized_x @ queries.t() - bg_cls_scores = cls_scores.new(cls_scores.shape[0], 1).fill_(0.3) - scores = cls_scores # torch.cat((cls_scores, bg_cls_scores), 1) - else: # default setting - scores = self.cls_score(x) - proposal_deltas = scores.new(scores.shape[0], 4).fill_(0) # self.bbox_pred(x) - return scores, proposal_deltas - - def losses(self, predictions, proposals): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were used - to compute predictions. The fields ``proposal_boxes``, ``gt_boxes``, - ``gt_classes`` are expected. - - Returns: - Dict[str, Tensor]: dict of losses - """ - scores, proposal_deltas = predictions - - # parse classification outputs - gt_classes = ( - cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0) - ) - _log_classification_stats(scores, gt_classes) - - # parse box regression outputs - if len(proposals): - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4 - assert not proposal_boxes.requires_grad, "Proposals should not require gradients!" - # If "gt_boxes" does not exist, the proposals must be all negative and - # should not be included in regression loss computation. - # Here we just use proposal_boxes as an arbitrary placeholder because its - # value won't be used in self.box_reg_loss(). - gt_boxes = cat( - [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals], - dim=0, - ) - else: - proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device) - - # loss weights - if self.cls_loss_weight is not None and self.cls_loss_weight.device != scores.device: - self.cls_loss_weight = self.cls_loss_weight.to(scores.device) - if self.focal_scaled_loss is not None: - loss_cls = self.focal_loss(scores, gt_classes, gamma=self.focal_scaled_loss) - else: - loss_cls = cross_entropy(scores, gt_classes, reduction="mean") if self.cls_loss_weight is None else \ - cross_entropy(scores, gt_classes, reduction="mean", weight=self.cls_loss_weight) - losses = { - "loss_cls": loss_cls, - "loss_box_reg": self.box_reg_loss( - proposal_boxes, gt_boxes, proposal_deltas, gt_classes - ), - } - return {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()} - - def focal_loss(self, inputs, targets, alpha=0.25, gamma=0.5, reduction="mean", mode='softmax'): - """Inspired by RetinaNet implementation""" - if mode == 'sigmoid': # original focal loss implementation, except we include bg loss - targets = F.one_hot(targets, num_classes=self.num_classes + 1).to(inputs.dtype) # create binary label for each logit entry, including bg loss - p = torch.sigmoid(inputs) - ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none") - p_t = p * targets + (1 - p) * (1 - targets) - loss = ce_loss * ((1 - p_t) ** gamma) - - if alpha >= 0: - alpha_t = alpha * targets + (1 - alpha) * (1 - targets) - loss = alpha_t * loss - elif mode == 'softmax': - only_fg = False # if True, only fg rois are attached the focal loss scaling - #gamma = 0.3 # 0.5 # 0.8 # 1.5 # 1.0 - alpha = -1 # no binary target in this case; instead, we can use bg loss weight - if targets.numel() == 0 and reduction == "mean": - return input.sum() * 0.0 # connect the gradient - ce_loss = F.cross_entropy(inputs, targets, reduction="none") - p = F.softmax(inputs, dim=-1) - p_t = p[torch.arange(p.size(0)).to(p.device), targets] # get prob of target class - if only_fg: # apply scaling to only fg rois - roi_wise_gamma = torch.zeros(p.size(0)).to(p.device) - roi_wise_gamma[targets != self.num_classes] = gamma - gamma = roi_wise_gamma - loss = ce_loss * ((1 - p_t) ** gamma) - - # if alpha >= 0: - # alpha_t = alpha * targets + (1 - alpha) * (1 - targets) - # loss = alpha_t * loss - # bg loss weight - if self.cls_loss_weight is not None: - loss_weight = torch.ones(loss.size(0)).to(p.device) - loss_weight[targets == self.num_classes] = self.cls_loss_weight[-1].item() - loss = loss * loss_weight - - if reduction == "mean": - loss = loss.mean() - elif reduction == "sum": - loss = loss.sum() - - return loss - - def box_reg_loss(self, proposal_boxes, gt_boxes, pred_deltas, gt_classes): - """ - Args: - All boxes are tensors with the same shape Rx(4 or 5). - gt_classes is a long tensor of shape R, the gt class label of each proposal. - R shall be the number of proposals. - """ - box_dim = proposal_boxes.shape[1] # 4 or 5 - # Regression loss is only computed for foreground proposals (those matched to a GT) - fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < self.num_classes))[0] - if pred_deltas.shape[1] == box_dim: # cls-agnostic regression - fg_pred_deltas = pred_deltas[fg_inds] - else: - fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[ - fg_inds, gt_classes[fg_inds] - ] - - if self.box_reg_loss_type == "smooth_l1": - gt_pred_deltas = self.box2box_transform.get_deltas( - proposal_boxes[fg_inds], - gt_boxes[fg_inds], - ) - loss_box_reg = smooth_l1_loss( - fg_pred_deltas, gt_pred_deltas, self.smooth_l1_beta, reduction="sum" - ) - elif self.box_reg_loss_type == "giou": - fg_pred_boxes = self.box2box_transform.apply_deltas( - fg_pred_deltas, proposal_boxes[fg_inds] - ) - loss_box_reg = giou_loss(fg_pred_boxes, gt_boxes[fg_inds], reduction="sum") - else: - raise ValueError(f"Invalid bbox reg loss type '{self.box_reg_loss_type}'") - # The reg loss is normalized using the total number of regions (R), not the number - # of foreground regions even though the box regression loss is only defined on - # foreground regions. Why? Because doing so gives equal training influence to - # each foreground example. To see how, consider two different minibatches: - # (1) Contains a single foreground region - # (2) Contains 100 foreground regions - # If we normalize by the number of foreground regions, the single example in - # minibatch (1) will be given 100 times as much influence as each foreground - # example in minibatch (2). Normalizing by the total number of regions, R, - # means that the single example in minibatch (1) and each of the 100 examples - # in minibatch (2) are given equal influence. - return loss_box_reg / max(gt_classes.numel(), 1.0) # return 0 if empty - - def inference(self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances]): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. The ``proposal_boxes`` field is expected. - - Returns: - list[Instances]: same as `fast_rcnn_inference`. - list[Tensor]: same as `fast_rcnn_inference`. - """ - boxes = self.predict_boxes(predictions, proposals) - scores = self.predict_probs(predictions, proposals) - image_shapes = [x.image_size for x in proposals] - scores_bf_multiply = scores # as a backup - if self.multiply_rpn_score: - rpn_scores = [p.get('objectness_logits') for p in proposals] - # filter based on rpn_scores - # boxes = (boxes[0][rpn_scores[0] > 0.9],) - # scores = (scores[0][rpn_scores[0] > 0.9],) - # rpn_scores = [rpn_scores[0][rpn_scores[0] > 0.9]] - # scores_bf_multiply = scores # as a backup - #rpn_scores = [p.get('objectness_logits').sigmoid() for p in proposals] - scores = [(torch.sigmoid(s) * torch.sigmoid(rpn_s[:, None])) ** 0.5 for s, rpn_s in zip(scores, rpn_scores)] - return fast_rcnn_inference( - boxes, - scores, - image_shapes, - self.test_score_thresh, - self.test_nms_thresh, - self.soft_nms_enabled, - self.soft_nms_method, - self.soft_nms_sigma, - self.soft_nms_prune, - self.test_topk_per_image, - scores_bf_multiply = scores_bf_multiply if self.multiply_rpn_score else None, - ) - - def predict_boxes_for_gt_classes(self, predictions, proposals): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were used - to compute predictions. The fields ``proposal_boxes``, ``gt_classes`` are expected. - - Returns: - list[Tensor]: - A list of Tensors of predicted boxes for GT classes in case of - class-specific box head. Element i of the list has shape (Ri, B), where Ri is - the number of proposals for image i and B is the box dimension (4 or 5) - """ - if not len(proposals): - return [] - scores, proposal_deltas = predictions - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) - N, B = proposal_boxes.shape - predict_boxes = self.box2box_transform.apply_deltas( - proposal_deltas, proposal_boxes - ) # Nx(KxB) - - K = predict_boxes.shape[1] // B - if K > 1: - gt_classes = torch.cat([p.gt_classes for p in proposals], dim=0) - # Some proposals are ignored or have a background class. Their gt_classes - # cannot be used as index. - gt_classes = gt_classes.clamp_(0, K - 1) - - predict_boxes = predict_boxes.view(N, K, B)[ - torch.arange(N, dtype=torch.long, device=predict_boxes.device), gt_classes - ] - num_prop_per_image = [len(p) for p in proposals] - return predict_boxes.split(num_prop_per_image) - - def predict_boxes( - self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances] - ): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. The ``proposal_boxes`` field is expected. - - Returns: - list[Tensor]: - A list of Tensors of predicted class-specific or class-agnostic boxes - for each image. Element i has shape (Ri, K * B) or (Ri, B), where Ri is - the number of proposals for image i and B is the box dimension (4 or 5) - """ - if not len(proposals): - return [] - _, proposal_deltas = predictions - num_prop_per_image = [len(p) for p in proposals] - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) - if self.no_box_delta: - predict_boxes = proposal_boxes - else: - predict_boxes = self.box2box_transform.apply_deltas( - proposal_deltas, - proposal_boxes, - ) # Nx(KxB) - return predict_boxes.split(num_prop_per_image) - - def predict_probs( - self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances] - ): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. - - Returns: - list[Tensor]: - A list of Tensors of predicted class probabilities for each image. - Element i has shape (Ri, K + 1), where Ri is the number of proposals for image i. - """ - scores, _ = predictions - num_inst_per_image = [len(p) for p in proposals] - # probs = F.softmax(scores, dim=-1) - probs = scores - return probs.split(num_inst_per_image, dim=0) - - -class OLDFastRCNNOutputLayers(nn.Module): - """ - Two linear layers for predicting Fast R-CNN outputs: - - 1. proposal-to-detection box regression deltas - 2. classification scores - """ - - @configurable - def __init__( - self, - input_shape: ShapeSpec, - *, - box2box_transform, - num_classes: int, - test_score_thresh: float = 0.0, - test_nms_thresh: float = 0.5, - test_topk_per_image: int = 100, - cls_agnostic_bbox_reg: bool = False, - smooth_l1_beta: float = 0.0, - box_reg_loss_type: str = "smooth_l1", - loss_weight: Union[float, Dict[str, float]] = 1.0, - no_box_delta: bool = False, - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature to this module - box2box_transform (Box2BoxTransform or Box2BoxTransformRotated): - num_classes (int): number of foreground classes - test_score_thresh (float): threshold to filter predictions results. - test_nms_thresh (float): NMS threshold for prediction results. - test_topk_per_image (int): number of top predictions to produce per image. - cls_agnostic_bbox_reg (bool): whether to use class agnostic for bbox regression - smooth_l1_beta (float): transition point from L1 to L2 loss. Only used if - `box_reg_loss_type` is "smooth_l1" - box_reg_loss_type (str): Box regression loss type. One of: "smooth_l1", "giou" - loss_weight (float|dict): weights to use for losses. Can be single float for weighting - all losses, or a dict of individual weightings. Valid dict keys are: - * "loss_cls": applied to classification loss - * "loss_box_reg": applied to box regression loss - """ - super().__init__() - if isinstance(input_shape, int): # some backward compatibility - input_shape = ShapeSpec(channels=input_shape) - self.num_classes = num_classes - input_size = input_shape.channels * (input_shape.width or 1) * (input_shape.height or 1) - # prediction layer for num_classes foreground classes and one background class (hence + 1) - self.cls_score = nn.Linear(input_size, num_classes + 1) - num_bbox_reg_classes = 1 if cls_agnostic_bbox_reg else num_classes - box_dim = len(box2box_transform.weights) - self.bbox_pred = nn.Linear(input_size, num_bbox_reg_classes * box_dim) - - nn.init.normal_(self.cls_score.weight, std=0.01) - nn.init.normal_(self.bbox_pred.weight, std=0.001) - for l in [self.cls_score, self.bbox_pred]: - nn.init.constant_(l.bias, 0) - - self.box2box_transform = box2box_transform - self.smooth_l1_beta = smooth_l1_beta - self.test_score_thresh = test_score_thresh - self.test_nms_thresh = test_nms_thresh - self.test_topk_per_image = test_topk_per_image - self.box_reg_loss_type = box_reg_loss_type - if isinstance(loss_weight, float): - loss_weight = {"loss_cls": loss_weight, "loss_box_reg": loss_weight} - self.loss_weight = loss_weight - self.no_box_delta = no_box_delta - - @classmethod - def from_config(cls, cfg, input_shape): - return { - "input_shape": input_shape, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS), - # fmt: off - "num_classes" : cfg.MODEL.ROI_HEADS.NUM_CLASSES, - "cls_agnostic_bbox_reg" : cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, - "smooth_l1_beta" : cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA, - "test_score_thresh" : cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST, - "test_nms_thresh" : cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST, - "test_topk_per_image" : cfg.TEST.DETECTIONS_PER_IMAGE, - "box_reg_loss_type" : cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE, - "loss_weight" : {"loss_box_reg": cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT}, - "no_box_delta" : cfg.MODEL.CLIP.NO_BOX_DELTA or cfg.MODEL.CLIP.CROP_REGION_TYPE == 'GT', - # fmt: on - } - - def forward(self, x): - """ - Args: - x: per-region features of shape (N, ...) for N bounding boxes to predict. - - Returns: - (Tensor, Tensor): - First tensor: shape (N,K+1), scores for each of the N box. Each row contains the - scores for K object categories and 1 background class. - - Second tensor: bounding box regression deltas for each box. Shape is shape (N,Kx4), - or (N,4) for class-agnostic regression. - """ - if x.dim() > 2: - x = torch.flatten(x, start_dim=1) - scores = self.cls_score(x) - proposal_deltas = self.bbox_pred(x) - return scores, proposal_deltas - - def losses(self, predictions, proposals): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were used - to compute predictions. The fields ``proposal_boxes``, ``gt_boxes``, - ``gt_classes`` are expected. - - Returns: - Dict[str, Tensor]: dict of losses - """ - scores, proposal_deltas = predictions - - # parse classification outputs - gt_classes = ( - cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0) - ) - _log_classification_stats(scores, gt_classes) - - # parse box regression outputs - if len(proposals): - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4 - assert not proposal_boxes.requires_grad, "Proposals should not require gradients!" - # If "gt_boxes" does not exist, the proposals must be all negative and - # should not be included in regression loss computation. - # Here we just use proposal_boxes as an arbitrary placeholder because its - # value won't be used in self.box_reg_loss(). - gt_boxes = cat( - [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals], - dim=0, - ) - else: - proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device) - - losses = { - "loss_cls": cross_entropy(scores, gt_classes, reduction="mean"), - "loss_box_reg": self.box_reg_loss( - proposal_boxes, gt_boxes, proposal_deltas, gt_classes - ), - } - return {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()} - - def box_reg_loss(self, proposal_boxes, gt_boxes, pred_deltas, gt_classes): - """ - Args: - All boxes are tensors with the same shape Rx(4 or 5). - gt_classes is a long tensor of shape R, the gt class label of each proposal. - R shall be the number of proposals. - """ - box_dim = proposal_boxes.shape[1] # 4 or 5 - # Regression loss is only computed for foreground proposals (those matched to a GT) - fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < self.num_classes))[0] - if pred_deltas.shape[1] == box_dim: # cls-agnostic regression - fg_pred_deltas = pred_deltas[fg_inds] - else: - fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[ - fg_inds, gt_classes[fg_inds] - ] - - if self.box_reg_loss_type == "smooth_l1": - gt_pred_deltas = self.box2box_transform.get_deltas( - proposal_boxes[fg_inds], - gt_boxes[fg_inds], - ) - loss_box_reg = smooth_l1_loss( - fg_pred_deltas, gt_pred_deltas, self.smooth_l1_beta, reduction="sum" - ) - elif self.box_reg_loss_type == "giou": - fg_pred_boxes = self.box2box_transform.apply_deltas( - fg_pred_deltas, proposal_boxes[fg_inds] - ) - loss_box_reg = giou_loss(fg_pred_boxes, gt_boxes[fg_inds], reduction="sum") - else: - raise ValueError(f"Invalid bbox reg loss type '{self.box_reg_loss_type}'") - # The reg loss is normalized using the total number of regions (R), not the number - # of foreground regions even though the box regression loss is only defined on - # foreground regions. Why? Because doing so gives equal training influence to - # each foreground example. To see how, consider two different minibatches: - # (1) Contains a single foreground region - # (2) Contains 100 foreground regions - # If we normalize by the number of foreground regions, the single example in - # minibatch (1) will be given 100 times as much influence as each foreground - # example in minibatch (2). Normalizing by the total number of regions, R, - # means that the single example in minibatch (1) and each of the 100 examples - # in minibatch (2) are given equal influence. - return loss_box_reg / max(gt_classes.numel(), 1.0) # return 0 if empty - - def inference(self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances]): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. The ``proposal_boxes`` field is expected. - - Returns: - list[Instances]: same as `fast_rcnn_inference`. - list[Tensor]: same as `fast_rcnn_inference`. - """ - boxes = self.predict_boxes(predictions, proposals) - scores = self.predict_probs(predictions, proposals) - image_shapes = [x.image_size for x in proposals] - return fast_rcnn_inference( - boxes, - scores, - image_shapes, - self.test_score_thresh, - self.test_nms_thresh, - self.test_topk_per_image, - ) - - def predict_boxes_for_gt_classes(self, predictions, proposals): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were used - to compute predictions. The fields ``proposal_boxes``, ``gt_classes`` are expected. - - Returns: - list[Tensor]: - A list of Tensors of predicted boxes for GT classes in case of - class-specific box head. Element i of the list has shape (Ri, B), where Ri is - the number of proposals for image i and B is the box dimension (4 or 5) - """ - if not len(proposals): - return [] - scores, proposal_deltas = predictions - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) - N, B = proposal_boxes.shape - predict_boxes = self.box2box_transform.apply_deltas( - proposal_deltas, proposal_boxes - ) # Nx(KxB) - - K = predict_boxes.shape[1] // B - if K > 1: - gt_classes = torch.cat([p.gt_classes for p in proposals], dim=0) - # Some proposals are ignored or have a background class. Their gt_classes - # cannot be used as index. - gt_classes = gt_classes.clamp_(0, K - 1) - - predict_boxes = predict_boxes.view(N, K, B)[ - torch.arange(N, dtype=torch.long, device=predict_boxes.device), gt_classes - ] - num_prop_per_image = [len(p) for p in proposals] - return predict_boxes.split(num_prop_per_image) - - def predict_boxes( - self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances] - ): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. The ``proposal_boxes`` field is expected. - - Returns: - list[Tensor]: - A list of Tensors of predicted class-specific or class-agnostic boxes - for each image. Element i has shape (Ri, K * B) or (Ri, B), where Ri is - the number of proposals for image i and B is the box dimension (4 or 5) - """ - if not len(proposals): - return [] - _, proposal_deltas = predictions - num_prop_per_image = [len(p) for p in proposals] - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) - if self.no_box_delta: - predict_boxes = proposal_boxes - else: - predict_boxes = self.box2box_transform.apply_deltas( - proposal_deltas, - proposal_boxes, - ) # Nx(KxB) - return predict_boxes.split(num_prop_per_image) - - def predict_probs( - self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances] - ): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. - - Returns: - list[Tensor]: - A list of Tensors of predicted class probabilities for each image. - Element i has shape (Ri, K + 1), where Ri is the number of proposals for image i. - """ - scores, _ = predictions - num_inst_per_image = [len(p) for p in proposals] - probs = F.softmax(scores, dim=-1) - return probs.split(num_inst_per_image, dim=0) \ No newline at end of file diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/js/overlay.js b/spaces/ChandraMohanNayal/AutoGPT/autogpt/js/overlay.js deleted file mode 100644 index 1c99c72673330b8ea8cf037ef889233f2d4326be..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/js/overlay.js +++ /dev/null @@ -1,29 +0,0 @@ -const overlay = document.createElement('div'); -Object.assign(overlay.style, { - position: 'fixed', - zIndex: 999999, - top: 0, - left: 0, - width: '100%', - height: '100%', - background: 'rgba(0, 0, 0, 0.7)', - color: '#fff', - fontSize: '24px', - fontWeight: 'bold', - display: 'flex', - justifyContent: 'center', - alignItems: 'center', -}); -const textContent = document.createElement('div'); -Object.assign(textContent.style, { - textAlign: 'center', -}); -textContent.textContent = 'AutoGPT Analyzing Page'; -overlay.appendChild(textContent); -document.body.append(overlay); -document.body.style.overflow = 'hidden'; -let dotCount = 0; -setInterval(() => { - textContent.textContent = 'AutoGPT Analyzing Page' + '.'.repeat(dotCount); - dotCount = (dotCount + 1) % 4; -}, 1000); diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/google/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/google/__init__.py deleted file mode 100644 index 2e0544d44ba347a347f97f8bd017a3295d08d3c1..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/google/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from typing import List - -from pil_utils import BuildImage, Text2Image - -from meme_generator import add_meme - - -def google(images, texts: List[str], args): - text = texts[0] - text = " ".join(text.splitlines()) - colors = ["#4285f4", "#db4437", "#f4b400", "#4285f4", "#0f9d58", "#db4437"] - t2m = Text2Image.from_text(text, 200) - index = 0 - for char in t2m.lines[0].chars: - char.fill = colors[index % len(colors)] - if char.char.strip(): - index += 1 - return BuildImage(t2m.to_image(bg_color="white", padding=(50, 50))).save_jpg() - - -add_meme( - "google", - google, - min_texts=1, - max_texts=1, - default_texts=["Google"], - keywords=["google"], -) diff --git a/spaces/CoPoBio/skin_cancer_risk_prediction/app.py b/spaces/CoPoBio/skin_cancer_risk_prediction/app.py deleted file mode 100644 index 2ec187954f3d435163daff639b00343c068f3f24..0000000000000000000000000000000000000000 --- a/spaces/CoPoBio/skin_cancer_risk_prediction/app.py +++ /dev/null @@ -1,241 +0,0 @@ -import gradio as gr -import os - -# -*- coding: utf-8 -*- - -import cv2 -import numpy as np -import torch -from torchvision import transforms -from simple_vae import VAE -from PIL import Image -import tensorflow.compat.v1 as tf -tf.disable_v2_behavior() - - -import imutils -import dlib -from facealigner import FaceAligner -from imutils.face_utils import rect_to_bb -from imutils import face_utils - - - -torch.hub.download_url_to_file(os.environ['MODEL'], 'ae_trained_model.pth') - - -#-------------------------------start facial preprocessing------------------------------ -detector_size = 512 - -# construct the arguments; shape-predictor & image -shape_predictor = 'shape_predictor_68_face_landmarks.dat' - -#initialize dlib's face detector (HOG-based) and then create -# the facial landmark predictor and the face aligner -detector = dlib.get_frontal_face_detector() -predictor = dlib.shape_predictor(shape_predictor) -fa = FaceAligner(predictor, desiredFaceWidth=detector_size) - - -def face_preprocessing(image_src):#,save_name): - - image_resized = imutils.resize(image_src, width=768) - gray = cv2.cvtColor(image_resized, cv2.COLOR_BGR2GRAY) - rects = detector(gray, 2) - if len(rects) == 0: - print('no face detected') - return image_src, 0 - rect = rects[0] - #print(image_resized.shape, gray.shape, rect) - img = fa.align(image_resized, gray, rect) - #print(img.shape) - #exit(0) - gray2 = img.copy() - rects2 = detector(gray2, 2) ######### - if len(rects2) == 0: - print('no face detected after alignment') - return img, 0 - rect = rects2[0] - - lm = predictor(gray2, rect) - lm = face_utils.shape_to_np(lm) - - left = lm[0] - right = lm[16] - top = lm[20] - nosetip = lm[30] - jaw = lm[8] - n=60 - #print(left) - #print(2*(left[0]-n)-(1024-jaw[1]),jaw[1],left[0]-n,1024-(left[0]-n)) - if left[0]-n >= detector_size-(left[0]-n) or left[0]-n > detector_size-(left[0]-n): - print('not able to crop') - return img, 0 - img_crop = img[left[0]-n:detector_size-(left[0]-n),left[0]-n:detector_size-(left[0]-n)] - #mg_crop = img[ 2*(left[0]-n)-(1024-jaw[1]):jaw[1],left[0]-n:1024-(left[0]-n)] - - - gray3 = img_crop.copy() - rects3= detector(gray3, 2) ######### - if len(rects3) == 0: - print('no face detected after cropping') - return img, 0 - rect = rects3[0] - - lm = predictor(gray3, rect) - lm = face_utils.shape_to_np(lm) - - - - n_size=img_crop.shape[0] - landmarks_points=[] - for n in range(0,17): - if n == 0: - x = 0 - y = 0 - elif n == 16: - x=n_size - y=0 - else: - x=lm[n][0] - y=lm[n][1] - - landmarks_points.append((x,y)) - #print(landmarks_points) - #exit(0) - target_gray=cv2.cvtColor(img_crop,cv2.COLOR_RGB2GRAY) - mask=np.zeros_like(target_gray) - points=np.array(landmarks_points,np.int32) - - convexhull=cv2.convexHull(points) - - cv2.fillConvexPoly(mask,convexhull,255) - - target_face_1=cv2.bitwise_and(img_crop,img_crop,mask=mask) - - - return target_face_1, 1 - - -#----------------------------end facial preprocessing------------ - - -device = torch.device('cpu') - -model_ae = VAE(device=device, is_train=False).to(device) - -try: - model_ae.load_state_dict(torch.load('ae_trained_model.pth',map_location='cpu')) -except Exception as e: - print(f"Error loading model: {str(e)}") - -# Images - - - -batch_size = 1 -x = tf.placeholder(tf.float32, shape=[batch_size, 200], name='x') -logits = tf.layers.dense(inputs=x, units=1, activation=None, name='dense', - kernel_initializer=tf.truncated_normal_initializer(stddev=0.01)) -config = tf.ConfigProto() -config.gpu_options.allow_growth = True -saver = tf.train.Saver() - -sess = tf.Session(config=config) -sess.run(tf.global_variables_initializer()) -saver.restore(sess, tf.train.latest_checkpoint('checkpoint_dcph/')) - - -def image_classifier(image): - image_switch_ch = image.copy() - image_switch_ch[:,:,0] = image[:,:,2] - image_switch_ch[:,:,1] = image[:,:,1] - image_switch_ch[:,:,2] = image[:,:,0] - image_processed, processed_flag = face_preprocessing(image_switch_ch) - if processed_flag == 0: - print('no face detected') - return 'no face detected' - image_processed_resized = cv2.resize(image_processed, (128, 128),interpolation=cv2.INTER_AREA) - colorimage_b = cv2.equalizeHist(image_processed_resized[:,:,0]) - colorimage_g = cv2.equalizeHist(image_processed_resized[:,:,1]) - colorimage_r = cv2.equalizeHist(image_processed_resized[:,:,2]) - # Next we stack our equalized channels back into a single image - image_processed_resized_eq = np.stack((colorimage_b,colorimage_g,colorimage_r), axis=2).astype(dtype='float32') - image_processed_resized_eq = image_processed_resized_eq/255.0 - img_tensor = torch.tensor(image_processed_resized_eq).to(torch.float32) - img_tensor = img_tensor.permute(2, 0, 1).unsqueeze(0) - #print(img_tensor.shape,img_tensor) - save_z = [] - with torch.no_grad(): - model_ae.eval() - latent_z = model_ae.encode(img_tensor) - save_z = latent_z.detach().cpu().numpy() - #print(save_z) - - logits_out = sess.run(logits, feed_dict={x: save_z}) - normalized_score = (logits_out[0]+10.22614574432373)/(5.887106418609619+10.22614574432373) - normalized_score =float(normalized_score) - if normalized_score > 1: - normalized_score = 1 - if normalized_score < 0: - normalized_score = 0 - - return f'The predicted risk is: {normalized_score:.2f}' - -title = "Demonstration of skin cancer risk prediction" -description = """ - -""" -examples_xai=[ - ['02.jpg'], ['03.jpg'],['04.jpg'], ['05.jpg'], ['06.jpg'],['07.jpg'], ['08.jpg'], ['09.jpg'],['10.jpg'], ['11.jpg'] -] - -examples_external=[ - ['baby.jpg'],['elderly.jpg'] -] - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # Predict the risk of developing skin cancer from facial image - This app is a proof-of-concept demonstration of predicting the risk of developing skin cancer from 2D facial image. Link to the [manuscript](https://www.medrxiv.org/content/10.1101/2023.10.04.23296549v1). - - Abstract
        - Background: Efficient identification of individuals at high risk of skin cancer is crucial for implementing personalized screening strategies and subsequent care. While Artificial Intelligence holds promising potential for predictive analysis using image data, its application for skin cancer risk prediction utilizing facial images remains unexplored. We present a neural network-based explainable artificial intelligence (XAI) approach for skin cancer risk prediction based on 2D facial images and compare its efficacy to 18 established skin cancer risk factors using data from the Rotterdam Study. - - Methods: The study employed data from the Rotterdam population-based study in which both skin cancer risk factors and 2D facial images and the occurrence of skin cancer were collected from 2010 to 2018. We conducted a deep-learning survival analysis based on 2D facial images using our developed XAI approach. We subsequently compared these results with survival analysis based on skin cancer risk factors using cox proportional hazard regression. - - Findings: Among the 2,810 participants (mean Age=68.5±9.3 years, average Follow-up=5.0 years), 228 participants were diagnosed with skin cancer after photo acquisition. Our XAI approach achieved superior predictive accuracy based on 2D facial images (c-index=0.72, SD=0.05), outperforming that of the known risk factors (c-index=0.59, SD=0.03). - - Interpretation: This proof-of-concept study underscores the high potential of harnessing facial images and a tailored XAI approach as an easily accessible alternative over known risk factors for identifying individuals at high risk of skin cancer. - - Please kindly note that:
        - 1) This app does not collect any uploaded image data;
        - 2) The model was trained with facial images of participants (age > 50, Dutch European) from the [Rotterdam Study](http://www.epib.nl/research/ergo.htm), and images were taken in a room with consistent ambient lighting. - """) - uploaded_image = gr.Image(type="numpy") - txt_output = gr.Textbox(value="", label="Output") - btn = gr.Button(value="Submit") - btn.click(image_classifier, inputs=[uploaded_image], outputs=[txt_output]) - gr.Markdown("## Image examples generated by our XAI approach") - gr.Examples( - examples=examples_xai, - inputs=uploaded_image, - outputs=[txt_output], - fn=image_classifier, - cache_examples=True, - ) - gr.Markdown("## External image examples") - gr.Examples( - examples=examples_external, - inputs=uploaded_image, - outputs=[txt_output], - fn=image_classifier, - cache_examples=True, - ) -#demo = gr.Interface(fn=image_classifier, inputs=uploaded_image, outputs="label", title=title, description=description) -#demo.launch(show_api=False) -demo.launch() - -sess.close() \ No newline at end of file diff --git "a/spaces/Cong723/gpt-academic-public/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" "b/spaces/Cong723/gpt-academic-public/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" deleted file mode 100644 index 5cf239e1d44ace85e1a32561513cfd37ee3255d9..0000000000000000000000000000000000000000 --- "a/spaces/Cong723/gpt-academic-public/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" +++ /dev/null @@ -1,59 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((txt, "正在同时咨询gpt-3.5和gpt-4……")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = 'gpt-3.5-turbo&gpt-4' # 支持任意数量的llm接口,用&符号分隔 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - - -@CatchException -def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 \ No newline at end of file diff --git a/spaces/Cpp4App/Cpp4App/examples/11.html b/spaces/Cpp4App/Cpp4App/examples/11.html deleted file mode 100644 index 87c5234bfcae85325f6853dce784b5da320d94b3..0000000000000000000000000000000000000000 --- a/spaces/Cpp4App/Cpp4App/examples/11.html +++ /dev/null @@ -1,698 +0,0 @@ - Privacy Policy - Data Policy | Snapchat Privacy

        Privacy and Safety Hub

        Privacy Policy

        Effective: June 29, 2022
        Snap Inc. is a camera company. Our products and services — including Snapchat, Bitmoji, Spectacles, advertising, commerce, and others that link to this Privacy Policy — provide fast and fun ways to express yourself, live in the moment, learn about the world, and have fun together!
        When you use these services, you’ll share some information with us. So we want to be upfront about the information we collect, how we use it, whom we share it with, and the controls we give you to access, update, and delete your information.
        That’s why we’ve written this Privacy Policy. And it’s why we’ve tried to write it in a way that’s easy to understand for all our users and blissfully free of the legalese that often clouds these documents. Of course, if you still have questions about anything in our Privacy Policy, just contact us.
        You should read our entire Privacy Policy, but when you only have a few minutes or want to remember something later on, you can always take a look at this overview and video. We also encourage you to check out the rest of our Privacy Center. We designed it to give you easy-to-digest summaries of our privacy practices. For example, our Privacy by Product page provides a breakdown of specific privacy features for our products.

        Information We Collect

        There are three basic categories of information we collect:
        • Information you provide.
        • Information we get when you use our services.
        • Information we get from third parties.
        Here’s a little more detail on each of these categories.
        Information You Provide
        When you interact with our services, we collect information that you provide to us. For example, many of our services require you to set up an account, so we may need to collect a few important details about you, such as your name, username, password, email address, phone number, and date of birth. We may also ask you to provide us with some additional information that will be publicly visible on our services, such as a profile picture or Bitmoji avatar. Some services, such as commerce products, may require you to provide us with a debit or credit card number and its associated account information.
        Of course, you’ll also provide us whatever information you send through our services, such as Snaps and Chats. Keep in mind that the users who view your Snaps, Chats, and any other content can always save that content or copy it outside the app. So, the same common sense that applies to the internet at large applies to our services as well: Don’t send messages or share content that you wouldn’t want someone to save or share.
        When you contact customer support or communicate with us in any other way, we’ll collect whatever information you volunteer or that we need to resolve your question.
        Information We Get When You Use Our Services
        When you use our services, we collect information about which of those services you’ve used and how you’ve used them. We might know, for instance, that you watched a particular Story, saw a specific ad for a certain period of time, and sent a few Snaps. Here’s a fuller explanation of the types of information we collect when you use our services:
        • Usage Information. We collect information about your activity through our services. For example, we may collect information about:
          • how you interact with our services, such as which Filters or Lenses you view or apply to Snaps, which Stories you watch on Discover, whether you’re using Spectacles, or which search queries you submit.
          • how you communicate with other Snapchatters, such as their names, the time and date of your communications, the number of messages you exchange with your friends, which friends you exchange messages with the most, and your interactions with messages (such as when you open a message or capture a screenshot).
        • Content Information. We collect content you create on our services, such as custom stickers, and information about the content you create or provide, such as if the recipient has viewed the content and the metadata that is provided with the content.
        • Device Information. We collect information from and about the devices you use. For example, we collect:
        • information about your hardware and software, such as the hardware model, operating system version, device memory, advertising identifiers, unique application identifiers, apps installed, unique device identifiers, device usage data, browser type, keyboards installed, language, battery level, and time zone;
        • information from device sensors, such as accelerometers, gyroscopes, compasses, microphones, and whether you have headphones connected; and
        • information about your wireless and mobile network connections, such as mobile phone number, service provider, IP address, and signal strength.
        • Device Phonebook. Because our services are all about communicating with friends, we may — with your permission — collect information from your device’s phonebook.
        • Camera, Photos, and Audio. Many of our services require us to collect images and other information from your device’s camera and photos. For example, you won’t be able to send Snaps or upload photos from your camera roll unless we can access your camera or photos.
        • Location Information. When you use our services we may collect information about your location. With your permission, we may also collect information about your precise location using methods that include GPS, wireless networks, cell towers, Wi-Fi access points, and other sensors, such as gyroscopes, accelerometers, and compasses.
        • Information Collected by Cookies and Other Technologies. Like most online services and mobile applications, we may use cookies and other technologies, such as web beacons, web storage, and unique advertising identifiers, to collect information about your activity, browser, and device. We may also use these technologies to collect information when you interact with services we offer through one of our partners, such as advertising and commerce features. For example, we may use information collected on other websites to show you more relevant ads. Most web browsers are set to accept cookies by default. If you prefer, you can usually remove or reject browser cookies through the settings on your browser or device. Keep in mind, though, that removing or rejecting cookies could affect the availability and functionality of our services. To learn more about how we and our partners use cookies on our services and your choices, please check out our Cookie Policy.
        • Log Information. We also collect log information when you use our website, such as:
        • details about how you’ve used our services;
        • device information, such as your web browser type and language;
        • access times;
        • pages viewed;
        • IP address;
        • identifiers associated with cookies or other technologies that may uniquely identify your device or browser; and
        • pages you visited before or after navigating to our website.
        Information We Collect from Third Parties
        We may collect information about you from other users, our affiliates, and third parties. Here are a few examples:
        • If you link your Snapchat account to another service (like Bitmoji or a third-party app), we may receive information from the other service, like how you use that service.
        • Advertisers, app developers, publishers, and other third parties may share information with us as well. We may use this information, among other ways, to help target or measure the performance of ads. You can learn more about our use of this kind of third-party data in our Support Center.
        • If another user uploads their contact list, we may combine information from that user’s contact list with other information we have collected about you.

        How We Use Information

        What do we do with the information we collect? For the detailed answer, go here. The short answer is: Provide you with an amazing set of products and services that we relentlessly improve. Here are the ways we do that:
        • develop, operate, improve, deliver, maintain, and protect our products and services.
        • send you communications, including by email or SMS where permitted. For example, we may use email or SMS to respond to support inquiries or to share information about our products, services, and promotional offers that we think may interest you.
        • monitor and analyze trends and usage.
        • personalize our services by, among other things, suggesting friends, profile information, or Bitmoji stickers, helping Snapchatters find each other in Snapchat, affiliate and third-party apps and services, or customising the content we show you, including ads.
        • add context to your Snapchat experience, for example by tagging your Memories with searchable labels based on your location (of course, if you’ve given us permission to collect your location) and the content of your photo or video (e.g., if there’s a dog in your photo, it may be searchable in Memories by the term “dog”).
        • provide and improve our advertising services, ad targeting, and ad measurement, including through the use of your precise location information (again, if you’ve given us permission to collect that information), both on and off our services. We may also store information about your use of third-party apps and websites on your device to do this. Learn more. See the Control Over Your Information section below for more information about Snap Inc.’s advertising practices and your choices.
        • enhance the safety and security of our products and services.
        • verify your identity and prevent fraud or other unauthorised or illegal activity.
        • use information we’ve collected from cookies and other technology to enhance our services and your experience with them.
        • enforce, investigate, and report conduct violating our Terms of Service and other usage policies, respond to requests from law enforcement, and comply with legal requirements.
        We may also use information from Apple’s TrueDepth camera to improve the quality of Lenses. Information from the TrueDepth camera is used in real time — we don’t store this information on our servers or share it with third parties.

        How We Share Information

        We may share information about you in the following ways:
        • With other Snapchatters. We may share the following information with other Snapchatters:
        • information about you, such as your username, name, and Bitmoji.
        • information about how you have interacted with our services, such as your Snapchat “score,” the names of Snapchatters you are friends with, how close you are with your friends on Snapchat, your recent location history (if you choose to share your location on Snap Map), and other information that will help Snapchatters understand your connections with others using our services. For example, because it may not be clear whether a new friend request comes from someone you actually know, we may share whether you and the requestor have Snapchat friends in common.
        • information about your device, such as the operating system and device type, to help you receive Chats, Snaps, and other content in the optimal viewing format.
        • any additional information you have directed us to share. For example, Snap will share your information when you connect your Snapchat account to a third-party app, and if you share information or content from Snapchat to the third-party app.
        • content you post or send. How widely your content is shared depends on your personal settings and the type of service you are using. For example, a Snap may be sent to just a single friend you select, but your My Story content may be seen by any Snapchatter whom you allow to see your My Story.
        • With all Snapchatters, our business partners, and the general public. We may share the following information with all Snapchatters as well as with our business partners and the general public:
        • public information like your name, username, profile pictures, Snapcode, and Public Profile.
        • Public Content like your Highlights, Custom Stickers, Lenses, Story submissions that are set to be viewable by Everyone, and any content that you submit to an inherently public service, like Spotlight, Snap Map, and other crowd-sourced services. This content may be viewed, used, and shared by the public at large both on and off our services, including through search results, on websites, in apps, and in online and offline broadcasts.
        • With third parties. We may share information with third parties in the following ways:
        • We may share information about you with service providers who perform services on our behalf, including to facilitate payments and measure and optimize the performance of ads and deliver more relevant ads, including on third-party websites and apps.
        • We may share information about you with business partners that provide services and functionality on our services. For more information about information collected by third parties on our services, visit our Support Site.
        • We may share information about you, such as device and usage information, to help us and others prevent fraud.
        • We may share information about you for legal, safety, and security reasons. We may share information about you if we reasonably believe that disclosing the information is needed to:
        • comply with any valid legal process, governmental request, or applicable law, rule, or regulation.
        • investigate, remedy, or enforce potential Terms of Service and Community Guidelines violations.
        • protect the rights, property, or safety of us, our users, or others.
        • detect and resolve any fraud or security concerns.
        • We may share information about you as part of a merger or acquisition. If Snap Inc. gets involved in a merger, asset sale, financing, liquidation or bankruptcy, or acquisition of all or some portion of our business to another company, we may share your information with that company before and after the transaction closes.
        • Non-personal information. We may also share with third parties that provide services to us or perform business purposes for us aggregated, non-personally identifiable, or de-identified information.

        Third-Party Content and Integrations

        Our services may contain third-party content and integrations. Examples include third-party integrations in the Camera, third-party games in Chat, and third-party Snap Kit integrations. Through these integrations, you may be providing information to the third party as well as to Snap. We are not responsible for how those third parties collect or use your information. As always, we encourage you to review the privacy policies of every third-party service that you visit or use, including those third parties you interact with through our services. You can learn more about third-party services in Snapchat here.

        How Long We Keep Your Information

        Snapchat lets you capture what it’s like to live in the moment. On our end, that means most messages — like Snaps and Chats — sent in Snapchat will be automatically deleted by default from our servers after we detect they’ve been opened by all recipients or have expired. Other content, like Story posts, are stored for longer. For detailed information about how long we store different types of content, check out our Support Site.
        We store other information for longer periods of time. For example:
        • We store your basic account information — like your name, phone number, and email address — and list of friends until you ask us to delete them.
        • We store location information for different lengths of time based on how precise it is and which services you use. If location information is associated with a Snap — like those saved to Memories or posted to Snap Map or Spotlight — we’ll retain that location as long as we store the Snap. Pro tip: You can see the location data we retain about you by downloading your data.
        If you ever decide to stop using Snapchat, you can just ask us to delete your account. We’ll also delete most of the information we’ve collected about you after you’ve been inactive for a while!
        Keep in mind that, while our systems are designed to carry out our deletion practices automatically, we cannot promise that deletion will occur within a specific timeframe. There may be legal requirements to store your data and we may need to suspend those deletion practices if we receive valid legal process asking us to preserve content, if we receive reports of abuse or other Terms of Service violations, or if your account, content created by you, or content created with other users is flagged by others or our systems for abuse or other Terms of Service violations. Finally, we may also retain certain information in backup for a limited period of time or as required by law.

        Control Over Your Information

        We want you to be in control of your information, so we provide you with the following tools.
        • Access, Correction, and Portability. You can access and edit most of your basic account information right in our apps. You can also use Download My Data to obtain a copy of information that isn’t available in our apps in a portable format, so you can move it or store it wherever you want. Because your privacy is important to us, we will ask you to verify your identity or provide additional information before we let you access or update your personal information. We may also reject your request to access or update your personal information for a number of reasons, including, for example, if the request risks the privacy of other users or is unlawful.
        • Revoking permissions. In most cases, if you let us use your information, you can simply revoke your permission by changing the settings in the app or on your device if your device offers those options. Of course, if you do that, certain services may lose full functionality. For promotional emails and SMS, you may opt out by clicking on the unsubscribe link or similar mechanism as provided.
        • Deletion. While we hope you’ll remain a lifelong Snapchatter, if for some reason you ever want to delete your account, just go here to learn how. You can also delete some information in the app, like photos you’ve saved to Memories, Spotlight submissions, and search history.
        • Advertising Preferences. We try to show you ads that we think will be relevant to your interests. If you would like to modify the information we and our advertising partners use to select these ads, you can do so in the app and through your device preferences. Go here to learn more.
        • Tracking. If you opt out of tracking on devices running iOS 14.5 or more recent versions, we will not link identifiable information from third-party apps and websites with identifiable information from Snapchat for advertising purposes, except on your device. You can control use of this on-device data for advertising by opting out of Activity-Based Advertising in Snapchat Ad Preferences Settings. Go here to learn more.
        • Communicating with other Snapchatters. It’s important to us that you stay in control over whom you communicate with. That’s why we’ve built a number of tools in Settings that let you indicate, among other things, who you want to see your Stories, whether you’d like to receive Snaps from just your friends or all Snapchatters, and whether you’d like to block another Snapchatter from contacting you again. Go here to learn more.

        International Data Transfers

        We may collect your personal information from, transfer it to, and store and process it in the United States and other countries outside of where you live. Whenever we share information outside of where you live, when we are legally required to do so, we make sure an adequate transfer mechanism is in place. We also make sure any third parties we share information with have an adequate transfer mechanism in place, as well. You can find more information on the categories of third parties we share information with here.

        State and Region Specific Information

        You may have specific privacy rights in your state or region. For example, in the United States, residents of California and other states have specific privacy rights. Snapchatters in the European Economic Area (EEA), the UK, Brazil, the Republic of Korea, and other jurisdictions also have specific rights. We keep an up-to-date overview of state and region specific disclosures here.

        Children

        Our services are not intended for — and we don’t direct them to — anyone under 13. And that’s why we do not knowingly collect personal information from anyone under 13. In addition, we may limit how we collect, use, and store some of the information of EEA and UK users between 13 and 16. In some cases, this means we will be unable to provide certain functionality to these users. If we need to rely on consent as a legal basis for processing your information and your country requires consent from a parent, we may require your parent’s consent before we collect and use that information.

        Revisions to the Privacy Policy

        We may change this Privacy Policy from time to time. But when we do, we’ll let you know one way or another. Sometimes, we’ll let you know by revising the date at the top of the Privacy Policy that’s available on our website and mobile application. Other times, we may provide you with additional notice (such as adding a statement to our websites’ homepages or providing you with an in-app notification).
        \ No newline at end of file diff --git a/spaces/Cran-May/Mistril-7b/app.py b/spaces/Cran-May/Mistril-7b/app.py deleted file mode 100644 index 1ff6b89caed4133409105b6e3b7d32997128f1e0..0000000000000000000000000000000000000000 --- a/spaces/Cran-May/Mistril-7b/app.py +++ /dev/null @@ -1,69 +0,0 @@ -import streamlit as st -from gradio_client import Client -from time import sleep -from ctransformers import AutoModelForCausalLM -# Constants -TITLE = "Mistrial 7B Chatbot" -DESCRIPTION = """ -This Space demonstrates model [Mistrial-7b-] -""" - -# Initialize client - - -with st.sidebar: - # system_promptSide = st.text_input("Optional system prompt:") - temperatureSide = st.slider("Temperature", min_value=0.0, max_value=1.0, value=0.9, step=0.05) - max_new_tokensSide = st.slider("Max new tokens", min_value=0.0, max_value=4096.0, value=4096.0, step=64.0) - # ToppSide = st.slider("Top-p (nucleus sampling)", min_value=0.0, max_value=1.0, value=0.6, step=0.05) - # RepetitionpenaltySide = st.slider("Repetition penalty", min_value=0.0, max_value=2.0, value=1.2, step=0.05) - -# Load the model -model = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GGUF", model_file="mistral-7b-instruct-v0.1.Q5_K_S.gguf", model_type="mistral", gpu_layers=0) -ins = '''[INST] <> -You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. -If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. -<> -{} [/INST] -''' -# Define the conversation history -conversation_history = [] - -# Prediction function -def predict(message, system_prompt='', temperature=0.7, max_new_tokens=4096,Topp=0.5,Repetitionpenalty=1.2): - global conversation_history - question=message - input_text=ins - # Append the user's input to the conversation history - conversation_history.append({"role": "system", "content": input_text}) - response_text = model(ins.format(question)) - conversation_history.append({"role": "user", "content": input_text}) - conversation_history.append({"role": "assistant", "content": response_text}) - return response_text - -# Streamlit UI -st.title(TITLE) -st.write(DESCRIPTION) - - -if "messages" not in st.session_state: - st.session_state.messages = [] - -# Display chat messages from history on app rerun -for message in st.session_state.messages: - with st.chat_message(message["role"], avatar=("🧑‍💻" if message["role"] == 'human' else '🦙')): - st.markdown(message["content"]) - -# React to user input -if prompt := st.chat_input("Ask Mistril-7b anything..."): - # Display user message in chat message container - st.chat_message("human",avatar = "🧑‍💻").markdown(prompt) - # Add user message to chat history - st.session_state.messages.append({"role": "human", "content": prompt}) - - response = predict(message=prompt)#, temperature= temperatureSide,max_new_tokens=max_new_tokensSide) - # Display assistant response in chat message container - with st.chat_message("assistant", avatar='🦙'): - st.markdown(response) - # Add assistant response to chat history - st.session_state.messages.append({"role": "assistant", "content": response}) \ No newline at end of file diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/dpm_solver/sampler.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/dpm_solver/sampler.py deleted file mode 100644 index 3c191e672fa2b8cef94ba624fb4990e689374d67..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/dpm_solver/sampler.py +++ /dev/null @@ -1,88 +0,0 @@ -"""SAMPLING ONLY.""" -import torch - -from .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver - - -MODEL_TYPES = { - "eps": "noise", - "v": "v" -} - - -class DPMSolverSampler(object): - def __init__(self, model, **kwargs): - super().__init__() - self.model = model - to_torch = lambda x: x.clone().detach().to(torch.float32).to(model.device) - self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod)) - - def register_buffer(self, name, attr): - # This is in the original sampler.py, but it is forcing the attr to 'cuda' instead of the default device. - #if type(attr) == torch.Tensor: - # if attr.device != torch.device("cuda"): - # attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - - print(f'Data shape for DPM-Solver sampling is {size}, sampling steps {S}') - - device = self.model.betas.device - if x_T is None: - img = torch.randn(size, device=device) - else: - img = x_T - - ns = NoiseScheduleVP('discrete', alphas_cumprod=self.alphas_cumprod) - - model_fn = model_wrapper( - lambda x, t, c: self.model.apply_model(x, t, c), - ns, - model_type=MODEL_TYPES[self.model.parameterization], - guidance_type="classifier-free", - condition=conditioning, - unconditional_condition=unconditional_conditioning, - guidance_scale=unconditional_guidance_scale, - ) - - dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False) - x = dpm_solver.sample(img, steps=S, skip_type="time_uniform", method="multistep", order=2, lower_order_final=True) - - return x.to(device), None \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py deleted file mode 100644 index bfdb4ea7e12761fa1440e484c83bcaa3de7844c9..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py +++ /dev/null @@ -1,2117 +0,0 @@ -from __future__ import annotations - -import array -import asyncio -import concurrent.futures -import math -import socket -import sys -from asyncio.base_events import _run_until_complete_cb # type: ignore[attr-defined] -from collections import OrderedDict, deque -from concurrent.futures import Future -from contextvars import Context, copy_context -from dataclasses import dataclass -from functools import partial, wraps -from inspect import ( - CORO_RUNNING, - CORO_SUSPENDED, - GEN_RUNNING, - GEN_SUSPENDED, - getcoroutinestate, - getgeneratorstate, -) -from io import IOBase -from os import PathLike -from queue import Queue -from socket import AddressFamily, SocketKind -from threading import Thread -from types import TracebackType -from typing import ( - IO, - Any, - AsyncGenerator, - Awaitable, - Callable, - Collection, - Coroutine, - Generator, - Iterable, - Mapping, - Optional, - Sequence, - Tuple, - TypeVar, - Union, - cast, -) -from weakref import WeakKeyDictionary - -import sniffio - -from .. import CapacityLimiterStatistics, EventStatistics, TaskInfo, abc -from .._core._compat import DeprecatedAsyncContextManager, DeprecatedAwaitable -from .._core._eventloop import claim_worker_thread, threadlocals -from .._core._exceptions import ( - BrokenResourceError, - BusyResourceError, - ClosedResourceError, - EndOfStream, - WouldBlock, -) -from .._core._exceptions import ExceptionGroup as BaseExceptionGroup -from .._core._sockets import GetAddrInfoReturnType, convert_ipv6_sockaddr -from .._core._synchronization import CapacityLimiter as BaseCapacityLimiter -from .._core._synchronization import Event as BaseEvent -from .._core._synchronization import ResourceGuard -from .._core._tasks import CancelScope as BaseCancelScope -from ..abc import IPSockAddrType, UDPPacketType -from ..lowlevel import RunVar - -if sys.version_info >= (3, 8): - - def get_coro(task: asyncio.Task) -> Generator | Awaitable[Any]: - return task.get_coro() - -else: - - def get_coro(task: asyncio.Task) -> Generator | Awaitable[Any]: - return task._coro - - -from asyncio import all_tasks, create_task, current_task, get_running_loop -from asyncio import run as native_run - - -def _get_task_callbacks(task: asyncio.Task) -> Iterable[Callable]: - return [cb for cb, context in task._callbacks] - - -T_Retval = TypeVar("T_Retval") -T_contra = TypeVar("T_contra", contravariant=True) - -# Check whether there is native support for task names in asyncio (3.8+) -_native_task_names = hasattr(asyncio.Task, "get_name") - - -_root_task: RunVar[asyncio.Task | None] = RunVar("_root_task") - - -def find_root_task() -> asyncio.Task: - root_task = _root_task.get(None) - if root_task is not None and not root_task.done(): - return root_task - - # Look for a task that has been started via run_until_complete() - for task in all_tasks(): - if task._callbacks and not task.done(): - for cb in _get_task_callbacks(task): - if ( - cb is _run_until_complete_cb - or getattr(cb, "__module__", None) == "uvloop.loop" - ): - _root_task.set(task) - return task - - # Look up the topmost task in the AnyIO task tree, if possible - task = cast(asyncio.Task, current_task()) - state = _task_states.get(task) - if state: - cancel_scope = state.cancel_scope - while cancel_scope and cancel_scope._parent_scope is not None: - cancel_scope = cancel_scope._parent_scope - - if cancel_scope is not None: - return cast(asyncio.Task, cancel_scope._host_task) - - return task - - -def get_callable_name(func: Callable) -> str: - module = getattr(func, "__module__", None) - qualname = getattr(func, "__qualname__", None) - return ".".join([x for x in (module, qualname) if x]) - - -# -# Event loop -# - -_run_vars = ( - WeakKeyDictionary() -) # type: WeakKeyDictionary[asyncio.AbstractEventLoop, Any] - -current_token = get_running_loop - - -def _task_started(task: asyncio.Task) -> bool: - """Return ``True`` if the task has been started and has not finished.""" - coro = cast(Coroutine[Any, Any, Any], get_coro(task)) - try: - return getcoroutinestate(coro) in (CORO_RUNNING, CORO_SUSPENDED) - except AttributeError: - try: - return getgeneratorstate(cast(Generator, coro)) in ( - GEN_RUNNING, - GEN_SUSPENDED, - ) - except AttributeError: - # task coro is async_genenerator_asend https://bugs.python.org/issue37771 - raise Exception(f"Cannot determine if task {task} has started or not") - - -def _maybe_set_event_loop_policy( - policy: asyncio.AbstractEventLoopPolicy | None, use_uvloop: bool -) -> None: - # On CPython, use uvloop when possible if no other policy has been given and if not - # explicitly disabled - if policy is None and use_uvloop and sys.implementation.name == "cpython": - try: - import uvloop - except ImportError: - pass - else: - # Test for missing shutdown_default_executor() (uvloop 0.14.0 and earlier) - if not hasattr( - asyncio.AbstractEventLoop, "shutdown_default_executor" - ) or hasattr(uvloop.loop.Loop, "shutdown_default_executor"): - policy = uvloop.EventLoopPolicy() - - if policy is not None: - asyncio.set_event_loop_policy(policy) - - -def run( - func: Callable[..., Awaitable[T_Retval]], - *args: object, - debug: bool = False, - use_uvloop: bool = False, - policy: asyncio.AbstractEventLoopPolicy | None = None, -) -> T_Retval: - @wraps(func) - async def wrapper() -> T_Retval: - task = cast(asyncio.Task, current_task()) - task_state = TaskState(None, get_callable_name(func), None) - _task_states[task] = task_state - if _native_task_names: - task.set_name(task_state.name) - - try: - return await func(*args) - finally: - del _task_states[task] - - _maybe_set_event_loop_policy(policy, use_uvloop) - return native_run(wrapper(), debug=debug) - - -# -# Miscellaneous -# - -sleep = asyncio.sleep - - -# -# Timeouts and cancellation -# - -CancelledError = asyncio.CancelledError - - -class CancelScope(BaseCancelScope): - def __new__( - cls, *, deadline: float = math.inf, shield: bool = False - ) -> CancelScope: - return object.__new__(cls) - - def __init__(self, deadline: float = math.inf, shield: bool = False): - self._deadline = deadline - self._shield = shield - self._parent_scope: CancelScope | None = None - self._cancel_called = False - self._active = False - self._timeout_handle: asyncio.TimerHandle | None = None - self._cancel_handle: asyncio.Handle | None = None - self._tasks: set[asyncio.Task] = set() - self._host_task: asyncio.Task | None = None - self._timeout_expired = False - self._cancel_calls: int = 0 - - def __enter__(self) -> CancelScope: - if self._active: - raise RuntimeError( - "Each CancelScope may only be used for a single 'with' block" - ) - - self._host_task = host_task = cast(asyncio.Task, current_task()) - self._tasks.add(host_task) - try: - task_state = _task_states[host_task] - except KeyError: - task_name = host_task.get_name() if _native_task_names else None - task_state = TaskState(None, task_name, self) - _task_states[host_task] = task_state - else: - self._parent_scope = task_state.cancel_scope - task_state.cancel_scope = self - - self._timeout() - self._active = True - - # Start cancelling the host task if the scope was cancelled before entering - if self._cancel_called: - self._deliver_cancellation() - - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - if not self._active: - raise RuntimeError("This cancel scope is not active") - if current_task() is not self._host_task: - raise RuntimeError( - "Attempted to exit cancel scope in a different task than it was " - "entered in" - ) - - assert self._host_task is not None - host_task_state = _task_states.get(self._host_task) - if host_task_state is None or host_task_state.cancel_scope is not self: - raise RuntimeError( - "Attempted to exit a cancel scope that isn't the current tasks's " - "current cancel scope" - ) - - self._active = False - if self._timeout_handle: - self._timeout_handle.cancel() - self._timeout_handle = None - - self._tasks.remove(self._host_task) - - host_task_state.cancel_scope = self._parent_scope - - # Restart the cancellation effort in the farthest directly cancelled parent scope if this - # one was shielded - if self._shield: - self._deliver_cancellation_to_parent() - - if exc_val is not None: - exceptions = ( - exc_val.exceptions if isinstance(exc_val, ExceptionGroup) else [exc_val] - ) - if all(isinstance(exc, CancelledError) for exc in exceptions): - if self._timeout_expired: - return self._uncancel() - elif not self._cancel_called: - # Task was cancelled natively - return None - elif not self._parent_cancelled(): - # This scope was directly cancelled - return self._uncancel() - - return None - - def _uncancel(self) -> bool: - if sys.version_info < (3, 11) or self._host_task is None: - self._cancel_calls = 0 - return True - - # Uncancel all AnyIO cancellations - for i in range(self._cancel_calls): - self._host_task.uncancel() - - self._cancel_calls = 0 - return not self._host_task.cancelling() - - def _timeout(self) -> None: - if self._deadline != math.inf: - loop = get_running_loop() - if loop.time() >= self._deadline: - self._timeout_expired = True - self.cancel() - else: - self._timeout_handle = loop.call_at(self._deadline, self._timeout) - - def _deliver_cancellation(self) -> None: - """ - Deliver cancellation to directly contained tasks and nested cancel scopes. - - Schedule another run at the end if we still have tasks eligible for cancellation. - """ - should_retry = False - current = current_task() - for task in self._tasks: - if task._must_cancel: # type: ignore[attr-defined] - continue - - # The task is eligible for cancellation if it has started and is not in a cancel - # scope shielded from this one - cancel_scope = _task_states[task].cancel_scope - while cancel_scope is not self: - if cancel_scope is None or cancel_scope._shield: - break - else: - cancel_scope = cancel_scope._parent_scope - else: - should_retry = True - if task is not current and ( - task is self._host_task or _task_started(task) - ): - self._cancel_calls += 1 - task.cancel() - - # Schedule another callback if there are still tasks left - if should_retry: - self._cancel_handle = get_running_loop().call_soon( - self._deliver_cancellation - ) - else: - self._cancel_handle = None - - def _deliver_cancellation_to_parent(self) -> None: - """Start cancellation effort in the farthest directly cancelled parent scope""" - scope = self._parent_scope - scope_to_cancel: CancelScope | None = None - while scope is not None: - if scope._cancel_called and scope._cancel_handle is None: - scope_to_cancel = scope - - # No point in looking beyond any shielded scope - if scope._shield: - break - - scope = scope._parent_scope - - if scope_to_cancel is not None: - scope_to_cancel._deliver_cancellation() - - def _parent_cancelled(self) -> bool: - # Check whether any parent has been cancelled - cancel_scope = self._parent_scope - while cancel_scope is not None and not cancel_scope._shield: - if cancel_scope._cancel_called: - return True - else: - cancel_scope = cancel_scope._parent_scope - - return False - - def cancel(self) -> DeprecatedAwaitable: - if not self._cancel_called: - if self._timeout_handle: - self._timeout_handle.cancel() - self._timeout_handle = None - - self._cancel_called = True - if self._host_task is not None: - self._deliver_cancellation() - - return DeprecatedAwaitable(self.cancel) - - @property - def deadline(self) -> float: - return self._deadline - - @deadline.setter - def deadline(self, value: float) -> None: - self._deadline = float(value) - if self._timeout_handle is not None: - self._timeout_handle.cancel() - self._timeout_handle = None - - if self._active and not self._cancel_called: - self._timeout() - - @property - def cancel_called(self) -> bool: - return self._cancel_called - - @property - def shield(self) -> bool: - return self._shield - - @shield.setter - def shield(self, value: bool) -> None: - if self._shield != value: - self._shield = value - if not value: - self._deliver_cancellation_to_parent() - - -async def checkpoint() -> None: - await sleep(0) - - -async def checkpoint_if_cancelled() -> None: - task = current_task() - if task is None: - return - - try: - cancel_scope = _task_states[task].cancel_scope - except KeyError: - return - - while cancel_scope: - if cancel_scope.cancel_called: - await sleep(0) - elif cancel_scope.shield: - break - else: - cancel_scope = cancel_scope._parent_scope - - -async def cancel_shielded_checkpoint() -> None: - with CancelScope(shield=True): - await sleep(0) - - -def current_effective_deadline() -> float: - try: - cancel_scope = _task_states[current_task()].cancel_scope # type: ignore[index] - except KeyError: - return math.inf - - deadline = math.inf - while cancel_scope: - deadline = min(deadline, cancel_scope.deadline) - if cancel_scope._cancel_called: - deadline = -math.inf - break - elif cancel_scope.shield: - break - else: - cancel_scope = cancel_scope._parent_scope - - return deadline - - -def current_time() -> float: - return get_running_loop().time() - - -# -# Task states -# - - -class TaskState: - """ - Encapsulates auxiliary task information that cannot be added to the Task instance itself - because there are no guarantees about its implementation. - """ - - __slots__ = "parent_id", "name", "cancel_scope" - - def __init__( - self, - parent_id: int | None, - name: str | None, - cancel_scope: CancelScope | None, - ): - self.parent_id = parent_id - self.name = name - self.cancel_scope = cancel_scope - - -_task_states = WeakKeyDictionary() # type: WeakKeyDictionary[asyncio.Task, TaskState] - - -# -# Task groups -# - - -class ExceptionGroup(BaseExceptionGroup): - def __init__(self, exceptions: list[BaseException]): - super().__init__() - self.exceptions = exceptions - - -class _AsyncioTaskStatus(abc.TaskStatus): - def __init__(self, future: asyncio.Future, parent_id: int): - self._future = future - self._parent_id = parent_id - - def started(self, value: T_contra | None = None) -> None: - try: - self._future.set_result(value) - except asyncio.InvalidStateError: - raise RuntimeError( - "called 'started' twice on the same task status" - ) from None - - task = cast(asyncio.Task, current_task()) - _task_states[task].parent_id = self._parent_id - - -class TaskGroup(abc.TaskGroup): - def __init__(self) -> None: - self.cancel_scope: CancelScope = CancelScope() - self._active = False - self._exceptions: list[BaseException] = [] - - async def __aenter__(self) -> TaskGroup: - self.cancel_scope.__enter__() - self._active = True - return self - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - ignore_exception = self.cancel_scope.__exit__(exc_type, exc_val, exc_tb) - if exc_val is not None: - self.cancel_scope.cancel() - self._exceptions.append(exc_val) - - while self.cancel_scope._tasks: - try: - await asyncio.wait(self.cancel_scope._tasks) - except asyncio.CancelledError: - self.cancel_scope.cancel() - - self._active = False - if not self.cancel_scope._parent_cancelled(): - exceptions = self._filter_cancellation_errors(self._exceptions) - else: - exceptions = self._exceptions - - try: - if len(exceptions) > 1: - if all( - isinstance(e, CancelledError) and not e.args for e in exceptions - ): - # Tasks were cancelled natively, without a cancellation message - raise CancelledError - else: - raise ExceptionGroup(exceptions) - elif exceptions and exceptions[0] is not exc_val: - raise exceptions[0] - except BaseException as exc: - # Clear the context here, as it can only be done in-flight. - # If the context is not cleared, it can result in recursive tracebacks (see #145). - exc.__context__ = None - raise - - return ignore_exception - - @staticmethod - def _filter_cancellation_errors( - exceptions: Sequence[BaseException], - ) -> list[BaseException]: - filtered_exceptions: list[BaseException] = [] - for exc in exceptions: - if isinstance(exc, ExceptionGroup): - new_exceptions = TaskGroup._filter_cancellation_errors(exc.exceptions) - if len(new_exceptions) > 1: - filtered_exceptions.append(exc) - elif len(new_exceptions) == 1: - filtered_exceptions.append(new_exceptions[0]) - elif new_exceptions: - new_exc = ExceptionGroup(new_exceptions) - new_exc.__cause__ = exc.__cause__ - new_exc.__context__ = exc.__context__ - new_exc.__traceback__ = exc.__traceback__ - filtered_exceptions.append(new_exc) - elif not isinstance(exc, CancelledError) or exc.args: - filtered_exceptions.append(exc) - - return filtered_exceptions - - async def _run_wrapped_task( - self, coro: Coroutine, task_status_future: asyncio.Future | None - ) -> None: - # This is the code path for Python 3.7 on which asyncio freaks out if a task - # raises a BaseException. - __traceback_hide__ = __tracebackhide__ = True # noqa: F841 - task = cast(asyncio.Task, current_task()) - try: - await coro - except BaseException as exc: - if task_status_future is None or task_status_future.done(): - self._exceptions.append(exc) - self.cancel_scope.cancel() - else: - task_status_future.set_exception(exc) - else: - if task_status_future is not None and not task_status_future.done(): - task_status_future.set_exception( - RuntimeError("Child exited without calling task_status.started()") - ) - finally: - if task in self.cancel_scope._tasks: - self.cancel_scope._tasks.remove(task) - del _task_states[task] - - def _spawn( - self, - func: Callable[..., Awaitable[Any]], - args: tuple, - name: object, - task_status_future: asyncio.Future | None = None, - ) -> asyncio.Task: - def task_done(_task: asyncio.Task) -> None: - # This is the code path for Python 3.8+ - assert _task in self.cancel_scope._tasks - self.cancel_scope._tasks.remove(_task) - del _task_states[_task] - - try: - exc = _task.exception() - except CancelledError as e: - while isinstance(e.__context__, CancelledError): - e = e.__context__ - - exc = e - - if exc is not None: - if task_status_future is None or task_status_future.done(): - self._exceptions.append(exc) - self.cancel_scope.cancel() - else: - task_status_future.set_exception(exc) - elif task_status_future is not None and not task_status_future.done(): - task_status_future.set_exception( - RuntimeError("Child exited without calling task_status.started()") - ) - - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." - ) - - options: dict[str, Any] = {} - name = get_callable_name(func) if name is None else str(name) - if _native_task_names: - options["name"] = name - - kwargs = {} - if task_status_future: - parent_id = id(current_task()) - kwargs["task_status"] = _AsyncioTaskStatus( - task_status_future, id(self.cancel_scope._host_task) - ) - else: - parent_id = id(self.cancel_scope._host_task) - - coro = func(*args, **kwargs) - if not asyncio.iscoroutine(coro): - raise TypeError( - f"Expected an async function, but {func} appears to be synchronous" - ) - - foreign_coro = not hasattr(coro, "cr_frame") and not hasattr(coro, "gi_frame") - if foreign_coro or sys.version_info < (3, 8): - coro = self._run_wrapped_task(coro, task_status_future) - - task = create_task(coro, **options) - if not foreign_coro and sys.version_info >= (3, 8): - task.add_done_callback(task_done) - - # Make the spawned task inherit the task group's cancel scope - _task_states[task] = TaskState( - parent_id=parent_id, name=name, cancel_scope=self.cancel_scope - ) - self.cancel_scope._tasks.add(task) - return task - - def start_soon( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - self._spawn(func, args, name) - - async def start( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - future: asyncio.Future = asyncio.Future() - task = self._spawn(func, args, name, future) - - # If the task raises an exception after sending a start value without a switch point - # between, the task group is cancelled and this method never proceeds to process the - # completed future. That's why we have to have a shielded cancel scope here. - with CancelScope(shield=True): - try: - return await future - except CancelledError: - task.cancel() - raise - - -# -# Threads -# - -_Retval_Queue_Type = Tuple[Optional[T_Retval], Optional[BaseException]] - - -class WorkerThread(Thread): - MAX_IDLE_TIME = 10 # seconds - - def __init__( - self, - root_task: asyncio.Task, - workers: set[WorkerThread], - idle_workers: deque[WorkerThread], - ): - super().__init__(name="AnyIO worker thread") - self.root_task = root_task - self.workers = workers - self.idle_workers = idle_workers - self.loop = root_task._loop - self.queue: Queue[ - tuple[Context, Callable, tuple, asyncio.Future] | None - ] = Queue(2) - self.idle_since = current_time() - self.stopping = False - - def _report_result( - self, future: asyncio.Future, result: Any, exc: BaseException | None - ) -> None: - self.idle_since = current_time() - if not self.stopping: - self.idle_workers.append(self) - - if not future.cancelled(): - if exc is not None: - if isinstance(exc, StopIteration): - new_exc = RuntimeError("coroutine raised StopIteration") - new_exc.__cause__ = exc - exc = new_exc - - future.set_exception(exc) - else: - future.set_result(result) - - def run(self) -> None: - with claim_worker_thread("asyncio"): - threadlocals.loop = self.loop - while True: - item = self.queue.get() - if item is None: - # Shutdown command received - return - - context, func, args, future = item - if not future.cancelled(): - result = None - exception: BaseException | None = None - try: - result = context.run(func, *args) - except BaseException as exc: - exception = exc - - if not self.loop.is_closed(): - self.loop.call_soon_threadsafe( - self._report_result, future, result, exception - ) - - self.queue.task_done() - - def stop(self, f: asyncio.Task | None = None) -> None: - self.stopping = True - self.queue.put_nowait(None) - self.workers.discard(self) - try: - self.idle_workers.remove(self) - except ValueError: - pass - - -_threadpool_idle_workers: RunVar[deque[WorkerThread]] = RunVar( - "_threadpool_idle_workers" -) -_threadpool_workers: RunVar[set[WorkerThread]] = RunVar("_threadpool_workers") - - -async def run_sync_in_worker_thread( - func: Callable[..., T_Retval], - *args: object, - cancellable: bool = False, - limiter: CapacityLimiter | None = None, -) -> T_Retval: - await checkpoint() - - # If this is the first run in this event loop thread, set up the necessary variables - try: - idle_workers = _threadpool_idle_workers.get() - workers = _threadpool_workers.get() - except LookupError: - idle_workers = deque() - workers = set() - _threadpool_idle_workers.set(idle_workers) - _threadpool_workers.set(workers) - - async with (limiter or current_default_thread_limiter()): - with CancelScope(shield=not cancellable): - future: asyncio.Future = asyncio.Future() - root_task = find_root_task() - if not idle_workers: - worker = WorkerThread(root_task, workers, idle_workers) - worker.start() - workers.add(worker) - root_task.add_done_callback(worker.stop) - else: - worker = idle_workers.pop() - - # Prune any other workers that have been idle for MAX_IDLE_TIME seconds or longer - now = current_time() - while idle_workers: - if now - idle_workers[0].idle_since < WorkerThread.MAX_IDLE_TIME: - break - - expired_worker = idle_workers.popleft() - expired_worker.root_task.remove_done_callback(expired_worker.stop) - expired_worker.stop() - - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, None) - worker.queue.put_nowait((context, func, args, future)) - return await future - - -def run_sync_from_thread( - func: Callable[..., T_Retval], - *args: object, - loop: asyncio.AbstractEventLoop | None = None, -) -> T_Retval: - @wraps(func) - def wrapper() -> None: - try: - f.set_result(func(*args)) - except BaseException as exc: - f.set_exception(exc) - if not isinstance(exc, Exception): - raise - - f: concurrent.futures.Future[T_Retval] = Future() - loop = loop or threadlocals.loop - loop.call_soon_threadsafe(wrapper) - return f.result() - - -def run_async_from_thread( - func: Callable[..., Awaitable[T_Retval]], *args: object -) -> T_Retval: - f: concurrent.futures.Future[T_Retval] = asyncio.run_coroutine_threadsafe( - func(*args), threadlocals.loop - ) - return f.result() - - -class BlockingPortal(abc.BlockingPortal): - def __new__(cls) -> BlockingPortal: - return object.__new__(cls) - - def __init__(self) -> None: - super().__init__() - self._loop = get_running_loop() - - def _spawn_task_from_thread( - self, - func: Callable, - args: tuple, - kwargs: dict[str, Any], - name: object, - future: Future, - ) -> None: - run_sync_from_thread( - partial(self._task_group.start_soon, name=name), - self._call_func, - func, - args, - kwargs, - future, - loop=self._loop, - ) - - -# -# Subprocesses -# - - -@dataclass(eq=False) -class StreamReaderWrapper(abc.ByteReceiveStream): - _stream: asyncio.StreamReader - - async def receive(self, max_bytes: int = 65536) -> bytes: - data = await self._stream.read(max_bytes) - if data: - return data - else: - raise EndOfStream - - async def aclose(self) -> None: - self._stream.feed_eof() - - -@dataclass(eq=False) -class StreamWriterWrapper(abc.ByteSendStream): - _stream: asyncio.StreamWriter - - async def send(self, item: bytes) -> None: - self._stream.write(item) - await self._stream.drain() - - async def aclose(self) -> None: - self._stream.close() - - -@dataclass(eq=False) -class Process(abc.Process): - _process: asyncio.subprocess.Process - _stdin: StreamWriterWrapper | None - _stdout: StreamReaderWrapper | None - _stderr: StreamReaderWrapper | None - - async def aclose(self) -> None: - if self._stdin: - await self._stdin.aclose() - if self._stdout: - await self._stdout.aclose() - if self._stderr: - await self._stderr.aclose() - - await self.wait() - - async def wait(self) -> int: - return await self._process.wait() - - def terminate(self) -> None: - self._process.terminate() - - def kill(self) -> None: - self._process.kill() - - def send_signal(self, signal: int) -> None: - self._process.send_signal(signal) - - @property - def pid(self) -> int: - return self._process.pid - - @property - def returncode(self) -> int | None: - return self._process.returncode - - @property - def stdin(self) -> abc.ByteSendStream | None: - return self._stdin - - @property - def stdout(self) -> abc.ByteReceiveStream | None: - return self._stdout - - @property - def stderr(self) -> abc.ByteReceiveStream | None: - return self._stderr - - -async def open_process( - command: str | bytes | Sequence[str | bytes], - *, - shell: bool, - stdin: int | IO[Any] | None, - stdout: int | IO[Any] | None, - stderr: int | IO[Any] | None, - cwd: str | bytes | PathLike | None = None, - env: Mapping[str, str] | None = None, - start_new_session: bool = False, -) -> Process: - await checkpoint() - if shell: - process = await asyncio.create_subprocess_shell( - cast(Union[str, bytes], command), - stdin=stdin, - stdout=stdout, - stderr=stderr, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - else: - process = await asyncio.create_subprocess_exec( - *command, - stdin=stdin, - stdout=stdout, - stderr=stderr, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - - stdin_stream = StreamWriterWrapper(process.stdin) if process.stdin else None - stdout_stream = StreamReaderWrapper(process.stdout) if process.stdout else None - stderr_stream = StreamReaderWrapper(process.stderr) if process.stderr else None - return Process(process, stdin_stream, stdout_stream, stderr_stream) - - -def _forcibly_shutdown_process_pool_on_exit( - workers: set[Process], _task: object -) -> None: - """ - Forcibly shuts down worker processes belonging to this event loop.""" - child_watcher: asyncio.AbstractChildWatcher | None - try: - child_watcher = asyncio.get_event_loop_policy().get_child_watcher() - except NotImplementedError: - child_watcher = None - - # Close as much as possible (w/o async/await) to avoid warnings - for process in workers: - if process.returncode is None: - continue - - process._stdin._stream._transport.close() # type: ignore[union-attr] - process._stdout._stream._transport.close() # type: ignore[union-attr] - process._stderr._stream._transport.close() # type: ignore[union-attr] - process.kill() - if child_watcher: - child_watcher.remove_child_handler(process.pid) - - -async def _shutdown_process_pool_on_exit(workers: set[Process]) -> None: - """ - Shuts down worker processes belonging to this event loop. - - NOTE: this only works when the event loop was started using asyncio.run() or anyio.run(). - - """ - process: Process - try: - await sleep(math.inf) - except asyncio.CancelledError: - for process in workers: - if process.returncode is None: - process.kill() - - for process in workers: - await process.aclose() - - -def setup_process_pool_exit_at_shutdown(workers: set[Process]) -> None: - kwargs: dict[str, Any] = ( - {"name": "AnyIO process pool shutdown task"} if _native_task_names else {} - ) - create_task(_shutdown_process_pool_on_exit(workers), **kwargs) - find_root_task().add_done_callback( - partial(_forcibly_shutdown_process_pool_on_exit, workers) - ) - - -# -# Sockets and networking -# - - -class StreamProtocol(asyncio.Protocol): - read_queue: deque[bytes] - read_event: asyncio.Event - write_event: asyncio.Event - exception: Exception | None = None - - def connection_made(self, transport: asyncio.BaseTransport) -> None: - self.read_queue = deque() - self.read_event = asyncio.Event() - self.write_event = asyncio.Event() - self.write_event.set() - cast(asyncio.Transport, transport).set_write_buffer_limits(0) - - def connection_lost(self, exc: Exception | None) -> None: - if exc: - self.exception = BrokenResourceError() - self.exception.__cause__ = exc - - self.read_event.set() - self.write_event.set() - - def data_received(self, data: bytes) -> None: - self.read_queue.append(data) - self.read_event.set() - - def eof_received(self) -> bool | None: - self.read_event.set() - return True - - def pause_writing(self) -> None: - self.write_event = asyncio.Event() - - def resume_writing(self) -> None: - self.write_event.set() - - -class DatagramProtocol(asyncio.DatagramProtocol): - read_queue: deque[tuple[bytes, IPSockAddrType]] - read_event: asyncio.Event - write_event: asyncio.Event - exception: Exception | None = None - - def connection_made(self, transport: asyncio.BaseTransport) -> None: - self.read_queue = deque(maxlen=100) # arbitrary value - self.read_event = asyncio.Event() - self.write_event = asyncio.Event() - self.write_event.set() - - def connection_lost(self, exc: Exception | None) -> None: - self.read_event.set() - self.write_event.set() - - def datagram_received(self, data: bytes, addr: IPSockAddrType) -> None: - addr = convert_ipv6_sockaddr(addr) - self.read_queue.append((data, addr)) - self.read_event.set() - - def error_received(self, exc: Exception) -> None: - self.exception = exc - - def pause_writing(self) -> None: - self.write_event.clear() - - def resume_writing(self) -> None: - self.write_event.set() - - -class SocketStream(abc.SocketStream): - def __init__(self, transport: asyncio.Transport, protocol: StreamProtocol): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def receive(self, max_bytes: int = 65536) -> bytes: - with self._receive_guard: - await checkpoint() - - if ( - not self._protocol.read_event.is_set() - and not self._transport.is_closing() - ): - self._transport.resume_reading() - await self._protocol.read_event.wait() - self._transport.pause_reading() - - try: - chunk = self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - elif self._protocol.exception: - raise self._protocol.exception - else: - raise EndOfStream from None - - if len(chunk) > max_bytes: - # Split the oversized chunk - chunk, leftover = chunk[:max_bytes], chunk[max_bytes:] - self._protocol.read_queue.appendleft(leftover) - - # If the read queue is empty, clear the flag so that the next call will block until - # data is available - if not self._protocol.read_queue: - self._protocol.read_event.clear() - - return chunk - - async def send(self, item: bytes) -> None: - with self._send_guard: - await checkpoint() - - if self._closed: - raise ClosedResourceError - elif self._protocol.exception is not None: - raise self._protocol.exception - - try: - self._transport.write(item) - except RuntimeError as exc: - if self._transport.is_closing(): - raise BrokenResourceError from exc - else: - raise - - await self._protocol.write_event.wait() - - async def send_eof(self) -> None: - try: - self._transport.write_eof() - except OSError: - pass - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - try: - self._transport.write_eof() - except OSError: - pass - - self._transport.close() - await sleep(0) - self._transport.abort() - - -class UNIXSocketStream(abc.SocketStream): - _receive_future: asyncio.Future | None = None - _send_future: asyncio.Future | None = None - _closing = False - - def __init__(self, raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = get_running_loop() - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - def _wait_until_readable(self, loop: asyncio.AbstractEventLoop) -> asyncio.Future: - def callback(f: object) -> None: - del self._receive_future - loop.remove_reader(self.__raw_socket) - - f = self._receive_future = asyncio.Future() - self._loop.add_reader(self.__raw_socket, f.set_result, None) - f.add_done_callback(callback) - return f - - def _wait_until_writable(self, loop: asyncio.AbstractEventLoop) -> asyncio.Future: - def callback(f: object) -> None: - del self._send_future - loop.remove_writer(self.__raw_socket) - - f = self._send_future = asyncio.Future() - self._loop.add_writer(self.__raw_socket, f.set_result, None) - f.add_done_callback(callback) - return f - - async def send_eof(self) -> None: - with self._send_guard: - self._raw_socket.shutdown(socket.SHUT_WR) - - async def receive(self, max_bytes: int = 65536) -> bytes: - loop = get_running_loop() - await checkpoint() - with self._receive_guard: - while True: - try: - data = self.__raw_socket.recv(max_bytes) - except BlockingIOError: - await self._wait_until_readable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - if not data: - raise EndOfStream - - return data - - async def send(self, item: bytes) -> None: - loop = get_running_loop() - await checkpoint() - with self._send_guard: - view = memoryview(item) - while view: - try: - bytes_sent = self.__raw_socket.send(view) - except BlockingIOError: - await self._wait_until_writable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - view = view[bytes_sent:] - - async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]: - if not isinstance(msglen, int) or msglen < 0: - raise ValueError("msglen must be a non-negative integer") - if not isinstance(maxfds, int) or maxfds < 1: - raise ValueError("maxfds must be a positive integer") - - loop = get_running_loop() - fds = array.array("i") - await checkpoint() - with self._receive_guard: - while True: - try: - message, ancdata, flags, addr = self.__raw_socket.recvmsg( - msglen, socket.CMSG_LEN(maxfds * fds.itemsize) - ) - except BlockingIOError: - await self._wait_until_readable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - if not message and not ancdata: - raise EndOfStream - - break - - for cmsg_level, cmsg_type, cmsg_data in ancdata: - if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS: - raise RuntimeError( - f"Received unexpected ancillary data; message = {message!r}, " - f"cmsg_level = {cmsg_level}, cmsg_type = {cmsg_type}" - ) - - fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) - - return message, list(fds) - - async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None: - if not message: - raise ValueError("message must not be empty") - if not fds: - raise ValueError("fds must not be empty") - - loop = get_running_loop() - filenos: list[int] = [] - for fd in fds: - if isinstance(fd, int): - filenos.append(fd) - elif isinstance(fd, IOBase): - filenos.append(fd.fileno()) - - fdarray = array.array("i", filenos) - await checkpoint() - with self._send_guard: - while True: - try: - # The ignore can be removed after mypy picks up - # https://github.com/python/typeshed/pull/5545 - self.__raw_socket.sendmsg( - [message], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fdarray)] - ) - break - except BlockingIOError: - await self._wait_until_writable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - - async def aclose(self) -> None: - if not self._closing: - self._closing = True - if self.__raw_socket.fileno() != -1: - self.__raw_socket.close() - - if self._receive_future: - self._receive_future.set_result(None) - if self._send_future: - self._send_future.set_result(None) - - -class TCPSocketListener(abc.SocketListener): - _accept_scope: CancelScope | None = None - _closed = False - - def __init__(self, raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = cast(asyncio.BaseEventLoop, get_running_loop()) - self._accept_guard = ResourceGuard("accepting connections from") - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - async def accept(self) -> abc.SocketStream: - if self._closed: - raise ClosedResourceError - - with self._accept_guard: - await checkpoint() - with CancelScope() as self._accept_scope: - try: - client_sock, _addr = await self._loop.sock_accept(self._raw_socket) - except asyncio.CancelledError: - # Workaround for https://bugs.python.org/issue41317 - try: - self._loop.remove_reader(self._raw_socket) - except (ValueError, NotImplementedError): - pass - - if self._closed: - raise ClosedResourceError from None - - raise - finally: - self._accept_scope = None - - client_sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - transport, protocol = await self._loop.connect_accepted_socket( - StreamProtocol, client_sock - ) - return SocketStream(transport, protocol) - - async def aclose(self) -> None: - if self._closed: - return - - self._closed = True - if self._accept_scope: - # Workaround for https://bugs.python.org/issue41317 - try: - self._loop.remove_reader(self._raw_socket) - except (ValueError, NotImplementedError): - pass - - self._accept_scope.cancel() - await sleep(0) - - self._raw_socket.close() - - -class UNIXSocketListener(abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = get_running_loop() - self._accept_guard = ResourceGuard("accepting connections from") - self._closed = False - - async def accept(self) -> abc.SocketStream: - await checkpoint() - with self._accept_guard: - while True: - try: - client_sock, _ = self.__raw_socket.accept() - client_sock.setblocking(False) - return UNIXSocketStream(client_sock) - except BlockingIOError: - f: asyncio.Future = asyncio.Future() - self._loop.add_reader(self.__raw_socket, f.set_result, None) - f.add_done_callback( - lambda _: self._loop.remove_reader(self.__raw_socket) - ) - await f - except OSError as exc: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - - async def aclose(self) -> None: - self._closed = True - self.__raw_socket.close() - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - -class UDPSocket(abc.UDPSocket): - def __init__( - self, transport: asyncio.DatagramTransport, protocol: DatagramProtocol - ): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - self._transport.close() - - async def receive(self) -> tuple[bytes, IPSockAddrType]: - with self._receive_guard: - await checkpoint() - - # If the buffer is empty, ask for more data - if not self._protocol.read_queue and not self._transport.is_closing(): - self._protocol.read_event.clear() - await self._protocol.read_event.wait() - - try: - return self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from None - - async def send(self, item: UDPPacketType) -> None: - with self._send_guard: - await checkpoint() - await self._protocol.write_event.wait() - if self._closed: - raise ClosedResourceError - elif self._transport.is_closing(): - raise BrokenResourceError - else: - self._transport.sendto(*item) - - -class ConnectedUDPSocket(abc.ConnectedUDPSocket): - def __init__( - self, transport: asyncio.DatagramTransport, protocol: DatagramProtocol - ): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - self._transport.close() - - async def receive(self) -> bytes: - with self._receive_guard: - await checkpoint() - - # If the buffer is empty, ask for more data - if not self._protocol.read_queue and not self._transport.is_closing(): - self._protocol.read_event.clear() - await self._protocol.read_event.wait() - - try: - packet = self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from None - - return packet[0] - - async def send(self, item: bytes) -> None: - with self._send_guard: - await checkpoint() - await self._protocol.write_event.wait() - if self._closed: - raise ClosedResourceError - elif self._transport.is_closing(): - raise BrokenResourceError - else: - self._transport.sendto(item) - - -async def connect_tcp( - host: str, port: int, local_addr: tuple[str, int] | None = None -) -> SocketStream: - transport, protocol = cast( - Tuple[asyncio.Transport, StreamProtocol], - await get_running_loop().create_connection( - StreamProtocol, host, port, local_addr=local_addr - ), - ) - transport.pause_reading() - return SocketStream(transport, protocol) - - -async def connect_unix(path: str) -> UNIXSocketStream: - await checkpoint() - loop = get_running_loop() - raw_socket = socket.socket(socket.AF_UNIX) - raw_socket.setblocking(False) - while True: - try: - raw_socket.connect(path) - except BlockingIOError: - f: asyncio.Future = asyncio.Future() - loop.add_writer(raw_socket, f.set_result, None) - f.add_done_callback(lambda _: loop.remove_writer(raw_socket)) - await f - except BaseException: - raw_socket.close() - raise - else: - return UNIXSocketStream(raw_socket) - - -async def create_udp_socket( - family: socket.AddressFamily, - local_address: IPSockAddrType | None, - remote_address: IPSockAddrType | None, - reuse_port: bool, -) -> UDPSocket | ConnectedUDPSocket: - result = await get_running_loop().create_datagram_endpoint( - DatagramProtocol, - local_addr=local_address, - remote_addr=remote_address, - family=family, - reuse_port=reuse_port, - ) - transport = result[0] - protocol = result[1] - if protocol.exception: - transport.close() - raise protocol.exception - - if not remote_address: - return UDPSocket(transport, protocol) - else: - return ConnectedUDPSocket(transport, protocol) - - -async def getaddrinfo( - host: bytes | str, - port: str | int | None, - *, - family: int | AddressFamily = 0, - type: int | SocketKind = 0, - proto: int = 0, - flags: int = 0, -) -> GetAddrInfoReturnType: - # https://github.com/python/typeshed/pull/4304 - result = await get_running_loop().getaddrinfo( - host, port, family=family, type=type, proto=proto, flags=flags - ) - return cast(GetAddrInfoReturnType, result) - - -async def getnameinfo(sockaddr: IPSockAddrType, flags: int = 0) -> tuple[str, str]: - return await get_running_loop().getnameinfo(sockaddr, flags) - - -_read_events: RunVar[dict[Any, asyncio.Event]] = RunVar("read_events") -_write_events: RunVar[dict[Any, asyncio.Event]] = RunVar("write_events") - - -async def wait_socket_readable(sock: socket.socket) -> None: - await checkpoint() - try: - read_events = _read_events.get() - except LookupError: - read_events = {} - _read_events.set(read_events) - - if read_events.get(sock): - raise BusyResourceError("reading from") from None - - loop = get_running_loop() - event = read_events[sock] = asyncio.Event() - loop.add_reader(sock, event.set) - try: - await event.wait() - finally: - if read_events.pop(sock, None) is not None: - loop.remove_reader(sock) - readable = True - else: - readable = False - - if not readable: - raise ClosedResourceError - - -async def wait_socket_writable(sock: socket.socket) -> None: - await checkpoint() - try: - write_events = _write_events.get() - except LookupError: - write_events = {} - _write_events.set(write_events) - - if write_events.get(sock): - raise BusyResourceError("writing to") from None - - loop = get_running_loop() - event = write_events[sock] = asyncio.Event() - loop.add_writer(sock.fileno(), event.set) - try: - await event.wait() - finally: - if write_events.pop(sock, None) is not None: - loop.remove_writer(sock) - writable = True - else: - writable = False - - if not writable: - raise ClosedResourceError - - -# -# Synchronization -# - - -class Event(BaseEvent): - def __new__(cls) -> Event: - return object.__new__(cls) - - def __init__(self) -> None: - self._event = asyncio.Event() - - def set(self) -> DeprecatedAwaitable: - self._event.set() - return DeprecatedAwaitable(self.set) - - def is_set(self) -> bool: - return self._event.is_set() - - async def wait(self) -> None: - if await self._event.wait(): - await checkpoint() - - def statistics(self) -> EventStatistics: - return EventStatistics(len(self._event._waiters)) # type: ignore[attr-defined] - - -class CapacityLimiter(BaseCapacityLimiter): - _total_tokens: float = 0 - - def __new__(cls, total_tokens: float) -> CapacityLimiter: - return object.__new__(cls) - - def __init__(self, total_tokens: float): - self._borrowers: set[Any] = set() - self._wait_queue: OrderedDict[Any, asyncio.Event] = OrderedDict() - self.total_tokens = total_tokens - - async def __aenter__(self) -> None: - await self.acquire() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> None: - self.release() - - @property - def total_tokens(self) -> float: - return self._total_tokens - - @total_tokens.setter - def total_tokens(self, value: float) -> None: - if not isinstance(value, int) and not math.isinf(value): - raise TypeError("total_tokens must be an int or math.inf") - if value < 1: - raise ValueError("total_tokens must be >= 1") - - old_value = self._total_tokens - self._total_tokens = value - events = [] - for event in self._wait_queue.values(): - if value <= old_value: - break - - if not event.is_set(): - events.append(event) - old_value += 1 - - for event in events: - event.set() - - @property - def borrowed_tokens(self) -> int: - return len(self._borrowers) - - @property - def available_tokens(self) -> float: - return self._total_tokens - len(self._borrowers) - - def acquire_nowait(self) -> DeprecatedAwaitable: - self.acquire_on_behalf_of_nowait(current_task()) - return DeprecatedAwaitable(self.acquire_nowait) - - def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable: - if borrower in self._borrowers: - raise RuntimeError( - "this borrower is already holding one of this CapacityLimiter's " - "tokens" - ) - - if self._wait_queue or len(self._borrowers) >= self._total_tokens: - raise WouldBlock - - self._borrowers.add(borrower) - return DeprecatedAwaitable(self.acquire_on_behalf_of_nowait) - - async def acquire(self) -> None: - return await self.acquire_on_behalf_of(current_task()) - - async def acquire_on_behalf_of(self, borrower: object) -> None: - await checkpoint_if_cancelled() - try: - self.acquire_on_behalf_of_nowait(borrower) - except WouldBlock: - event = asyncio.Event() - self._wait_queue[borrower] = event - try: - await event.wait() - except BaseException: - self._wait_queue.pop(borrower, None) - raise - - self._borrowers.add(borrower) - else: - try: - await cancel_shielded_checkpoint() - except BaseException: - self.release() - raise - - def release(self) -> None: - self.release_on_behalf_of(current_task()) - - def release_on_behalf_of(self, borrower: object) -> None: - try: - self._borrowers.remove(borrower) - except KeyError: - raise RuntimeError( - "this borrower isn't holding any of this CapacityLimiter's " "tokens" - ) from None - - # Notify the next task in line if this limiter has free capacity now - if self._wait_queue and len(self._borrowers) < self._total_tokens: - event = self._wait_queue.popitem(last=False)[1] - event.set() - - def statistics(self) -> CapacityLimiterStatistics: - return CapacityLimiterStatistics( - self.borrowed_tokens, - self.total_tokens, - tuple(self._borrowers), - len(self._wait_queue), - ) - - -_default_thread_limiter: RunVar[CapacityLimiter] = RunVar("_default_thread_limiter") - - -def current_default_thread_limiter() -> CapacityLimiter: - try: - return _default_thread_limiter.get() - except LookupError: - limiter = CapacityLimiter(40) - _default_thread_limiter.set(limiter) - return limiter - - -# -# Operating system signals -# - - -class _SignalReceiver(DeprecatedAsyncContextManager["_SignalReceiver"]): - def __init__(self, signals: tuple[int, ...]): - self._signals = signals - self._loop = get_running_loop() - self._signal_queue: deque[int] = deque() - self._future: asyncio.Future = asyncio.Future() - self._handled_signals: set[int] = set() - - def _deliver(self, signum: int) -> None: - self._signal_queue.append(signum) - if not self._future.done(): - self._future.set_result(None) - - def __enter__(self) -> _SignalReceiver: - for sig in set(self._signals): - self._loop.add_signal_handler(sig, self._deliver, sig) - self._handled_signals.add(sig) - - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - for sig in self._handled_signals: - self._loop.remove_signal_handler(sig) - return None - - def __aiter__(self) -> _SignalReceiver: - return self - - async def __anext__(self) -> int: - await checkpoint() - if not self._signal_queue: - self._future = asyncio.Future() - await self._future - - return self._signal_queue.popleft() - - -def open_signal_receiver(*signals: int) -> _SignalReceiver: - return _SignalReceiver(signals) - - -# -# Testing and debugging -# - - -def _create_task_info(task: asyncio.Task) -> TaskInfo: - task_state = _task_states.get(task) - if task_state is None: - name = task.get_name() if _native_task_names else None - parent_id = None - else: - name = task_state.name - parent_id = task_state.parent_id - - return TaskInfo(id(task), parent_id, name, get_coro(task)) - - -def get_current_task() -> TaskInfo: - return _create_task_info(current_task()) # type: ignore[arg-type] - - -def get_running_tasks() -> list[TaskInfo]: - return [_create_task_info(task) for task in all_tasks() if not task.done()] - - -async def wait_all_tasks_blocked() -> None: - await checkpoint() - this_task = current_task() - while True: - for task in all_tasks(): - if task is this_task: - continue - - if task._fut_waiter is None or task._fut_waiter.done(): # type: ignore[attr-defined] - await sleep(0.1) - break - else: - return - - -class TestRunner(abc.TestRunner): - def __init__( - self, - debug: bool = False, - use_uvloop: bool = False, - policy: asyncio.AbstractEventLoopPolicy | None = None, - ): - self._exceptions: list[BaseException] = [] - _maybe_set_event_loop_policy(policy, use_uvloop) - self._loop = asyncio.new_event_loop() - self._loop.set_debug(debug) - self._loop.set_exception_handler(self._exception_handler) - asyncio.set_event_loop(self._loop) - - def _cancel_all_tasks(self) -> None: - to_cancel = all_tasks(self._loop) - if not to_cancel: - return - - for task in to_cancel: - task.cancel() - - self._loop.run_until_complete( - asyncio.gather(*to_cancel, return_exceptions=True) - ) - - for task in to_cancel: - if task.cancelled(): - continue - if task.exception() is not None: - raise cast(BaseException, task.exception()) - - def _exception_handler( - self, loop: asyncio.AbstractEventLoop, context: dict[str, Any] - ) -> None: - if isinstance(context.get("exception"), Exception): - self._exceptions.append(context["exception"]) - else: - loop.default_exception_handler(context) - - def _raise_async_exceptions(self) -> None: - # Re-raise any exceptions raised in asynchronous callbacks - if self._exceptions: - exceptions, self._exceptions = self._exceptions, [] - if len(exceptions) == 1: - raise exceptions[0] - elif exceptions: - raise ExceptionGroup(exceptions) - - def close(self) -> None: - try: - self._cancel_all_tasks() - self._loop.run_until_complete(self._loop.shutdown_asyncgens()) - finally: - asyncio.set_event_loop(None) - self._loop.close() - - def run_asyncgen_fixture( - self, - fixture_func: Callable[..., AsyncGenerator[T_Retval, Any]], - kwargs: dict[str, Any], - ) -> Iterable[T_Retval]: - async def fixture_runner() -> None: - agen = fixture_func(**kwargs) - try: - retval = await agen.asend(None) - self._raise_async_exceptions() - except BaseException as exc: - f.set_exception(exc) - return - else: - f.set_result(retval) - - await event.wait() - try: - await agen.asend(None) - except StopAsyncIteration: - pass - else: - await agen.aclose() - raise RuntimeError("Async generator fixture did not stop") - - f = self._loop.create_future() - event = asyncio.Event() - fixture_task = self._loop.create_task(fixture_runner()) - self._loop.run_until_complete(f) - yield f.result() - event.set() - self._loop.run_until_complete(fixture_task) - self._raise_async_exceptions() - - def run_fixture( - self, - fixture_func: Callable[..., Coroutine[Any, Any, T_Retval]], - kwargs: dict[str, Any], - ) -> T_Retval: - retval = self._loop.run_until_complete(fixture_func(**kwargs)) - self._raise_async_exceptions() - return retval - - def run_test( - self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any] - ) -> None: - try: - self._loop.run_until_complete(test_func(**kwargs)) - except Exception as exc: - self._exceptions.append(exc) - - self._raise_async_exceptions() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c231646e.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c231646e.js deleted file mode 100644 index e92836414a211002cd0797d9989daa1c5b104eca..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c231646e.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as c,e as m,s as v,a9 as b,N as f,K as r,L as o,U as d,p as g,M as p,ab as h,ac as w,ad as y,z as G,v as j,A as k}from"./index-1d65707a.js";function C(n){let s,l,u,i;const _=n[4].default,a=b(_,n,n[3],null);return{c(){s=f("div"),l=f("div"),a&&a.c(),r(l,"class","styler svelte-iyf88w"),o(l,"--block-radius","0px"),o(l,"--block-border-width","0px"),o(l,"--layout-gap","1px"),o(l,"--form-gap-width","1px"),o(l,"--button-border-width","0px"),o(l,"--button-large-radius","0px"),o(l,"--button-small-radius","0px"),r(s,"id",n[0]),r(s,"class",u="gr-group "+n[1].join(" ")+" svelte-iyf88w"),d(s,"hide",!n[2])},m(e,t){g(e,s,t),p(s,l),a&&a.m(l,null),i=!0},p(e,[t]){a&&a.p&&(!i||t&8)&&h(a,_,e,e[3],i?y(_,e[3],t,null):w(e[3]),null),(!i||t&1)&&r(s,"id",e[0]),(!i||t&2&&u!==(u="gr-group "+e[1].join(" ")+" svelte-iyf88w"))&&r(s,"class",u),(!i||t&6)&&d(s,"hide",!e[2])},i(e){i||(G(a,e),i=!0)},o(e){j(a,e),i=!1},d(e){e&&k(s),a&&a.d(e)}}}function S(n,s,l){let{$$slots:u={},$$scope:i}=s,{elem_id:_=""}=s,{elem_classes:a=[]}=s,{visible:e=!0}=s;return n.$$set=t=>{"elem_id"in t&&l(0,_=t.elem_id),"elem_classes"in t&&l(1,a=t.elem_classes),"visible"in t&&l(2,e=t.visible),"$$scope"in t&&l(3,i=t.$$scope)},[_,a,e,i,u]}class q extends c{constructor(s){super(),m(this,s,S,C,v,{elem_id:0,elem_classes:1,visible:2})}}const A=q,K=["static"];export{A as Component,K as modes}; -//# sourceMappingURL=index-c231646e.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/fastai_utils.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/fastai_utils.py deleted file mode 100644 index e586e8663c39e8d5bab3f57f667dbd878514e59d..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/fastai_utils.py +++ /dev/null @@ -1,425 +0,0 @@ -import json -import os -from pathlib import Path -from pickle import DEFAULT_PROTOCOL, PicklingError -from typing import Any, Dict, List, Optional, Union - -from packaging import version - -from huggingface_hub import snapshot_download -from huggingface_hub.constants import CONFIG_NAME -from huggingface_hub.hf_api import HfApi -from huggingface_hub.utils import ( - SoftTemporaryDirectory, - get_fastai_version, - get_fastcore_version, - get_python_version, -) - -from .utils import logging, validate_hf_hub_args -from .utils._runtime import _PY_VERSION # noqa: F401 # for backward compatibility... - - -logger = logging.get_logger(__name__) - - -def _check_fastai_fastcore_versions( - fastai_min_version: str = "2.4", - fastcore_min_version: str = "1.3.27", -): - """ - Checks that the installed fastai and fastcore versions are compatible for pickle serialization. - - Args: - fastai_min_version (`str`, *optional*): - The minimum fastai version supported. - fastcore_min_version (`str`, *optional*): - The minimum fastcore version supported. - - - Raises the following error: - - - [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) - if the fastai or fastcore libraries are not available or are of an invalid version. - - - """ - - if (get_fastcore_version() or get_fastai_version()) == "N/A": - raise ImportError( - f"fastai>={fastai_min_version} and fastcore>={fastcore_min_version} are" - f" required. Currently using fastai=={get_fastai_version()} and" - f" fastcore=={get_fastcore_version()}." - ) - - current_fastai_version = version.Version(get_fastai_version()) - current_fastcore_version = version.Version(get_fastcore_version()) - - if current_fastai_version < version.Version(fastai_min_version): - raise ImportError( - "`push_to_hub_fastai` and `from_pretrained_fastai` require a" - f" fastai>={fastai_min_version} version, but you are using fastai version" - f" {get_fastai_version()} which is incompatible. Upgrade with `pip install" - " fastai==2.5.6`." - ) - - if current_fastcore_version < version.Version(fastcore_min_version): - raise ImportError( - "`push_to_hub_fastai` and `from_pretrained_fastai` require a" - f" fastcore>={fastcore_min_version} version, but you are using fastcore" - f" version {get_fastcore_version()} which is incompatible. Upgrade with" - " `pip install fastcore==1.3.27`." - ) - - -def _check_fastai_fastcore_pyproject_versions( - storage_folder: str, - fastai_min_version: str = "2.4", - fastcore_min_version: str = "1.3.27", -): - """ - Checks that the `pyproject.toml` file in the directory `storage_folder` has fastai and fastcore versions - that are compatible with `from_pretrained_fastai` and `push_to_hub_fastai`. If `pyproject.toml` does not exist - or does not contain versions for fastai and fastcore, then it logs a warning. - - Args: - storage_folder (`str`): - Folder to look for the `pyproject.toml` file. - fastai_min_version (`str`, *optional*): - The minimum fastai version supported. - fastcore_min_version (`str`, *optional*): - The minimum fastcore version supported. - - - Raises the following errors: - - - [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) - if the `toml` module is not installed. - - [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) - if the `pyproject.toml` indicates a lower than minimum supported version of fastai or fastcore. - - - """ - - try: - import toml - except ModuleNotFoundError: - raise ImportError( - "`push_to_hub_fastai` and `from_pretrained_fastai` require the toml module." - " Install it with `pip install toml`." - ) - - # Checks that a `pyproject.toml`, with `build-system` and `requires` sections, exists in the repository. If so, get a list of required packages. - if not os.path.isfile(f"{storage_folder}/pyproject.toml"): - logger.warning( - "There is no `pyproject.toml` in the repository that contains the fastai" - " `Learner`. The `pyproject.toml` would allow us to verify that your fastai" - " and fastcore versions are compatible with those of the model you want to" - " load." - ) - return - pyproject_toml = toml.load(f"{storage_folder}/pyproject.toml") - - if "build-system" not in pyproject_toml.keys(): - logger.warning( - "There is no `build-system` section in the pyproject.toml of the repository" - " that contains the fastai `Learner`. The `build-system` would allow us to" - " verify that your fastai and fastcore versions are compatible with those" - " of the model you want to load." - ) - return - build_system_toml = pyproject_toml["build-system"] - - if "requires" not in build_system_toml.keys(): - logger.warning( - "There is no `requires` section in the pyproject.toml of the repository" - " that contains the fastai `Learner`. The `requires` would allow us to" - " verify that your fastai and fastcore versions are compatible with those" - " of the model you want to load." - ) - return - package_versions = build_system_toml["requires"] - - # Extracts contains fastai and fastcore versions from `pyproject.toml` if available. - # If the package is specified but not the version (e.g. "fastai" instead of "fastai=2.4"), the default versions are the highest. - fastai_packages = [pck for pck in package_versions if pck.startswith("fastai")] - if len(fastai_packages) == 0: - logger.warning("The repository does not have a fastai version specified in the `pyproject.toml`.") - # fastai_version is an empty string if not specified - else: - fastai_version = str(fastai_packages[0]).partition("=")[2] - if fastai_version != "" and version.Version(fastai_version) < version.Version(fastai_min_version): - raise ImportError( - "`from_pretrained_fastai` requires" - f" fastai>={fastai_min_version} version but the model to load uses" - f" {fastai_version} which is incompatible." - ) - - fastcore_packages = [pck for pck in package_versions if pck.startswith("fastcore")] - if len(fastcore_packages) == 0: - logger.warning("The repository does not have a fastcore version specified in the `pyproject.toml`.") - # fastcore_version is an empty string if not specified - else: - fastcore_version = str(fastcore_packages[0]).partition("=")[2] - if fastcore_version != "" and version.Version(fastcore_version) < version.Version(fastcore_min_version): - raise ImportError( - "`from_pretrained_fastai` requires" - f" fastcore>={fastcore_min_version} version, but you are using fastcore" - f" version {fastcore_version} which is incompatible." - ) - - -README_TEMPLATE = """--- -tags: -- fastai ---- - -# Amazing! - -🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! - -# Some next steps -1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! - -2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). - -3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! - -Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. - - ---- - - -# Model card - -## Model description -More information needed - -## Intended uses & limitations -More information needed - -## Training and evaluation data -More information needed -""" - -PYPROJECT_TEMPLATE = f"""[build-system] -requires = ["setuptools>=40.8.0", "wheel", "python={get_python_version()}", "fastai={get_fastai_version()}", "fastcore={get_fastcore_version()}"] -build-backend = "setuptools.build_meta:__legacy__" -""" - - -def _create_model_card(repo_dir: Path): - """ - Creates a model card for the repository. - - Args: - repo_dir (`Path`): - Directory where model card is created. - """ - readme_path = repo_dir / "README.md" - - if not readme_path.exists(): - with readme_path.open("w", encoding="utf-8") as f: - f.write(README_TEMPLATE) - - -def _create_model_pyproject(repo_dir: Path): - """ - Creates a `pyproject.toml` for the repository. - - Args: - repo_dir (`Path`): - Directory where `pyproject.toml` is created. - """ - pyproject_path = repo_dir / "pyproject.toml" - - if not pyproject_path.exists(): - with pyproject_path.open("w", encoding="utf-8") as f: - f.write(PYPROJECT_TEMPLATE) - - -def _save_pretrained_fastai( - learner, - save_directory: Union[str, Path], - config: Optional[Dict[str, Any]] = None, -): - """ - Saves a fastai learner to `save_directory` in pickle format using the default pickle protocol for the version of python used. - - Args: - learner (`Learner`): - The `fastai.Learner` you'd like to save. - save_directory (`str` or `Path`): - Specific directory in which you want to save the fastai learner. - config (`dict`, *optional*): - Configuration object. Will be uploaded as a .json file. Example: 'https://huggingface.co/espejelomar/fastai-pet-breeds-classification/blob/main/config.json'. - - - - Raises the following error: - - - [`RuntimeError`](https://docs.python.org/3/library/exceptions.html#RuntimeError) - if the config file provided is not a dictionary. - - - """ - _check_fastai_fastcore_versions() - - os.makedirs(save_directory, exist_ok=True) - - # if the user provides config then we update it with the fastai and fastcore versions in CONFIG_TEMPLATE. - if config is not None: - if not isinstance(config, dict): - raise RuntimeError(f"Provided config should be a dict. Got: '{type(config)}'") - path = os.path.join(save_directory, CONFIG_NAME) - with open(path, "w") as f: - json.dump(config, f) - - _create_model_card(Path(save_directory)) - _create_model_pyproject(Path(save_directory)) - - # learner.export saves the model in `self.path`. - learner.path = Path(save_directory) - os.makedirs(save_directory, exist_ok=True) - try: - learner.export( - fname="model.pkl", - pickle_protocol=DEFAULT_PROTOCOL, - ) - except PicklingError: - raise PicklingError( - "You are using a lambda function, i.e., an anonymous function. `pickle`" - " cannot pickle function objects and requires that all functions have" - " names. One possible solution is to name the function." - ) - - -@validate_hf_hub_args -def from_pretrained_fastai( - repo_id: str, - revision: Optional[str] = None, -): - """ - Load pretrained fastai model from the Hub or from a local directory. - - Args: - repo_id (`str`): - The location where the pickled fastai.Learner is. It can be either of the two: - - Hosted on the Hugging Face Hub. E.g.: 'espejelomar/fatai-pet-breeds-classification' or 'distilgpt2'. - You can add a `revision` by appending `@` at the end of `repo_id`. E.g.: `dbmdz/bert-base-german-cased@main`. - Revision is the specific model version to use. Since we use a git-based system for storing models and other - artifacts on the Hugging Face Hub, it can be a branch name, a tag name, or a commit id. - - Hosted locally. `repo_id` would be a directory containing the pickle and a pyproject.toml - indicating the fastai and fastcore versions used to build the `fastai.Learner`. E.g.: `./my_model_directory/`. - revision (`str`, *optional*): - Revision at which the repo's files are downloaded. See documentation of `snapshot_download`. - - Returns: - The `fastai.Learner` model in the `repo_id` repo. - """ - _check_fastai_fastcore_versions() - - # Load the `repo_id` repo. - # `snapshot_download` returns the folder where the model was stored. - # `cache_dir` will be the default '/root/.cache/huggingface/hub' - if not os.path.isdir(repo_id): - storage_folder = snapshot_download( - repo_id=repo_id, - revision=revision, - library_name="fastai", - library_version=get_fastai_version(), - ) - else: - storage_folder = repo_id - - _check_fastai_fastcore_pyproject_versions(storage_folder) - - from fastai.learner import load_learner # type: ignore - - return load_learner(os.path.join(storage_folder, "model.pkl")) - - -@validate_hf_hub_args -def push_to_hub_fastai( - learner, - *, - repo_id: str, - commit_message: str = "Push FastAI model using huggingface_hub.", - private: bool = False, - token: Optional[str] = None, - config: Optional[dict] = None, - branch: Optional[str] = None, - create_pr: Optional[bool] = None, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - delete_patterns: Optional[Union[List[str], str]] = None, - api_endpoint: Optional[str] = None, -): - """ - Upload learner checkpoint files to the Hub. - - Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use - `delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more - details. - - Args: - learner (`Learner`): - The `fastai.Learner' you'd like to push to the Hub. - repo_id (`str`): - The repository id for your model in Hub in the format of "namespace/repo_name". The namespace can be your individual account or an organization to which you have write access (for example, 'stanfordnlp/stanza-de'). - commit_message (`str`, *optional*): - Message to commit while pushing. Will default to :obj:`"add model"`. - private (`bool`, *optional*, defaults to `False`): - Whether or not the repository created should be private. - token (`str`, *optional*): - The Hugging Face account token to use as HTTP bearer authorization for remote files. If :obj:`None`, the token will be asked by a prompt. - config (`dict`, *optional*): - Configuration object to be saved alongside the model weights. - branch (`str`, *optional*): - The git branch on which to push the model. This defaults to - the default branch as specified in your repository, which - defaults to `"main"`. - create_pr (`boolean`, *optional*): - Whether or not to create a Pull Request from `branch` with that commit. - Defaults to `False`. - api_endpoint (`str`, *optional*): - The API endpoint to use when pushing the model to the hub. - allow_patterns (`List[str]` or `str`, *optional*): - If provided, only files matching at least one pattern are pushed. - ignore_patterns (`List[str]` or `str`, *optional*): - If provided, files matching any of the patterns are not pushed. - delete_patterns (`List[str]` or `str`, *optional*): - If provided, remote files matching any of the patterns will be deleted from the repo. - - Returns: - The url of the commit of your model in the given repository. - - - - Raises the following error: - - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if the user is not log on to the Hugging Face Hub. - - - """ - _check_fastai_fastcore_versions() - api = HfApi(endpoint=api_endpoint) - repo_id = api.create_repo(repo_id=repo_id, token=token, private=private, exist_ok=True).repo_id - - # Push the files to the repo in a single commit - with SoftTemporaryDirectory() as tmp: - saved_path = Path(tmp) / repo_id - _save_pretrained_fastai(learner, saved_path, config=config) - return api.upload_folder( - repo_id=repo_id, - token=token, - folder_path=saved_path, - commit_message=commit_message, - revision=branch, - create_pr=create_pr, - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - delete_patterns=delete_patterns, - ) diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/streamToAsyncIterable.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/streamToAsyncIterable.ts deleted file mode 100644 index e935d719c8c29eb5e4efc30812f61b5f44716923..0000000000000000000000000000000000000000 --- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/streamToAsyncIterable.ts +++ /dev/null @@ -1,15 +0,0 @@ -// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of#iterating_over_async_generators -export async function* streamToAsyncIterable( - stream: ReadableStream -): AsyncIterableIterator { - const reader = stream.getReader(); - try { - while (true) { - const { done, value } = await reader.read(); - if (done) return; - yield value; - } - } finally { - reader.releaseLock(); - } -} diff --git a/spaces/Dagfinn1962/stablediffusion-members/appworks.py b/spaces/Dagfinn1962/stablediffusion-members/appworks.py deleted file mode 100644 index 878c757de65298f3affa61b5456b53e02dadb9fd..0000000000000000000000000000000000000000 --- a/spaces/Dagfinn1962/stablediffusion-members/appworks.py +++ /dev/null @@ -1,80 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path - -models = [ - {"name": "Stable Diffusion 1.4","url": "CompVis/stable-diffusion-v1-4"}, - {"name": "Stable Diffusion 1.5","url": "runwayml/stable-diffusion-v1-5"}, - ] - -current_model = models[0] - -text_gen = gr.Interface.load("spaces/daspartho/prompt-extend") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(value=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -with gr.Blocks() as myface: - gr.HTML(""" - """ - - ) - with gr.Row(): - input_text = gr.Textbox(label=" ",placeholder="PROMPT HERE ",lines=4) - # Model selection dropdown - model_name1 = gr.Dropdown( - label=" ", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - - - ) - with gr.Row(): - see_prompts = gr.Button("Generate Prompts") - run = gr.Button("Generate Images", varant="primery") - - with gr.Row(): - output1 = gr.Image(label="") - output2 = gr.Image(label="") - output3 = gr.Image(label="") - with gr.Row(): - magic1 = gr.Textbox(label="Generated Prompt", lines=2) - magic2 = gr.Textbox(label="Generated Prompt", lines=2) - magic3 = gr.Textbox(label="Generated Prompt", lines=2) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3,]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - - -myface.queue(concurrency_count=200) -myface.launch(inline=True, show_api=False, max_threads=400) \ No newline at end of file diff --git a/spaces/Daniton/prompthero-openjourney-lora/README.md b/spaces/Daniton/prompthero-openjourney-lora/README.md deleted file mode 100644 index 642099e28e55bae4d9fe7906069563ef47810c55..0000000000000000000000000000000000000000 --- a/spaces/Daniton/prompthero-openjourney-lora/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Prompthero Openjourney Lora -emoji: 📊 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Detomo/CuteRobot/TemplateData/style.css b/spaces/Detomo/CuteRobot/TemplateData/style.css deleted file mode 100644 index cdc3477fb8c1c824db96f451631bca7cde305923..0000000000000000000000000000000000000000 --- a/spaces/Detomo/CuteRobot/TemplateData/style.css +++ /dev/null @@ -1,105 +0,0 @@ -html { - box-sizing: border-box; -} -*, *:before, *:after { - box-sizing: inherit; -} -html, body { - height: 100%; -} -canvas { - display: block; -} -body { - margin: 0; -} -#unity-container { - width: 100%; - height: 100%; -} -#unity-canvas { - width: 100%; - height: 100%; - background: #231F20; -} -#loading-cover { - position: absolute; - top: 0; - left: 0; - width: 100%; - height: 100%; - display: flex; - justify-content: center; - align-items: center; -} -#unity-loading-bar { - flex: 1 1 auto; - display: flex; - flex-direction: column; - justify-content: center; - align-items: center; -} -#unity-logo { - text-align: center; -} -#unity-logo img { - max-width: 80%; -} -#unity-progress-bar-empty { - width: 80%; - height: 24px; - margin: 10px 20px 20px 10px; - text-align: left; - border: 1px solid white; - padding: 2px; -} -#unity-progress-bar-full { - width: 0%; - height: 100%; - background: #ffd21e; -} -.light #unity-progress-bar-empty { - border-color: black; -} -.light #unity-progress-bar-full { - background: black; -} - -#unity-fullscreen-button { - position: absolute; - right: 10px; - bottom: 10px; - width: 38px; - height: 38px; - background: url('fullscreen-button.png') no-repeat center; - background-size: contain; -} - -.spinner, -.spinner:after { - border-radius: 50%; - width: 5em; - height: 5em; -} -.spinner { - margin: 10px; - font-size: 10px; - position: relative; - text-indent: -9999em; - border-top: 1.1em solid rgba(255, 255, 255, 0.2); - border-right: 1.1em solid rgba(255, 255, 255, 0.2); - border-bottom: 1.1em solid rgba(255, 255, 255, 0.2); - border-left: 1.1em solid #ffffff; - transform: translateZ(0); - animation: spinner-spin 1.1s infinite linear; -} -@keyframes spinner-spin { - 0% { - transform: rotate(0deg); - } - 100% { - transform: rotate(360deg); - } -} - - diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/serverstate.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/serverstate.py deleted file mode 100644 index e7ddc790c3dfc881f8aa4322d10d90e4e4fc09f0..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/serverstate.py +++ /dev/null @@ -1,526 +0,0 @@ -import os, torch, numpy, base64, json, re, threading, random -from torch.utils.data import TensorDataset, DataLoader -from collections import defaultdict -from netdissect.easydict import EasyDict -from netdissect.modelconfig import create_instrumented_model -from netdissect.runningstats import RunningQuantile -from netdissect.dissection import safe_dir_name -from netdissect.zdataset import z_sample_for_model -from PIL import Image -from io import BytesIO - -class DissectionProject: - ''' - DissectionProject understand how to drive a GanTester within a - dissection project directory structure: it caches data in files, - creates image files, and translates data between plain python data - types and the pytorch-specific tensors required by GanTester. - ''' - def __init__(self, config, project_dir, path_url, public_host): - print('config done', project_dir) - self.use_cuda = torch.cuda.is_available() - self.dissect = config - self.project_dir = project_dir - self.path_url = path_url - self.public_host = public_host - self.cachedir = os.path.join(self.project_dir, 'cache') - self.tester = GanTester( - config.settings, dissectdir=project_dir, - device=torch.device('cuda') if self.use_cuda - else torch.device('cpu')) - self.stdz = [] - - def get_zs(self, size): - if size <= len(self.stdz): - return self.stdz[:size].tolist() - z_tensor = self.tester.standard_z_sample(size) - numpy_z = z_tensor.cpu().numpy() - self.stdz = numpy_z - return self.stdz.tolist() - - def get_z(self, id): - if id < len(self.stdz): - return self.stdz[id] - return self.get_zs((id + 1) * 2)[id] - - def get_zs_for_ids(self, ids): - max_id = max(ids) - if max_id >= len(self.stdz): - self.get_z(max_id) - return self.stdz[ids] - - def get_layers(self): - result = [] - layer_shapes = self.tester.layer_shapes() - for layer in self.tester.layers: - shape = layer_shapes[layer] - result.append(dict( - layer=layer, - channels=shape[1], - shape=[shape[2], shape[3]])) - return result - - def get_units(self, layer): - try: - dlayer = [dl for dl in self.dissect['layers'] - if dl['layer'] == layer][0] - except: - return None - - dunits = dlayer['units'] - result = [dict(unit=unit_num, - img='/%s/%s/s-image/%d-top.jpg' % - (self.path_url, layer, unit_num), - label=unit['iou_label']) - for unit_num, unit in enumerate(dunits)] - return result - - def get_rankings(self, layer): - try: - dlayer = [dl for dl in self.dissect['layers'] - if dl['layer'] == layer][0] - except: - return None - result = [dict(name=ranking['name'], - metric=ranking.get('metric', None), - scores=ranking['score']) - for ranking in dlayer['rankings']] - return result - - def get_levels(self, layer, quantiles): - levels = self.tester.levels( - layer, torch.from_numpy(numpy.array(quantiles))) - return levels.cpu().numpy().tolist() - - def generate_images(self, zs, ids, interventions, return_urls=False): - if ids is not None: - assert zs is None - zs = self.get_zs_for_ids(ids) - if not interventions: - # Do file caching when ids are given (and no ablations). - imgdir = os.path.join(self.cachedir, 'img', 'id') - os.makedirs(imgdir, exist_ok=True) - exist = set(os.listdir(imgdir)) - unfinished = [('%d.jpg' % id) not in exist for id in ids] - needed_z_tensor = torch.tensor(zs[unfinished]).float().to( - self.tester.device) - needed_ids = numpy.array(ids)[unfinished] - # Generate image files for just the needed images. - if len(needed_z_tensor): - imgs = self.tester.generate_images(needed_z_tensor - ).cpu().numpy() - for i, img in zip(needed_ids, imgs): - Image.fromarray(img.transpose(1, 2, 0)).save( - os.path.join(imgdir, '%d.jpg' % i), 'jpeg', - quality=99, optimize=True, progressive=True) - # Assemble a response. - imgurls = ['/%s/cache/img/id/%d.jpg' - % (self.path_url, i) for i in ids] - return [dict(id=i, d=d) for i, d in zip(ids, imgurls)] - # No file caching when ids are not given (or ablations are applied) - z_tensor = torch.tensor(zs).float().to(self.tester.device) - imgs = self.tester.generate_images(z_tensor, - intervention=decode_intervention_array(interventions, - self.tester.layer_shapes()), - ).cpu().numpy() - numpy_z = z_tensor.cpu().numpy() - if return_urls: - randdir = '%03d' % random.randrange(1000) - imgdir = os.path.join(self.cachedir, 'img', 'uniq', randdir) - os.makedirs(imgdir, exist_ok=True) - startind = random.randrange(100000) - imgurls = [] - for i, img in enumerate(imgs): - filename = '%d.jpg' % (i + startind) - Image.fromarray(img.transpose(1, 2, 0)).save( - os.path.join(imgdir, filename), 'jpeg', - quality=99, optimize=True, progressive=True) - image_url_path = ('/%s/cache/img/uniq/%s/%s' - % (self.path_url, randdir, filename)) - imgurls.append(image_url_path) - tweet_filename = 'tweet-%d.html' % (i + startind) - tweet_url_path = ('/%s/cache/img/uniq/%s/%s' - % (self.path_url, randdir, tweet_filename)) - with open(os.path.join(imgdir, tweet_filename), 'w') as f: - f.write(twitter_card(image_url_path, tweet_url_path, - self.public_host)) - return [dict(d=d) for d in imgurls] - imgurls = [img2base64(img.transpose(1, 2, 0)) for img in imgs] - return [dict(d=d) for d in imgurls] - - def get_features(self, ids, masks, layers, interventions): - zs = self.get_zs_for_ids(ids) - z_tensor = torch.tensor(zs).float().to(self.tester.device) - t_masks = torch.stack( - [torch.from_numpy(mask_to_numpy(mask)) for mask in masks] - )[:,None,:,:].to(self.tester.device) - t_features = self.tester.feature_stats(z_tensor, t_masks, - decode_intervention_array(interventions, - self.tester.layer_shapes()), layers) - # Convert torch arrays to plain python lists before returning. - return { layer: { key: value.cpu().numpy().tolist() - for key, value in feature.items() } - for layer, feature in t_features.items() } - - def get_featuremaps(self, ids, layers, interventions): - zs = self.get_zs_for_ids(ids) - z_tensor = torch.tensor(zs).float().to(self.tester.device) - # Quantilized features are returned. - q_features = self.tester.feature_maps(z_tensor, - decode_intervention_array(interventions, - self.tester.layer_shapes()), layers) - # Scale them 0-255 and return them. - # TODO: turn them into pngs for returning. - return { layer: [ - value.clamp(0, 1).mul(255).byte().cpu().numpy().tolist() - for value in valuelist ] - for layer, valuelist in q_features.items() - if (not layers) or (layer in layers) } - - def get_recipes(self): - recipedir = os.path.join(self.project_dir, 'recipe') - if not os.path.isdir(recipedir): - return [] - result = [] - for filename in os.listdir(recipedir): - with open(os.path.join(recipedir, filename)) as f: - result.append(json.load(f)) - return result - - - - -class GanTester: - ''' - GanTester holds on to a specific model to test. - - (1) loads and instantiates the GAN; - (2) instruments it at every layer so that units can be ablated - (3) precomputes z dimensionality, and output image dimensions. - ''' - def __init__(self, args, dissectdir=None, device=None): - self.cachedir = os.path.join(dissectdir, 'cache') - self.device = device if device is not None else torch.device('cpu') - self.dissectdir = dissectdir - self.modellock = threading.Lock() - - # Load the generator from the pth file. - args_copy = EasyDict(args) - args_copy.edit = True - model = create_instrumented_model(args_copy) - model.eval() - self.model = model - - # Get the set of layers of interest. - # Default: all shallow children except last. - self.layers = sorted(model.retained_features().keys()) - - # Move it to CUDA if wanted. - model.to(device) - - self.quantiles = { - layer: load_quantile_if_present(os.path.join(self.dissectdir, - safe_dir_name(layer)), 'quantiles.npz', - device=torch.device('cpu')) - for layer in self.layers } - - def layer_shapes(self): - return self.model.feature_shape - - def standard_z_sample(self, size=100, seed=1, device=None): - ''' - Generate a standard set of random Z as a (size, z_dimension) tensor. - With the same random seed, it always returns the same z (e.g., - the first one is always the same regardless of the size.) - ''' - result = z_sample_for_model(self.model, size) - if device is not None: - result = result.to(device) - return result - - def reset_intervention(self): - self.model.remove_edits() - - def apply_intervention(self, intervention): - ''' - Applies an ablation recipe of the form [(layer, unit, alpha)...]. - ''' - self.reset_intervention() - if not intervention: - return - for layer, (a, v) in intervention.items(): - self.model.edit_layer(layer, ablation=a, replacement=v) - - def generate_images(self, z_batch, intervention=None): - ''' - Makes some images. - ''' - with torch.no_grad(), self.modellock: - batch_size = 10 - self.apply_intervention(intervention) - test_loader = DataLoader(TensorDataset(z_batch[:,:,None,None]), - batch_size=batch_size, - pin_memory=('cuda' == self.device.type - and z_batch.device.type == 'cpu')) - result_img = torch.zeros( - *((len(z_batch), 3) + self.model.output_shape[2:]), - dtype=torch.uint8, device=self.device) - for batch_num, [batch_z,] in enumerate(test_loader): - batch_z = batch_z.to(self.device) - out = self.model(batch_z) - result_img[batch_num*batch_size: - batch_num*batch_size+len(batch_z)] = ( - (((out + 1) / 2) * 255).clamp(0, 255).byte()) - return result_img - - def get_layers(self): - return self.layers - - def feature_stats(self, z_batch, - masks=None, intervention=None, layers=None): - feature_stat = defaultdict(dict) - with torch.no_grad(), self.modellock: - batch_size = 10 - self.apply_intervention(intervention) - if masks is None: - masks = torch.ones(z_batch.size(0), 1, 1, 1, - device=z_batch.device, dtype=z_batch.dtype) - else: - assert masks.shape[0] == z_batch.shape[0] - assert masks.shape[1] == 1 - test_loader = DataLoader( - TensorDataset(z_batch[:,:,None,None], masks), - batch_size=batch_size, - pin_memory=('cuda' == self.device.type - and z_batch.device.type == 'cpu')) - processed = 0 - for batch_num, [batch_z, batch_m] in enumerate(test_loader): - batch_z, batch_m = [ - d.to(self.device) for d in [batch_z, batch_m]] - # Run model but disregard output - self.model(batch_z) - processing = batch_z.shape[0] - for layer, feature in self.model.retained_features().items(): - if layers is not None: - if layer not in layers: - continue - # Compute max features touching mask - resized_max = torch.nn.functional.adaptive_max_pool2d( - batch_m, - (feature.shape[2], feature.shape[3])) - max_feature = (feature * resized_max).view( - feature.shape[0], feature.shape[1], -1 - ).max(2)[0].max(0)[0] - if 'max' not in feature_stat[layer]: - feature_stat[layer]['max'] = max_feature - else: - torch.max(feature_stat[layer]['max'], max_feature, - out=feature_stat[layer]['max']) - # Compute mean features weighted by overlap with mask - resized_mean = torch.nn.functional.adaptive_avg_pool2d( - batch_m, - (feature.shape[2], feature.shape[3])) - mean_feature = (feature * resized_mean).view( - feature.shape[0], feature.shape[1], -1 - ).sum(2).sum(0) / (resized_mean.sum() + 1e-15) - if 'mean' not in feature_stat[layer]: - feature_stat[layer]['mean'] = mean_feature - else: - feature_stat[layer]['mean'] = ( - processed * feature_mean[layer]['mean'] - + processing * mean_feature) / ( - processed + processing) - processed += processing - # After summaries are done, also compute quantile stats - for layer, stats in feature_stat.items(): - if self.quantiles.get(layer, None) is not None: - for statname in ['max', 'mean']: - stats['%s_quantile' % statname] = ( - self.quantiles[layer].normalize(stats[statname])) - return feature_stat - - def levels(self, layer, quantiles): - return self.quantiles[layer].quantiles(quantiles) - - def feature_maps(self, z_batch, intervention=None, layers=None, - quantiles=True): - feature_map = defaultdict(list) - with torch.no_grad(), self.modellock: - batch_size = 10 - self.apply_intervention(intervention) - test_loader = DataLoader( - TensorDataset(z_batch[:,:,None,None]), - batch_size=batch_size, - pin_memory=('cuda' == self.device.type - and z_batch.device.type == 'cpu')) - processed = 0 - for batch_num, [batch_z] in enumerate(test_loader): - batch_z = batch_z.to(self.device) - # Run model but disregard output - self.model(batch_z) - processing = batch_z.shape[0] - for layer, feature in self.model.retained_features().items(): - for single_featuremap in feature: - if quantiles: - feature_map[layer].append(self.quantiles[layer] - .normalize(single_featuremap)) - else: - feature_map[layer].append(single_featuremap) - return feature_map - -def load_quantile_if_present(outdir, filename, device): - filepath = os.path.join(outdir, filename) - if os.path.isfile(filepath): - data = numpy.load(filepath) - result = RunningQuantile(state=data) - result.to_(device) - return result - return None - -if __name__ == '__main__': - test_main() - -def mask_to_numpy(mask_record): - # Detect a png image mask. - bitstring = mask_record['bitstring'] - bitnumpy = None - default_shape = (256, 256) - if 'image/png;base64,' in bitstring: - bitnumpy = base642img(bitstring) - default_shape = bitnumpy.shape[:2] - # Set up results - shape = mask_record.get('shape', None) - if not shape: # None or empty [] - shape = default_shape - result = numpy.zeros(shape=shape, dtype=numpy.float32) - bitbounds = mask_record.get('bitbounds', None) - if not bitbounds: # None or empty [] - bitbounds = ([0] * len(result.shape)) + list(result.shape) - start = bitbounds[:len(result.shape)] - end = bitbounds[len(result.shape):] - if bitnumpy is not None: - if bitnumpy.shape[2] == 4: - # Mask is any nontransparent bits in the alpha channel if present - result[start[0]:end[0], start[1]:end[1]] = (bitnumpy[:,:,3] > 0) - else: - # Or any nonwhite pixels in the red channel if no alpha. - result[start[0]:end[0], start[1]:end[1]] = (bitnumpy[:,:,0] < 255) - return result - else: - # Or bitstring can be just ones and zeros. - indexes = start.copy() - bitindex = 0 - while True: - result[tuple(indexes)] = (bitstring[bitindex] != '0') - for ii in range(len(indexes) - 1, -1, -1): - if indexes[ii] < end[ii] - 1: - break - indexes[ii] = start[ii] - else: - assert (bitindex + 1) == len(bitstring) - return result - indexes[ii] += 1 - bitindex += 1 - -def decode_intervention_array(interventions, layer_shapes): - result = {} - for channels in [decode_intervention(intervention, layer_shapes) - for intervention in (interventions or [])]: - for layer, channel in channels.items(): - if layer not in result: - result[layer] = channel - continue - accum = result[layer] - newalpha = 1 - (1 - channel[:1]) * (1 - accum[:1]) - newvalue = (accum[1:] * accum[:1] * (1 - channel[:1]) + - channel[1:] * channel[:1]) / (newalpha + 1e-40) - accum[:1] = newalpha - accum[1:] = newvalue - return result - -def decode_intervention(intervention, layer_shapes): - # Every plane of an intervention is a solid choice of activation - # over a set of channels, with a mask applied to alpha-blended channels - # (when the mask resolution is different from the feature map, it can - # be either a max-pooled or average-pooled to the proper resolution). - # This can be reduced to a single alpha-blended featuremap. - if intervention is None: - return None - mask = intervention.get('mask', None) - if mask: - mask = torch.from_numpy(mask_to_numpy(mask)) - maskpooling = intervention.get('maskpooling', 'max') - channels = {} # layer -> ([alpha, val], c) - for arec in intervention.get('ablations', []): - unit = arec['unit'] - layer = arec['layer'] - alpha = arec.get('alpha', 1.0) - if alpha is None: - alpha = 1.0 - value = arec.get('value', 0.0) - if value is None: - value = 0.0 - if alpha != 0.0 or value != 0.0: - if layer not in channels: - channels[layer] = torch.zeros(2, *layer_shapes[layer][1:]) - channels[layer][0, unit] = alpha - channels[layer][1, unit] = value - if mask is not None: - for layer in channels: - layer_shape = layer_shapes[layer][2:] - if maskpooling == 'mean': - layer_mask = torch.nn.functional.adaptive_avg_pool2d( - mask[None,None,...], layer_shape)[0] - else: - layer_mask = torch.nn.functional.adaptive_max_pool2d( - mask[None,None,...], layer_shape)[0] - channels[layer][0] *= layer_mask - return channels - -def img2base64(imgarray, for_html=True, image_format='jpeg'): - ''' - Converts a numpy array to a jpeg base64 url - ''' - input_image_buff = BytesIO() - Image.fromarray(imgarray).save(input_image_buff, image_format, - quality=99, optimize=True, progressive=True) - res = base64.b64encode(input_image_buff.getvalue()).decode('ascii') - if for_html: - return 'data:image/' + image_format + ';base64,' + res - else: - return res - -def base642img(stringdata): - stringdata = re.sub('^(?:data:)?image/\w+;base64,', '', stringdata) - im = Image.open(BytesIO(base64.b64decode(stringdata))) - return numpy.array(im) - -def twitter_card(image_path, tweet_path, public_host): - return '''\ - - - - - - - - - - - - -
        -

        Painting with GANs from MIT-IBM Watson AI Lab

        -

        This demo lets you modify a selection of meatningful GAN units for a generated image by simply painting.

        - -

        Redirecting to -GANPaint -

        -
        - -'''.format( - image_path=image_path, - tweet_path=tweet_path, - public_host=public_host) diff --git a/spaces/Duskfallcrew/textual-inversion-training/README.md b/spaces/Duskfallcrew/textual-inversion-training/README.md deleted file mode 100644 index 6de58c649cbd632ee422854b229e59706543985d..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/textual-inversion-training/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Textual Inversion Training -emoji: 📉 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Wryley1234/textual-inversion-training ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ECCV2022/bytetrack/tutorials/transtrack/mot_online/matching.py b/spaces/ECCV2022/bytetrack/tutorials/transtrack/mot_online/matching.py deleted file mode 100644 index d21c958237a64abf185f5298a62d2bcb9270e254..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/transtrack/mot_online/matching.py +++ /dev/null @@ -1,156 +0,0 @@ -import cv2 -import numpy as np -import scipy -import lap -from scipy.spatial.distance import cdist - -from cython_bbox import bbox_overlaps as bbox_ious -from mot_online import kalman_filter -import time - -def merge_matches(m1, m2, shape): - O,P,Q = shape - m1 = np.asarray(m1) - m2 = np.asarray(m2) - - M1 = scipy.sparse.coo_matrix((np.ones(len(m1)), (m1[:, 0], m1[:, 1])), shape=(O, P)) - M2 = scipy.sparse.coo_matrix((np.ones(len(m2)), (m2[:, 0], m2[:, 1])), shape=(P, Q)) - - mask = M1*M2 - match = mask.nonzero() - match = list(zip(match[0], match[1])) - unmatched_O = tuple(set(range(O)) - set([i for i, j in match])) - unmatched_Q = tuple(set(range(Q)) - set([j for i, j in match])) - - return match, unmatched_O, unmatched_Q - - -def _indices_to_matches(cost_matrix, indices, thresh): - matched_cost = cost_matrix[tuple(zip(*indices))] - matched_mask = (matched_cost <= thresh) - - matches = indices[matched_mask] - unmatched_a = tuple(set(range(cost_matrix.shape[0])) - set(matches[:, 0])) - unmatched_b = tuple(set(range(cost_matrix.shape[1])) - set(matches[:, 1])) - - return matches, unmatched_a, unmatched_b - - -def linear_assignment(cost_matrix, thresh): - if cost_matrix.size == 0: - return np.empty((0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(range(cost_matrix.shape[1])) - matches, unmatched_a, unmatched_b = [], [], [] - cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh) - for ix, mx in enumerate(x): - if mx >= 0: - matches.append([ix, mx]) - unmatched_a = np.where(x < 0)[0] - unmatched_b = np.where(y < 0)[0] - matches = np.asarray(matches) - return matches, unmatched_a, unmatched_b - - -def ious(atlbrs, btlbrs): - """ - Compute cost based on IoU - :type atlbrs: list[tlbr] | np.ndarray - :type atlbrs: list[tlbr] | np.ndarray - - :rtype ious np.ndarray - """ - ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float) - if ious.size == 0: - return ious - - ious = bbox_ious( - np.ascontiguousarray(atlbrs, dtype=np.float), - np.ascontiguousarray(btlbrs, dtype=np.float) - ) - - return ious - - -def iou_distance(atracks, btracks): - """ - Compute cost based on IoU - :type atracks: list[STrack] - :type btracks: list[STrack] - - :rtype cost_matrix np.ndarray - """ - - if (len(atracks)>0 and isinstance(atracks[0], np.ndarray)) or (len(btracks) > 0 and isinstance(btracks[0], np.ndarray)): - atlbrs = atracks - btlbrs = btracks - else: - atlbrs = [track.tlbr for track in atracks] - btlbrs = [track.tlbr for track in btracks] - _ious = ious(atlbrs, btlbrs) - cost_matrix = 1 - _ious - - return cost_matrix - -def embedding_distance(tracks, detections, metric='cosine'): - """ - :param tracks: list[STrack] - :param detections: list[BaseTrack] - :param metric: - :return: cost_matrix np.ndarray - """ - - cost_matrix = np.zeros((len(tracks), len(detections)), dtype=np.float) - if cost_matrix.size == 0: - return cost_matrix - det_features = np.asarray([track.curr_feat for track in detections], dtype=np.float) - track_features = np.asarray([track.smooth_feat for track in tracks], dtype=np.float) - cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric)) # Nomalized features - return cost_matrix - - -def gate_cost_matrix(kf, cost_matrix, tracks, detections, only_position=False): - if cost_matrix.size == 0: - return cost_matrix - gating_dim = 2 if only_position else 4 - gating_threshold = kalman_filter.chi2inv95[gating_dim] - measurements = np.asarray([det.to_xyah() for det in detections]) - for row, track in enumerate(tracks): - gating_distance = kf.gating_distance( - track.mean, track.covariance, measurements, only_position) - cost_matrix[row, gating_distance > gating_threshold] = np.inf - return cost_matrix - - -def fuse_motion(kf, cost_matrix, tracks, detections, only_position=False, lambda_=0.98): - if cost_matrix.size == 0: - return cost_matrix - gating_dim = 2 if only_position else 4 - gating_threshold = kalman_filter.chi2inv95[gating_dim] - measurements = np.asarray([det.to_xyah() for det in detections]) - for row, track in enumerate(tracks): - gating_distance = kf.gating_distance( - track.mean, track.covariance, measurements, only_position, metric='maha') - cost_matrix[row, gating_distance > gating_threshold] = np.inf - cost_matrix[row] = lambda_ * cost_matrix[row] + (1 - lambda_) * gating_distance - return cost_matrix - - -def fuse_iou(cost_matrix, tracks, detections): - if cost_matrix.size == 0: - return cost_matrix - reid_sim = 1 - cost_matrix - iou_dist = iou_distance(tracks, detections) - iou_sim = 1 - iou_dist - fuse_sim = reid_sim * (1 + iou_sim) / 2 - det_scores = np.array([det.score for det in detections]) - det_scores = np.expand_dims(det_scores, axis=0).repeat(cost_matrix.shape[0], axis=0) - #fuse_sim = fuse_sim * (1 + det_scores) / 2 - fuse_cost = 1 - fuse_sim - return fuse_cost - - -def fuse_iou_add(cost_matrix, tracks, detections, weight=0.5): - if cost_matrix.size == 0: - return cost_matrix - iou_dist = iou_distance(tracks, detections) - fuse_dist = weight * iou_dist + (1 - weight) * cost_matrix - return fuse_dist \ No newline at end of file diff --git a/spaces/ForBo7/FloodDetector/README.md b/spaces/ForBo7/FloodDetector/README.md deleted file mode 100644 index 8469fa1c59300bbb88309b2323484492530179fb..0000000000000000000000000000000000000000 --- a/spaces/ForBo7/FloodDetector/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: FloodDetector -emoji: 🌊 -colorFrom: blue -colorTo: gray -sdk: gradio -python_version: 3.10.7 -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: apache-2.0 -tags: [disaster relief, image classification] ---- - -The workflow works!!! - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FourthBrainGenAI/TalkToMyDoc-Hitch-Hikers-Guide/app.py b/spaces/FourthBrainGenAI/TalkToMyDoc-Hitch-Hikers-Guide/app.py deleted file mode 100644 index adba7447bdd5b769e828bb5b5a797f3edb848bd7..0000000000000000000000000000000000000000 --- a/spaces/FourthBrainGenAI/TalkToMyDoc-Hitch-Hikers-Guide/app.py +++ /dev/null @@ -1,36 +0,0 @@ -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.vectorstores import Chroma -from langchain.text_splitter import CharacterTextSplitter -from langchain.chains.question_answering import load_qa_chain -from langchain.llms import OpenAI -import os - -with open("guide1.txt") as f: - hitchhikersguide = f.read() - -text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0, separator = "\n") -texts = text_splitter.split_text(hitchhikersguide) - -embeddings = OpenAIEmbeddings() - -docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever() - -chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff") - -def make_inference(query): - docs = docsearch.get_relevant_documents(query) - return(chain.run(input_documents=docs, question=query)) - -if __name__ == "__main__": - # make a gradio interface - import gradio as gr - - gr.Interface( - make_inference, - [ - gr.inputs.Textbox(lines=2, label="Query"), - ], - gr.outputs.Textbox(label="Response"), - title="🗣️TalkToMyDoc📄", - description="🗣️TalkToMyDoc📄 is a tool that allows you to ask questions about a document. In this case - Hitch Hitchhiker's Guide to the Galaxy.", - ).launch() \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/demucs/augment.py b/spaces/FridaZuley/RVC_HFKawaii/demucs/augment.py deleted file mode 100644 index bb36d3298d89470f306316322e7587187819c94b..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/demucs/augment.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random -import torch as th -from torch import nn - - -class Shift(nn.Module): - """ - Randomly shift audio in time by up to `shift` samples. - """ - def __init__(self, shift=8192): - super().__init__() - self.shift = shift - - def forward(self, wav): - batch, sources, channels, time = wav.size() - length = time - self.shift - if self.shift > 0: - if not self.training: - wav = wav[..., :length] - else: - offsets = th.randint(self.shift, [batch, sources, 1, 1], device=wav.device) - offsets = offsets.expand(-1, -1, channels, -1) - indexes = th.arange(length, device=wav.device) - wav = wav.gather(3, indexes + offsets) - return wav - - -class FlipChannels(nn.Module): - """ - Flip left-right channels. - """ - def forward(self, wav): - batch, sources, channels, time = wav.size() - if self.training and wav.size(2) == 2: - left = th.randint(2, (batch, sources, 1, 1), device=wav.device) - left = left.expand(-1, -1, -1, time) - right = 1 - left - wav = th.cat([wav.gather(2, left), wav.gather(2, right)], dim=2) - return wav - - -class FlipSign(nn.Module): - """ - Random sign flip. - """ - def forward(self, wav): - batch, sources, channels, time = wav.size() - if self.training: - signs = th.randint(2, (batch, sources, 1, 1), device=wav.device, dtype=th.float32) - wav = wav * (2 * signs - 1) - return wav - - -class Remix(nn.Module): - """ - Shuffle sources to make new mixes. - """ - def __init__(self, group_size=4): - """ - Shuffle sources within one batch. - Each batch is divided into groups of size `group_size` and shuffling is done within - each group separatly. This allow to keep the same probability distribution no matter - the number of GPUs. Without this grouping, using more GPUs would lead to a higher - probability of keeping two sources from the same track together which can impact - performance. - """ - super().__init__() - self.group_size = group_size - - def forward(self, wav): - batch, streams, channels, time = wav.size() - device = wav.device - - if self.training: - group_size = self.group_size or batch - if batch % group_size != 0: - raise ValueError(f"Batch size {batch} must be divisible by group size {group_size}") - groups = batch // group_size - wav = wav.view(groups, group_size, streams, channels, time) - permutations = th.argsort(th.rand(groups, group_size, streams, 1, 1, device=device), - dim=1) - wav = wav.gather(1, permutations.expand(-1, -1, -1, channels, time)) - wav = wav.view(batch, streams, channels, time) - return wav - - -class Scale(nn.Module): - def __init__(self, proba=1., min=0.25, max=1.25): - super().__init__() - self.proba = proba - self.min = min - self.max = max - - def forward(self, wav): - batch, streams, channels, time = wav.size() - device = wav.device - if self.training and random.random() < self.proba: - scales = th.empty(batch, streams, 1, 1, device=device).uniform_(self.min, self.max) - wav *= scales - return wav diff --git a/spaces/Gauri54damle/McDFries-SDXL-Dreambooth-Lora-Model/app.py b/spaces/Gauri54damle/McDFries-SDXL-Dreambooth-Lora-Model/app.py deleted file mode 100644 index 78aa8be89181adcd32d939a930ca5292c60c2477..0000000000000000000000000000000000000000 --- a/spaces/Gauri54damle/McDFries-SDXL-Dreambooth-Lora-Model/app.py +++ /dev/null @@ -1,141 +0,0 @@ -from email import generator -from diffusers import DiffusionPipeline - -import gradio as gr -import torch -from PIL import Image, ImageDraw, ImageFont -## VAE - Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. -from diffusers import AutoencoderKL - - - - -model = "stabilityai/stable-diffusion-xl-base-1.0" -finetuningLayer = "Gauri54damle/sdxl-dreambooth-model-McDFries" - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -torch_dtype = torch.float16 if device.type == 'cuda' else torch.float32 - - - -import os -HF_API_TOKEN = os.getenv("HF_API_TOKEN") - -from huggingface_hub import login -login(token=HF_API_TOKEN) - - -vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch_dtype) -pipe = DiffusionPipeline.from_pretrained( - model, - vae=vae, - torch_dtype=torch_dtype, - use_safetensors=True -) -pipe.load_lora_weights(finetuningLayer) - -pipe = pipe.to(device) - - - - -def create_error_image(message): - # Create a blank image with white background - width, height = 512, 512 - image = Image.new('RGB', (width, height), 'white') - draw = ImageDraw.Draw(image) - - # Load a truetype or opentype font file - font = ImageFont.load_default() - - # Position and message - - draw.text((127,251), message, font=font, fill="black") - - return image - -def inference(model,finetuningLayer, prompt, guidance, steps, seed): - - - - if not prompt: - return create_error_image("Sorry, add your text prompt and try again!!") - else: - generator = torch.Generator(device).manual_seed(seed) - image = pipe( - prompt, - num_inference_steps=int(steps), - guidance_scale=guidance, - generator=generator).images[0] - - return image - - -css = """ - -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - """ -
        -
        -

        Finetuned Diffusion

        -
        -
        - """ - ) - with gr.Row(): - - with gr.Column(): - - model = gr.Dropdown(label="Base Model", choices=["stabilityai/stable-diffusion-xl-base-1.0"], default="stabilityai/stable-diffusion-xl-base-1.0") - finetuningLayer= gr.Dropdown(label="Finetuning Layer", choices=["Gauri54damle/sdxl-dreambooth-model-McDFries"], default="Gauri54damle/sdxl-dreambooth-model-McDFries") - - - - - prompt = gr.Textbox(label="Prompt", placeholder="photo of McDFries - it is unique identifier need to be used to identify fries") - - with gr.Accordion("Advanced options", open=True): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=50, maximum=100, minimum=2) - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - run = gr.Button(value="Run") - gr.Markdown(f"Running on: {device}") - with gr.Column(): - image_out = gr.Image() - - ## Add prompt and press enter to run - ##prompt.submit(inference, inputs=[model, finetuningLayer,prompt, guidance, steps, seed], outputs=image_out) - - ## Click run button to run - run.click(inference, inputs=[model, finetuningLayer, prompt, guidance, steps, seed], outputs=image_out) - - - -demo.queue() -demo.launch() \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/core/__init__.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/core/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/layer_stack.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/layer_stack.py deleted file mode 100644 index cbbb0dcb26445ec8ce57149f31aba9fc4de2863c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/layer_stack.py +++ /dev/null @@ -1,274 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Function to stack repeats of a layer function without shared parameters.""" - -import collections -import contextlib -import functools -import inspect -from typing import Any, Callable, Optional, Tuple, Union - -import haiku as hk -import jax -import jax.numpy as jnp - -LayerStackCarry = collections.namedtuple('LayerStackCarry', ['x', 'rng']) -LayerStackScanned = collections.namedtuple('LayerStackScanned', - ['i', 'args_ys']) - -# WrappedFn should take in arbitrarily nested `jnp.ndarray`, and return the -# exact same type. We cannot express this with `typing`. So we just use it -# to inform the user. In reality, the typing below will accept anything. -NestedArray = Any -WrappedFn = Callable[..., Union[NestedArray, Tuple[NestedArray]]] - - -def _check_no_varargs(f): - if list(inspect.signature( - f).parameters.values())[0].kind == inspect.Parameter.VAR_POSITIONAL: - raise ValueError( - 'The function `f` should not have any `varargs` (that is *args) ' - 'argument. Instead, it should only use explicit positional' - 'arguments.') - - -@contextlib.contextmanager -def nullcontext(): - yield - - -def maybe_with_rng(key): - if key is not None: - return hk.with_rng(key) - else: - return nullcontext() - - -def maybe_fold_in(key, data): - if key is not None: - return jax.random.fold_in(key, data) - else: - return None - - -class _LayerStack(hk.Module): - """Module to compose parameterized functions, implemented as a scan.""" - - def __init__(self, - count: int, - unroll: int, - name: Optional[str] = None): - """Iterate a function `f` `count` times, with non-shared parameters.""" - super().__init__(name=name) - self._count = count - self._unroll = unroll - - def __call__(self, x, *args_ys): - count = self._count - if hk.running_init(): - # At initialization time, we run just one layer but add an extra first - # dimension to every initialized tensor, making sure to use different - # random keys for different slices. - def creator(next_creator, shape, dtype, init, context): - del context - - def multi_init(shape, dtype): - assert shape[0] == count - key = hk.maybe_next_rng_key() - - def rng_context_init(slice_idx): - slice_key = maybe_fold_in(key, slice_idx) - with maybe_with_rng(slice_key): - return init(shape[1:], dtype) - - return jax.vmap(rng_context_init)(jnp.arange(count)) - - return next_creator((count,) + tuple(shape), dtype, multi_init) - - def getter(next_getter, value, context): - trailing_dims = len(context.original_shape) + 1 - sliced_value = jax.lax.index_in_dim( - value, index=0, axis=value.ndim - trailing_dims, keepdims=False) - return next_getter(sliced_value) - - with hk.experimental.custom_creator( - creator), hk.experimental.custom_getter(getter): - if len(args_ys) == 1 and args_ys[0] is None: - args0 = (None,) - else: - args0 = [ - jax.lax.dynamic_index_in_dim(ys, 0, keepdims=False) - for ys in args_ys - ] - x, z = self._call_wrapped(x, *args0) - if z is None: - return x, z - - # Broadcast state to hold each layer state. - def broadcast_state(layer_state): - return jnp.broadcast_to( - layer_state, [count,] + list(layer_state.shape)) - zs = jax.tree_util.tree_map(broadcast_state, z) - return x, zs - else: - # Use scan during apply, threading through random seed so that it's - # unique for each layer. - def layer(carry: LayerStackCarry, scanned: LayerStackScanned): - rng = carry.rng - - def getter(next_getter, value, context): - # Getter slices the full param at the current loop index. - trailing_dims = len(context.original_shape) + 1 - assert value.shape[value.ndim - trailing_dims] == count, ( - f'Attempting to use a parameter stack of size ' - f'{value.shape[value.ndim - trailing_dims]} for a LayerStack of ' - f'size {count}.') - - sliced_value = jax.lax.dynamic_index_in_dim( - value, scanned.i, axis=value.ndim - trailing_dims, keepdims=False) - return next_getter(sliced_value) - - with hk.experimental.custom_getter(getter): - if rng is None: - out_x, z = self._call_wrapped(carry.x, *scanned.args_ys) - else: - rng, rng_ = jax.random.split(rng) - with hk.with_rng(rng_): - out_x, z = self._call_wrapped(carry.x, *scanned.args_ys) - return LayerStackCarry(x=out_x, rng=rng), z - - carry = LayerStackCarry(x=x, rng=hk.maybe_next_rng_key()) - scanned = LayerStackScanned(i=jnp.arange(count, dtype=jnp.int32), - args_ys=args_ys) - - carry, zs = hk.scan( - layer, carry, scanned, length=count, unroll=self._unroll) - return carry.x, zs - - def _call_wrapped(self, - x: jnp.ndarray, - *args, - ) -> Tuple[jnp.ndarray, Optional[jnp.ndarray]]: - raise NotImplementedError() - - -class _LayerStackNoState(_LayerStack): - """_LayerStack impl with no per-layer state provided to the function.""" - - def __init__(self, - f: WrappedFn, - count: int, - unroll: int, - name: Optional[str] = None): - super().__init__(count=count, unroll=unroll, name=name) - _check_no_varargs(f) - self._f = f - - @hk.transparent - def _call_wrapped(self, args, y): - del y - ret = self._f(*args) - if len(args) == 1: - # If the function takes a single argument, the wrapped function receives - # a tuple of length 1, and therefore it must return a tuple of length 1. - ret = (ret,) - return ret, None - - -class _LayerStackWithState(_LayerStack): - """_LayerStack impl with per-layer state provided to the function.""" - - def __init__(self, - f: WrappedFn, - count: int, - unroll: int, - name: Optional[str] = None): - super().__init__(count=count, unroll=unroll, name=name) - self._f = f - - @hk.transparent - def _call_wrapped(self, x, *args): - return self._f(x, *args) - - -def layer_stack(num_layers: int, - with_state=False, - unroll: int = 1, - name: Optional[str] = None): - """Utility to wrap a Haiku function and recursively apply it to an input. - - A function is valid if it uses only explicit position parameters, and - its return type matches its input type. The position parameters can be - arbitrarily nested structures with `jnp.ndarray` at the leaf nodes. Note - that kwargs are not supported, neither are functions with variable number - of parameters (specified by `*args`). - - If `with_state=False` then the new, wrapped function can be understood as - performing the following: - ``` - for i in range(num_layers): - x = f(x) - return x - ``` - - And if `with_state=True`, assuming `f` takes two arguments on top of `x`: - ``` - for i in range(num_layers): - x, zs[i] = f(x, ys_0[i], ys_1[i]) - return x, zs - ``` - The code using `layer_stack` for the above function would be: - ``` - def f(x, y_0, y_1): - ... - return new_x, z - x, zs = layer_stack.layer_stack(num_layers, - with_state=True)(f)(x, ys_0, ys_1) - ``` - - Crucially, any parameters created inside `f` will not be shared across - iterations. - - Args: - num_layers: The number of times to iterate the wrapped function. - with_state: Whether or not to pass per-layer state to the wrapped function. - unroll: the unroll used by `scan`. - name: Name of the Haiku context. - - Returns: - Callable that will produce a layer stack when called with a valid function. - """ - def iterate(f): - if with_state: - @functools.wraps(f) - def wrapped(x, *args): - for ys in args: - assert ys.shape[0] == num_layers - return _LayerStackWithState( - f, num_layers, unroll=unroll, name=name)(x, *args) - else: - _check_no_varargs(f) - @functools.wraps(f) - def wrapped(*args): - ret = _LayerStackNoState( - f, num_layers, unroll=unroll, name=name)(args, None)[0] - if len(args) == 1: - # If the function takes a single argument, we must also return a - # single value, and not a tuple of length 1. - ret = ret[0] - return ret - - return wrapped - return iterate diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/default_runtime.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/default_runtime.py deleted file mode 100644 index 55097c5b242da66c9735c0b45cd84beefab487b1..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/default_runtime.py +++ /dev/null @@ -1,16 +0,0 @@ -checkpoint_config = dict(interval=1) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -custom_hooks = [dict(type='NumClassCheckHook')] - -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_uniformer_fpn.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_uniformer_fpn.py deleted file mode 100644 index 18678e98f24dfa9f6c2c4a753308d6eecd308124..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_uniformer_fpn.py +++ /dev/null @@ -1,201 +0,0 @@ -# model settings -model = dict( - type='CascadeRCNN', - pretrained=None, - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - mlp_ratio=4., - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2), - neck=dict( - type='FPN', - in_channels=[64, 128, 320, 512], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='CascadeRoIHead', - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg = dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False) - ]), - test_cfg = dict( - rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_1x_coco.py deleted file mode 100644 index b249bfa0df6037f1433ef6d41f7da16b10645aa2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_1x_coco.py +++ /dev/null @@ -1,14 +0,0 @@ -_base_ = './cascade_rcnn_r50_fpn_1x_coco.py' -model = dict( - type='CascadeRCNN', - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/retinanet_r50_fpn_fp16_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/retinanet_r50_fpn_fp16_1x_coco.py deleted file mode 100644 index 519c4dbacb1a876dcd973f2a82ddeef98787619d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/retinanet_r50_fpn_fp16_1x_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -# fp16 settings -fp16 = dict(loss_scale=512.) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py deleted file mode 100644 index 28f983c29edd071b32a50f18ac7b3f5c1bfdda88..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -model = dict( - bbox_head=dict( - _delete_=True, - type='FreeAnchorRetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=0.75))) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_x101_32x4d_fpn_16x1_20e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_x101_32x4d_fpn_16x1_20e_coco.py deleted file mode 100644 index b9e5524a6d8352201ae24b57560437b93de2ae80..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_x101_32x4d_fpn_16x1_20e_coco.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = './htc_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch')) -data = dict(samples_per_gpu=1, workers_per_gpu=1) -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(type='EpochBasedRunner', max_epochs=20) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/point_rend/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/point_rend/README.md deleted file mode 100644 index af5ded187a732c3cece9c5e5b6534fb3e48ac4d1..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/point_rend/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# PointRend - -## Introduction - -[ALGORITHM] - -```latex -@InProceedings{kirillov2019pointrend, - title={{PointRend}: Image Segmentation as Rendering}, - author={Alexander Kirillov and Yuxin Wu and Kaiming He and Ross Girshick}, - journal={ArXiv:1912.08193}, - year={2019} -} -``` - -## Results and models - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: | -| R-50-FPN | caffe | 1x | 4.6 | | 38.4 | 36.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco/point_rend_r50_caffe_fpn_mstrain_1x_coco-1bcb5fb4.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco/point_rend_r50_caffe_fpn_mstrain_1x_coco_20200612_161407.log.json) | -| R-50-FPN | caffe | 3x | 4.6 | | 41.0 | 38.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco/point_rend_r50_caffe_fpn_mstrain_3x_coco-e0ebb6b7.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco/point_rend_r50_caffe_fpn_mstrain_3x_coco_20200614_002632.log.json) | - -Note: All models are trained with multi-scale, the input image shorter side is randomly scaled to one of (640, 672, 704, 736, 768, 800). diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/native_scaler.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/native_scaler.py deleted file mode 100644 index 2a6fb51ee21c2e871967cfbe80f7fb080c07dfed..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/native_scaler.py +++ /dev/null @@ -1,82 +0,0 @@ -# -------------------------------------------------------- -# Based on timm and MAE-priv code bases -# https://github.com/rwightman/pytorch-image-models/tree/master/timm -# https://github.com/BUPT-PRIV/MAE-priv -# -------------------------------------------------------- - -import math - -import numpy as np -import torch -from torch._six import inf - - -class NativeScalerWithGradNormCount: - state_dict_key = "amp_scaler" - - def __init__(self, enabled=True): - self._scaler = torch.cuda.amp.GradScaler(enabled=enabled) - - def __call__(self, loss, optimizer, clip_grad=None, skip_grad=None, parameters=None, create_graph=False, update_grad=True): - self._scaler.scale(loss).backward(create_graph=create_graph) - if update_grad: - if clip_grad is not None: - assert parameters is not None - self._scaler.unscale_(optimizer) # unscale the gradients of optimizer's assigned params in-place - norm = torch.nn.utils.clip_grad_norm_(parameters, clip_grad) - elif skip_grad is not None: - self._scaler.unscale_(optimizer) - norm = get_grad_norm_(parameters) - if norm >= skip_grad: - self._scaler.update() - return norm - else: - self._scaler.unscale_(optimizer) - norm = get_grad_norm_(parameters) - self._scaler.step(optimizer) - self._scaler.update() - else: - norm = None - return norm - - def state_dict(self): - return self._scaler.state_dict() - - def load_state_dict(self, state_dict): - self._scaler.load_state_dict(state_dict) - - -def get_grad_norm_(parameters, norm_type: float = 2.0) -> torch.Tensor: - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = [p for p in parameters if p.grad is not None] - norm_type = float(norm_type) - if len(parameters) == 0: - return torch.tensor(0.) - device = parameters[0].grad.device - if norm_type == inf: - total_norm = max(p.grad.detach().abs().max().to(device) for p in parameters) - else: - total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), - norm_type) - return total_norm - - -def cosine_scheduler(base_value, final_value, epochs, niter_per_ep, warmup_epochs=0, - start_warmup_value=0, warmup_steps=-1): - warmup_schedule = np.array([]) - warmup_iters = warmup_epochs * niter_per_ep - if warmup_steps > 0: - warmup_iters = warmup_steps - print("Set warmup steps = %d" % warmup_iters) - if warmup_epochs > 0: - warmup_schedule = np.linspace(start_warmup_value, base_value, warmup_iters) - - iters = np.arange(epochs * niter_per_ep - warmup_iters) - schedule = np.array( - [final_value + 0.5 * (base_value - final_value) * (1 + math.cos(math.pi * i / (len(iters)))) for i in iters]) - - schedule = np.concatenate((warmup_schedule, schedule)) - - assert len(schedule) == epochs * niter_per_ep - return schedule diff --git a/spaces/HarryLee/eCommerceImageCaptioning/criterions/scst_loss.py b/spaces/HarryLee/eCommerceImageCaptioning/criterions/scst_loss.py deleted file mode 100644 index 22f62f6e50b1047d7ceb8ce90faa0a93fb7b4a07..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/criterions/scst_loss.py +++ /dev/null @@ -1,280 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -import string -from dataclasses import dataclass, field -from collections import OrderedDict -from typing import Optional - -import torch -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - -from data import data_utils -from utils.cider.pyciderevalcap.ciderD.ciderD import CiderD - - -def scst_loss(lprobs, target, reward, ignore_index=None, reduce=True): - loss = -lprobs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze() * reward.unsqueeze(-1) - if ignore_index is not None: - pad_mask = target.eq(ignore_index) - loss.masked_fill_(pad_mask, 0.0) - ntokens = (~pad_mask).sum() - else: - loss = loss.squeeze(-1) - ntokens = target.numel() - if reduce: - loss = loss.sum() - return loss, ntokens - -@dataclass -class ScstRewardCriterionConfig(FairseqDataclass): - scst_cider_cached_tokens: str = field( - default="coco-train-words.p", - metadata={"help": "path to cached cPickle file used to calculate CIDEr scores"}, - ) - ignore_prefix_size: int = field( - default=0, - metadata={"help": "Ignore first N tokens"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - constraint_range: Optional[str] = field( - default=None, - metadata={"help": "constraint range"} - ) - - -@register_criterion( - "scst_reward_criterion", dataclass=ScstRewardCriterionConfig -) -class ScstRewardCriterion(FairseqCriterion): - CIDER_REWARD_WEIGHT = 1 - - def __init__( - self, - task, - scst_cider_cached_tokens, - sentence_avg, - ignore_prefix_size=0, - constraint_range=None - ): - super().__init__(task) - self.scst_cider_scorer = CiderD(df=scst_cider_cached_tokens) - self.sentence_avg = sentence_avg - self.ignore_prefix_size = ignore_prefix_size - self.transtab = str.maketrans({key: None for key in string.punctuation}) - - self.constraint_start = None - self.constraint_end = None - if constraint_range is not None: - constraint_start, constraint_end = constraint_range.split(',') - self.constraint_start = int(constraint_start) - self.constraint_end = int(constraint_end) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - loss, score, ntokens, nsentences = self.compute_loss(model, sample, reduce=reduce) - - sample_size = ( - nsentences if self.sentence_avg else ntokens - ) - logging_output = { - "loss": loss.data, - "score": score, - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - def _calculate_eval_scores(self, gen_res, gt_idx, gt_res): - ''' - gen_res: generated captions, list of str - gt_idx: list of int, of the same length as gen_res - gt_res: ground truth captions, list of list of str. - gen_res[i] corresponds to gt_res[gt_idx[i]] - Each image can have multiple ground truth captions - ''' - gen_res_size = len(gen_res) - - res = OrderedDict() - for i in range(gen_res_size): - res[i] = [self._wrap_sentence(gen_res[i].strip().translate(self.transtab))] - - gts = OrderedDict() - gt_res_ = [ - [self._wrap_sentence(gt_res[i][j].strip().translate(self.transtab)) for j in range(len(gt_res[i]))] - for i in range(len(gt_res)) - ] - for i in range(gen_res_size): - gts[i] = gt_res_[gt_idx[i]] - - res_ = [{'image_id':i, 'caption': res[i]} for i in range(len(res))] - _, batch_cider_scores = self.scst_cider_scorer.compute_score(gts, res_) - scores = self.CIDER_REWARD_WEIGHT * batch_cider_scores - return scores - - @classmethod - def _wrap_sentence(self, s): - # ensure the sentence ends with token - # in order to keep consisitent with cider_cached_tokens - r = s.strip() - if r.endswith('.'): - r = r[:-1] - r += ' ' - return r - - def get_generator_out(self, model, sample): - def decode(toks): - hypo = toks.int().cpu() - hypo_str = self.task.tgt_dict.string(hypo) - hypo_str = self.task.bpe.decode(hypo_str).strip() - return hypo, hypo_str - - model.eval() - with torch.no_grad(): - self.task.scst_generator.model.eval() - gen_out = self.task.scst_generator.generate([model], sample) - - gen_target = [] - gen_res = [] - gt_res = [] - for i in range(len(gen_out)): - for j in range(len(gen_out[i])): - hypo, hypo_str = decode(gen_out[i][j]["tokens"]) - gen_target.append(hypo) - gen_res.append(hypo_str) - gt_res.append( - decode(utils.strip_pad(sample["target"][i], self.padding_idx))[1].split('&&') - ) - - return gen_target, gen_res, gt_res - - def get_reward_and_scores(self, gen_res, gt_res, device): - batch_size = len(gt_res) - gen_res_size = len(gen_res) - seq_per_img = gen_res_size // batch_size - - gt_idx = [i // seq_per_img for i in range(gen_res_size)] - scores = self._calculate_eval_scores(gen_res, gt_idx, gt_res) - sc_ = scores.reshape(batch_size, seq_per_img) - baseline = (sc_.sum(1, keepdims=True) - sc_) / (sc_.shape[1] - 1) - # sample - baseline - reward = scores.reshape(batch_size, seq_per_img) - reward = reward - baseline - reward = reward.reshape(gen_res_size) - reward = torch.as_tensor(reward, device=device, dtype=torch.float64) - - return reward, scores - - def get_net_output(self, model, sample, gen_target): - def merge(sample_list, eos=self.task.tgt_dict.eos(), move_eos_to_beginning=False): - return data_utils.collate_tokens( - sample_list, - pad_idx=self.padding_idx, - eos_idx=eos, - left_pad=False, - move_eos_to_beginning=move_eos_to_beginning, - ) - - batch_size = len(sample["target"]) - gen_target_size = len(gen_target) - seq_per_img = gen_target_size // batch_size - - model.train() - sample_src_tokens = torch.repeat_interleave( - sample['net_input']['src_tokens'], seq_per_img, dim=0 - ) - sample_src_lengths = torch.repeat_interleave( - sample['net_input']['src_lengths'], seq_per_img, dim=0 - ) - sample_patch_images = torch.repeat_interleave( - sample['net_input']['patch_images'], seq_per_img, dim=0 - ) - sample_patch_masks = torch.repeat_interleave( - sample['net_input']['patch_masks'], seq_per_img, dim=0 - ) - gen_prev_output_tokens = torch.as_tensor( - merge(gen_target, eos=self.task.tgt_dict.bos(), move_eos_to_beginning=True), - device=sample["target"].device, dtype=torch.int64 - ) - gen_target_tokens = torch.as_tensor( - merge(gen_target), device=sample["target"].device, dtype=torch.int64 - ) - net_output = model( - src_tokens=sample_src_tokens, src_lengths=sample_src_lengths, - patch_images=sample_patch_images, patch_masks=sample_patch_masks, - prev_output_tokens=gen_prev_output_tokens - ) - - return net_output, gen_target_tokens - - def get_lprobs_and_target(self, model, net_output, gen_target): - if self.constraint_start is not None and self.constraint_end is not None: - net_output[0][:, :, 4:self.constraint_start] = -math.inf - net_output[0][:, :, self.constraint_end:] = -math.inf - lprobs = model.get_normalized_probs(net_output, log_probs=True) - if self.ignore_prefix_size > 0: - if getattr(lprobs, "batch_first", False): - lprobs = lprobs[:, self.ignore_prefix_size :, :].contiguous() - gen_target = gen_target[:, self.ignore_prefix_size :].contiguous() - else: - lprobs = lprobs[self.ignore_prefix_size :, :, :].contiguous() - gen_target = gen_target[self.ignore_prefix_size :, :].contiguous() - return lprobs, gen_target - - def compute_loss(self, model, sample, reduce=True): - gen_target, gen_res, gt_res = self.get_generator_out(model, sample) - reward, scores = self.get_reward_and_scores(gen_res, gt_res, device=sample["target"].device) - net_output, gen_target_tokens = self.get_net_output(model, sample, gen_target) - gen_lprobs, gen_target_tokens = self.get_lprobs_and_target(model, net_output, gen_target_tokens) - loss, ntokens = scst_loss(gen_lprobs, gen_target_tokens, reward, ignore_index=self.padding_idx, reduce=reduce) - nsentences = gen_target_tokens.size(0) - - return loss, scores.sum(), ntokens, nsentences - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - score_sum = sum(log.get("score", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size, sample_size, round=3 - ) - metrics.log_scalar( - "score", score_sum / nsentences, nsentences, round=3 - ) - - metrics.log_scalar( - "ntokens", ntokens, 1, round=3 - ) - metrics.log_scalar( - "nsentences", nsentences, 1, round=3 - ) - metrics.log_scalar( - "sample_size", sample_size, 1, round=3 - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py deleted file mode 100644 index 7c257c2700f015cb123a976584aef72f0429eb0c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .discriminative_reranking_criterion import KLDivergenceRerankingCriterion - - -__all__ = [ - "KLDivergenceRerankingCriterion", -] diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py deleted file mode 100644 index 9304f99eb8169a614f39babc830c84cac80e080b..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def gen_forward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - blocks = [32, 64, 128, 256] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "dynamicconv_cuda.cuh" - -std::vector dynamicconv_cuda_forward(at::Tensor input, at::Tensor weight, int padding_l) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - const dim3 blocks(minibatch, numFeatures); - - auto output = at::zeros_like(input); - auto stream = at::cuda::getCurrentCUDAStream(); -""" - - switch = """ - switch(filterSize) { -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {pad}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "dynamicconv_forward", ([&] {{ - dynamicconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t> - <<>>( - input.data(), - weight.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - output.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl; - } - break;\n -""" - - end = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl; - } - - return {output}; -} -""" - - with open("dynamicconv_cuda_forward.cu", "w") as forward: - forward.write(head) - forward.write(switch) - for k in kernels: - b_size = 32 - for b in blocks: - if b > k: - b_size = b - break - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=b_size, pad=pad)) - forward.write(bad_padding) - forward.write(end) - - -def gen_backward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - thresh = [512, 512, 512, 512, 512, 380, 256, 256] - min_block = [64, 64, 64, 64, 64, 64, 128, 256] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "dynamicconv_cuda.cuh" - -std::vector dynamicconv_cuda_backward(at::Tensor gradOutput, int padding_l, at::Tensor input, at::Tensor weight) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - auto numChunks = 1; - - auto gradInput = at::zeros_like(input); - auto gradWeight = at::zeros_like(weight); - auto stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(minibatch, numHeads, numChunks); -""" - - sequence_if = """ - if (sequenceLength < {seq}) {{ - switch(filterSize) {{ -""" - - case_k = """ - case {k}: -""" - - chunks_reset = """ - numChunks = int(ceilf(sequenceLength/float({b_size}))); - blocks = dim3(minibatch, numHeads, numChunks); -""" - - main_block = """ - if (padding_l == {p}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(gradOutput.scalar_type(), "dynamicconv_backward", ([&] {{ - dynamicconv_backward_kernel<{k}, {b_size}, {p}, scalar_t> - <<>>( - gradOutput.data(), - input.data(), - weight.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - gradWeight.data(), - gradInput.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl; - } - break;\n -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl; - } -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - last_return = """ - } - return {gradInput, gradWeight}; -} -""" - - with open("dynamicconv_cuda_backward.cu", "w") as backward: - backward.write(head) - for seq in seqs: - backward.write(sequence_if.format(seq=seq)) - for k, t, m in zip(kernels, thresh, min_block): - backward.write(case_k.format(k=k)) - if seq <= t: - b_size = seq - else: - b_size = m - backward.write(chunks_reset.format(b_size=b_size)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=b_size, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(con_else) - backward.write(final_else) - for k, m in zip(kernels, min_block): - backward.write(case_k.format(k=k)) - backward.write(chunks_reset.format(b_size=m)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=m, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(last_return) - - -if __name__ == "__main__": - gen_forward() - gen_backward() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/rm_pt.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/rm_pt.py deleted file mode 100644 index 6cd063d21f0610fa7c42c2cfb2ee8af7c9c78677..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/rm_pt.py +++ /dev/null @@ -1,141 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import re -import shutil -import sys - - -pt_regexp = re.compile(r"checkpoint(\d+|_\d+_\d+|_[a-z]+)\.pt") -pt_regexp_epoch_based = re.compile(r"checkpoint(\d+)\.pt") -pt_regexp_update_based = re.compile(r"checkpoint_\d+_(\d+)\.pt") - - -def parse_checkpoints(files): - entries = [] - for f in files: - m = pt_regexp_epoch_based.fullmatch(f) - if m is not None: - entries.append((int(m.group(1)), m.group(0))) - else: - m = pt_regexp_update_based.fullmatch(f) - if m is not None: - entries.append((int(m.group(1)), m.group(0))) - return entries - - -def last_n_checkpoints(files, n): - entries = parse_checkpoints(files) - return [x[1] for x in sorted(entries, reverse=True)[:n]] - - -def every_n_checkpoints(files, n): - entries = parse_checkpoints(files) - return [x[1] for x in sorted(sorted(entries)[::-n])] - - -def main(): - parser = argparse.ArgumentParser( - description=( - "Recursively delete checkpoint files from `root_dir`, " - "but preserve checkpoint_best.pt and checkpoint_last.pt" - ) - ) - parser.add_argument("root_dirs", nargs="*") - parser.add_argument( - "--save-last", type=int, default=0, help="number of last checkpoints to save" - ) - parser.add_argument( - "--save-every", type=int, default=0, help="interval of checkpoints to save" - ) - parser.add_argument( - "--preserve-test", - action="store_true", - help="preserve checkpoints in dirs that start with test_ prefix (default: delete them)", - ) - parser.add_argument( - "--delete-best", action="store_true", help="delete checkpoint_best.pt" - ) - parser.add_argument( - "--delete-last", action="store_true", help="delete checkpoint_last.pt" - ) - parser.add_argument( - "--no-dereference", action="store_true", help="don't dereference symlinks" - ) - args = parser.parse_args() - - files_to_desymlink = [] - files_to_preserve = [] - files_to_delete = [] - for root_dir in args.root_dirs: - for root, _subdirs, files in os.walk(root_dir): - if args.save_last > 0: - to_save = last_n_checkpoints(files, args.save_last) - else: - to_save = [] - if args.save_every > 0: - to_save += every_n_checkpoints(files, args.save_every) - for file in files: - if not pt_regexp.fullmatch(file): - continue - full_path = os.path.join(root, file) - if ( - not os.path.basename(root).startswith("test_") or args.preserve_test - ) and ( - (file == "checkpoint_last.pt" and not args.delete_last) - or (file == "checkpoint_best.pt" and not args.delete_best) - or file in to_save - ): - if os.path.islink(full_path) and not args.no_dereference: - files_to_desymlink.append(full_path) - else: - files_to_preserve.append(full_path) - else: - files_to_delete.append(full_path) - - if len(files_to_desymlink) == 0 and len(files_to_delete) == 0: - print("Nothing to do.") - sys.exit(0) - - files_to_desymlink = sorted(files_to_desymlink) - files_to_preserve = sorted(files_to_preserve) - files_to_delete = sorted(files_to_delete) - - print("Operations to perform (in order):") - if len(files_to_desymlink) > 0: - for file in files_to_desymlink: - print(" - preserve (and dereference symlink): " + file) - if len(files_to_preserve) > 0: - for file in files_to_preserve: - print(" - preserve: " + file) - if len(files_to_delete) > 0: - for file in files_to_delete: - print(" - delete: " + file) - while True: - resp = input("Continue? (Y/N): ") - if resp.strip().lower() == "y": - break - elif resp.strip().lower() == "n": - sys.exit(0) - - print("Executing...") - if len(files_to_desymlink) > 0: - for file in files_to_desymlink: - realpath = os.path.realpath(file) - print("rm " + file) - os.remove(file) - print("cp {} {}".format(realpath, file)) - shutil.copyfile(realpath, file) - if len(files_to_delete) > 0: - for file in files_to_delete: - print("rm " + file) - os.remove(file) - - -if __name__ == "__main__": - main() diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/text/__init__.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/text/__init__.py deleted file mode 100644 index 3f5aa62bfcd56165b85d064f5ca0ba59fbe34a72..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/text/__init__.py +++ /dev/null @@ -1,84 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -import re -from text import cleaners - -# Regular expression matching text enclosed in curly braces: -_curly_re = re.compile(r'(.*?)\{(.+?)\}(.*)') - - -def get_arpabet(word, dictionary): - word_arpabet = dictionary.lookup(word) - if word_arpabet is not None: - return "{" + word_arpabet[0] + "}" - else: - return word - - -def text_to_sequence(text, symbols, cleaner_names, dictionary=None): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - - The text can optionally have ARPAbet sequences enclosed in curly braces embedded - in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street." - - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - dictionary: arpabet class with arpabet dictionary - - Returns: - List of integers corresponding to the symbols in the text - ''' - # Mappings from symbol to numeric ID and vice versa: - global _id_to_symbol, _symbol_to_id - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - _id_to_symbol = {i: s for i, s in enumerate(symbols)} - - sequence = [] - - space = _symbols_to_sequence(' ') - # Check for curly braces and treat their contents as ARPAbet: - while len(text): - m = _curly_re.match(text) - if not m: - clean_text = _clean_text(text, cleaner_names) - if dictionary is not None: - clean_text = [get_arpabet(w, dictionary) for w in clean_text.split(" ")] - for i in range(len(clean_text)): - t = clean_text[i] - if t.startswith("{"): - sequence += _arpabet_to_sequence(t[1:-1]) - else: - sequence += _symbols_to_sequence(t) - sequence += space - else: - sequence += _symbols_to_sequence(clean_text) - break - sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names)) - sequence += _arpabet_to_sequence(m.group(2)) - text = m.group(3) - - # remove trailing space - if dictionary is not None: - sequence = sequence[:-1] if sequence[-1] == space[0] else sequence - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text - - -def _symbols_to_sequence(symbols): - return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)] - - -def _arpabet_to_sequence(text): - return _symbols_to_sequence(['@' + s for s in text.split()]) - - -def _should_keep_symbol(s): - return s in _symbol_to_id and s is not '_' and s is not '~' \ No newline at end of file diff --git a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/learn_bpe.py b/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/learn_bpe.py deleted file mode 100644 index 7b01f046fa6b3fd8ba64b7658c23b6f80a4e6ba3..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/learn_bpe.py +++ /dev/null @@ -1,367 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# Author: Rico Sennrich - -"""Use byte pair encoding (BPE) to learn a variable-length encoding of the vocabulary in a text. -Unlike the original BPE, it does not compress the plain text, but can be used to reduce the vocabulary -of a text to a configurable number of symbols, with only a small increase in the number of tokens. - -Reference: -Rico Sennrich, Barry Haddow and Alexandra Birch (2016). Neural Machine Translation of Rare Words with Subword Units. -Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany. -""" - -from __future__ import unicode_literals - -import os -import sys -import inspect -import codecs -import re -import copy -import argparse -import warnings -import tempfile -from multiprocessing import Pool, cpu_count -from collections import defaultdict, Counter - -try: - from tqdm import tqdm -except ImportError: - def tqdm(iterator, *args, **kwargs): - return iterator - -# hack for python2/3 compatibility -from io import open -argparse.open = open - -def create_parser(subparsers=None): - - if subparsers: - parser = subparsers.add_parser('learn-bpe', - formatter_class=argparse.RawDescriptionHelpFormatter, - description="learn BPE-based word segmentation") - else: - parser = argparse.ArgumentParser( - formatter_class=argparse.RawDescriptionHelpFormatter, - description="learn BPE-based word segmentation") - - parser.add_argument( - '--input', '-i', type=argparse.FileType('r'), default=sys.stdin, - metavar='PATH', - help="Input text (default: standard input).") - - parser.add_argument( - '--output', '-o', type=argparse.FileType('w'), default=sys.stdout, - metavar='PATH', - help="Output file for BPE codes (default: standard output)") - parser.add_argument( - '--symbols', '-s', type=int, default=10000, - help="Create this many new symbols (each representing a character n-gram) (default: %(default)s)") - parser.add_argument( - '--min-frequency', type=int, default=2, metavar='FREQ', - help='Stop if no symbol pair has frequency >= FREQ (default: %(default)s)') - parser.add_argument('--dict-input', action="store_true", - help="If set, input file is interpreted as a dictionary where each line contains a word-count pair") - parser.add_argument( - '--total-symbols', '-t', action="store_true", - help="subtract number of characters from the symbols to be generated (so that '--symbols' becomes an estimate for the total number of symbols needed to encode text).") - parser.add_argument( - '--num-workers', type=int, default=1, - help="Number of processors to process texts, only supported in Python3. If -1, set `multiprocessing.cpu_count()`. (default: %(default)s)") - parser.add_argument( - '--verbose', '-v', action="store_true", - help="verbose mode.") - - return parser - -def get_vocabulary(fobj, is_dict=False, num_workers=1): - """Read text and return dictionary that encodes vocabulary - """ - vocab = Counter() - if is_dict: - for i, line in enumerate(fobj): - try: - word, count = line.strip('\r\n ').split(' ') - except: - print('Failed reading vocabulary file at line {0}: {1}'.format(i, line)) - sys.exit(1) - vocab[word] += int(count) - elif num_workers == 1 or fobj.name == '': - if num_workers > 1: - warnings.warn("In parallel mode, the input cannot be STDIN. Using 1 processor instead.") - for i, line in enumerate(fobj): - for word in line.strip('\r\n ').split(' '): - if word: - vocab[word] += 1 - elif num_workers > 1: - - if sys.version_info < (3, 0): - print("Parallel mode is only supported in Python3.") - sys.exit(1) - - with open(fobj.name, encoding="utf8") as f: - size = os.fstat(f.fileno()).st_size - chunk_size = int(size / num_workers) - offsets = [0 for _ in range(num_workers + 1)] - for i in range(1, num_workers): - f.seek(chunk_size * i) - pos = f.tell() - while True: - try: - line = f.readline() - break - except UnicodeDecodeError: - pos -= 1 - f.seek(pos) - offsets[i] = f.tell() - assert 0 <= offsets[i] < 1e20, "Bad new line separator, e.g. '\\r'" - - vocab_files = [] - pool = Pool(processes=num_workers) - for i in range(num_workers): - tmp = tempfile.NamedTemporaryFile(delete=False) - tmp.close() - vocab_files.append(tmp) - pool.apply_async(_get_vocabulary, (fobj.name, tmp.name, offsets[i], offsets[i + 1])) - pool.close() - pool.join() - import pickle - for i in range(num_workers): - with open(vocab_files[i].name, 'rb') as f: - vocab += pickle.load(f) - os.remove(vocab_files[i].name) - else: - raise ValueError('`num_workers` is expected to be a positive number, but got {}.'.format(num_workers)) - return vocab - -def _get_vocabulary(infile, outfile, begin, end): - import pickle - vocab = Counter() - with open(infile, encoding="utf8") as f: - f.seek(begin) - line = f.readline() - while line: - pos = f.tell() - assert 0 <= pos < 1e20, "Bad new line separator, e.g. '\\r'" - if end > 0 and pos > end: - break - for word in line.strip('\r\n ').split(' '): - if word: - vocab[word] += 1 - line = f.readline() - with open(outfile, 'wb') as f: - pickle.dump(vocab, f) - -def update_pair_statistics(pair, changed, stats, indices): - """Minimally update the indices and frequency of symbol pairs - - if we merge a pair of symbols, only pairs that overlap with occurrences - of this pair are affected, and need to be updated. - """ - stats[pair] = 0 - indices[pair] = defaultdict(int) - first, second = pair - new_pair = first+second - for j, word, old_word, freq in changed: - - # find all instances of pair, and update frequency/indices around it - i = 0 - while True: - # find first symbol - try: - i = old_word.index(first, i) - except ValueError: - break - # if first symbol is followed by second symbol, we've found an occurrence of pair (old_word[i:i+2]) - if i < len(old_word)-1 and old_word[i+1] == second: - # assuming a symbol sequence "A B C", if "B C" is merged, reduce the frequency of "A B" - if i: - prev = old_word[i-1:i+1] - stats[prev] -= freq - indices[prev][j] -= 1 - if i < len(old_word)-2: - # assuming a symbol sequence "A B C B", if "B C" is merged, reduce the frequency of "C B". - # however, skip this if the sequence is A B C B C, because the frequency of "C B" will be reduced by the previous code block - if old_word[i+2] != first or i >= len(old_word)-3 or old_word[i+3] != second: - nex = old_word[i+1:i+3] - stats[nex] -= freq - indices[nex][j] -= 1 - i += 2 - else: - i += 1 - - i = 0 - while True: - try: - # find new pair - i = word.index(new_pair, i) - except ValueError: - break - # assuming a symbol sequence "A BC D", if "B C" is merged, increase the frequency of "A BC" - if i: - prev = word[i-1:i+1] - stats[prev] += freq - indices[prev][j] += 1 - # assuming a symbol sequence "A BC B", if "B C" is merged, increase the frequency of "BC B" - # however, if the sequence is A BC BC, skip this step because the count of "BC BC" will be incremented by the previous code block - if i < len(word)-1 and word[i+1] != new_pair: - nex = word[i:i+2] - stats[nex] += freq - indices[nex][j] += 1 - i += 1 - - -def get_pair_statistics(vocab): - """Count frequency of all symbol pairs, and create index""" - - # data structure of pair frequencies - stats = defaultdict(int) - - #index from pairs to words - indices = defaultdict(lambda: defaultdict(int)) - - for i, (word, freq) in enumerate(vocab): - prev_char = word[0] - for char in word[1:]: - stats[prev_char, char] += freq - indices[prev_char, char][i] += 1 - prev_char = char - - return stats, indices - - -def replace_pair(pair, vocab, indices): - """Replace all occurrences of a symbol pair ('A', 'B') with a new symbol 'AB'""" - first, second = pair - pair_str = ''.join(pair) - pair_str = pair_str.replace('\\','\\\\') - changes = [] - pattern = re.compile(r'(?'); - # version numbering allows bckward compatibility - outfile.write('#version: 0.2\n') - - vocab = get_vocabulary(infile, is_dict, num_workers) - vocab = dict([(tuple(x[:-1])+(x[-1]+'',) ,y) for (x,y) in vocab.items()]) - sorted_vocab = sorted(vocab.items(), key=lambda x: x[1], reverse=True) - - stats, indices = get_pair_statistics(sorted_vocab) - big_stats = copy.deepcopy(stats) - - if total_symbols: - uniq_char_internal = set() - uniq_char_final = set() - for word in vocab: - for char in word[:-1]: - uniq_char_internal.add(char) - uniq_char_final.add(word[-1]) - sys.stderr.write('Number of word-internal characters: {0}\n'.format(len(uniq_char_internal))) - sys.stderr.write('Number of word-final characters: {0}\n'.format(len(uniq_char_final))) - sys.stderr.write('Reducing number of merge operations by {0}\n'.format(len(uniq_char_internal) + len(uniq_char_final))) - num_symbols -= len(uniq_char_internal) + len(uniq_char_final) - - # threshold is inspired by Zipfian assumption, but should only affect speed - threshold = max(stats.values()) / 10 - for i in tqdm(range(num_symbols)): - if stats: - most_frequent = max(stats, key=lambda x: (stats[x], x)) - - # we probably missed the best pair because of pruning; go back to full statistics - if not stats or (i and stats[most_frequent] < threshold): - prune_stats(stats, big_stats, threshold) - stats = copy.deepcopy(big_stats) - most_frequent = max(stats, key=lambda x: (stats[x], x)) - # threshold is inspired by Zipfian assumption, but should only affect speed - threshold = stats[most_frequent] * i/(i+10000.0) - prune_stats(stats, big_stats, threshold) - - if stats[most_frequent] < min_frequency: - sys.stderr.write('no pair has frequency >= {0}. Stopping\n'.format(min_frequency)) - break - - if verbose: - sys.stderr.write('pair {0}: {1} {2} -> {1}{2} (frequency {3})\n'.format(i, most_frequent[0], most_frequent[1], stats[most_frequent])) - outfile.write('{0} {1}\n'.format(*most_frequent)) - changes = replace_pair(most_frequent, sorted_vocab, indices) - update_pair_statistics(most_frequent, changes, stats, indices) - stats[most_frequent] = 0 - if not i % 100: - prune_stats(stats, big_stats, threshold) - - -if __name__ == '__main__': - - currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) - newdir = os.path.join(currentdir, 'subword_nmt') - if os.path.isdir(newdir): - warnings.simplefilter('default') - warnings.warn( - "this script's location has moved to {0}. This symbolic link will be removed in a future version. Please point to the new location, or install the package and use the command 'subword-nmt'".format(newdir), - DeprecationWarning - ) - - # python 2/3 compatibility - if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) - else: - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer) - - parser = create_parser() - args = parser.parse_args() - - if args.num_workers <= 0: - args.num_workers = cpu_count() - - if sys.version_info < (3, 0) and args.num_workers > 1: - args.num_workers = 1 - warnings.warn("Parallel mode is only supported in Python3. Using 1 processor instead.") - - # read/write files as UTF-8 - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - - learn_bpe(args.input, args.output, args.symbols, args.min_frequency, args.verbose, is_dict=args.dict_input, total_symbols=args.total_symbols, num_workers=args.num_workers) diff --git a/spaces/Harveenchadha/oiTrans/subword-nmt/CHANGELOG.md b/spaces/Harveenchadha/oiTrans/subword-nmt/CHANGELOG.md deleted file mode 100644 index 9f6772079019214833a806a4de55564a491fa915..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/subword-nmt/CHANGELOG.md +++ /dev/null @@ -1,49 +0,0 @@ -CHANGELOG ---------- - -v0.3.8: - - multiprocessing support (get_vocab and apply_bpe) - - progress bar for learn_bpe - - seed parameter for deterministic BPE dropout - - ignore some unicode line separators which would crash subword-nmt - -v0.3.7: - - BPE dropout (Provilkov et al., 2019) - - more efficient glossaries (https://github.com/rsennrich/subword-nmt/pull/69) - -v0.3.6: - - fix to subword-bpe command encoding - -v0.3.5: - - fix to subword-bpe command under Python 2 - - wider support of --total-symbols argument - -v0.3.4: - - segment_tokens method to improve library usability (https://github.com/rsennrich/subword-nmt/pull/52) - - support regex glossaries (https://github.com/rsennrich/subword-nmt/pull/56) - - allow unicode separators (https://github.com/rsennrich/subword-nmt/pull/57) - - new option --total-symbols in learn-bpe (commit 61ad8) - - fix documentation (best practices) (https://github.com/rsennrich/subword-nmt/pull/60) - -v0.3: - - library is now installable via pip - - fix occasional problems with UTF-8 whitespace and new lines in learn_bpe and apply_bpe. - - do not silently convert UTF-8 newline characters into "\n" - - do not silently convert UTF-8 whitespace characters into " " - - UTF-8 whitespace and newline characters are now considered part of a word, and segmented by BPE - -v0.2: - - different, more consistent handling of end-of-word token (commit a749a7) (https://github.com/rsennrich/subword-nmt/issues/19) - - allow passing of vocabulary and frequency threshold to apply_bpe.py, preventing the production of OOV (or rare) subword units (commit a00db) - - made learn_bpe.py deterministic (commit 4c54e) - - various changes to make handling of UTF more consistent between Python versions - - new command line arguments for apply_bpe.py: - - '--glossaries' to prevent given strings from being affected by BPE - - '--merges' to apply a subset of learned BPE operations - - new command line arguments for learn_bpe.py: - - '--dict-input': rather than raw text file, interpret input as a frequency dictionary (as created by get_vocab.py). - - -v0.1: - - consistent cross-version unicode handling - - all scripts are now deterministic diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py deleted file mode 100644 index 10ad6ce47cfdf0a87ba089b299fe9551b29fa167..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import os.path as osp -import math -import numpy as np -import tqdm -import torch -from shutil import copyfile - -from npy_append_array import NpyAppendArray - - -def get_parser(): - parser = argparse.ArgumentParser( - description="transforms features via a given pca and stored them in target dir" - ) - # fmt: off - parser.add_argument('source', help='directory with features') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--pca-path', type=str, help='pca location. will append _A.npy and _b.npy', required=True) - parser.add_argument('--batch-size', type=int, default=2048000, help='batch size') - parser.add_argument('--unfiltered', action='store_true', help='process the unfiltered version') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - source_path = osp.join(args.source, args.split) - data_poth = source_path + "_unfiltered" if args.unfiltered else source_path - - print(f"data path: {data_poth}") - - features = np.load(data_poth + ".npy", mmap_mode="r") - pca_A = torch.from_numpy(np.load(args.pca_path + "_A.npy")).cuda() - pca_b = torch.from_numpy(np.load(args.pca_path + "_b.npy")).cuda() - - os.makedirs(args.save_dir, exist_ok=True) - save_path = osp.join(args.save_dir, args.split) - - copyfile(source_path + ".tsv", save_path + ".tsv") - copyfile(data_poth + ".lengths", save_path + ".lengths") - - if osp.exists(source_path + ".phn"): - copyfile(source_path + ".phn", save_path + ".phn") - - if osp.exists(source_path + ".wrd"): - copyfile(source_path + ".wrd", save_path + ".wrd") - - if osp.exists(save_path + ".npy"): - os.remove(save_path + ".npy") - npaa = NpyAppendArray(save_path + ".npy") - - batches = math.ceil(features.shape[0] / args.batch_size) - - with torch.no_grad(): - for b in tqdm.trange(batches): - start = b * args.batch_size - end = start + args.batch_size - x = torch.from_numpy(features[start:end]).cuda() - x = torch.matmul(x, pca_A) + pca_b - npaa.append(x.cpu().numpy()) - - -if __name__ == "__main__": - main() diff --git a/spaces/IdaLee/DrawEasy/pictureDeal.py b/spaces/IdaLee/DrawEasy/pictureDeal.py deleted file mode 100644 index 6e29a07b21d9f1c24886120e2f447a41db405b90..0000000000000000000000000000000000000000 --- a/spaces/IdaLee/DrawEasy/pictureDeal.py +++ /dev/null @@ -1,76 +0,0 @@ -import cv2 -from PIL import Image, ImageEnhance -import gradio as gr -from sklearn.cluster import KMeans -import numpy as np - -with gr.Blocks() as interface: - - with gr.Row(): - n_colors = gr.Slider(2, 32, 12, step=1, label="图片要加工的目标颜色数量") - - with gr.Row(): - img_input = gr.Image() - img_output = gr.Image() - - section_btn1 = gr.Button("合并色彩") - - # 图片模型训练 - def img_fit_predict(img,n_colors): - data = img.reshape(-1,3) - # 把原始图片压缩成n_colors个颜色 - kmeans = KMeans(n_clusters=n_colors) - y_ = kmeans.fit_predict(data) - # 模型合并颜色 - colors = kmeans.cluster_centers_/255 - output_temp = colors[y_].reshape(img.shape) - return output_temp - - section_btn1.click(img_fit_predict, inputs=[img_input,n_colors], outputs=img_output) - - with gr.Row(): - gaussian_blur = gr.Slider(1, 13, 13, step=2, label="整体降噪参数调整") - structuring_element = gr.Slider(1, 13, 3, step=2, label="去除小噪声") - canny_start = gr.Slider(1, 200, 4, step=1, label="边缘检测-开始参数") - canny_end = gr.Slider(1, 200, 10, step=1, label="边缘检测-结束参数") - - with gr.Row(): - thresh_val = gr.Slider(50, 500, 205, step=1, label="二值图像-thresh") - maxval = gr.Slider(50, 500, 330, step=1, label="二值图像-maxval") - enhance = gr.Slider(0, 1, 0.8, step=0.1, label="增强颜色-enhance") - blend = gr.Slider(0, 1, 0.4, step=0.1, label="增强颜色-blend") - - section_btn2 = gr.Button("调整图片") - with gr.Row(): - closed_output = gr.Image() - img_param_output = gr.Image() - - # 调整模型结果参数 - def turn_arguments(img,img_output,gaussian_blur,structuring_element,canny_start,canny_end,thresh_val,maxval,enhance,blend): - gray = cv2.cvtColor(img_output, cv2.COLOR_BGR2GRAY) - # 对灰度图像进行高斯滤波,以去除噪声 - gray = cv2.GaussianBlur(gray, (gaussian_blur,gaussian_blur), 0) - # 使用Canny算子进行边缘检测 - edges = cv2.Canny(gray, canny_start, canny_end) - # 将边缘图像转换为二值图像 - _, thresh = cv2.threshold(edges, thresh_val, maxval, cv2.THRESH_BINARY) - # 对二值图像进行形态学操作,以去除小的噪点 - kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (structuring_element, structuring_element)) - closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel) - image = Image.fromarray(img_output) - closed = closed.astype(img.dtype) - result = cv2.bitwise_and(img_output, img_output, mask=closed) - result[closed==0] = (255,255,255) - # 颜色空间转换 - enhancer = ImageEnhance.Color(image=image) - # 增强颜色 - img1 = enhancer.enhance(enhance).convert('RGB') - img2 = Image.fromarray(result).convert('RGB') - union_img = np.asarray(Image.blend(img2, img1, blend)) - return result,union_img - - section_btn2.click(turn_arguments,inputs=[img_input, img_output,gaussian_blur, - structuring_element,canny_start,canny_end,thresh_val,maxval,enhance,blend ], - outputs = [closed_output,img_param_output]) - -interface.launch() \ No newline at end of file diff --git a/spaces/Illumotion/Koboldcpp/examples/server/README.md b/spaces/Illumotion/Koboldcpp/examples/server/README.md deleted file mode 100644 index d409e8408f192df6c5ce7f331a73103e3bdf06fe..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/server/README.md +++ /dev/null @@ -1,242 +0,0 @@ -# llama.cpp/example/server - -This example demonstrates a simple HTTP API server and a simple web front end to interact with llama.cpp. - -Command line options: - -- `--threads N`, `-t N`: Set the number of threads to use during generation. -- `-tb N, --threads-batch N`: Set the number of threads to use during batch and prompt processing. If not specified, the number of threads will be set to the number of threads used for generation. -- `-m FNAME`, `--model FNAME`: Specify the path to the LLaMA model file (e.g., `models/7B/ggml-model.gguf`). -- `-m ALIAS`, `--alias ALIAS`: Set an alias for the model. The alias will be returned in API responses. -- `-c N`, `--ctx-size N`: Set the size of the prompt context. The default is 512, but LLaMA models were built with a context of 2048, which will provide better results for longer input/inference. The size may differ in other models, for example, baichuan models were build with a context of 4096. -- `-ngl N`, `--n-gpu-layers N`: When compiled with appropriate support (currently CLBlast or cuBLAS), this option allows offloading some layers to the GPU for computation. Generally results in increased performance. -- `-mg i, --main-gpu i`: When using multiple GPUs this option controls which GPU is used for small tensors for which the overhead of splitting the computation across all GPUs is not worthwhile. The GPU in question will use slightly more VRAM to store a scratch buffer for temporary results. By default GPU 0 is used. Requires cuBLAS. -- `-ts SPLIT, --tensor-split SPLIT`: When using multiple GPUs this option controls how large tensors should be split across all GPUs. `SPLIT` is a comma-separated list of non-negative values that assigns the proportion of data that each GPU should get in order. For example, "3,2" will assign 60% of the data to GPU 0 and 40% to GPU 1. By default the data is split in proportion to VRAM but this may not be optimal for performance. Requires cuBLAS. -- `-b N`, `--batch-size N`: Set the batch size for prompt processing. Default: `512`. -- `--memory-f32`: Use 32-bit floats instead of 16-bit floats for memory key+value. Not recommended. -- `--mlock`: Lock the model in memory, preventing it from being swapped out when memory-mapped. -- `--no-mmap`: Do not memory-map the model. By default, models are mapped into memory, which allows the system to load only the necessary parts of the model as needed. -- `--numa`: Attempt optimizations that help on some NUMA systems. -- `--lora FNAME`: Apply a LoRA (Low-Rank Adaptation) adapter to the model (implies --no-mmap). This allows you to adapt the pretrained model to specific tasks or domains. -- `--lora-base FNAME`: Optional model to use as a base for the layers modified by the LoRA adapter. This flag is used in conjunction with the `--lora` flag, and specifies the base model for the adaptation. -- `-to N`, `--timeout N`: Server read/write timeout in seconds. Default `600`. -- `--host`: Set the hostname or ip address to listen. Default `127.0.0.1`. -- `--port`: Set the port to listen. Default: `8080`. -- `--path`: path from which to serve static files (default examples/server/public) -- `--embedding`: Enable embedding extraction, Default: disabled. - -## Build - -server is build alongside everything else from the root of the project - -- Using `make`: - - ```bash - make - ``` - -- Using `CMake`: - - ```bash - cmake --build . --config Release - ``` - -## Quick Start - -To get started right away, run the following command, making sure to use the correct path for the model you have: - -### Unix-based systems (Linux, macOS, etc.): - -```bash -./server -m models/7B/ggml-model.gguf -c 2048 -``` - -### Windows: - -```powershell -server.exe -m models\7B\ggml-model.gguf -c 2048 -``` -The above command will start a server that by default listens on `127.0.0.1:8080`. -You can consume the endpoints with Postman or NodeJS with axios library. You can visit the web front end at the same url. - -## Testing with CURL - -Using [curl](https://curl.se/). On Windows `curl.exe` should be available in the base OS. - -```sh -curl --request POST \ - --url http://localhost:8080/completion \ - --header "Content-Type: application/json" \ - --data '{"prompt": "Building a website can be done in 10 simple steps:","n_predict": 128}' -``` - -## Node JS Test - -You need to have [Node.js](https://nodejs.org/en) installed. - -```bash -mkdir llama-client -cd llama-client -``` - -Create a index.js file and put inside this: - -```javascript -const prompt = `Building a website can be done in 10 simple steps:`; - -async function Test() { - let response = await fetch("http://127.0.0.1:8080/completion", { - method: 'POST', - body: JSON.stringify({ - prompt, - n_predict: 512, - }) - }) - console.log((await response.json()).content) -} - -Test() -``` - -And run it: - -```bash -node index.js -``` - -## API Endpoints - -- **POST** `/completion`: Given a prompt, it returns the predicted completion. - - *Options:* - - `temperature`: Adjust the randomness of the generated text (default: 0.8). - - `top_k`: Limit the next token selection to the K most probable tokens (default: 40). - - `top_p`: Limit the next token selection to a subset of tokens with a cumulative probability above a threshold P (default: 0.9). - - `n_predict`: Set the number of tokens to predict when generating text. **Note:** May exceed the set limit slightly if the last token is a partial multibyte character. When 0, no tokens will be generated but the prompt is evaluated into the cache. (default: 128, -1 = infinity). - - `n_keep`: Specify the number of tokens from the initial prompt to retain when the model resets its internal context. - By default, this value is set to 0 (meaning no tokens are kept). Use `-1` to retain all tokens from the initial prompt. - - `stream`: It allows receiving each predicted token in real-time instead of waiting for the completion to finish. To enable this, set to `true`. - - `prompt`: Provide a prompt as a string, or as an array of strings and numbers representing tokens. Internally, the prompt is compared, and it detects if a part has already been evaluated, and the remaining part will be evaluate. If the prompt is a string, or an array with the first element given as a string, a space is inserted in the front like main.cpp does. - - `stop`: Specify a JSON array of stopping strings. - These words will not be included in the completion, so make sure to add them to the prompt for the next iteration (default: []). - - `tfs_z`: Enable tail free sampling with parameter z (default: 1.0, 1.0 = disabled). - - `typical_p`: Enable locally typical sampling with parameter p (default: 1.0, 1.0 = disabled). - - `repeat_penalty`: Control the repetition of token sequences in the generated text (default: 1.1). - - `repeat_last_n`: Last n tokens to consider for penalizing repetition (default: 64, 0 = disabled, -1 = ctx-size). - - `penalize_nl`: Penalize newline tokens when applying the repeat penalty (default: true). - - `presence_penalty`: Repeat alpha presence penalty (default: 0.0, 0.0 = disabled). - - `frequency_penalty`: Repeat alpha frequency penalty (default: 0.0, 0.0 = disabled); - - `mirostat`: Enable Mirostat sampling, controlling perplexity during text generation (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0). - - `mirostat_tau`: Set the Mirostat target entropy, parameter tau (default: 5.0). - - `mirostat_eta`: Set the Mirostat learning rate, parameter eta (default: 0.1). - - `grammar`: Set grammar for grammar-based sampling (default: no grammar) - - `seed`: Set the random number generator (RNG) seed (default: -1, -1 = random seed). - - `ignore_eos`: Ignore end of stream token and continue generating (default: false). - - `logit_bias`: Modify the likelihood of a token appearing in the generated text completion. For example, use `"logit_bias": [[15043,1.0]]` to increase the likelihood of the token 'Hello', or `"logit_bias": [[15043,-1.0]]` to decrease its likelihood. Setting the value to false, `"logit_bias": [[15043,false]]` ensures that the token `Hello` is never produced (default: []). - -- **POST** `/tokenize`: Tokenize a given text. - - *Options:* - - `content`: Set the text to tokenize. - - Note that the special `BOS` token is not added in front of the text and also a space character is not inserted automatically as it is for `/completion`. - -- **POST** `/detokenize`: Convert tokens to text. - - *Options:* - - `tokens`: Set the tokens to detokenize. - -- **POST** `/embedding`: Generate embedding of a given text just as [the embedding example](../embedding) does. - - *Options:* - - `content`: Set the text to process. - -## More examples - -### Interactive mode - -Check the sample in [chat.mjs](chat.mjs). -Run with NodeJS version 16 or later: - -```sh -node chat.mjs -``` - -Another sample in [chat.sh](chat.sh). -Requires [bash](https://www.gnu.org/software/bash/), [curl](https://curl.se) and [jq](https://jqlang.github.io/jq/). -Run with bash: - -```sh -bash chat.sh -``` - -### API like OAI - -API example using Python Flask: [api_like_OAI.py](api_like_OAI.py) -This example must be used with server.cpp - -```sh -python api_like_OAI.py -``` - -After running the API server, you can use it in Python by setting the API base URL. -```python -openai.api_base = "http://:port" -``` - -Then you can utilize llama.cpp as an OpenAI's **chat.completion** or **text_completion** API - -### Extending or building alternative Web Front End - -The default location for the static files is `examples/server/public`. You can extend the front end by running the server binary with `--path` set to `./your-directory` and importing `/completion.js` to get access to the llamaComplete() method. - -Read the documentation in `/completion.js` to see convenient ways to access llama. - -A simple example is below: - -```html - - -
        -      
        -    
        - - -``` diff --git a/spaces/Ipkc/text_generator/app.py b/spaces/Ipkc/text_generator/app.py deleted file mode 100644 index 0ff064c41c32b8c2626de25166997a955499587f..0000000000000000000000000000000000000000 --- a/spaces/Ipkc/text_generator/app.py +++ /dev/null @@ -1,11 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel - -title="My First Text Generator" -description="Input text and submit," - - -model1=gr.Interface.load('huggingface/EleutherAI/gpt-neo-1.3B') - - -gr.Parallel(model1, title=title, description=description).launch() \ No newline at end of file diff --git a/spaces/IshA2023/Image-Generation/app.py b/spaces/IshA2023/Image-Generation/app.py deleted file mode 100644 index c1ada7533f7ef9c272f1d38933e644369d018728..0000000000000000000000000000000000000000 --- a/spaces/IshA2023/Image-Generation/app.py +++ /dev/null @@ -1,24 +0,0 @@ -import gradio as gr - -from diffusers import DiffusionPipeline - -pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") - -def get_completion(prompt): - return pipeline(prompt).images[0] - -def generate(prompt): - output = get_completion(prompt) - return output - -gr.close_all() - -demo = gr.Interface(fn=generate, - inputs=[gr.Textbox(label="Your prompt")], - outputs=[gr.Image(label="Result")], - title="Image Generation with Stable Diffusion", - description="Generate any image with Stable Diffusion", - allow_flagging="never", - examples=["a rainbow above moon"]) - -demo.launch() diff --git a/spaces/Jeff2323/ai-comic-factory/src/lib/cropImage.ts b/spaces/Jeff2323/ai-comic-factory/src/lib/cropImage.ts deleted file mode 100644 index 2d6b7e1f8c112564f372ab1da3af76a337b7f35b..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/lib/cropImage.ts +++ /dev/null @@ -1,53 +0,0 @@ -async function cropImage(inputImage: string): Promise<{ croppedImage: string; x: number; y: number; width: number; height: number }> { - return new Promise((resolve, reject) => { - const img = new Image(); - img.src = inputImage; - img.onload = () => { - const canvas = document.createElement('canvas'); - const context = canvas.getContext('2d'); - if (!context) { - reject("Context is null"); - return; - } - canvas.width = img.width; - canvas.height = img.height; - context.drawImage(img, 0, 0, img.width, img.height); - const imageData = context.getImageData(0, 0, img.width, img.height); - const data = imageData.data; - let minX = img.width, minY = img.height, maxX = 0, maxY = 0; - - for (let y = 0; y < img.height; y++) { - for (let x = 0; x < img.width; x++) { - const i = (y * 4) * img.width + x * 4; - const avg = (data[i] + data[i + 1] + data[i + 2]) / 3; - if (avg < 255) { - minX = Math.min(minX, x); - minY = Math.min(minY, y); - maxX = Math.max(maxX, x); - maxY = Math.max(maxY, y); - } - } - } - - const width = maxX - minX; - const height = maxY - minY; - const croppedCanvas = document.createElement('canvas'); - croppedCanvas.width = width; - croppedCanvas.height = height; - const croppedCtx = croppedCanvas.getContext('2d'); - if (!croppedCtx) { - reject("croppedCtx is null"); - return; - } - croppedCtx.drawImage(canvas, minX, minY, width, height, 0, 0, width, height); - resolve({ - croppedImage: croppedCanvas.toDataURL(), - x: minX, - y: minY, - width, - height - }); - }; - img.onerror = reject; - }); -} \ No newline at end of file diff --git a/spaces/Jikiwi/sovits-models/hubert/hubert_model.py b/spaces/Jikiwi/sovits-models/hubert/hubert_model.py deleted file mode 100644 index 7fb642d89b07ca60792debab18e3454f52d8f357..0000000000000000000000000000000000000000 --- a/spaces/Jikiwi/sovits-models/hubert/hubert_model.py +++ /dev/null @@ -1,222 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - x, mask = self.encode(x) - x = self.proj(x) - logits = self.logits(x) - return logits, mask - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - @torch.inference_mode() - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. - Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/Justin-Choo/Diffusion50XX/README.md b/spaces/Justin-Choo/Diffusion50XX/README.md deleted file mode 100644 index 78fe23f7af1892a3aa9638e187e4537b2868408c..0000000000000000000000000000000000000000 --- a/spaces/Justin-Choo/Diffusion50XX/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Maximum Multiplier -emoji: 🌄 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Justin-Choo/Waifu-Diffusion_WEB_UI/README.md b/spaces/Justin-Choo/Waifu-Diffusion_WEB_UI/README.md deleted file mode 100644 index a0fd370d728f53bb6be0ffacc1b1bcd9613beb43..0000000000000000000000000000000000000000 --- a/spaces/Justin-Choo/Waifu-Diffusion_WEB_UI/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Waifu Diffusion Webui on Cpu -emoji: 👑 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -python_version: 3.10.6 -duplicated_from: Justin-Chew/Counterfeit-V3.0_WEB_UI ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/modules.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/KarmKarma/genshinimpact-rvc-models-v2/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/KarmKarma/genshinimpact-rvc-models-v2/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Kevin676/AutoGPT/autogpt/commands/improve_code.py b/spaces/Kevin676/AutoGPT/autogpt/commands/improve_code.py deleted file mode 100644 index e3440d8b7c6ee8cb62d73df48623ab757c973c59..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/autogpt/commands/improve_code.py +++ /dev/null @@ -1,29 +0,0 @@ -from __future__ import annotations - -import json - -from autogpt.llm_utils import call_ai_function - - -def improve_code(suggestions: list[str], code: str) -> str: - """ - A function that takes in code and suggestions and returns a response from create - chat completion api call. - - Parameters: - suggestions (List): A list of suggestions around what needs to be improved. - code (str): Code to be improved. - Returns: - A result string from create chat completion. Improved code in response. - """ - - function_string = ( - "def generate_improved_code(suggestions: List[str], code: str) -> str:" - ) - args = [json.dumps(suggestions), code] - description_string = ( - "Improves the provided code based on the suggestions" - " provided, making no other changes." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/Lianjd/stock_dashboard/backtrader/filters/datafilter.py b/spaces/Lianjd/stock_dashboard/backtrader/filters/datafilter.py deleted file mode 100644 index cb3120ef8c2565ee539fd03525b20d290b9830ea..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/filters/datafilter.py +++ /dev/null @@ -1,73 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import backtrader as bt - - -class DataFilter(bt.AbstractDataBase): - ''' - This class filters out bars from a given data source. In addition to the - standard parameters of a DataBase it takes a ``funcfilter`` parameter which - can be any callable - - Logic: - - - ``funcfilter`` will be called with the underlying data source - - It can be any callable - - - Return value ``True``: current data source bar values will used - - Return value ``False``: current data source bar values will discarded - ''' - params = (('funcfilter', None),) - - def preload(self): - if len(self.p.dataname) == self.p.dataname.buflen(): - # if data is not preloaded .... do it - self.p.dataname.start() - self.p.dataname.preload() - self.p.dataname.home() - - # Copy timeframe from data after start (some sources do autodetection) - self.p.timeframe = self._timeframe = self.p.dataname._timeframe - self.p.compression = self._compression = self.p.dataname._compression - - super(DataFilter, self).preload() - - def _load(self): - if not len(self.p.dataname): - self.p.dataname.start() # start data if not done somewhere else - - # Tell underlying source to get next data - while self.p.dataname.next(): - # Try to load the data from the underlying source - if not self.p.funcfilter(self.p.dataname): - continue - - # Data is allowed - Copy size which is "number of lines" - for i in range(self.p.dataname.size()): - self.lines[i][0] = self.p.dataname.lines[i][0] - - return True - - return False # no more data from underlying source diff --git a/spaces/Lianjd/stock_dashboard/backtrader/filters/session.py b/spaces/Lianjd/stock_dashboard/backtrader/filters/session.py deleted file mode 100644 index 6610f9539b48db084e39fa21957d7cd97ddcce3f..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/filters/session.py +++ /dev/null @@ -1,244 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -from datetime import datetime, timedelta - -from backtrader import TimeFrame -from backtrader.utils.py3 import with_metaclass -from .. import metabase - - -class SessionFiller(with_metaclass(metabase.MetaParams, object)): - ''' - Bar Filler for a Data Source inside the declared session start/end times. - - The fill bars are constructed using the declared Data Source ``timeframe`` - and ``compression`` (used to calculate the intervening missing times) - - Params: - - - fill_price (def: None): - - If None is passed, the closing price of the previous bar will be - used. To end up with a bar which for example takes time but it is not - displayed in a plot ... use float('Nan') - - - fill_vol (def: float('NaN')): - - Value to use to fill the missing volume - - - fill_oi (def: float('NaN')): - - Value to use to fill the missing Open Interest - - - skip_first_fill (def: True): - - Upon seeing the 1st valid bar do not fill from the sessionstart up to - that bar - ''' - params = (('fill_price', None), - ('fill_vol', float('NaN')), - ('fill_oi', float('NaN')), - ('skip_first_fill', True)) - - MAXDATE = datetime.max - - # Minimum delta unit in between bars - _tdeltas = { - TimeFrame.Minutes: timedelta(seconds=60), - TimeFrame.Seconds: timedelta(seconds=1), - TimeFrame.MicroSeconds: timedelta(microseconds=1), - } - - def __init__(self, data): - # Calculate and save timedelta for timeframe - self._tdframe = self._tdeltas[data._timeframe] - self._tdunit = self._tdeltas[data._timeframe] * data._compression - - self.seenbar = False # control if at least one bar has been seen - self.sessend = self.MAXDATE # maxdate is the control for session bar - - def __call__(self, data): - ''' - Params: - - data: the data source to filter/process - - Returns: - - False (always) because this filter does not remove bars from the - stream - - The logic (starting with a session end control flag of MAXDATE) - - - If new bar is over session end (never true for 1st bar) - - Fill up to session end. Reset sessionend to MAXDATE & fall through - - - If session end is flagged as MAXDATE - - Recalculate session limits and check whether the bar is within them - - if so, fill up and record the last seen tim - - - Else ... the incoming bar is in the session, fill up to it - ''' - # Get time of current (from data source) bar - ret = False - - dtime_cur = data.datetime.datetime() - - if dtime_cur > self.sessend: - # bar over session end - fill up and invalidate - # Do not put current bar in stack to let it be evaluated below - # Fill up to endsession + smallest unit of timeframe - ret = self._fillbars(data, self.dtime_prev, - self.sessend + self._tdframe, - tostack=False) - self.sessend = self.MAXDATE - - # Fall through from previous check ... the bar which is over the - # session could already be in a new session and within the limits - if self.sessend == self.MAXDATE: - # No bar seen yet or one went over previous session limit - ddate = dtime_cur.date() - sessstart = datetime.combine(ddate, data.p.sessionstart) - self.sessend = sessend = datetime.combine(ddate, data.p.sessionend) - - if sessstart <= dtime_cur <= sessend: - # 1st bar from session in the session - fill from session start - if self.seenbar or not self.p.skip_first_fill: - ret = self._fillbars(data, - sessstart - self._tdunit, dtime_cur) - - self.seenbar = True - self.dtime_prev = dtime_cur - - else: - # Seen a previous bar and this is in the session - fill up to it - ret = self._fillbars(data, self.dtime_prev, dtime_cur) - self.dtime_prev = dtime_cur - - return ret - - def _fillbars(self, data, time_start, time_end, tostack=True): - ''' - Fills one by one bars as needed from time_start to time_end - - Invalidates the control dtime_prev if requested - ''' - # Control flag - bars added to the stack - dirty = 0 - - time_start += self._tdunit - while time_start < time_end: - dirty += self._fillbar(data, time_start) - time_start += self._tdunit - - if dirty and tostack: - data._save2stack(erase=True) - - return bool(dirty) or not tostack - - def _fillbar(self, data, dtime): - # Prepare an array of the needed size - bar = [float('Nan')] * data.size() - - # Fill datetime - bar[data.DateTime] = data.date2num(dtime) - - # Fill the prices - price = self.p.fill_price or data.close[-1] - for pricetype in [data.Open, data.High, data.Low, data.Close]: - bar[pricetype] = price - - # Fill volume and open interest - bar[data.Volume] = self.p.fill_vol - bar[data.OpenInterest] = self.p.fill_oi - - # Fill extra lines the data feed may have defined beyond DateTime - for i in range(data.DateTime + 1, data.size()): - bar[i] = data.lines[i][0] - - # Add tot he stack of bars to save - data._add2stack(bar) - - return True - - -class SessionFilterSimple(with_metaclass(metabase.MetaParams, object)): - ''' - This class can be applied to a data source as a filter and will filter out - intraday bars which fall outside of the regular session times (ie: pre/post - market data) - - This is a "simple" filter and must NOT manage the stack of the data (passed - during init and __call__) - - It needs no "last" method because it has nothing to deliver - - Bar Management will be done by the SimpleFilterWrapper class made which is - added durint the DataBase.addfilter_simple call - ''' - def __init__(self, data): - pass - - def __call__(self, data): - ''' - Return Values: - - - False: nothing to filter - - True: filter current bar (because it's not in the session times) - ''' - # Both ends of the comparison are in the session - return not ( - data.p.sessionstart <= data.datetime.time(0) <= data.p.sessionend) - - -class SessionFilter(with_metaclass(metabase.MetaParams, object)): - ''' - This class can be applied to a data source as a filter and will filter out - intraday bars which fall outside of the regular session times (ie: pre/post - market data) - - This is a "non-simple" filter and must manage the stack of the data (passed - during init and __call__) - - It needs no "last" method because it has nothing to deliver - ''' - def __init__(self, data): - pass - - def __call__(self, data): - ''' - Return Values: - - - False: data stream was not touched - - True: data stream was manipulated (bar outside of session times and - - removed) - ''' - if data.p.sessionstart <= data.datetime.time(0) <= data.p.sessionend: - # Both ends of the comparison are in the session - return False # say the stream is untouched - - # bar outside of the regular session times - data.backwards() # remove bar from data stack - return True # signal the data was manipulated diff --git a/spaces/LightSY/W2L-TD/facelib/utils/__init__.py b/spaces/LightSY/W2L-TD/facelib/utils/__init__.py deleted file mode 100644 index 087d0691642e95a12c5b9a67487158ea1f0a2b83..0000000000000000000000000000000000000000 --- a/spaces/LightSY/W2L-TD/facelib/utils/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .face_utils import align_crop_face_landmarks, compute_increased_bbox, get_valid_bboxes, paste_face_back -# from .misc import img2tensor, load_file_from_url, download_pretrained_models, scandir -from .misc import img2tensor, download_pretrained_models, scandir - -__all__ = [ - 'align_crop_face_landmarks', 'compute_increased_bbox', 'get_valid_bboxes', 'load_file_from_url', - 'download_pretrained_models', 'paste_face_back', 'img2tensor', 'scandir' -] diff --git a/spaces/LuxOAI/ChatGpt-Web/app/config/server.ts b/spaces/LuxOAI/ChatGpt-Web/app/config/server.ts deleted file mode 100644 index e47e03f25dbc88b3a2078df202df4428182bbbb0..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/config/server.ts +++ /dev/null @@ -1,46 +0,0 @@ -import md5 from "spark-md5"; - -declare global { - namespace NodeJS { - interface ProcessEnv { - OPENAI_API_KEY?: string; - CODE?: string; - PROXY_URL?: string; - VERCEL?: string; - HIDE_USER_API_KEY?: string; // disable user's api key input - DISABLE_GPT4?: string; - } - } -} - -const ACCESS_CODES = (function getAccessCodes(): Set { - const code = process.env.CODE; - - try { - const codes = (code?.split(",") ?? []) - .filter((v) => !!v) - .map((v) => md5.hash(v.trim())); - return new Set(codes); - } catch (e) { - return new Set(); - } -})(); - -export const getServerSideConfig = () => { - if (typeof process === "undefined") { - throw Error( - "[Server Config] you are importing a nodejs-only module outside of nodejs", - ); - } - - return { - apiKey: process.env.OPENAI_API_KEY, - code: process.env.CODE, - codes: ACCESS_CODES, - needCode: ACCESS_CODES.size > 0, - proxyUrl: process.env.PROXY_URL, - isVercel: !!process.env.VERCEL, - hideUserApiKey: !!process.env.HIDE_USER_API_KEY, - enableGPT4: !process.env.DISABLE_GPT4, - }; -}; diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/options/train_options.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/options/train_options.py deleted file mode 100644 index 6cc3296657043568a3a961d793f2c69f568bab1a..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Global/options/train_options.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -from .base_options import BaseOptions - -class TrainOptions(BaseOptions): - def initialize(self): - BaseOptions.initialize(self) - # for displays - self.parser.add_argument('--display_freq', type=int, default=100, help='frequency of showing training results on screen') - self.parser.add_argument('--print_freq', type=int, default=100, help='frequency of showing training results on console') - self.parser.add_argument('--save_latest_freq', type=int, default=10000, help='frequency of saving the latest results') - self.parser.add_argument('--save_epoch_freq', type=int, default=1, help='frequency of saving checkpoints at the end of epochs') - self.parser.add_argument('--no_html', action='store_true', help='do not save intermediate training results to [opt.checkpoints_dir]/[opt.name]/web/') - self.parser.add_argument('--debug', action='store_true', help='only do one epoch and displays at each iteration') - - # for training - self.parser.add_argument('--continue_train', action='store_true', help='continue training: load the latest model') - # self.parser.add_argument('--load_pretrain', type=str, default='', help='load the pretrained model from the specified location') - self.parser.add_argument('--which_epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model') - self.parser.add_argument('--phase', type=str, default='train', help='train, val, test, etc') - self.parser.add_argument('--niter', type=int, default=100, help='# of iter at starting learning rate') - self.parser.add_argument('--niter_decay', type=int, default=100, help='# of iter to linearly decay learning rate to zero') - self.parser.add_argument('--beta1', type=float, default=0.5, help='momentum term of adam') - self.parser.add_argument('--lr', type=float, default=0.0002, help='initial learning rate for adam') - self.parser.add_argument('--training_dataset',type=str,default='',help='training use which dataset') - - # for discriminators - self.parser.add_argument('--num_D', type=int, default=2, help='number of discriminators to use') - self.parser.add_argument('--n_layers_D', type=int, default=3, help='only used if which_model_netD==n_layers') - self.parser.add_argument('--ndf', type=int, default=64, help='# of discrim filters in first conv layer') - self.parser.add_argument('--lambda_feat', type=float, default=10.0, help='weight for feature matching loss') - self.parser.add_argument('--l2_feat', type=float, help='weight for feature mapping loss') - self.parser.add_argument('--use_l1_feat', action='store_true', help='use l1 for feat mapping') - self.parser.add_argument('--no_ganFeat_loss', action='store_true', help='if specified, do *not* use discriminator feature matching loss') - self.parser.add_argument('--no_vgg_loss', action='store_true', help='if specified, do *not* use VGG feature matching loss') - self.parser.add_argument('--no_lsgan', action='store_true', help='do *not* use least square GAN, if false, use vanilla GAN') - self.parser.add_argument('--gan_type', type=str, default='lsgan', help='Choose the loss type of GAN') - self.parser.add_argument('--pool_size', type=int, default=0, help='the size of image buffer that stores previously generated images') - self.parser.add_argument('--norm_D',type=str, default='spectralinstance', help='instance normalization or batch normalization') - self.parser.add_argument('--init_D',type=str,default='xavier',help='normal|xavier|xavier_uniform|kaiming|orthogonal|none') - - self.parser.add_argument('--no_TTUR',action='store_true',help='No TTUR') - - self.parser.add_argument('--start_epoch',type=int,default=-1,help='write the start_epoch of iter.txt into this parameter') - self.parser.add_argument('--no_degradation',action='store_true',help='when train the mapping, enable this parameter --> no degradation will be added into clean image') - self.parser.add_argument('--no_load_VAE',action='store_true',help='when train the mapping, enable this parameter --> random initialize the encoder an decoder') - self.parser.add_argument('--use_v2_degradation',action='store_true',help='enable this parameter --> 4 kinds of degradations will be used to synthesize corruption') - self.parser.add_argument('--use_vae_which_epoch',type=str,default='200') - - - self.parser.add_argument('--use_focal_loss',action='store_true') - - self.parser.add_argument('--mask_need_scale',action='store_true',help='enable this param means that the pixel range of mask is 0-255') - self.parser.add_argument('--positive_weight',type=float,default=1.0,help='(For scratch detection) Since the scratch number is less, and we use a weight strategy. This parameter means that we want to decrease the weight.') - - self.parser.add_argument('--no_update_lr',action='store_true',help='use this means we do not update the LR while training') - - - self.isTrain = True diff --git a/spaces/Mansib/Allure/app.py b/spaces/Mansib/Allure/app.py deleted file mode 100644 index 09281fe443d25dcae1f10cf435a021421809742e..0000000000000000000000000000000000000000 --- a/spaces/Mansib/Allure/app.py +++ /dev/null @@ -1,121 +0,0 @@ -import torch -import clip -from PIL import Image -import gradio as gr -import datetime - -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load("ViT-B/32", device=device) - - -def allure(image, gender): - image = Image.fromarray(image.astype("uint8"), "RGB") - gender = gender.lower() - image = preprocess(image).unsqueeze(0).to(device) - positive_terms = [f'a hot {gender}', - f'a beautiful {gender}', f'an alluring {gender}', 'a photorealistic image taken with a high-quality camera', 'a photorealistic image taken with a low-quality/bad camera'] - negative_terms = [f'a gross {gender}', - f'an ugly {gender}', f'a hideous {gender}', 'a toonish, unrealistic or photoshopped image taken with a high-quality camera', 'a toonish, unrealistic or photoshopped image taken with a low-quality/bad camera'] - - pairs = list(zip(positive_terms, negative_terms)) - - def evaluate(terms): - text = clip.tokenize(terms).to(device) - - with torch.no_grad(): - logits_per_image, logits_per_text = model(image, text) - probs = logits_per_image.softmax(dim=-1).cpu().numpy() - return probs[0] - - probs = [evaluate(pair) for pair in pairs] - - positive_probs = [prob[0] for prob in probs] - negative_probs = [prob[1] for prob in probs] - - hotness_score = round((probs[0][0] - probs[0][1] + 1) * 50, 2) - beauty_score = round((probs[1][0] - probs[1][1] + 1) * 50, 2) - attractiveness_score = round((probs[2][0] - probs[2][1] + 1) * 50, 2) - - authenticity_score_lq = round((probs[-1][0] - probs[-1][1] + 1) * 50, 2) - authenticity_score_hq = round((probs[-2][0] - probs[-2][1] + 1) * 50, 2) - authenticity_score = (authenticity_score_lq + authenticity_score_hq)/2 - - hot_score = sum(positive_probs[:-1])/len(positive_probs[:-1]) - ugly_score = sum(negative_probs[:-1])/len(negative_probs[:-1]) - composite = ((hot_score - ugly_score)+1) * 50 - composite = round(composite, 2) - - judgement = "extremely toonish and/or distorted" - - if authenticity_score >= 90: - judgement = "likely real" - elif authenticity_score >= 80: - judgement = "slightly altered" - elif authenticity_score >= 70: - judgement = "moderately altered" - elif authenticity_score >= 50: - judgement = "significantly toonish or altered" - - return composite, hotness_score, beauty_score, attractiveness_score, authenticity_score_hq, authenticity_score_lq, f"{authenticity_score} ({judgement})" - - -# theme = gr.themes.Soft( -# font=[gr.themes.GoogleFont("Quicksand"), -# "ui-sans-serif", "sans-serif"], -# font_mono=[gr.themes.GoogleFont("IBM Plex Mono"), -# "ui-monospace", "monospace"], -# primary_hue="cyan", -# secondary_hue="cyan", -# radius_size="lg") - -# theme.set( -# input_radius="64px", -# button_large_radius='64px', -# button_small_radius='64px', -# body_background_fill=theme.block_background_fill_dark, -# block_shadow=theme.block_shadow_dark, -# block_label_radius='64px', -# block_label_right_radius='64px', -# background_fill_primary=theme.background_fill_primary_dark, -# background_fill_secondary=theme.background_fill_secondary_dark, -# block_label_border_width=theme.block_label_border_width_dark, -# block_label_border_color=theme.block_label_border_color_dark -# ) - -with gr.Interface( - # theme=theme, - fn=allure, - inputs=[ - gr.Image(label="Image"), - gr.Dropdown( - [ - 'Person', 'Man', 'Woman' - ], - default='Person', - label="Gender" - ) - ], - outputs=[ - gr.Textbox(label="Composite Score (%)"), - gr.Textbox(label="Hotness (%)"), - gr.Textbox(label="Beauty (%)"), - gr.Textbox(label="Allure (%)"), - gr.Textbox(label="HQ Authenticity (%)"), - gr.Textbox(label="LQ Authenticity (%)"), - gr.Textbox(label="Composite Authenticity (≥ 90% → likely real)"), - ], - examples=[ - ['Mansib_01_x2048.png', 'Man'], - ['Mansib_02_x2048.png', 'Man'] - ], - title=f"Attractiveness Evaluator (powered by OpenAI CLIP) [Updated on {datetime.datetime.now().strftime('%A, %b %d %Y %I:%M:%S%p')}]", - description=f"""A simple attractiveness evaluation app using the latest, current (newest stable release as of {datetime.datetime.now().strftime('%A, %b %d %Y %I:%M:%S%p')}) version of OpenAI's CLIP model.""", -) as iface: - with gr.Accordion("How does it work?"): - gr.Markdown( - """The input image is passed to OpenAI's CLIP image captioning model and evaluated for how much it conforms to the model's idea of hotness, beauty, and attractiveness. -These values are then combined to produce a composite score on a scale of 0 to 100. -# ⚠️ WARNING: This is meant solely for educational use!""") - -iface.queue(api_open=False) # Add `api_open = False` to disable direct API access. -iface.launch() diff --git a/spaces/Marshalls/testmtd/models/moglow/utils.py b/spaces/Marshalls/testmtd/models/moglow/utils.py deleted file mode 100644 index 6764ea8b08277a5d7041ce422e7ed8a3d9c67cae..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/models/moglow/utils.py +++ /dev/null @@ -1,255 +0,0 @@ -import os -import re -import copy -import torch -import numpy as np -import matplotlib -matplotlib.use("Agg") -import matplotlib.pyplot as plt -from shutil import copyfile - - -def get_proper_cuda_device(device, verbose=True): - if not isinstance(device, list): - device = [device] - count = torch.cuda.device_count() - if verbose: - print("[Builder]: Found {} gpu".format(count)) - for i in range(len(device)): - d = device[i] - did = None - if isinstance(d, str): - if re.search("cuda:[\d]+", d): - did = int(d[5:]) - elif isinstance(d, int): - did = d - if did is None: - raise ValueError("[Builder]: Wrong cuda id {}".format(d)) - if did < 0 or did >= count: - if verbose: - print("[Builder]: {} is not found, ignore.".format(d)) - device[i] = None - else: - device[i] = did - device = [d for d in device if d is not None] - return device - - -def get_proper_device(devices, verbose=True): - origin = copy.copy(devices) - devices = copy.copy(devices) - if not isinstance(devices, list): - devices = [devices] - use_cpu = any([d.find("cpu")>=0 for d in devices]) - use_gpu = any([(d.find("cuda")>=0 or isinstance(d, int)) for d in devices]) - assert not (use_cpu and use_gpu), "{} contains cpu and cuda device.".format(devices) - if use_gpu: - devices = get_proper_cuda_device(devices, verbose) - if len(devices) == 0: - if verbose: - print("[Builder]: Failed to find any valid gpu in {}, use `cpu`.".format(origin)) - devices = ["cpu"] - return devices - - -def _file_at_step(step): - return "save_{}k{}.pkg".format(int(step // 1000), int(step % 1000)) - - -def _file_best(): - return "trained.pkg" - -def save(global_step, graph, optim, criterion_dict=None, pkg_dir="", is_best=False, max_checkpoints=None): - if optim is None: - raise ValueError("cannot save without optimzier") - state = { - "global_step": global_step, - # DataParallel wrap model in attr `module`. - "graph": graph.module.state_dict() if hasattr(graph, "module") else graph.state_dict(), - "optim": optim.state_dict(), - "criterion": {} - } - if criterion_dict is not None: - for k in criterion_dict: - state["criterion"][k] = criterion_dict[k].state_dict() - save_path = os.path.join(pkg_dir, _file_at_step(global_step)) - best_path = os.path.join(pkg_dir, _file_best()) - torch.save(state, save_path) - if is_best: - copyfile(save_path, best_path) - if max_checkpoints is not None: - history = [] - for file_name in os.listdir(pkg_dir): - if re.search("save_\d*k\d*\.pkg", file_name): - digits = file_name.replace("save_", "").replace(".pkg", "").split("k") - number = int(digits[0]) * 1000 + int(digits[1]) - history.append(number) - history.sort() - while len(history) > max_checkpoints: - path = os.path.join(pkg_dir, _file_at_step(history[0])) - print("[Checkpoint]: remove {} to keep {} checkpoints".format(path, max_checkpoints)) - if os.path.exists(path): - os.remove(path) - history.pop(0) - - -def load(step_or_path, graph, optim=None, criterion_dict=None, pkg_dir="", device=None): - step = step_or_path - save_path = None - - print("LOADING FROM pkg_dir: " + pkg_dir) - if isinstance(step, int): - save_path = os.path.join(pkg_dir, _file_at_step(step)) - if isinstance(step, str): - if pkg_dir is not None: - if step == "best": - save_path = os.path.join(pkg_dir, _file_best()) - else: - save_path = os.path.join(pkg_dir, step) - else: - save_path = step - if save_path is not None and not os.path.exists(save_path): - print("[Checkpoint]: Failed to find {}".format(save_path)) - return - if save_path is None: - print("[Checkpoint]: Cannot load the checkpoint with given step or filename or `best`") - return - - # begin to load - state = torch.load(save_path, map_location=device) - global_step = state["global_step"] - graph.load_state_dict(state["graph"]) - if optim is not None: - optim.load_state_dict(state["optim"]) - if criterion_dict is not None: - for k in criterion_dict: - criterion_dict[k].load_state_dict(state["criterion"][k]) - - graph.set_actnorm_init(inited=True) - - print("[Checkpoint]: Load {} successfully".format(save_path)) - return global_step - - -def __save_figure_to_numpy(fig): - # save it to a numpy array. - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - return data - - -def __to_ndarray_list(tensors, titles): - if not isinstance(tensors, list): - tensors = [tensors] - titles = [titles] - assert len(titles) == len(tensors),\ - "[visualizer]: {} titles are not enough for {} tensors".format( - len(titles), len(tensors)) - for i in range(len(tensors)): - if torch.is_tensor(tensors[i]): - tensors[i] = tensors[i].cpu().detach().numpy() - return tensors, titles - - -def __get_figures(num_tensors, figsize): - fig, axes = plt.subplots(num_tensors, 1, figsize=figsize) - if not isinstance(axes, np.ndarray): - axes = np.asarray([axes]) - return fig, axes - - -def __make_dir(file_name, plot_dir): - if file_name is not None and not os.path.exists(plot_dir): - os.makedirs(plot_dir) - - -def __draw(fig, file_name, plot_dir): - if file_name is not None: - plt.savefig('{}/{}.png'.format(plot_dir, file_name), format='png') - plt.close(fig) - return None - else: - fig.tight_layout() - fig.canvas.draw() - data = __save_figure_to_numpy(fig) - plt.close(fig) - return data - -def __prepare_cond(autoreg, control, data_device): - nn,seqlen,n_feats = autoreg.shape - - autoreg = autoreg.reshape((nn, seqlen*n_feats)) - nn,seqlen,n_feats = control.shape - control = control.reshape((nn, seqlen*n_feats)) - cond = torch.from_numpy(np.expand_dims(np.concatenate((autoreg,control),axis=1), axis=-1)) - return cond.to(data_device) - -def __generate_sample(graph, data_batch, device, eps_std=1.0): - print("generate_sample") - - seqlen = data_batch["seqlen"].cpu()[0].numpy() - fps = data_batch["frame_rate"].cpu()[0].numpy() - - autoreg_all = data_batch["autoreg"].cpu().numpy() - control_all = data_batch["control"].cpu().numpy() - - print("autoreg_all: " +str(autoreg_all.shape)) - autoreg = autoreg_all[:,:seqlen,:] - if hasattr(graph, "module"): - graph.module.init_lstm_hidden() - else: - graph.init_lstm_hidden() - - sampled_all = np.zeros(autoreg_all.shape) - sampled_all[:,:seqlen,:] = autoreg_all[:,:seqlen,:] - autoreg = autoreg_all[:,:seqlen,:] - for i in range(0,control_all.shape[1]-seqlen): - control = control_all[:,i:(i+seqlen+1),:] - cond = __prepare_cond(autoreg, control, device) - sampled = graph(z=None, cond=cond, eps_std=eps_std, reverse=True) - sampled = sampled.cpu().numpy() - sampled_all[:,(i+seqlen),:] = sampled[:,:,0] - autoreg = np.concatenate((autoreg[:,1:,:], sampled.swapaxes(1,2)), axis=1) - - anim_clip = np.concatenate((sampled_all, control_all), axis=2) - - return anim_clip - - -def __get_size_for_spec(tensors): - spectrogram = tensors[0] - fig_w = np.min([int(np.ceil(spectrogram.shape[1] / 10.0)), 10]) - fig_w = np.max([fig_w, 3]) - fig_h = np.max([3 * len(tensors), 3]) - return (fig_w, fig_h) - - -def __get_aspect(spectrogram): - fig_w = np.min([int(np.ceil(spectrogram.shape[1] / 10.0)), 10]) - fig_w = np.max([fig_w, 3]) - aspect = 3.0 / fig_w - if spectrogram.shape[1] > 50: - aspect = aspect * spectrogram.shape[1] / spectrogram.shape[0] - else: - aspect = aspect * spectrogram.shape[1] / (spectrogram.shape[0]) - return aspect - - -def plot_prob(done, title="", file_name=None, plot_dir=None): - __make_dir(file_name, plot_dir) - - done, title = __to_ndarray_list(done, title) - for i in range(len(done)): - done[i] = np.reshape(done[i], (-1, done[i].shape[-1])) - figsize = (5, 5 * len(done)) - fig, axes = __get_figures(len(done), figsize) - for ax, d, t in zip(axes, done, title): - im = ax.imshow(d, vmin=0, vmax=1, cmap="Blues", aspect=d.shape[1]/d.shape[0]) - ax.set_title(t) - ax.set_yticks(np.arange(d.shape[0])) - lables = ["Frame{}".format(i+1) for i in range(d.shape[0])] - ax.set_yticklabels(lables) - ax.set_yticks(np.arange(d.shape[0])-.5, minor=True) - ax.grid(which="minor", color="g", linestyle='-.', linewidth=1) - ax.invert_yaxis() - return __draw(fig, file_name, plot_dir) diff --git a/spaces/Mayanand/Automatic-Number-Plate-Recognition/README.md b/spaces/Mayanand/Automatic-Number-Plate-Recognition/README.md deleted file mode 100644 index a6181090091ac8afcdb4071a09639e2d84cff208..0000000000000000000000000000000000000000 --- a/spaces/Mayanand/Automatic-Number-Plate-Recognition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Automatic Number Plate Recognition -emoji: 💩 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MichaelT8093/Mandarin-TTS/transforms.py b/spaces/MichaelT8093/Mandarin-TTS/transforms.py deleted file mode 100644 index bb374d3868723c181717782b132f3427260b017d..0000000000000000000000000000000000000000 --- a/spaces/MichaelT8093/Mandarin-TTS/transforms.py +++ /dev/null @@ -1,210 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/MilaNLProc/wordify/Makefile b/spaces/MilaNLProc/wordify/Makefile deleted file mode 100644 index 1d937233dc42d429482a4d29c51ea961011280e8..0000000000000000000000000000000000000000 --- a/spaces/MilaNLProc/wordify/Makefile +++ /dev/null @@ -1,32 +0,0 @@ -# Docker image build info -PROJECT:=wordify -BUILD_TAG?=v2.0 -sources = src - -######################################################## -## Local development -######################################################## -dev: ARGS?=/bin/bash -dev: DARGS?=-v "${CURDIR}":/var/dev -dev: ## run a foreground container - docker run -it --rm -p 8501:8501 $(DARGS) $(PROJECT):${BUILD_TAG} $(ARGS) - -build: DARGS?= -build: ## build the latest image for a project - docker build $(DARGS) --build-arg BUILD_TAG=${BUILD_TAG} --rm --force-rm -t $(PROJECT):${BUILD_TAG} . - -######################################################## -## Deployment -######################################################## -run: - docker run -d --name $(PROJECT)-${BUILD_TAG}-container -it --rm -p 4321:8501 $(PROJECT):${BUILD_TAG} - -stop: - docker stop $(PROJECT)-${BUILD_TAG}-container - -format: - isort $(sources) - black $(sources) - -lint: - flake8 $(sources) diff --git a/spaces/MiloSobral/PortiloopDemo/portiloop/src/hardware/leds.py b/spaces/MiloSobral/PortiloopDemo/portiloop/src/hardware/leds.py deleted file mode 100644 index b6df5d4dbaa9e0cf54caab6fa1d229b9972bd8a9..0000000000000000000000000000000000000000 --- a/spaces/MiloSobral/PortiloopDemo/portiloop/src/hardware/leds.py +++ /dev/null @@ -1,113 +0,0 @@ -from periphery import GPIO, PWM -from enum import Enum - -class Color(Enum): - RED = 0 - BLUE = 1 - PURPLE = 2 - CLOSED = 3 - -class LEDs: - def __init__(self): - self.led1_R = PWM(0, 0) - self.led1_B = PWM(2, 0) - self.led1_gnd = PWM(1, 0) - - self.led2_R = GPIO("/dev/gpiochip0", 8, "out") - self.led2_B = GPIO("/dev/gpiochip0", 6, "out") - self.led2_gnd = GPIO("/dev/gpiochip0", 7, "out") - - self.led3_R = GPIO("/dev/gpiochip2", 0, "out") - #self.led3_B = GPIO("/dev/gpiochip2", 5, "out") - self.led3_gnd = GPIO("/dev/gpiochip2", 8, "out") - - self.acq = GPIO("/dev/gpiochip2", 20, "out") - - # Init LEDs - # RGBs - self.led1_R.frequency = 1e3 - self.led1_R.duty_cycle = 0.0 - self.led1_R.enable() - - self.led1_B.frequency = 1e3 - self.led1_B.duty_cycle = 0.0 - self.led1_B.enable() - - self.led1_gnd.frequency = 1e3 - self.led1_gnd.duty_cycle = 0.0 - self.led1_gnd.enable() - - # LED2 - self.led2_R.write(False) - self.led2_B.write(False) - self.led2_gnd.write(False) - - # LED3 - self.led3_R.write(False) - #self.led3_B.write(False) - self.led3_gnd.write(False) - - def aquisition(self, val: bool): - self.acq.write(val) - - # red, green & blue are between 0 and 100 inclusively - def led1(self, red: int, green: int, blue: int): - assert 0 <= red <= 100, "Red should be between 0 and 100" - assert 0 <= green <= 100, "Green should be between 0 and 100" - assert 0 <= blue <= 100, "Blue should be between 0 and 100" - self.led1_R.duty_cycle = red / 100 - self.led1_B.duty_cycle = blue / 100 - - def led2(self, value: Color): - if value == Color.RED: - self.led2_R.write(True) - self.led2_B.write(False) - elif value == Color.BLUE: - self.led2_R.write(False) - self.led2_B.write(True) - elif value == Color.PURPLE: - self.led2_R.write(True) - self.led2_B.write(True) - elif value == Color.CLOSED: - self.led2_R.write(False) - self.led2_B.write(False) - else: - assert False, "Unknown color" - - def led3(self, value: Color): - if value == Color.RED: - self.led3_R.write(True) - elif value == Color.CLOSED: - self.led3_R.write(False) - else: - assert False, "Unknown color" - - def close(self): - # LED1 - self.led1_R.disable() - self.led1_B.disable() - self.led1_gnd.disable() - self.led1_R.close() - self.led1_B.close() - self.led1_gnd.close() - - # LED2 - self.led2_R.write(False) - self.led2_B.write(False) - self.led2_gnd.write(False) - self.led2_R.close() - self.led2_B.close() - self.led2_gnd.close() - - # LED3 - self.led3_R.write(False) - #self.led3_B.write(False) - self.led3_gnd.write(False) - self.led3_R.close() - #self.led3_B.close() - self.led3_gnd.close() - - # AQUISITION - self.acq.write(False) - self.acq.close() - diff --git a/spaces/NATSpeech/DiffSpeech/utils/audio/__init__.py b/spaces/NATSpeech/DiffSpeech/utils/audio/__init__.py deleted file mode 100644 index e8cc4466b27eeda4026e945a5388dca04817e8a1..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/utils/audio/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import librosa -import numpy as np -import pyloudnorm as pyln - -from utils.audio.vad import trim_long_silences - - -def librosa_pad_lr(x, fsize, fshift, pad_sides=1): - '''compute right padding (final frame) or both sides padding (first and final frames) - ''' - assert pad_sides in (1, 2) - # return int(fsize // 2) - pad = (x.shape[0] // fshift + 1) * fshift - x.shape[0] - if pad_sides == 1: - return 0, pad - else: - return pad // 2, pad // 2 + pad % 2 - - -def amp_to_db(x): - return 20 * np.log10(np.maximum(1e-5, x)) - - -def db_to_amp(x): - return 10.0 ** (x * 0.05) - - -def normalize(S, min_level_db): - return (S - min_level_db) / -min_level_db - - -def denormalize(D, min_level_db): - return (D * -min_level_db) + min_level_db - - -def librosa_wav2spec(wav_path, - fft_size=1024, - hop_size=256, - win_length=1024, - window="hann", - num_mels=80, - fmin=80, - fmax=-1, - eps=1e-6, - sample_rate=22050, - loud_norm=False, - trim_long_sil=False): - if isinstance(wav_path, str): - if trim_long_sil: - wav, _, _ = trim_long_silences(wav_path, sample_rate) - else: - wav, _ = librosa.core.load(wav_path, sr=sample_rate) - else: - wav = wav_path - - if loud_norm: - meter = pyln.Meter(sample_rate) # create BS.1770 meter - loudness = meter.integrated_loudness(wav) - wav = pyln.normalize.loudness(wav, loudness, -22.0) - if np.abs(wav).max() > 1: - wav = wav / np.abs(wav).max() - - # get amplitude spectrogram - x_stft = librosa.stft(wav, n_fft=fft_size, hop_length=hop_size, - win_length=win_length, window=window, pad_mode="constant") - linear_spc = np.abs(x_stft) # (n_bins, T) - - # get mel basis - fmin = 0 if fmin == -1 else fmin - fmax = sample_rate / 2 if fmax == -1 else fmax - mel_basis = librosa.filters.mel(sample_rate, fft_size, num_mels, fmin, fmax) - - # calculate mel spec - mel = mel_basis @ linear_spc - mel = np.log10(np.maximum(eps, mel)) # (n_mel_bins, T) - l_pad, r_pad = librosa_pad_lr(wav, fft_size, hop_size, 1) - wav = np.pad(wav, (l_pad, r_pad), mode='constant', constant_values=0.0) - wav = wav[:mel.shape[1] * hop_size] - - # log linear spec - linear_spc = np.log10(np.maximum(eps, linear_spc)) - return {'wav': wav, 'mel': mel.T, 'linear': linear_spc.T, 'mel_basis': mel_basis} diff --git a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_slim.py b/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_slim.py deleted file mode 100644 index 0a838c4b8e2619b2573c490f546044b113f3bb55..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_slim.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Defines the 'VGGish' model used to generate AudioSet embedding features. - -The public AudioSet release (https://research.google.com/audioset/download.html) -includes 128-D features extracted from the embedding layer of a VGG-like model -that was trained on a large Google-internal YouTube dataset. Here we provide -a TF-Slim definition of the same model, without any dependences on libraries -internal to Google. We call it 'VGGish'. - -Note that we only define the model up to the embedding layer, which is the -penultimate layer before the final classifier layer. We also provide various -hyperparameter values (in vggish_params.py) that were used to train this model -internally. - -For comparison, here is TF-Slim's VGG definition: -https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py -""" - -import tensorflow.compat.v1 as tf -tf.disable_v2_behavior() -import tf_slim as slim - -import vggish_params as params - - -def define_vggish_slim(training=False): - """Defines the VGGish TensorFlow model. - - All ops are created in the current default graph, under the scope 'vggish/'. - - The input is a placeholder named 'vggish/input_features' of type float32 and - shape [batch_size, num_frames, num_bands] where batch_size is variable and - num_frames and num_bands are constants, and [num_frames, num_bands] represents - a log-mel-scale spectrogram patch covering num_bands frequency bands and - num_frames time frames (where each frame step is usually 10ms). This is - produced by computing the stabilized log(mel-spectrogram + params.LOG_OFFSET). - The output is an op named 'vggish/embedding' which produces the activations of - a 128-D embedding layer, which is usually the penultimate layer when used as - part of a full model with a final classifier layer. - - Args: - training: If true, all parameters are marked trainable. - - Returns: - The op 'vggish/embeddings'. - """ - # Defaults: - # - All weights are initialized to N(0, INIT_STDDEV). - # - All biases are initialized to 0. - # - All activations are ReLU. - # - All convolutions are 3x3 with stride 1 and SAME padding. - # - All max-pools are 2x2 with stride 2 and SAME padding. - with slim.arg_scope([slim.conv2d, slim.fully_connected], - weights_initializer=tf.truncated_normal_initializer( - stddev=params.INIT_STDDEV), - biases_initializer=tf.zeros_initializer(), - activation_fn=tf.nn.relu, - trainable=training), \ - slim.arg_scope([slim.conv2d], - kernel_size=[3, 3], stride=1, padding='SAME'), \ - slim.arg_scope([slim.max_pool2d], - kernel_size=[2, 2], stride=2, padding='SAME'), \ - tf.variable_scope('vggish'): - # Input: a batch of 2-D log-mel-spectrogram patches. - features = tf.placeholder( - tf.float32, shape=(None, params.NUM_FRAMES, params.NUM_BANDS), - name='input_features') - # Reshape to 4-D so that we can convolve a batch with conv2d(). - net = tf.reshape(features, [-1, params.NUM_FRAMES, params.NUM_BANDS, 1]) - - # The VGG stack of alternating convolutions and max-pools. - net = slim.conv2d(net, 64, scope='conv1') - net = slim.max_pool2d(net, scope='pool1') - net = slim.conv2d(net, 128, scope='conv2') - net = slim.max_pool2d(net, scope='pool2') - net = slim.repeat(net, 2, slim.conv2d, 256, scope='conv3') - net = slim.max_pool2d(net, scope='pool3') - net = slim.repeat(net, 2, slim.conv2d, 512, scope='conv4') - net = slim.max_pool2d(net, scope='pool4') - - # Flatten before entering fully-connected layers - net = slim.flatten(net) - net = slim.repeat(net, 2, slim.fully_connected, 4096, scope='fc1') - # The embedding layer. - net = slim.fully_connected(net, params.EMBEDDING_SIZE, scope='fc2') - return tf.identity(net, name='embedding') - - -def load_vggish_slim_checkpoint(session, checkpoint_path): - """Loads a pre-trained VGGish-compatible checkpoint. - - This function can be used as an initialization function (referred to as - init_fn in TensorFlow documentation) which is called in a Session after - initializating all variables. When used as an init_fn, this will load - a pre-trained checkpoint that is compatible with the VGGish model - definition. Only variables defined by VGGish will be loaded. - - Args: - session: an active TensorFlow session. - checkpoint_path: path to a file containing a checkpoint that is - compatible with the VGGish model definition. - """ - # Get the list of names of all VGGish variables that exist in - # the checkpoint (i.e., all inference-mode VGGish variables). - with tf.Graph().as_default(): - define_vggish_slim(training=False) - vggish_var_names = [v.name for v in tf.global_variables()] - - # Get the list of all currently existing variables that match - # the list of variable names we just computed. - vggish_vars = [v for v in tf.global_variables() if v.name in vggish_var_names] - - # Use a Saver to restore just the variables selected above. - saver = tf.train.Saver(vggish_vars, name='vggish_load_pretrained', - write_version=1) - saver.restore(session, checkpoint_path) diff --git a/spaces/Neprox/like-it-or-not/README.md b/spaces/Neprox/like-it-or-not/README.md deleted file mode 100644 index 1f60676ef56216f5681319d14cdaf6d9ecd11c0a..0000000000000000000000000000000000000000 --- a/spaces/Neprox/like-it-or-not/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Like It Or Not -emoji: 🦀 -colorFrom: green -colorTo: gray -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/visualizations.py b/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/visualizations.py deleted file mode 100644 index ec00fc64d6e9fda2bb8e613531066ac824df1451..0000000000000000000000000000000000000000 --- a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/visualizations.py +++ /dev/null @@ -1,178 +0,0 @@ -from speaker_encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset -from datetime import datetime -from time import perf_counter as timer -import matplotlib.pyplot as plt -import numpy as np -# import webbrowser -import visdom -import umap - -colormap = np.array([ - [76, 255, 0], - [0, 127, 70], - [255, 0, 0], - [255, 217, 38], - [0, 135, 255], - [165, 0, 165], - [255, 167, 255], - [0, 255, 255], - [255, 96, 38], - [142, 76, 0], - [33, 0, 127], - [0, 0, 0], - [183, 183, 183], -], dtype=np.float) / 255 - - -class Visualizations: - def __init__(self, env_name=None, update_every=10, server="http://localhost", disabled=False): - # Tracking data - self.last_update_timestamp = timer() - self.update_every = update_every - self.step_times = [] - self.losses = [] - self.eers = [] - print("Updating the visualizations every %d steps." % update_every) - - # If visdom is disabled TODO: use a better paradigm for that - self.disabled = disabled - if self.disabled: - return - - # Set the environment name - now = str(datetime.now().strftime("%d-%m %Hh%M")) - if env_name is None: - self.env_name = now - else: - self.env_name = "%s (%s)" % (env_name, now) - - # Connect to visdom and open the corresponding window in the browser - try: - self.vis = visdom.Visdom(server, env=self.env_name, raise_exceptions=True) - except ConnectionError: - raise Exception("No visdom server detected. Run the command \"visdom\" in your CLI to " - "start it.") - # webbrowser.open("http://localhost:8097/env/" + self.env_name) - - # Create the windows - self.loss_win = None - self.eer_win = None - # self.lr_win = None - self.implementation_win = None - self.projection_win = None - self.implementation_string = "" - - def log_params(self): - if self.disabled: - return - from speaker_encoder import params_data - from speaker_encoder import params_model - param_string = "Model parameters:
        " - for param_name in (p for p in dir(params_model) if not p.startswith("__")): - value = getattr(params_model, param_name) - param_string += "\t%s: %s
        " % (param_name, value) - param_string += "Data parameters:
        " - for param_name in (p for p in dir(params_data) if not p.startswith("__")): - value = getattr(params_data, param_name) - param_string += "\t%s: %s
        " % (param_name, value) - self.vis.text(param_string, opts={"title": "Parameters"}) - - def log_dataset(self, dataset: SpeakerVerificationDataset): - if self.disabled: - return - dataset_string = "" - dataset_string += "Speakers: %s\n" % len(dataset.speakers) - dataset_string += "\n" + dataset.get_logs() - dataset_string = dataset_string.replace("\n", "
        ") - self.vis.text(dataset_string, opts={"title": "Dataset"}) - - def log_implementation(self, params): - if self.disabled: - return - implementation_string = "" - for param, value in params.items(): - implementation_string += "%s: %s\n" % (param, value) - implementation_string = implementation_string.replace("\n", "
        ") - self.implementation_string = implementation_string - self.implementation_win = self.vis.text( - implementation_string, - opts={"title": "Training implementation"} - ) - - def update(self, loss, eer, step): - # Update the tracking data - now = timer() - self.step_times.append(1000 * (now - self.last_update_timestamp)) - self.last_update_timestamp = now - self.losses.append(loss) - self.eers.append(eer) - print(".", end="") - - # Update the plots every steps - if step % self.update_every != 0: - return - time_string = "Step time: mean: %5dms std: %5dms" % \ - (int(np.mean(self.step_times)), int(np.std(self.step_times))) - print("\nStep %6d Loss: %.4f EER: %.4f %s" % - (step, np.mean(self.losses), np.mean(self.eers), time_string)) - if not self.disabled: - self.loss_win = self.vis.line( - [np.mean(self.losses)], - [step], - win=self.loss_win, - update="append" if self.loss_win else None, - opts=dict( - legend=["Avg. loss"], - xlabel="Step", - ylabel="Loss", - title="Loss", - ) - ) - self.eer_win = self.vis.line( - [np.mean(self.eers)], - [step], - win=self.eer_win, - update="append" if self.eer_win else None, - opts=dict( - legend=["Avg. EER"], - xlabel="Step", - ylabel="EER", - title="Equal error rate" - ) - ) - if self.implementation_win is not None: - self.vis.text( - self.implementation_string + ("%s" % time_string), - win=self.implementation_win, - opts={"title": "Training implementation"}, - ) - - # Reset the tracking - self.losses.clear() - self.eers.clear() - self.step_times.clear() - - def draw_projections(self, embeds, utterances_per_speaker, step, out_fpath=None, - max_speakers=10): - max_speakers = min(max_speakers, len(colormap)) - embeds = embeds[:max_speakers * utterances_per_speaker] - - n_speakers = len(embeds) // utterances_per_speaker - ground_truth = np.repeat(np.arange(n_speakers), utterances_per_speaker) - colors = [colormap[i] for i in ground_truth] - - reducer = umap.UMAP() - projected = reducer.fit_transform(embeds) - plt.scatter(projected[:, 0], projected[:, 1], c=colors) - plt.gca().set_aspect("equal", "datalim") - plt.title("UMAP projection (step %d)" % step) - if not self.disabled: - self.projection_win = self.vis.matplot(plt, win=self.projection_win) - if out_fpath is not None: - plt.savefig(out_fpath) - plt.clf() - - def save(self): - if not self.disabled: - self.vis.save([self.env_name]) - \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/preprocess_GLUE_tasks.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/preprocess_GLUE_tasks.sh deleted file mode 100644 index 7f215a3b53e1c4a7b1f0320102915a49d84a5015..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/preprocess_GLUE_tasks.sh +++ /dev/null @@ -1,185 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -# raw glue data as downloaded by glue download script (https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e) -if [[ $# -ne 2 ]]; then - echo "Run as following:" - echo "./examples/roberta/preprocess_GLUE_tasks.sh " - exit 1 -fi - -GLUE_DATA_FOLDER=$1 - -# download bpe encoder.json, vocabulary and fairseq dictionary -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt' - -TASKS=$2 # QQP - -if [ "$TASKS" = "ALL" ] -then - TASKS="QQP MNLI QNLI MRPC RTE STS-B SST-2 CoLA" -fi - -for TASK in $TASKS -do - echo "Preprocessing $TASK" - - TASK_DATA_FOLDER="$GLUE_DATA_FOLDER/$TASK" - echo "Raw data as downloaded from glue website: $TASK_DATA_FOLDER" - - SPLITS="train dev test" - INPUT_COUNT=2 - if [ "$TASK" = "QQP" ] - then - INPUT_COLUMNS=( 4 5 ) - TEST_INPUT_COLUMNS=( 2 3 ) - LABEL_COLUMN=6 - elif [ "$TASK" = "MNLI" ] - then - SPLITS="train dev_matched dev_mismatched test_matched test_mismatched" - INPUT_COLUMNS=( 9 10 ) - TEST_INPUT_COLUMNS=( 9 10 ) - DEV_LABEL_COLUMN=16 - LABEL_COLUMN=12 - elif [ "$TASK" = "QNLI" ] - then - INPUT_COLUMNS=( 2 3 ) - TEST_INPUT_COLUMNS=( 2 3 ) - LABEL_COLUMN=4 - elif [ "$TASK" = "MRPC" ] - then - INPUT_COLUMNS=( 4 5 ) - TEST_INPUT_COLUMNS=( 4 5 ) - LABEL_COLUMN=1 - elif [ "$TASK" = "RTE" ] - then - INPUT_COLUMNS=( 2 3 ) - TEST_INPUT_COLUMNS=( 2 3 ) - LABEL_COLUMN=4 - elif [ "$TASK" = "STS-B" ] - then - INPUT_COLUMNS=( 8 9 ) - TEST_INPUT_COLUMNS=( 8 9 ) - LABEL_COLUMN=10 - # Following are single sentence tasks. - elif [ "$TASK" = "SST-2" ] - then - INPUT_COLUMNS=( 1 ) - TEST_INPUT_COLUMNS=( 2 ) - LABEL_COLUMN=2 - INPUT_COUNT=1 - elif [ "$TASK" = "CoLA" ] - then - INPUT_COLUMNS=( 4 ) - TEST_INPUT_COLUMNS=( 2 ) - LABEL_COLUMN=2 - INPUT_COUNT=1 - fi - - # Strip out header and filter lines that don't have expected number of fields. - rm -rf "$TASK_DATA_FOLDER/processed" - mkdir -p "$TASK_DATA_FOLDER/processed" - for SPLIT in $SPLITS - do - # CoLA train and dev doesn't have header. - if [[ ( "$TASK" = "CoLA") && ( "$SPLIT" != "test" ) ]] - then - cp "$TASK_DATA_FOLDER/$SPLIT.tsv" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp"; - else - tail -n +2 "$TASK_DATA_FOLDER/$SPLIT.tsv" > "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp"; - fi - - # Remove unformatted lines from train and dev files for QQP dataset. - if [[ ( "$TASK" = "QQP") && ( "$SPLIT" != "test" ) ]] - then - awk -F '\t' -v NUM_FIELDS=6 'NF==NUM_FIELDS{print}{}' "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp" > "$TASK_DATA_FOLDER/processed/$SPLIT.tsv"; - else - cp "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv"; - fi - rm "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp"; - done - - # Split into input0, input1 and label - for SPLIT in $SPLITS - do - for INPUT_TYPE in $(seq 0 $((INPUT_COUNT-1))) - do - if [[ "$SPLIT" != test* ]] - then - COLUMN_NUMBER=${INPUT_COLUMNS[$INPUT_TYPE]} - else - COLUMN_NUMBER=${TEST_INPUT_COLUMNS[$INPUT_TYPE]} - fi - cut -f"$COLUMN_NUMBER" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv" > "$TASK_DATA_FOLDER/processed/$SPLIT.raw.input$INPUT_TYPE"; - done - - if [[ "$SPLIT" != test* ]] - then - if [ "$TASK" = "MNLI" ] && [ "$SPLIT" != "train" ] - then - cut -f"$DEV_LABEL_COLUMN" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv" > "$TASK_DATA_FOLDER/processed/$SPLIT.label"; - else - cut -f"$LABEL_COLUMN" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv" > "$TASK_DATA_FOLDER/processed/$SPLIT.label"; - fi - fi - - # BPE encode. - for INPUT_TYPE in $(seq 0 $((INPUT_COUNT-1))) - do - LANG="input$INPUT_TYPE" - echo "BPE encoding $SPLIT/$LANG" - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json encoder.json \ - --vocab-bpe vocab.bpe \ - --inputs "$TASK_DATA_FOLDER/processed/$SPLIT.raw.$LANG" \ - --outputs "$TASK_DATA_FOLDER/processed/$SPLIT.$LANG" \ - --workers 60 \ - --keep-empty; - done - done - - # Remove output directory. - rm -rf "$TASK-bin" - - DEVPREF="$TASK_DATA_FOLDER/processed/dev.LANG" - TESTPREF="$TASK_DATA_FOLDER/processed/test.LANG" - if [ "$TASK" = "MNLI" ] - then - DEVPREF="$TASK_DATA_FOLDER/processed/dev_matched.LANG,$TASK_DATA_FOLDER/processed/dev_mismatched.LANG" - TESTPREF="$TASK_DATA_FOLDER/processed/test_matched.LANG,$TASK_DATA_FOLDER/processed/test_mismatched.LANG" - fi - - # Run fairseq preprocessing: - for INPUT_TYPE in $(seq 0 $((INPUT_COUNT-1))) - do - LANG="input$INPUT_TYPE" - fairseq-preprocess \ - --only-source \ - --trainpref "$TASK_DATA_FOLDER/processed/train.$LANG" \ - --validpref "${DEVPREF//LANG/$LANG}" \ - --testpref "${TESTPREF//LANG/$LANG}" \ - --destdir "$TASK-bin/$LANG" \ - --workers 60 \ - --srcdict dict.txt; - done - if [[ "$TASK" != "STS-B" ]] - then - fairseq-preprocess \ - --only-source \ - --trainpref "$TASK_DATA_FOLDER/processed/train.label" \ - --validpref "${DEVPREF//LANG/label}" \ - --destdir "$TASK-bin/label" \ - --workers 60; - else - # For STS-B output range is converted to be between: [0.0, 1.0] - mkdir -p "$TASK-bin/label" - awk '{print $1 / 5.0 }' "$TASK_DATA_FOLDER/processed/train.label" > "$TASK-bin/label/train.label" - awk '{print $1 / 5.0 }' "$TASK_DATA_FOLDER/processed/dev.label" > "$TASK-bin/label/valid.label" - fi -done diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/utils/functions.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/utils/functions.py deleted file mode 100644 index 590a6c11cea222ac9096b19f0e3dfe1b71b6c10b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/utils/functions.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def prob_check(tensor, eps=1e-10): - assert not torch.isnan(tensor).any(), ( - "Nan in a probability tensor." - ) - # Add the eps here to prevent errors introduced by precision - assert tensor.le(1.0 + eps).all() and tensor.ge(0.0 - eps).all(), ( - "Incorrect values in a probability tensor" - ", 0.0 <= tensor <= 1.0" - ) - - -def exclusive_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - Implementing exclusive cumprod. - There is cumprod in pytorch, however there is no exclusive mode. - cumprod(x) = [x1, x1x2, x2x3x4, ..., prod_{i=1}^n x_i] - exclusive means - cumprod(x) = [1, x1, x1x2, x1x2x3, ..., prod_{i=1}^{n-1} x_i] - """ - tensor_size = list(tensor.size()) - tensor_size[dim] = 1 - return_tensor = safe_cumprod( - torch.cat([torch.ones(tensor_size).type_as(tensor), tensor], dim=dim), - dim=dim, - eps=eps, - ) - - if dim == 0: - return return_tensor[:-1] - elif dim == 1: - return return_tensor[:, :-1] - elif dim == 2: - return return_tensor[:, :, :-1] - else: - raise RuntimeError( - "Cumprod on dimension 3 and more is not implemented" - ) - - -def safe_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - An implementation of cumprod to prevent precision issue. - cumprod(x) - = [x1, x1x2, x1x2x3, ....] - = [exp(log(x1)), exp(log(x1) + log(x2)), exp(log(x1) + log(x2) + log(x3)), ...] - = exp(cumsum(log(x))) - """ - - if (tensor + eps < 0).any().item(): - raise RuntimeError( - "Safe cumprod can only take non-negative tensors as input." - "Consider use torch.cumprod if you want to calculate negative values." - ) - - log_tensor = torch.log(tensor + eps) - cumsum_log_tensor = torch.cumsum(log_tensor, dim) - exp_cumsum_log_tensor = torch.exp(cumsum_log_tensor) - return exp_cumsum_log_tensor - - -def moving_sum(x, start_idx: int, end_idx: int): - """ - From MONOTONIC CHUNKWISE ATTENTION - https://arxiv.org/pdf/1712.05382.pdf - Equation (18) - - x = [x_1, x_2, ..., x_N] - MovingSum(x, start_idx, end_idx)_n = Sigma_{m=n−(start_idx−1)}^{n+end_idx-1} x_m - for n in {1, 2, 3, ..., N} - - x : src_len, batch_size - start_idx : start idx - end_idx : end idx - - Example - src_len = 5 - batch_size = 3 - x = - [[ 0, 5, 10], - [ 1, 6, 11], - [ 2, 7, 12], - [ 3, 8, 13], - [ 4, 9, 14]] - - MovingSum(x, 3, 1) = - [[ 0, 5, 10], - [ 1, 11, 21], - [ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39]] - - MovingSum(x, 1, 3) = - [[ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39], - [ 7, 17, 27], - [ 4, 9, 14]] - """ - # TODO: Make dimension configurable - assert start_idx > 0 and end_idx > 0 - batch_size, tgt_len, src_len = x.size() - x = x.view(-1, src_len).unsqueeze(1) - # batch_size, 1, src_len - moving_sum_weight = torch.ones([1, 1, end_idx + start_idx - 1]).type_as(x) - - moving_sum = torch.nn.functional.conv1d( - x, moving_sum_weight, padding=start_idx + end_idx - 1 - ).squeeze(1) - - moving_sum = moving_sum[:, end_idx:-start_idx] - - assert src_len == moving_sum.size(1) - assert batch_size * tgt_len == moving_sum.size(0) - - moving_sum = moving_sum.view(batch_size, tgt_len, src_len) - - return moving_sum diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/preprocessing/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/preprocessing/__init__.py deleted file mode 100644 index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/preprocessing/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/text_compressor.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/text_compressor.py deleted file mode 100644 index 561e9ac89ad9f1e88df95647cfdc53e4fcf5d157..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/text_compressor.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from enum import Enum - - -class TextCompressionLevel(Enum): - none = 0 - low = 1 - high = 2 - - -class TextCompressor(object): - def __init__( - self, level: TextCompressionLevel, - max_input_byte_length: int = 2 ** 16 - ): - self.level = level - self.max_input_length = max_input_byte_length - - def compress(self, text: str) -> bytes: - if self.level == TextCompressionLevel.low: - import zlib - # zlib: built-in, fast - return zlib.compress(text.encode(), level=0) - elif self.level == TextCompressionLevel.high: - try: - import unishox2 - # unishox2: optimized for short text but slower - except ImportError: - raise ImportError( - "Please install unishox2 for the text compression feature: " - "pip install unishox2-py3" - ) - assert len(text.encode()) <= self.max_input_length - return unishox2.compress(text)[0] - else: - return text.encode() - - def decompress(self, compressed: bytes) -> str: - if self.level == TextCompressionLevel.low: - import zlib - return zlib.decompress(compressed).decode() - elif self.level == TextCompressionLevel.high: - try: - import unishox2 - except ImportError: - raise ImportError( - "Please install unishox2 for the text compression feature: " - "pip install unishox2-py3" - ) - return unishox2.decompress(compressed, self.max_input_length) - else: - return compressed.decode() diff --git a/spaces/OFA-Sys/OFA-vqa/models/sequence_generator.py b/spaces/OFA-Sys/OFA-vqa/models/sequence_generator.py deleted file mode 100644 index 7afe0757e38603740f7c2186d5410f9346e6b568..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/models/sequence_generator.py +++ /dev/null @@ -1,1053 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, List, Optional -import sys - -import torch -import torch.nn as nn -from fairseq import search, utils -from fairseq.models import FairseqIncrementalDecoder -from torch import Tensor -from fairseq.ngram_repeat_block import NGramRepeatBlock - -from data import data_utils - -class SequenceGenerator(nn.Module): - def __init__( - self, - models, - tgt_dict, - beam_size=1, - max_len_a=0, - max_len_b=200, - max_len=0, - min_len=1, - normalize_scores=True, - len_penalty=1.0, - unk_penalty=0.0, - temperature=1.0, - match_source_len=False, - no_repeat_ngram_size=0, - search_strategy=None, - eos=None, - symbols_to_strip_from_output=None, - lm_model=None, - lm_weight=1.0, - constraint_trie=None, - constraint_range=None, - gen_code=False, - gen_box=False, - ignore_eos=False, - zero_shot=False - ): - """Generates translations of a given source sentence. - - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models, - currently support fairseq.models.TransformerModel for scripting - beam_size (int, optional): beam width (default: 1) - max_len_a/b (int, optional): generate sequences of maximum length - ax + b, where x is the source length - max_len (int, optional): the maximum length of the generated output - (not including end-of-sentence) - min_len (int, optional): the minimum length of the generated output - (not including end-of-sentence) - normalize_scores (bool, optional): normalize scores by the length - of the output (default: True) - len_penalty (float, optional): length penalty, where <1.0 favors - shorter, >1.0 favors longer sentences (default: 1.0) - unk_penalty (float, optional): unknown word penalty, where <0 - produces more unks, >0 produces fewer (default: 0.0) - temperature (float, optional): temperature, where values - >1.0 produce more uniform samples and values <1.0 produce - sharper samples (default: 1.0) - match_source_len (bool, optional): outputs should match the source - length (default: False) - """ - super().__init__() - if isinstance(models, EnsembleModel): - self.model = models - else: - self.model = EnsembleModel(models) - self.gen_code = gen_code - self.gen_box = gen_box - self.ignore_eos = ignore_eos - self.tgt_dict = tgt_dict - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.bos = tgt_dict.bos() - self.eos = tgt_dict.eos() if eos is None else eos - self.symbols_to_strip_from_output = ( - symbols_to_strip_from_output.union({self.eos}) - if symbols_to_strip_from_output is not None - else {self.bos, self.eos} - ) - self.vocab_size = len(tgt_dict) - self.beam_size = beam_size - # the max beam size is the dictionary size - 1, since we never select pad - self.beam_size = min(beam_size, self.vocab_size - 1) - self.max_len_a = max_len_a - self.max_len_b = max_len_b - self.min_len = min_len - self.max_len = max_len or self.model.max_decoder_positions() - - self.normalize_scores = normalize_scores - self.len_penalty = len_penalty - self.unk_penalty = unk_penalty - self.temperature = temperature - self.match_source_len = match_source_len - self.zero_shot = zero_shot - - if no_repeat_ngram_size > 0: - self.repeat_ngram_blocker = NGramRepeatBlock(no_repeat_ngram_size) - else: - self.repeat_ngram_blocker = None - - assert temperature > 0, "--temperature must be greater than 0" - - self.search = ( - search.BeamSearch(tgt_dict) if search_strategy is None else search_strategy - ) - # We only need to set src_lengths in LengthConstrainedBeamSearch. - # As a module attribute, setting it would break in multithread - # settings when the model is shared. - self.should_set_src_lengths = ( - hasattr(self.search, "needs_src_lengths") and self.search.needs_src_lengths - ) - - self.model.eval() - - self.lm_model = lm_model - self.lm_weight = lm_weight - if self.lm_model is not None: - self.lm_model.eval() - - self.constraint_trie = constraint_trie - - self.constraint_start = None - self.constraint_end = None - if constraint_range is not None: - constraint_start, constraint_end = constraint_range.split(',') - self.constraint_start = int(constraint_start) - self.constraint_end = int(constraint_end) - - def cuda(self): - self.model.cuda() - return self - - @torch.no_grad() - def forward( - self, - sample: Dict[str, Dict[str, Tensor]], - prefix_tokens: Optional[Tensor] = None, - bos_token: Optional[int] = None, - ): - """Generate a batch of translations. - - Args: - sample (dict): batch - prefix_tokens (torch.LongTensor, optional): force decoder to begin - with these tokens - bos_token (int, optional): beginning of sentence token - (default: self.eos) - """ - return self._generate(sample, prefix_tokens, bos_token=bos_token) - - # TODO(myleott): unused, deprecate after pytorch-translate migration - def generate_batched_itr(self, data_itr, beam_size=None, cuda=False, timer=None): - """Iterate over a batched dataset and yield individual translations. - Args: - cuda (bool, optional): use GPU for generation - timer (StopwatchMeter, optional): time generations - """ - for sample in data_itr: - s = utils.move_to_cuda(sample) if cuda else sample - if "net_input" not in s: - continue - input = s["net_input"] - # model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in input.items() if k != "prev_output_tokens" - } - if timer is not None: - timer.start() - with torch.no_grad(): - hypos = self.generate(encoder_input) - if timer is not None: - timer.stop(sum(len(h[0]["tokens"]) for h in hypos)) - for i, id in enumerate(s["id"].data): - # remove padding - src = utils.strip_pad(input["src_tokens"].data[i, :], self.pad) - ref = ( - utils.strip_pad(s["target"].data[i, :], self.pad) - if s["target"] is not None - else None - ) - yield id, src, ref, hypos[i] - - @torch.no_grad() - def generate(self, models, sample: Dict[str, Dict[str, Tensor]], **kwargs) -> List[List[Dict[str, Tensor]]]: - """Generate translations. Match the api of other fairseq generators. - - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models - sample (dict): batch - prefix_tokens (torch.LongTensor, optional): force decoder to begin - with these tokens - constraints (torch.LongTensor, optional): force decoder to include - the list of constraints - bos_token (int, optional): beginning of sentence token - (default: self.eos) - """ - return self._generate(models, sample, **kwargs) - - def _generate( - self, - models, - sample: Dict[str, Dict[str, Tensor]], - prefix_tokens: Optional[Tensor] = None, - constraints: Optional[Tensor] = None, - bos_token: Optional[int] = None, - ): - model = EnsembleModel(models) - incremental_states = torch.jit.annotate( - List[Dict[str, Dict[str, Optional[Tensor]]]], - [ - torch.jit.annotate(Dict[str, Dict[str, Optional[Tensor]]], {}) - for i in range(model.models_size) - ], - ) - net_input = sample["net_input"] - - if "src_tokens" in net_input: - src_tokens = net_input["src_tokens"] - # length of the source text being the character length except EndOfSentence and pad - src_lengths = ( - (src_tokens.ne(self.eos) & src_tokens.ne(self.pad)).long().sum(dim=1) - ) - elif "source" in net_input: - src_tokens = net_input["source"] - src_lengths = ( - net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1) - if net_input["padding_mask"] is not None - else torch.tensor(src_tokens.size(-1)).to(src_tokens) - ) - elif "features" in net_input: - src_tokens = net_input["features"] - src_lengths = ( - net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1) - if net_input["padding_mask"] is not None - else torch.tensor(src_tokens.size(-1)).to(src_tokens) - ) - else: - raise Exception("expected src_tokens or source in net input. input keys: " + str(net_input.keys())) - - # bsz: total number of sentences in beam - # Note that src_tokens may have more than 2 dimensions (i.e. audio features) - bsz, src_len = src_tokens.size()[:2] - beam_size = self.beam_size - - if constraints is not None and not self.search.supports_constraints: - raise NotImplementedError( - "Target-side constraints were provided, but search method doesn't support them" - ) - - # Initialize constraints, when active - self.search.init_constraints(constraints, beam_size) - - max_len: int = -1 - if self.match_source_len: - max_len = src_lengths.max().item() - else: - max_len = int(self.max_len_a * src_len + self.max_len_b) - assert ( - self.min_len <= max_len - ), "min_len cannot be larger than max_len, please adjust these!" - # compute the encoder output for each beam - with torch.autograd.profiler.record_function("EnsembleModel: forward_encoder"): - encoder_outs = model.forward_encoder(net_input) - - # placeholder of indices for bsz * beam_size to hold tokens and accumulative scores - new_order = torch.arange(bsz).view(-1, 1).repeat(1, beam_size).view(-1) - new_order = new_order.to(src_tokens.device).long() - encoder_outs = model.reorder_encoder_out(encoder_outs, new_order) - # ensure encoder_outs is a List. - assert encoder_outs is not None - - # initialize buffers - scores = ( - torch.zeros(bsz * beam_size, max_len + 1).to(src_tokens).float() - ) # +1 for eos; pad is never chosen for scoring - tokens = ( - torch.zeros(bsz * beam_size, max_len + 2) - .to(src_tokens) - .long() - .fill_(self.pad) - ) # +2 for eos and pad - # tokens[:, 0] = self.eos if bos_token is None else bos_token - tokens[:, 0] = self.bos - attn: Optional[Tensor] = None - - # A list that indicates candidates that should be ignored. - # For example, suppose we're sampling and have already finalized 2/5 - # samples. Then cands_to_ignore would mark 2 positions as being ignored, - # so that we only finalize the remaining 3 samples. - cands_to_ignore = ( - torch.zeros(bsz, beam_size).to(src_tokens).eq(-1) - ) # forward and backward-compatible False mask - - # list of completed sentences - finalized = torch.jit.annotate( - List[List[Dict[str, Tensor]]], - [torch.jit.annotate(List[Dict[str, Tensor]], []) for i in range(bsz)], - ) # contains lists of dictionaries of infomation about the hypothesis being finalized at each step - - # a boolean array indicating if the sentence at the index is finished or not - finished = [False for i in range(bsz)] - num_remaining_sent = bsz # number of sentences remaining - - # number of candidate hypos per step - cand_size = 2 * beam_size # 2 x beam size in case half are EOS - - # offset arrays for converting between different indexing schemes - bbsz_offsets = ( - (torch.arange(0, bsz) * beam_size) - .unsqueeze(1) - .type_as(tokens) - .to(src_tokens.device) - ) - cand_offsets = torch.arange(0, cand_size).type_as(tokens).to(src_tokens.device) - - reorder_state: Optional[Tensor] = None - batch_idxs: Optional[Tensor] = None - - original_batch_idxs: Optional[Tensor] = None - if "id" in sample and isinstance(sample["id"], Tensor): - original_batch_idxs = sample["id"] - else: - original_batch_idxs = torch.arange(0, bsz).type_as(tokens) - - for step in range(max_len + 1): # one extra step for EOS marker - # reorder decoder internal states based on the prev choice of beams - if reorder_state is not None: - if batch_idxs is not None: - # update beam indices to take into account removed sentences - corr = batch_idxs - torch.arange(batch_idxs.numel()).type_as( - batch_idxs - ) - reorder_state.view(-1, beam_size).add_( - corr.unsqueeze(-1) * beam_size - ) - original_batch_idxs = original_batch_idxs[batch_idxs] - model.reorder_incremental_state(incremental_states, reorder_state) - encoder_outs = model.reorder_encoder_out( - encoder_outs, reorder_state - ) - with torch.autograd.profiler.record_function("EnsembleModel: forward_decoder"): - lprobs, avg_attn_scores = model.forward_decoder( - tokens[:, : step + 1], - encoder_outs, - incremental_states, - self.temperature, - constraint_trie=self.constraint_trie, - constraint_start=self.constraint_start, - constraint_end=self.constraint_end, - gen_code=self.gen_code, - zero_shot=self.zero_shot, - prefix_tokens=prefix_tokens - ) - - if self.lm_model is not None: - lm_out = self.lm_model(tokens[:, : step + 1]) - probs = self.lm_model.get_normalized_probs( - lm_out, log_probs=True, sample=None - ) - probs = probs[:, -1, :] * self.lm_weight - lprobs += probs - # handle prefix tokens (possibly with different lengths) - if ( - prefix_tokens is not None - and step < prefix_tokens.size(1) - and step < max_len - ): - lprobs, tokens, scores = self._prefix_tokens( - step, lprobs, scores, tokens, prefix_tokens, beam_size - ) - elif step < self.min_len: - # minimum length constraint (does not apply if using prefix_tokens) - lprobs[:, self.eos] = -math.inf - - lprobs[lprobs != lprobs] = torch.tensor(-math.inf).to(lprobs) - - lprobs[:, self.pad] = -math.inf # never select pad - lprobs[:, self.unk] -= self.unk_penalty # apply unk penalty - - if (self.gen_code or self.gen_box) and step < max_len: - lprobs[:, :4] = -math.inf - if self.gen_box: - lprobs[:, -1] = -math.inf - if (step + 1) % 5 == 0: - lprobs[:, self.constraint_start:59457] = -math.inf - else: - lprobs[:, 59457:] = -math.inf - - # handle max length constraint - if step >= max_len: - lprobs[:, : self.eos] = -math.inf - lprobs[:, self.eos + 1 :] = -math.inf - if self.ignore_eos: - lprobs[:, self.eos] = 1 - - # Record attention scores, only support avg_attn_scores is a Tensor - if avg_attn_scores is not None: - if attn is None: - attn = torch.empty( - bsz * beam_size, avg_attn_scores.size(1), max_len + 2 - ).to(scores) - attn[:, :, step + 1].copy_(avg_attn_scores) - - scores = scores.type_as(lprobs) - eos_bbsz_idx = torch.empty(0).to( - tokens - ) # indices of hypothesis ending with eos (finished sentences) - eos_scores = torch.empty(0).to( - scores - ) # scores of hypothesis ending with eos (finished sentences) - - if self.should_set_src_lengths: - self.search.set_src_lengths(src_lengths) - - if self.repeat_ngram_blocker is not None: - lprobs = self.repeat_ngram_blocker(tokens, lprobs, bsz, beam_size, step) - - # Shape: (batch, cand_size) - cand_scores, cand_indices, cand_beams = self.search.step( - step, - lprobs.view(bsz, -1, self.vocab_size), - scores.view(bsz, beam_size, -1)[:, :, :step], - tokens[:, : step + 1], - original_batch_idxs, - ) - - # cand_bbsz_idx contains beam indices for the top candidate - # hypotheses, with a range of values: [0, bsz*beam_size), - # and dimensions: [bsz, cand_size] - cand_bbsz_idx = cand_beams.add(bbsz_offsets) - - # finalize hypotheses that end in eos - # Shape of eos_mask: (batch size, beam size) - eos_mask = cand_indices.eq(self.eos) & cand_scores.ne(-math.inf) - eos_mask[:, :beam_size][cands_to_ignore] = torch.tensor(0).to(eos_mask) - - # only consider eos when it's among the top beam_size indices - # Now we know what beam item(s) to finish - # Shape: 1d list of absolute-numbered - eos_bbsz_idx = torch.masked_select( - cand_bbsz_idx[:, :beam_size], mask=eos_mask[:, :beam_size] - ) - - finalized_sents: List[int] = [] - if eos_bbsz_idx.numel() > 0: - eos_scores = torch.masked_select( - cand_scores[:, :beam_size], mask=eos_mask[:, :beam_size] - ) - - finalized_sents = self.finalize_hypos( - step, - eos_bbsz_idx, - eos_scores, - tokens, - scores, - finalized, - finished, - beam_size, - attn, - src_lengths, - max_len, - ) - num_remaining_sent -= len(finalized_sents) - - assert num_remaining_sent >= 0 - if num_remaining_sent == 0: - break - if self.search.stop_on_max_len and step >= max_len: - break - assert step < max_len, f"{step} < {max_len}" - - # Remove finalized sentences (ones for which {beam_size} - # finished hypotheses have been generated) from the batch. - if len(finalized_sents) > 0: - new_bsz = bsz - len(finalized_sents) - - # construct batch_idxs which holds indices of batches to keep for the next pass - batch_mask = torch.ones( - bsz, dtype=torch.bool, device=cand_indices.device - ) - batch_mask[finalized_sents] = False - # TODO replace `nonzero(as_tuple=False)` after TorchScript supports it - batch_idxs = torch.arange( - bsz, device=cand_indices.device - ).masked_select(batch_mask) - - # Choose the subset of the hypothesized constraints that will continue - self.search.prune_sentences(batch_idxs) - - eos_mask = eos_mask[batch_idxs] - cand_beams = cand_beams[batch_idxs] - bbsz_offsets.resize_(new_bsz, 1) - cand_bbsz_idx = cand_beams.add(bbsz_offsets) - cand_scores = cand_scores[batch_idxs] - cand_indices = cand_indices[batch_idxs] - - if prefix_tokens is not None: - prefix_tokens = prefix_tokens[batch_idxs] - src_lengths = src_lengths[batch_idxs] - cands_to_ignore = cands_to_ignore[batch_idxs] - - scores = scores.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1) - tokens = tokens.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1) - if attn is not None: - attn = attn.view(bsz, -1)[batch_idxs].view( - new_bsz * beam_size, attn.size(1), -1 - ) - bsz = new_bsz - else: - batch_idxs = None - - # Set active_mask so that values > cand_size indicate eos hypos - # and values < cand_size indicate candidate active hypos. - # After, the min values per row are the top candidate active hypos - - # Rewrite the operator since the element wise or is not supported in torchscript. - - eos_mask[:, :beam_size] = ~((~cands_to_ignore) & (~eos_mask[:, :beam_size])) - active_mask = torch.add( - eos_mask.type_as(cand_offsets) * cand_size, - cand_offsets[: eos_mask.size(1)], - ) - - # get the top beam_size active hypotheses, which are just - # the hypos with the smallest values in active_mask. - # {active_hypos} indicates which {beam_size} hypotheses - # from the list of {2 * beam_size} candidates were - # selected. Shapes: (batch size, beam size) - new_cands_to_ignore, active_hypos = torch.topk( - active_mask, k=beam_size, dim=1, largest=False - ) - - # update cands_to_ignore to ignore any finalized hypos. - cands_to_ignore = new_cands_to_ignore.ge(cand_size)[:, :beam_size] - # Make sure there is at least one active item for each sentence in the batch. - assert (~cands_to_ignore).any(dim=1).all() - - # update cands_to_ignore to ignore any finalized hypos - - # {active_bbsz_idx} denotes which beam number is continued for each new hypothesis (a beam - # can be selected more than once). - active_bbsz_idx = torch.gather(cand_bbsz_idx, dim=1, index=active_hypos) - active_scores = torch.gather(cand_scores, dim=1, index=active_hypos) - - active_bbsz_idx = active_bbsz_idx.view(-1) - active_scores = active_scores.view(-1) - - # copy tokens and scores for active hypotheses - - # Set the tokens for each beam (can select the same row more than once) - tokens[:, : step + 1] = torch.index_select( - tokens[:, : step + 1], dim=0, index=active_bbsz_idx - ) - # Select the next token for each of them - tokens.view(bsz, beam_size, -1)[:, :, step + 1] = torch.gather( - cand_indices, dim=1, index=active_hypos - ) - if step > 0: - scores[:, :step] = torch.index_select( - scores[:, :step], dim=0, index=active_bbsz_idx - ) - scores.view(bsz, beam_size, -1)[:, :, step] = torch.gather( - cand_scores, dim=1, index=active_hypos - ) - - # Update constraints based on which candidates were selected for the next beam - self.search.update_constraints(active_hypos) - - # copy attention for active hypotheses - if attn is not None: - attn[:, :, : step + 2] = torch.index_select( - attn[:, :, : step + 2], dim=0, index=active_bbsz_idx - ) - - # reorder incremental state in decoder - reorder_state = active_bbsz_idx - - # sort by score descending - for sent in range(len(finalized)): - scores = torch.tensor( - [float(elem["score"].item()) for elem in finalized[sent]] - ) - _, sorted_scores_indices = torch.sort(scores, descending=True) - finalized[sent] = [finalized[sent][ssi] for ssi in sorted_scores_indices] - finalized[sent] = torch.jit.annotate( - List[Dict[str, Tensor]], finalized[sent] - ) - return finalized - - def _prefix_tokens( - self, step: int, lprobs, scores, tokens, prefix_tokens, beam_size: int - ): - """Handle prefix tokens""" - prefix_toks = prefix_tokens[:, step].unsqueeze(-1).repeat(1, beam_size).view(-1) - prefix_lprobs = lprobs.gather(-1, prefix_toks.unsqueeze(-1)) - prefix_mask = prefix_toks.ne(self.pad) - if self.constraint_trie is None: - lprobs[prefix_mask] = torch.min(prefix_lprobs) - 1 - else: - lprobs[prefix_mask] = -math.inf - lprobs[prefix_mask] = lprobs[prefix_mask].scatter( - -1, prefix_toks[prefix_mask].unsqueeze(-1), prefix_lprobs[prefix_mask] - ) - # if prefix includes eos, then we should make sure tokens and - # scores are the same across all beams - eos_mask = prefix_toks.eq(self.eos) - if eos_mask.any(): - # validate that the first beam matches the prefix - first_beam = tokens[eos_mask].view(-1, beam_size, tokens.size(-1))[ - :, 0, 1 : step + 1 - ] - eos_mask_batch_dim = eos_mask.view(-1, beam_size)[:, 0] - target_prefix = prefix_tokens[eos_mask_batch_dim][:, :step] - assert (first_beam == target_prefix).all() - - # copy tokens, scores and lprobs from the first beam to all beams - tokens = self.replicate_first_beam(tokens, eos_mask_batch_dim, beam_size) - scores = self.replicate_first_beam(scores, eos_mask_batch_dim, beam_size) - lprobs = self.replicate_first_beam(lprobs, eos_mask_batch_dim, beam_size) - return lprobs, tokens, scores - - def replicate_first_beam(self, tensor, mask, beam_size: int): - tensor = tensor.view(-1, beam_size, tensor.size(-1)) - tensor[mask] = tensor[mask][:, :1, :] - return tensor.view(-1, tensor.size(-1)) - - def finalize_hypos( - self, - step: int, - bbsz_idx, - eos_scores, - tokens, - scores, - finalized: List[List[Dict[str, Tensor]]], - finished: List[bool], - beam_size: int, - attn: Optional[Tensor], - src_lengths, - max_len: int, - ): - """Finalize hypothesis, store finalized information in `finalized`, and change `finished` accordingly. - A sentence is finalized when {beam_size} finished items have been collected for it. - - Returns number of sentences (not beam items) being finalized. - These will be removed from the batch and not processed further. - Args: - bbsz_idx (Tensor): - """ - assert bbsz_idx.numel() == eos_scores.numel() - - # clone relevant token and attention tensors. - # tokens is (batch * beam, max_len). So the index_select - # gets the newly EOS rows, then selects cols 1..{step + 2} - tokens_clone = tokens.index_select(0, bbsz_idx)[ - :, 1 : step + 2 - ] # skip the first index, which is EOS - - tokens_clone[:, step] = self.eos - attn_clone = ( - attn.index_select(0, bbsz_idx)[:, :, 1 : step + 2] - if attn is not None - else None - ) - - # compute scores per token position - pos_scores = scores.index_select(0, bbsz_idx)[:, : step + 1] - pos_scores[:, step] = eos_scores - # convert from cumulative to per-position scores - pos_scores[:, 1:] = pos_scores[:, 1:] - pos_scores[:, :-1] - - # normalize sentence-level scores - if self.normalize_scores: - eos_scores /= (step + 1) ** self.len_penalty - - # cum_unfin records which sentences in the batch are finished. - # It helps match indexing between (a) the original sentences - # in the batch and (b) the current, possibly-reduced set of - # sentences. - cum_unfin: List[int] = [] - prev = 0 - for f in finished: - if f: - prev += 1 - else: - cum_unfin.append(prev) - cum_fin_tensor = torch.tensor(cum_unfin, dtype=torch.int).to(bbsz_idx) - - unfin_idx = bbsz_idx // beam_size - sent = unfin_idx + torch.index_select(cum_fin_tensor, 0, unfin_idx) - - # Create a set of "{sent}{unfin_idx}", where - # "unfin_idx" is the index in the current (possibly reduced) - # list of sentences, and "sent" is the index in the original, - # unreduced batch - # For every finished beam item - # sentence index in the current (possibly reduced) batch - seen = (sent << 32) + unfin_idx - unique_seen: List[int] = torch.unique(seen).tolist() - - if self.match_source_len: - condition = step > torch.index_select(src_lengths, 0, unfin_idx) - eos_scores = torch.where(condition, torch.tensor(-math.inf), eos_scores) - sent_list: List[int] = sent.tolist() - for i in range(bbsz_idx.size()[0]): - # An input sentence (among those in a batch) is finished when - # beam_size hypotheses have been collected for it - if len(finalized[sent_list[i]]) < beam_size: - if attn_clone is not None: - # remove padding tokens from attn scores - hypo_attn = attn_clone[i] - else: - hypo_attn = torch.empty(0) - - finalized[sent_list[i]].append( - { - "tokens": tokens_clone[i], - "score": eos_scores[i], - "attention": hypo_attn, # src_len x tgt_len - "alignment": torch.empty(0), - "positional_scores": pos_scores[i], - } - ) - - newly_finished: List[int] = [] - for unique_s in unique_seen: - # check termination conditions for this sentence - unique_sent: int = unique_s >> 32 - unique_unfin_idx: int = unique_s - (unique_sent << 32) - - if not finished[unique_sent] and self.is_finished( - step, unique_unfin_idx, max_len, len(finalized[unique_sent]), beam_size - ): - finished[unique_sent] = True - newly_finished.append(unique_unfin_idx) - - return newly_finished - - def is_finished( - self, - step: int, - unfin_idx: int, - max_len: int, - finalized_sent_len: int, - beam_size: int, - ): - """ - Check whether decoding for a sentence is finished, which - occurs when the list of finalized sentences has reached the - beam size, or when we reach the maximum length. - """ - assert finalized_sent_len <= beam_size - if finalized_sent_len == beam_size or step == max_len: - return True - return False - - -class EnsembleModel(nn.Module): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__() - self.models_size = len(models) - # method '__len__' is not supported in ModuleList for torch script - self.single_model = models[0] - self.models = nn.ModuleList(models) - - self.has_incremental: bool = False - if all( - hasattr(m, "decoder") and isinstance(m.decoder, FairseqIncrementalDecoder) - for m in models - ): - self.has_incremental = True - - def forward(self): - pass - - def has_encoder(self): - return hasattr(self.single_model, "encoder") - - def has_incremental_states(self): - return self.has_incremental - - def max_decoder_positions(self): - return min([m.max_decoder_positions() for m in self.models if hasattr(m, "max_decoder_positions")] + [sys.maxsize]) - - @torch.jit.export - def forward_encoder(self, net_input: Dict[str, Tensor]): - if not self.has_encoder(): - return None - return [model.encoder.forward_torchscript(net_input) for model in self.models] - - @torch.jit.export - def forward_decoder( - self, - tokens, - encoder_outs: List[Dict[str, List[Tensor]]], - incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]], - temperature: float = 1.0, - constraint_trie=None, - constraint_start=None, - constraint_end=None, - gen_code=False, - zero_shot=False, - prefix_tokens=None - ): - log_probs = [] - avg_attn: Optional[Tensor] = None - encoder_out: Optional[Dict[str, List[Tensor]]] = None - code_mask = (tokens.new_ones(tokens.size(0))*gen_code).bool() - for i, model in enumerate(self.models): - if self.has_encoder(): - encoder_out = encoder_outs[i] - # decode each model - if self.has_incremental_states(): - decoder_out = model.decoder.forward( - tokens, - code_masks=code_mask, - encoder_out=encoder_out, - incremental_state=incremental_states[i], - ) - else: - if hasattr(model, "decoder"): - decoder_out = model.decoder.forward(tokens, code_masks=code_mask, encoder_out=encoder_out) - else: - decoder_out = model.forward(tokens) - - attn: Optional[Tensor] = None - decoder_len = len(decoder_out) - if decoder_len > 1 and decoder_out[1] is not None: - if isinstance(decoder_out[1], Tensor): - attn = decoder_out[1] - else: - attn_holder = decoder_out[1]["attn"] - if isinstance(attn_holder, Tensor): - attn = attn_holder - elif attn_holder is not None: - attn = attn_holder[0] - if attn is not None: - attn = attn[:, -1, :] - - decoder_out_tuple = ( - decoder_out[0][:, -1:, :].div_(temperature), - None if decoder_len <= 1 else decoder_out[1], - ) - - beam_size = decoder_out_tuple[0].size(0) // prefix_tokens.size(0) if prefix_tokens is not None else 0 - if constraint_trie is not None and not zero_shot: - assert constraint_start is None and constraint_end is None - constraint_masks = decoder_out_tuple[0].new_zeros(decoder_out_tuple[0].size()).bool() - constraint_prefix_tokens = tokens.tolist() - for token_index, constraint_prefix_token in enumerate(constraint_prefix_tokens): - prefix_len = prefix_tokens[token_index // beam_size].ne(1).sum().item() if prefix_tokens is not None else 0 - if len(constraint_prefix_token) > prefix_len: - constraint_prefix_token = [0] + constraint_prefix_token[prefix_len+1:] - constraint_nodes = constraint_trie.get_next_layer(constraint_prefix_token) - constraint_masks[token_index][:, constraint_nodes] = True - else: - constraint_masks[token_index] = True - decoder_out_tuple[0].masked_fill_(~constraint_masks, -math.inf) - if constraint_start is not None and constraint_end is not None and not zero_shot: - assert constraint_trie is None - decoder_out_tuple[0][:, :, 4:constraint_start] = -math.inf - decoder_out_tuple[0][:, :, constraint_end:] = -math.inf - - probs = model.get_normalized_probs( - decoder_out_tuple, log_probs=True, sample=None - ) - if constraint_trie is not None and zero_shot: - assert constraint_start is None and constraint_end is None - constraint_masks = decoder_out_tuple[0].new_zeros(decoder_out_tuple[0].size()).bool() - constraint_prefix_tokens = tokens.tolist() - for token_index, constraint_prefix_token in enumerate(constraint_prefix_tokens): - constraint_nodes = constraint_trie.get_next_layer(constraint_prefix_token) - constraint_masks[token_index][:, constraint_nodes] = True - probs.masked_fill_(~constraint_masks, -math.inf) - if constraint_start is not None and constraint_end is not None and zero_shot: - assert constraint_trie is None - probs[:, :, 4:constraint_start] = -math.inf - probs[:, :, constraint_end:] = -math.inf - probs = probs[:, -1, :] - if self.models_size == 1: - return probs, attn - - log_probs.append(probs) - if attn is not None: - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - - avg_probs = torch.logsumexp(torch.stack(log_probs, dim=0), dim=0) - math.log( - self.models_size - ) - - if avg_attn is not None: - avg_attn.div_(self.models_size) - return avg_probs, avg_attn - - @torch.jit.export - def reorder_encoder_out( - self, encoder_outs: Optional[List[Dict[str, List[Tensor]]]], new_order - ): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - new_outs: List[Dict[str, List[Tensor]]] = [] - if not self.has_encoder(): - return new_outs - for i, model in enumerate(self.models): - assert encoder_outs is not None - new_outs.append( - model.encoder.reorder_encoder_out(encoder_outs[i], new_order) - ) - return new_outs - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]], - new_order, - ): - if not self.has_incremental_states(): - return - for i, model in enumerate(self.models): - model.decoder.reorder_incremental_state_scripting( - incremental_states[i], new_order - ) - - -class SequenceGeneratorWithAlignment(SequenceGenerator): - def __init__( - self, models, tgt_dict, left_pad_target=False, print_alignment="hard", **kwargs - ): - """Generates translations of a given source sentence. - - Produces alignments following "Jointly Learning to Align and - Translate with Transformer Models" (Garg et al., EMNLP 2019). - - Args: - left_pad_target (bool, optional): Whether or not the - hypothesis should be left padded or not when they are - teacher forced for generating alignments. - """ - super().__init__(EnsembleModelWithAlignment(models), tgt_dict, **kwargs) - self.left_pad_target = left_pad_target - - if print_alignment == "hard": - self.extract_alignment = utils.extract_hard_alignment - elif print_alignment == "soft": - self.extract_alignment = utils.extract_soft_alignment - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - finalized = super()._generate(sample, **kwargs) - - src_tokens = sample["net_input"]["src_tokens"] - bsz = src_tokens.shape[0] - beam_size = self.beam_size - ( - src_tokens, - src_lengths, - prev_output_tokens, - tgt_tokens, - ) = self._prepare_batch_for_alignment(sample, finalized) - if any(getattr(m, "full_context_alignment", False) for m in self.model.models): - attn = self.model.forward_align(src_tokens, src_lengths, prev_output_tokens) - else: - attn = [ - finalized[i // beam_size][i % beam_size]["attention"].transpose(1, 0) - for i in range(bsz * beam_size) - ] - - if src_tokens.device != "cpu": - src_tokens = src_tokens.to("cpu") - tgt_tokens = tgt_tokens.to("cpu") - attn = [i.to("cpu") for i in attn] - - # Process the attn matrix to extract hard alignments. - for i in range(bsz * beam_size): - alignment = self.extract_alignment( - attn[i], src_tokens[i], tgt_tokens[i], self.pad, self.eos - ) - finalized[i // beam_size][i % beam_size]["alignment"] = alignment - return finalized - - def _prepare_batch_for_alignment(self, sample, hypothesis): - src_tokens = sample["net_input"]["src_tokens"] - bsz = src_tokens.shape[0] - src_tokens = ( - src_tokens[:, None, :] - .expand(-1, self.beam_size, -1) - .contiguous() - .view(bsz * self.beam_size, -1) - ) - src_lengths = sample["net_input"]["src_lengths"] - src_lengths = ( - src_lengths[:, None] - .expand(-1, self.beam_size) - .contiguous() - .view(bsz * self.beam_size) - ) - prev_output_tokens = data_utils.collate_tokens( - [beam["tokens"] for example in hypothesis for beam in example], - self.pad, - self.eos, - self.left_pad_target, - move_eos_to_beginning=True, - ) - tgt_tokens = data_utils.collate_tokens( - [beam["tokens"] for example in hypothesis for beam in example], - self.pad, - self.eos, - self.left_pad_target, - move_eos_to_beginning=False, - ) - return src_tokens, src_lengths, prev_output_tokens, tgt_tokens - - -class EnsembleModelWithAlignment(EnsembleModel): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__(models) - - def forward_align(self, src_tokens, src_lengths, prev_output_tokens): - avg_attn = None - for model in self.models: - decoder_out = model(src_tokens, src_lengths, prev_output_tokens) - attn = decoder_out[1]["attn"][0] - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - if len(self.models) > 1: - avg_attn.div_(len(self.models)) - return avg_attn diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py deleted file mode 100644 index e00de4ad28fd81483c9e1161394b7b508fdad91f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py +++ /dev/null @@ -1,419 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import functools -import io -import struct -import types -import torch - -from detectron2.modeling import meta_arch -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.roi_heads import keypoint_head -from detectron2.structures import Boxes, ImageList, Instances, RotatedBoxes - -from .c10 import Caffe2Compatible -from .caffe2_patch import ROIHeadsPatcher, patch_generalized_rcnn -from .shared import ( - alias, - check_set_pb_arg, - get_pb_arg_floats, - get_pb_arg_valf, - get_pb_arg_vali, - get_pb_arg_vals, - mock_torch_nn_functional_interpolate, -) - - -def assemble_rcnn_outputs_by_name(image_sizes, tensor_outputs, force_mask_on=False): - """ - A function to assemble caffe2 model's outputs (i.e. Dict[str, Tensor]) - to detectron2's format (i.e. list of Instances instance). - This only works when the model follows the Caffe2 detectron's naming convention. - - Args: - image_sizes (List[List[int, int]]): [H, W] of every image. - tensor_outputs (Dict[str, Tensor]): external_output to its tensor. - - force_mask_on (Bool): if true, the it make sure there'll be pred_masks even - if the mask is not found from tensor_outputs (usually due to model crash) - """ - - results = [Instances(image_size) for image_size in image_sizes] - - batch_splits = tensor_outputs.get("batch_splits", None) - if batch_splits: - raise NotImplementedError() - assert len(image_sizes) == 1 - result = results[0] - - bbox_nms = tensor_outputs["bbox_nms"] - score_nms = tensor_outputs["score_nms"] - class_nms = tensor_outputs["class_nms"] - # Detection will always success because Conv support 0-batch - assert bbox_nms is not None - assert score_nms is not None - assert class_nms is not None - if bbox_nms.shape[1] == 5: - result.pred_boxes = RotatedBoxes(bbox_nms) - else: - result.pred_boxes = Boxes(bbox_nms) - result.scores = score_nms - result.pred_classes = class_nms.to(torch.int64) - - mask_fcn_probs = tensor_outputs.get("mask_fcn_probs", None) - if mask_fcn_probs is not None: - # finish the mask pred - mask_probs_pred = mask_fcn_probs - num_masks = mask_probs_pred.shape[0] - class_pred = result.pred_classes - indices = torch.arange(num_masks, device=class_pred.device) - mask_probs_pred = mask_probs_pred[indices, class_pred][:, None] - result.pred_masks = mask_probs_pred - elif force_mask_on: - # NOTE: there's no way to know the height/width of mask here, it won't be - # used anyway when batch size is 0, so just set them to 0. - result.pred_masks = torch.zeros([0, 1, 0, 0], dtype=torch.uint8) - - keypoints_out = tensor_outputs.get("keypoints_out", None) - kps_score = tensor_outputs.get("kps_score", None) - if keypoints_out is not None: - # keypoints_out: [N, 4, #kypoints], where 4 is in order of (x, y, score, prob) - keypoints_tensor = keypoints_out - # NOTE: it's possible that prob is not calculated if "should_output_softmax" - # is set to False in HeatmapMaxKeypoint, so just using raw score, seems - # it doesn't affect mAP. TODO: check more carefully. - keypoint_xyp = keypoints_tensor.transpose(1, 2)[:, :, [0, 1, 2]] - result.pred_keypoints = keypoint_xyp - elif kps_score is not None: - # keypoint heatmap to sparse data structure - pred_keypoint_logits = kps_score - keypoint_head.keypoint_rcnn_inference(pred_keypoint_logits, [result]) - - return results - - -def _cast_to_f32(f64): - return struct.unpack("f", struct.pack("f", f64))[0] - - -def set_caffe2_compatible_tensor_mode(model, enable=True): - def _fn(m): - if isinstance(m, Caffe2Compatible): - m.tensor_mode = enable - - model.apply(_fn) - - -def convert_batched_inputs_to_c2_format(batched_inputs, size_divisibility, device): - """ - See get_caffe2_inputs() below. - """ - assert all(isinstance(x, dict) for x in batched_inputs) - assert all(x["image"].dim() == 3 for x in batched_inputs) - - images = [x["image"] for x in batched_inputs] - images = ImageList.from_tensors(images, size_divisibility) - - im_info = [] - for input_per_image, image_size in zip(batched_inputs, images.image_sizes): - target_height = input_per_image.get("height", image_size[0]) - target_width = input_per_image.get("width", image_size[1]) # noqa - # NOTE: The scale inside im_info is kept as convention and for providing - # post-processing information if further processing is needed. For - # current Caffe2 model definitions that don't include post-processing inside - # the model, this number is not used. - # NOTE: There can be a slight difference between width and height - # scales, using a single number can results in numerical difference - # compared with D2's post-processing. - scale = target_height / image_size[0] - im_info.append([image_size[0], image_size[1], scale]) - im_info = torch.Tensor(im_info) - - return images.tensor.to(device), im_info.to(device) - - -class Caffe2MetaArch(Caffe2Compatible, torch.nn.Module): - """ - Base class for caffe2-compatible implementation of a meta architecture. - The forward is traceable and its traced graph can be converted to caffe2 - graph through ONNX. - """ - - def __init__(self, cfg, torch_model): - """ - Args: - cfg (CfgNode): - torch_model (nn.Module): the detectron2 model (meta_arch) to be - converted. - """ - super().__init__() - self._wrapped_model = torch_model - self.eval() - set_caffe2_compatible_tensor_mode(self, True) - - def get_caffe2_inputs(self, batched_inputs): - """ - Convert pytorch-style structured inputs to caffe2-style inputs that - are tuples of tensors. - - Args: - batched_inputs (list[dict]): inputs to a detectron2 model - in its standard format. Each dict has "image" (CHW tensor), and optionally - "height" and "width". - - Returns: - tuple[Tensor]: - tuple of tensors that will be the inputs to the - :meth:`forward` method. For existing models, the first - is an NCHW tensor (padded and batched); the second is - a im_info Nx3 tensor, where the rows are - (height, width, unused legacy parameter) - """ - return convert_batched_inputs_to_c2_format( - batched_inputs, - self._wrapped_model.backbone.size_divisibility, - self._wrapped_model.device, - ) - - def encode_additional_info(self, predict_net, init_net): - """ - Save extra metadata that will be used by inference in the output protobuf. - """ - pass - - def forward(self, inputs): - """ - Run the forward in caffe2-style. It has to use caffe2-compatible ops - and the method will be used for tracing. - - Args: - inputs (tuple[Tensor]): inputs defined by :meth:`get_caffe2_input`. - They will be the inputs of the converted caffe2 graph. - - Returns: - tuple[Tensor]: output tensors. They will be the outputs of the - converted caffe2 graph. - """ - raise NotImplementedError - - def _caffe2_preprocess_image(self, inputs): - """ - Caffe2 implementation of preprocess_image, which is called inside each MetaArch's forward. - It normalizes the input images, and the final caffe2 graph assumes the - inputs have been batched already. - """ - data, im_info = inputs - data = alias(data, "data") - im_info = alias(im_info, "im_info") - mean, std = self._wrapped_model.pixel_mean, self._wrapped_model.pixel_std - normalized_data = (data - mean) / std - normalized_data = alias(normalized_data, "normalized_data") - - # Pack (data, im_info) into ImageList which is recognized by self.inference. - images = ImageList(tensor=normalized_data, image_sizes=im_info) - return images - - @staticmethod - def get_outputs_converter(predict_net, init_net): - """ - Creates a function that converts outputs of the caffe2 model to - detectron2's standard format. - The function uses information in `predict_net` and `init_net` that are - available at inferene time. Therefore the function logic can be used in inference. - - The returned function has the following signature: - - def convert(batched_inputs, c2_inputs, c2_results) -> detectron2_outputs - - Where - - * batched_inputs (list[dict]): the original input format of the meta arch - * c2_inputs (tuple[Tensor]): the caffe2 inputs. - * c2_results (dict[str, Tensor]): the caffe2 output format, - corresponding to the outputs of the :meth:`forward` function. - * detectron2_outputs: the original output format of the meta arch. - - This function can be used to compare the outputs of the original meta arch and - the converted caffe2 graph. - - Returns: - callable: a callable of the above signature. - """ - raise NotImplementedError - - -class Caffe2GeneralizedRCNN(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.GeneralizedRCNN) - torch_model = patch_generalized_rcnn(torch_model) - super().__init__(cfg, torch_model) - - try: - use_heatmap_max_keypoint = cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT - except AttributeError: - use_heatmap_max_keypoint = False - self.roi_heads_patcher = ROIHeadsPatcher( - self._wrapped_model.roi_heads, use_heatmap_max_keypoint - ) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"GeneralizedRCNN") - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - if not self.tensor_mode: - return self._wrapped_model.inference(inputs) - images = self._caffe2_preprocess_image(inputs) - features = self._wrapped_model.backbone(images.tensor) - proposals, _ = self._wrapped_model.proposal_generator(images, features) - with self.roi_heads_patcher.mock_roi_heads(): - detector_results, _ = self._wrapped_model.roi_heads(images, features, proposals) - return tuple(detector_results[0].flatten()) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - results = assemble_rcnn_outputs_by_name(image_sizes, c2_results) - return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes) - - return f - - -class Caffe2RetinaNet(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.RetinaNet) - super().__init__(cfg, torch_model) - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - assert self.tensor_mode - images = self._caffe2_preprocess_image(inputs) - - # explicitly return the images sizes to avoid removing "im_info" by ONNX - # since it's not used in the forward path - return_tensors = [images.image_sizes] - - features = self._wrapped_model.backbone(images.tensor) - features = [features[f] for f in self._wrapped_model.head_in_features] - for i, feature_i in enumerate(features): - features[i] = alias(feature_i, "feature_{}".format(i), is_backward=True) - return_tensors.append(features[i]) - - pred_logits, pred_anchor_deltas = self._wrapped_model.head(features) - for i, (box_cls_i, box_delta_i) in enumerate(zip(pred_logits, pred_anchor_deltas)): - return_tensors.append(alias(box_cls_i, "box_cls_{}".format(i))) - return_tensors.append(alias(box_delta_i, "box_delta_{}".format(i))) - - return tuple(return_tensors) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"RetinaNet") - - # Inference parameters: - check_set_pb_arg( - predict_net, "score_threshold", "f", _cast_to_f32(self._wrapped_model.test_score_thresh) - ) - check_set_pb_arg( - predict_net, "topk_candidates", "i", self._wrapped_model.test_topk_candidates - ) - check_set_pb_arg( - predict_net, "nms_threshold", "f", _cast_to_f32(self._wrapped_model.test_nms_thresh) - ) - check_set_pb_arg( - predict_net, - "max_detections_per_image", - "i", - self._wrapped_model.max_detections_per_image, - ) - - check_set_pb_arg( - predict_net, - "bbox_reg_weights", - "floats", - [_cast_to_f32(w) for w in self._wrapped_model.box2box_transform.weights], - ) - self._encode_anchor_generator_cfg(predict_net) - - def _encode_anchor_generator_cfg(self, predict_net): - # serialize anchor_generator for future use - serialized_anchor_generator = io.BytesIO() - torch.save(self._wrapped_model.anchor_generator, serialized_anchor_generator) - # Ideally we can put anchor generating inside the model, then we don't - # need to store this information. - bytes = serialized_anchor_generator.getvalue() - check_set_pb_arg(predict_net, "serialized_anchor_generator", "s", bytes) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - self = types.SimpleNamespace() - serialized_anchor_generator = io.BytesIO( - get_pb_arg_vals(predict_net, "serialized_anchor_generator", None) - ) - self.anchor_generator = torch.load(serialized_anchor_generator) - bbox_reg_weights = get_pb_arg_floats(predict_net, "bbox_reg_weights", None) - self.box2box_transform = Box2BoxTransform(weights=tuple(bbox_reg_weights)) - self.test_score_thresh = get_pb_arg_valf(predict_net, "score_threshold", None) - self.test_topk_candidates = get_pb_arg_vali(predict_net, "topk_candidates", None) - self.test_nms_thresh = get_pb_arg_valf(predict_net, "nms_threshold", None) - self.max_detections_per_image = get_pb_arg_vali( - predict_net, "max_detections_per_image", None - ) - - # hack to reuse inference code from RetinaNet - for meth in [ - "forward_inference", - "inference_single_image", - "_transpose_dense_predictions", - "_decode_multi_level_predictions", - "_decode_per_level_predictions", - ]: - setattr(self, meth, functools.partial(getattr(meta_arch.RetinaNet, meth), self)) - - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - dummy_images = ImageList( - torch.randn( - ( - len(im_info), - 3, - ) - + tuple(image_sizes[0]) - ), - image_sizes, - ) - - num_features = len([x for x in c2_results.keys() if x.startswith("box_cls_")]) - pred_logits = [c2_results["box_cls_{}".format(i)] for i in range(num_features)] - pred_anchor_deltas = [c2_results["box_delta_{}".format(i)] for i in range(num_features)] - - # For each feature level, feature should have the same batch size and - # spatial dimension as the box_cls and box_delta. - dummy_features = [x.clone()[:, 0:0, :, :] for x in pred_logits] - # self.num_classess can be inferred - self.num_classes = pred_logits[0].shape[1] // (pred_anchor_deltas[0].shape[1] // 4) - - results = self.forward_inference( - dummy_images, dummy_features, [pred_logits, pred_anchor_deltas] - ) - return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes) - - return f - - -META_ARCH_CAFFE2_EXPORT_TYPE_MAP = { - "GeneralizedRCNN": Caffe2GeneralizedRCNN, - "RetinaNet": Caffe2RetinaNet, -} diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/squeeze_excitation.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/squeeze_excitation.py deleted file mode 100644 index d1d902bb30c071acbc0fa919a134c80fed86bd6c..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/squeeze_excitation.py +++ /dev/null @@ -1,20 +0,0 @@ -import torch.nn as nn - - -class SELayer(nn.Module): - def __init__(self, channel, reduction=16): - super(SELayer, self).__init__() - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction, bias=False), - nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel, bias=False), - nn.Sigmoid() - ) - - def forward(self, x): - b, c, _, _ = x.size() - y = self.avg_pool(x).view(b, c) - y = self.fc(y).view(b, c, 1, 1) - res = x * y.expand_as(x) - return res diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/__init__.py deleted file mode 100644 index bc8709d92c610b36e0bcbd7da20c1eb41dc8cfcf..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# -*- coding: utf-8 -*- -# File : __init__.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d -from .replicate import DataParallelWithCallback, patch_replication_callback diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/render/pyrender/smpl_render.py b/spaces/OpenMotionLab/MotionGPT/mGPT/render/pyrender/smpl_render.py deleted file mode 100644 index 67693925d649480ca9be336736c2853336ea7a0b..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/render/pyrender/smpl_render.py +++ /dev/null @@ -1,144 +0,0 @@ -import os -import torch -import numpy as np -import cv2 - -import matplotlib.pyplot as plt -import glob -import pickle -import pyrender -import trimesh -import smplx -from pathlib import Path -from shapely import geometry -from smplx import SMPL as _SMPL -from smplx.utils import SMPLOutput as ModelOutput -from scipy.spatial.transform.rotation import Rotation as RRR - -class Renderer: - """ - Renderer used for visualizing the SMPL model - Code adapted from https://github.com/vchoutas/smplify-x - """ - def __init__(self, vertices, focal_length=5000, img_res=(224,224), faces=None): - self.renderer = pyrender.OffscreenRenderer(viewport_width=img_res[0], - viewport_height=img_res[1], - point_size=2.0) - - self.focal_length = focal_length - self.camera_center = [img_res[0] // 2, img_res[1] // 2] - self.faces = faces - - if torch.cuda.is_available(): - self.device = torch.device("cuda") - else: - self.device = torch.device("cpu") - - self.rot = trimesh.transformations.rotation_matrix(np.radians(180), [1, 0, 0]) - - minx, miny, minz = vertices.min(axis=(0, 1)) - maxx, maxy, maxz = vertices.max(axis=(0, 1)) - minx = minx - 0.5 - maxx = maxx + 0.5 - minz = minz - 0.5 - maxz = maxz + 0.5 - - floor = geometry.Polygon([[minx, minz], [minx, maxz], [maxx, maxz], [maxx, minz]]) - self.floor = trimesh.creation.extrude_polygon(floor, 1e-5) - self.floor.visual.face_colors = [0, 0, 0, 0.2] - self.floor.apply_transform(self.rot) - self.floor_pose =np.array([[ 1, 0, 0, 0], - [ 0, np.cos(np.pi / 2), -np.sin(np.pi / 2), miny], - [ 0, np.sin(np.pi / 2), np.cos(np.pi / 2), 0], - [ 0, 0, 0, 1]]) - - c = -np.pi / 6 - self.camera_pose = [[ 1, 0, 0, (minx+maxx)/2], - [ 0, np.cos(c), -np.sin(c), 1.5], - [ 0, np.sin(c), np.cos(c), max(4, minz+(1.5-miny)*2, (maxx-minx))], - [ 0, 0, 0, 1] - ] - - def __call__(self, vertices, camera_translation): - - floor_render = pyrender.Mesh.from_trimesh(self.floor, smooth=False) - - material = pyrender.MetallicRoughnessMaterial( - metallicFactor=0.1, - alphaMode='OPAQUE', - baseColorFactor=(0.658, 0.214, 0.0114, 0.2)) - mesh = trimesh.Trimesh(vertices, self.faces) - mesh.apply_transform(self.rot) - mesh = pyrender.Mesh.from_trimesh(mesh, material=material) - - camera = pyrender.PerspectiveCamera(yfov=(np.pi / 3.0)) - - light = pyrender.DirectionalLight(color=[1,1,1], intensity=350) - spot_l = pyrender.SpotLight(color=np.ones(3), intensity=300.0, - innerConeAngle=np.pi/16, outerConeAngle=np.pi/6) - point_l = pyrender.PointLight(color=np.ones(3), intensity=300.0) - - scene = pyrender.Scene(bg_color=(1.,1.,1.,0.8),ambient_light=(0.4, 0.4, 0.4)) - scene.add(floor_render, pose=self.floor_pose) - scene.add(mesh, 'mesh') - - light_pose = np.eye(4) - light_pose[:3, 3] = np.array([0, -1, 1]) - scene.add(light, pose=light_pose) - - light_pose[:3, 3] = np.array([0, 1, 1]) - scene.add(light, pose=light_pose) - - light_pose[:3, 3] = np.array([1, 1, 2]) - scene.add(light, pose=light_pose) - - scene.add(camera, pose=self.camera_pose) - - flags = pyrender.RenderFlags.RGBA | pyrender.RenderFlags.SHADOWS_DIRECTIONAL - color, rend_depth = self.renderer.render(scene, flags=flags) - - return color - -class SMPLRender(): - def __init__(self, SMPL_MODEL_DIR): - if torch.cuda.is_available(): - self.device = torch.device("cuda") - else: - self.device = torch.device("cpu") - # self.smpl = SMPL(SMPL_MODEL_DIR, batch_size=1, create_transl=False).to(self.device) - self.smpl = smplx.create(Path(SMPL_MODEL_DIR).parent, model_type="smpl", gender="neutral", ext="npz", batch_size=1).to(self.device) - - self.pred_camera_t = [] - self.focal_length = 110 - - def init_renderer(self, res, smpl_param, is_headroot=False): - poses = smpl_param['pred_pose'] - pred_rotmats = [] - for pose in poses: - if pose.size==72: - pose = pose.reshape(-1,3) - pose = RRR.from_rotvec(pose).as_matrix() - pose = pose.reshape(1,24,3,3) - pred_rotmats.append(torch.from_numpy(pose.astype(np.float32)[None]).to(self.device)) - pred_rotmat = torch.cat(pred_rotmats, dim=0) - - pred_betas = torch.from_numpy(smpl_param['pred_shape'].reshape(1, 10).astype(np.float32)).to(self.device) - pred_root = torch.tensor(smpl_param['pred_root'].reshape(-1, 3).astype(np.float32),device=self.device) - smpl_output = self.smpl(betas=pred_betas, body_pose=pred_rotmat[:, 1:],transl=pred_root, global_orient=pred_rotmat[:, :1], pose2rot=False) - - self.vertices = smpl_output.vertices.detach().cpu().numpy() - - pred_root = pred_root[0] - - if is_headroot: - pred_root = pred_root - smpl_output.joints[0,12].detach().cpu().numpy() - - self.pred_camera_t.append(pred_root) - - self.renderer = Renderer(vertices=self.vertices, focal_length=self.focal_length, - img_res=(res[1], res[0]), faces=self.smpl.faces) - - - def render(self, index): - renderImg = self.renderer(self.vertices[index, ...], self.pred_camera_t) - return renderImg diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/trainers/search_train.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/trainers/search_train.py deleted file mode 100644 index 3515892f591c049a23573644c7aa88d0dcc20b5b..0000000000000000000000000000000000000000 --- a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/trainers/search_train.py +++ /dev/null @@ -1,78 +0,0 @@ - -from fake_face_detection.metrics.compute_metrics import compute_metrics -from fake_face_detection.data.collator import fake_face_collator -from transformers import Trainer, TrainingArguments, set_seed -from torch.utils.tensorboard import SummaryWriter -from torch import nn -from typing import * -import numpy as np -import json -import os - -def train(epochs: int, output_dir: str, config: dict, model: nn.Module, trainer, get_datasets: Callable, log_dir: str = "fake_face_logs", metric = 'accuracy', seed: int = 0): - - print("------------------------- Beginning of training") - - set_seed(seed) - - # initialize the model - model = model() - - # reformat the config integer type - for key, value in config.items(): - - if isinstance(value, np.int32): config[key] = int(value) - - pretty = json.dumps(config, indent = 4) - - print(f"Current Config: \n {pretty}") - - print(f"Checkpoints in {output_dir}") - - # recuperate the dataset - train_dataset, test_dataset = get_datasets(config['h_flip_p'], config['v_flip_p'], config['gray_scale_p'], config['rotation']) - - # initialize the arguments of the training - training_args = TrainingArguments(output_dir, - per_device_train_batch_size=config['batch_size'], - evaluation_strategy='epoch', - save_strategy='epoch', - logging_strategy='epoch', - num_train_epochs=epochs, - fp16=True, - save_total_limit=2, - push_to_hub=False, - logging_dir=os.path.join(log_dir, os.path.basename(output_dir)), - load_best_model_at_end=True, - learning_rate=config['lr'] - ) - - # train the model - trainer_ = trainer( - model = model, - args = training_args, - data_collator = fake_face_collator, - compute_metrics = compute_metrics, - train_dataset = train_dataset, - eval_dataset = test_dataset - ) - - # train the model - trainer_.train() - - # evaluate the model and recuperate metrics - metrics = trainer_.evaluate(test_dataset) - - # add metrics and config to the hyperparameter panel of tensorboard - with SummaryWriter(os.path.join(log_dir, 'hparams')) as logger: - - logger.add_hparams( - config, metrics - ) - - print(metrics) - - print("------------------------- End of training") - # recuperate the metric to evaluate - return metrics[f'eval_{metric}'] - diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op/upfirdn2d.py b/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op/upfirdn2d.py deleted file mode 100644 index d509eb5e11e8cd01468dded5e5b53f5326057706..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,61 +0,0 @@ -from collections import abc - -import torch -from torch.nn import functional as F - - -def upfirdn2d(inputs, kernel, up=1, down=1, pad=(0, 0)): - if not isinstance(up, abc.Iterable): - up = (up, up) - - if not isinstance(down, abc.Iterable): - down = (down, down) - - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - return upfirdn2d_native(inputs, kernel, *up, *down, *pad) - - -def upfirdn2d_native( - inputs, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = inputs.shape - inputs = inputs.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = inputs.shape - kernel_h, kernel_w = kernel.shape - - out = inputs.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x - - return out.view(-1, channel, out_h, out_w) \ No newline at end of file diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/utils/util.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/utils/util.py deleted file mode 100644 index 127232482bc969d61a49c0d30228c2e554ed8c02..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/utils/util.py +++ /dev/null @@ -1,323 +0,0 @@ -import logging -import math -import os -import random -import sys -import time -from collections import OrderedDict -from datetime import datetime -from shutil import get_terminal_size - -import cv2 -import numpy as np -import torch -import yaml -from torchvision.utils import make_grid - - -try: - from yaml import CDumper as Dumper - from yaml import CLoader as Loader -except ImportError: - from yaml import Dumper, Loader - - -def OrderedYaml(): - """yaml orderedDict support""" - _mapping_tag = yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG - - def dict_representer(dumper, data): - return dumper.represent_dict(data.items()) - - def dict_constructor(loader, node): - return OrderedDict(loader.construct_pairs(node)) - - Dumper.add_representer(OrderedDict, dict_representer) - Loader.add_constructor(_mapping_tag, dict_constructor) - return Loader, Dumper - - -#################### -# miscellaneous -#################### - - -def get_timestamp(): - return datetime.now().strftime("%y%m%d-%H%M%S") - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + "_archived_" + get_timestamp() - print("Path already exists. Rename it to [{:s}]".format(new_name)) - logger = logging.getLogger("base") - logger.info(f"Path already exists. Rename it to {new_name}") - os.rename(path, new_name) - os.makedirs(path) - - -def set_random_seed(seed): - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - - -def setup_logger(logger_name, root, phase, level=logging.INFO, screen=False, tofile=False): - """set up logger""" - lg = logging.getLogger(logger_name) - formatter = logging.Formatter("%(asctime)s.%(msecs)03d - %(levelname)s: %(message)s", datefmt="%y-%m-%d %H:%M:%S") - lg.setLevel(level) - if tofile: - log_file = os.path.join(root, phase + "_{}.log".format(get_timestamp())) - fh = logging.FileHandler(log_file, mode="w") - fh.setFormatter(formatter) - lg.addHandler(fh) - if screen: - sh = logging.StreamHandler() - sh.setFormatter(formatter) - lg.addHandler(sh) - - -#################### -# image convert -#################### -def crop_border(img_list, crop_border): - """Crop borders of images - Args: - img_list (list [Numpy]): HWC - crop_border (int): crop border for each end of height and weight - - Returns: - (list [Numpy]): cropped image list - """ - if crop_border == 0: - return img_list - else: - return [v[crop_border:-crop_border, crop_border:-crop_border] for v in img_list] - - -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - """ - Converts a torch Tensor into an image Numpy array - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - """ - - # clamp - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) - - # to range [0,1] - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - f"Only support 4D, 3D and 2D tensor. But received with dimension:\ - {n_dim}" - ) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -def save_img(img, img_path, mode="RGB"): - cv2.imwrite(img_path, img) - - -#################### -# metric -#################### - - -def calculate_psnr(img1, img2): - # img1 and img2 have range [0, 255] - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2) ** 2) - if mse == 0: - return float("inf") - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -def ssim(img1, img2): - C1 = (0.01 * 255) ** 2 - C2 = (0.03 * 255) ** 2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1 ** 2 - mu2_sq = mu2 ** 2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -def calculate_ssim(img1, img2): - """calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - """ - if not img1.shape == img2.shape: - raise ValueError("Input images must have the same dimensions.") - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1, img2)) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError("Wrong input image dimensions.") - - -class ProgressBar(object): - """A progress bar which can print the progress - modified from - https://github.com/hellock/cvbase/blob/master/cvbase/progress.py - """ - - def __init__(self, task_num=0, bar_width=50, start=True): - self.task_num = task_num - max_bar_width = self._get_max_bar_width() - self.bar_width = bar_width if bar_width <= max_bar_width else max_bar_width - self.completed = 0 - if start: - self.start() - - def _get_max_bar_width(self): - terminal_width, _ = get_terminal_size() - max_bar_width = min(int(terminal_width * 0.6), terminal_width - 50) - if max_bar_width < 10: - print( - "terminal width is too small ({}), \ - please consider widen the terminal for better " - "progressbar visualization".format(terminal_width) - ) - max_bar_width = 10 - return max_bar_width - - def start(self): - if self.task_num > 0: - sys.stdout.write( - "[{}] 0/{}, elapsed: 0s, ETA:\n{}\n".format(" " * self.bar_width, self.task_num, "Start...") - ) - else: - sys.stdout.write("completed: 0, elapsed: 0s") - sys.stdout.flush() - self.start_time = time.time() - - def update(self, msg="In progress..."): - self.completed += 1 - elapsed = time.time() - self.start_time - fps = self.completed / elapsed - if self.task_num > 0: - percentage = self.completed / float(self.task_num) - eta = int(elapsed * (1 - percentage) / percentage + 0.5) - mark_width = int(self.bar_width * percentage) - bar_chars = ">" * mark_width + "-" * (self.bar_width - mark_width) - sys.stdout.write("\033[2F") # cursor up 2 lines - - # clean the output (remove extra chars since last display) - sys.stdout.write("\033[J") - sys.stdout.write( - "[{}] {}/{}, {:.1f} task/s, \ - elapsed: {}s, ETA: {:5}s\n{}\n".format( - bar_chars, self.completed, self.task_num, fps, int(elapsed + 0.5), eta, msg - ) - ) - else: - sys.stdout.write( - "completed: {}, elapsed: \ - {}s, {:.1f} tasks/s".format( - self.completed, int(elapsed + 0.5), fps - ) - ) - sys.stdout.flush() - - -def img2tensor(img): - return torch.from_numpy(np.ascontiguousarray(np.transpose(img / 255.0, (2, 0, 1)))).float() - - -def fill_noise(x, noise_type): - """Fills tensor `x` with noise of type `noise_type`.""" - if noise_type == "u": - x.uniform_() - elif noise_type == "n": - x.normal_() - else: - assert False - - -def np_to_torch(img_np): - """Converts image in numpy.array to torch.Tensor. - From C x W x H [0..1] to C x W x H [0..1] - """ - return torch.from_numpy(img_np)[None, :] - - -def get_noise(input_depth, method, spatial_size, noise_type="u", var=1.0 / 10): - """Returns a pytorch.Tensor of size (1 x `input_depth` x `spatial_size[0]` x `spatial_size[1]`) - initialized in a specific way. - Args: - input_depth: number of channels in the tensor - method: `noise` for fillting tensor with noise; `meshgrid` for np.meshgrid - spatial_size: spatial size of the tensor to initialize - noise_type: 'u' for uniform; 'n' for normal - var: a factor, a noise will be multiplicated by. Basically it is standard deviation scaler. - """ - if isinstance(spatial_size, int): - spatial_size = (spatial_size, spatial_size) - if method == "noise": - shape = [1, input_depth, spatial_size[0], spatial_size[1]] - net_input = torch.zeros(shape) - - fill_noise(net_input, noise_type) - net_input *= var - elif method == "meshgrid": - assert input_depth == 2 - X, Y = np.meshgrid( - np.arange(0, spatial_size[1]) / float(spatial_size[1] - 1), - np.arange(0, spatial_size[0]) / float(spatial_size[0] - 1), - ) - meshgrid = np.concatenate([X[None, :], Y[None, :]]) - net_input = np_to_torch(meshgrid) - else: - assert False - - return net_input diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/rw.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/rw.go deleted file mode 100644 index b1134f7ff475b91c798bd6ba683303324e4e6b91..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/rw.go and /dev/null differ diff --git a/spaces/Pengyey/bingo-chuchu/src/components/ui/badge.tsx b/spaces/Pengyey/bingo-chuchu/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
        - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/Plashkar/diabetes-predict/README.md b/spaces/Plashkar/diabetes-predict/README.md deleted file mode 100644 index 4efc640af4bf821530fd101d305a80138afba026..0000000000000000000000000000000000000000 --- a/spaces/Plashkar/diabetes-predict/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Diabetes Predict -emoji: 📉 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.1.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/audiogen.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/audiogen.py deleted file mode 100644 index 6adefb97401c10422c9711d222c0857f5593dceb..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/audiogen.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using AudioGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes -from ..utils.autocast import TorchAutocast - - -class AudioGen: - """AudioGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. - max_duration (float, optional): maximum duration the model can produce, - otherwise, inferred from the training params. - """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel, - max_duration: tp.Optional[float] = None): - self.name = name - self.compression_model = compression_model - self.lm = lm - if max_duration is None: - if hasattr(lm, 'cfg'): - max_duration = lm.cfg.dataset.segment_duration # type: ignore - else: - raise ValueError("You must provide max_duration when building directly AudioGen") - assert max_duration is not None - self.max_duration: float = max_duration - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=5) # 5 seconds by default - self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> float: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'facebook/audiogen-medium', device=None): - """Return pretrained model, we provide a single model for now: - - facebook/audiogen-medium (1.5B), text to sound, - # see: https://huggingface.co/facebook/audiogen-medium - """ - if device is None: - if torch.cuda.device_count(): - device = 'cuda' - else: - device = 'cpu' - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device, sample_rate=16000) - lm = get_debug_lm_model(device) - return AudioGen(name, compression_model, lm, max_duration=10) - - compression_model = load_compression_model(name, device=device) - lm = load_lm_model(name, device=device) - assert 'self_wav' not in lm.condition_provider.conditioners, \ - "AudioGen do not support waveform conditioning for now" - return AudioGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 10.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False, extend_stride: float = 2): - """Set the generation parameters for AudioGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 10.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - extend_stride: when doing extended generation (i.e. more than 10 seconds), by how much - should we extend the audio each time. Larger values will mean less context is - preserved, and shorter value will require extra computations. - """ - assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration." - self.extend_stride = extend_stride - self.duration = duration - self.generation_params = { - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None): - """Override the default progress callback.""" - self._progress_callback = progress_callback - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (list of str, optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (list of ConditioningAttributes): Conditions used for generation (here text). - prompt_tokens (torch.Tensor, optional): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. - """ - i = 0 - prompt_list = attributes[0].text['description'] - total_gen_len = int(self.duration * self.frame_rate) - max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate) - current_gen_offset: int = 0 - - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - generated_tokens += current_gen_offset - if self._progress_callback is not None: - # Note that total_gen_len might be quite wrong depending on the - # codebook pattern used, but with delay it is almost accurate. - self._progress_callback(generated_tokens, total_gen_len) - else: - print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r') - - if prompt_tokens is not None: - assert max_prompt_len >= prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - if self.duration <= self.max_duration: - # generate by sampling from LM, simple case. - with self.autocast: - attributes[0].text['description'] = prompt_list[0] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=total_gen_len, **self.generation_params) - - else: - all_tokens = [] - if prompt_tokens is None: - prompt_length = 0 - else: - all_tokens.append(prompt_tokens) - prompt_length = prompt_tokens.shape[-1] - - stride_tokens = int(self.frame_rate * self.extend_stride) - - while current_gen_offset + prompt_length < total_gen_len: - time_offset = current_gen_offset / self.frame_rate - chunk_duration = min(self.duration - time_offset, self.max_duration) - max_gen_len = int(chunk_duration * self.frame_rate) - with self.autocast: - if i >= len(prompt_list): - i = len(prompt_list) - 1 - attributes[0].text['description'] = prompt_list[i] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=max_gen_len, **self.generation_params) - i = i + 1 - if prompt_tokens is None: - all_tokens.append(gen_tokens) - else: - all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:]) - prompt_tokens = gen_tokens[:, :, stride_tokens:] - prompt_length = prompt_tokens.shape[-1] - current_gen_offset += stride_tokens - - gen_tokens = torch.cat(all_tokens, dim=-1) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio - - def to(self, device: str): - self.compression_model.to(device) - self.lm.to(device) - return self \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/multibanddiffusion.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/multibanddiffusion.py deleted file mode 100644 index 6a2f169d516ed5aaf5da61fb482d94dd142f55e9..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/multibanddiffusion.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Multi Band Diffusion models as described in -"From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion" -(paper link). -""" - -import typing as tp - -import torch -import julius - -from .unet import DiffusionUnet -from ..modules.diffusion_schedule import NoiseSchedule -from .encodec import CompressionModel -from ..solvers.compression import CompressionSolver -from .loaders import load_compression_model, load_diffusion_models - - -class DiffusionProcess: - """Sampling for a diffusion Model. - - Args: - model (DiffusionUnet): Diffusion U-Net model. - noise_schedule (NoiseSchedule): Noise schedule for diffusion process. - """ - def __init__(self, model: DiffusionUnet, noise_schedule: NoiseSchedule) -> None: - """ - """ - self.model = model - self.schedule = noise_schedule - - def generate(self, condition: torch.Tensor, initial_noise: torch.Tensor, - step_list: tp.Optional[tp.List[int]] = None): - """Perform one diffusion process to generate one of the bands. - - Args: - condition (tensor): The embeddings form the compression model. - initial_noise (tensor): The initial noise to start the process/ - """ - return self.schedule.generate_subsampled(model=self.model, initial=initial_noise, step_list=step_list, - condition=condition) - - -class MultiBandDiffusion: - """Sample from multiple diffusion models. - - Args: - DPs (list of DiffusionProcess): Diffusion processes. - codec_model (CompressionModel): Underlying compression model used to obtain discrete tokens. - """ - def __init__(self, DPs: tp.List[DiffusionProcess], codec_model: CompressionModel) -> None: - self.DPs = DPs - self.codec_model = codec_model - self.device = next(self.codec_model.parameters()).device - - @property - def sample_rate(self) -> int: - return self.codec_model.sample_rate - - @staticmethod - def get_mbd_musicgen(device=None): - """Load our diffusion models trained for MusicGen.""" - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - path = 'https://dl.fbaipublicfiles.com/encodec/Diffusion/mbd_musicgen_32khz.th' - name = 'facebook/musicgen-small' - codec_model = load_compression_model(name, device=device) - models, processors, cfgs = load_diffusion_models(path, device=device) - DPs = [] - for i in range(len(models)): - schedule = NoiseSchedule(**cfgs[i].schedule, sample_processor=processors[i]) - DPs.append(DiffusionProcess(model=models[i], noise_schedule=schedule)) - return MultiBandDiffusion(DPs=DPs, codec_model=codec_model) - - @staticmethod - def get_mbd_24khz(bw: float = 3.0, pretrained: bool = True, - device: tp.Optional[tp.Union[torch.device, str]] = None, - n_q: tp.Optional[int] = None): - """Get the pretrained Models for MultibandDiffusion. - - Args: - bw (float): Bandwidth of the compression model. - pretrained (bool): Whether to use / download if necessary the models. - device (torch.device or str, optional): Device on which the models are loaded. - n_q (int, optional): Number of quantizers to use within the compression model. - """ - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - assert bw in [1.5, 3.0, 6.0], f"bandwidth {bw} not available" - if n_q is not None: - assert n_q in [2, 4, 8] - assert {1.5: 2, 3.0: 4, 6.0: 8}[bw] == n_q, \ - f"bandwidth and number of codebooks missmatch to use n_q = {n_q} bw should be {n_q * (1.5 / 2)}" - n_q = {1.5: 2, 3.0: 4, 6.0: 8}[bw] - codec_model = CompressionSolver.model_from_checkpoint( - '//pretrained/facebook/encodec_24khz', device=device) - codec_model.set_num_codebooks(n_q) - codec_model = codec_model.to(device) - path = f'https://dl.fbaipublicfiles.com/encodec/Diffusion/mbd_comp_{n_q}.pt' - models, processors, cfgs = load_diffusion_models(path, device=device) - DPs = [] - for i in range(len(models)): - schedule = NoiseSchedule(**cfgs[i].schedule, sample_processor=processors[i]) - DPs.append(DiffusionProcess(model=models[i], noise_schedule=schedule)) - return MultiBandDiffusion(DPs=DPs, codec_model=codec_model) - - return MultiBandDiffusion(DPs, codec_model) - - @torch.no_grad() - def get_condition(self, wav: torch.Tensor, sample_rate: int) -> torch.Tensor: - """Get the conditioning (i.e. latent reprentatios of the compression model) from a waveform. - Args: - wav (torch.Tensor): The audio that we want to extract the conditioning from - sample_rate (int): sample rate of the audio""" - if sample_rate != self.sample_rate: - wav = julius.resample_frac(wav, sample_rate, self.sample_rate) - codes, scale = self.codec_model.encode(wav) - assert scale is None, "Scaled compression models not supported." - emb = self.get_emb(codes) - return emb - - @torch.no_grad() - def get_emb(self, codes: torch.Tensor): - """Get latent representation from the discrete codes - Argrs: - codes (torch.Tensor): discrete tokens""" - emb = self.codec_model.decode_latent(codes) - return emb - - def generate(self, emb: torch.Tensor, size: tp.Optional[torch.Size] = None, - step_list: tp.Optional[tp.List[int]] = None): - """Generate Wavform audio from the latent embeddings of the compression model - Args: - emb (torch.Tensor): Conditioning embeddinds - size (none torch.Size): size of the output - if None this is computed from the typical upsampling of the model - step_list (optional list[int]): list of Markov chain steps, defaults to 50 linearly spaced step. - """ - if size is None: - upsampling = int(self.codec_model.sample_rate / self.codec_model.frame_rate) - size = torch.Size([emb.size(0), self.codec_model.channels, emb.size(-1) * upsampling]) - assert size[0] == emb.size(0) - out = torch.zeros(size).to(self.device) - for DP in self.DPs: - out += DP.generate(condition=emb, step_list=step_list, initial_noise=torch.randn_like(out)) - return out - - def re_eq(self, wav: torch.Tensor, ref: torch.Tensor, n_bands: int = 32, strictness: float = 1): - """match the eq to the encodec output by matching the standard deviation of some frequency bands - Args: - wav (torch.Tensor): audio to equalize - ref (torch.Tensor):refenrence audio from which we match the spectrogram. - n_bands (int): number of bands of the eq - strictness (float): how strict the the matching. 0 is no matching, 1 is exact matching. - """ - split = julius.SplitBands(n_bands=n_bands, sample_rate=self.codec_model.sample_rate).to(wav.device) - bands = split(wav) - bands_ref = split(ref) - out = torch.zeros_like(ref) - for i in range(n_bands): - out += bands[i] * (bands_ref[i].std() / bands[i].std()) ** strictness - return out - - def regenerate(self, wav: torch.Tensor, sample_rate: int): - """Regenerate a wavform through compression and diffusion regeneration. - Args: - wav (torch.Tensor): Original 'ground truth' audio - sample_rate (int): sample rate of the input (and output) wav - """ - if sample_rate != self.codec_model.sample_rate: - wav = julius.resample_frac(wav, sample_rate, self.codec_model.sample_rate) - emb = self.get_condition(wav, sample_rate=self.codec_model.sample_rate) - size = wav.size() - out = self.generate(emb, size=size) - if sample_rate != self.codec_model.sample_rate: - out = julius.resample_frac(out, self.codec_model.sample_rate, sample_rate) - return out - - def tokens_to_wav(self, tokens: torch.Tensor, n_bands: int = 32): - """Generate Waveform audio with diffusion from the discrete codes. - Args: - tokens (torch.Tensor): discrete codes - n_bands (int): bands for the eq matching. - """ - wav_encodec = self.codec_model.decode(tokens) - condition = self.get_emb(tokens) - wav_diffusion = self.generate(emb=condition, size=wav_encodec.size()) - return self.re_eq(wav=wav_diffusion, ref=wav_encodec, n_bands=n_bands) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/install_egg_info.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/install_egg_info.py deleted file mode 100644 index d5e68a6e47199372c79ec094e0385f49a6600f22..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/install_egg_info.py +++ /dev/null @@ -1,91 +0,0 @@ -""" -distutils.command.install_egg_info - -Implements the Distutils 'install_egg_info' command, for installing -a package's PKG-INFO metadata. -""" - -import os -import sys -import re - -from distutils.cmd import Command -from distutils import log, dir_util - - -class install_egg_info(Command): - """Install an .egg-info file for the package""" - - description = "Install package's PKG-INFO metadata as an .egg-info file" - user_options = [ - ('install-dir=', 'd', "directory to install to"), - ] - - def initialize_options(self): - self.install_dir = None - - @property - def basename(self): - """ - Allow basename to be overridden by child class. - Ref pypa/distutils#2. - """ - return "%s-%s-py%d.%d.egg-info" % ( - to_filename(safe_name(self.distribution.get_name())), - to_filename(safe_version(self.distribution.get_version())), - *sys.version_info[:2], - ) - - def finalize_options(self): - self.set_undefined_options('install_lib', ('install_dir', 'install_dir')) - self.target = os.path.join(self.install_dir, self.basename) - self.outputs = [self.target] - - def run(self): - target = self.target - if os.path.isdir(target) and not os.path.islink(target): - dir_util.remove_tree(target, dry_run=self.dry_run) - elif os.path.exists(target): - self.execute(os.unlink, (self.target,), "Removing " + target) - elif not os.path.isdir(self.install_dir): - self.execute( - os.makedirs, (self.install_dir,), "Creating " + self.install_dir - ) - log.info("Writing %s", target) - if not self.dry_run: - with open(target, 'w', encoding='UTF-8') as f: - self.distribution.metadata.write_pkg_file(f) - - def get_outputs(self): - return self.outputs - - -# The following routines are taken from setuptools' pkg_resources module and -# can be replaced by importing them from pkg_resources once it is included -# in the stdlib. - - -def safe_name(name): - """Convert an arbitrary string to a standard distribution name - - Any runs of non-alphanumeric/. characters are replaced with a single '-'. - """ - return re.sub('[^A-Za-z0-9.]+', '-', name) - - -def safe_version(version): - """Convert an arbitrary string to a standard version string - - Spaces become dots, and all other non-alphanumeric characters become - dashes, with runs of multiple dashes condensed to a single dash. - """ - version = version.replace(' ', '.') - return re.sub('[^A-Za-z0-9.]+', '-', version) - - -def to_filename(name): - """Convert a project or version name to its filename-escaped form - - Any '-' characters are currently replaced with '_'. - """ - return name.replace('-', '_') diff --git a/spaces/ReThGe/Linet/model.py b/spaces/ReThGe/Linet/model.py deleted file mode 100644 index 4de337a409d902cb041b336e550d829ce5c1b7d6..0000000000000000000000000000000000000000 --- a/spaces/ReThGe/Linet/model.py +++ /dev/null @@ -1,27 +0,0 @@ -import torch -from Linet_clf_model import LinetV2 -from torchvision import transforms - -from torch import nn - -def create_v1_5(): - - transform = transforms.Compose([ - transforms.Resize((480, 480)), - transforms.ToTensor(), - ]) - - model = torch.load('v15_8batch_0_001lr_480res_100ep_aug_78acc.pt', map_location=torch.device("cpu")) - - return model, transform - - -def create_v2(): - transform = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.ToTensor(), - ]) - - model = torch.load('v2_32batch_0.01lr_224res_160ep_aug_98acc.pt', map_location=torch.device("cpu")) - - return model, transform diff --git a/spaces/RealTimeLiveAIForHealth/VoicetoTexttoSentiment/app.py b/spaces/RealTimeLiveAIForHealth/VoicetoTexttoSentiment/app.py deleted file mode 100644 index 6066bc8c5bd0572c67119cf0f910228e3ad05ab6..0000000000000000000000000000000000000000 --- a/spaces/RealTimeLiveAIForHealth/VoicetoTexttoSentiment/app.py +++ /dev/null @@ -1,73 +0,0 @@ -from transformers import pipeline -import gradio as gr - -asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h") -classifier = pipeline("text-classification", "michellejieli/emotion_text_classifier") - -def transcribe(speech, state=""): - text = asr(speech)["text"] - state += text + " " - return text, state - -def speech_to_text(speech): - text = asr(speech)["text"] - return text - -def text_to_sentiment(text): - return classifier(text)[0]["label"] - - -demo = gr.Blocks() -with demo: - - microphone = gr.Audio(source="microphone", type="filepath") - audio_file = gr.Audio(type="filepath") - text = gr.Textbox() - label = gr.Label() - - b0 = gr.Button("Speech From Microphone") - b1 = gr.Button("Recognize Speech") - b2 = gr.Button("Classify Sentiment") - - #b0.click(transcribe, inputs=[microphone, "state"], outputs=[text, "state"], live=True) - b0.click(transcribe, inputs=[microphone], outputs=[text]) - b1.click(speech_to_text, inputs=audio_file, outputs=text) - b2.click(text_to_sentiment, inputs=text, outputs=label) - - gr.Markdown("""References: - -## Building an Asynchronous Real-Time Live Telemedicine System Using AI Pipelines for Smart Communities - -1. **Designing the Telemedicine System** - - Identify the needs and challenges of smart communities and design a telemedicine system that addresses these challenges. - - Choose a platform that allows for asynchronous real-time communication, such as video conferencing or chat-based messaging, to facilitate remote consultations with healthcare providers. - - Design the system to incorporate AI pipelines that can analyze patient data and provide decision support for healthcare providers. - -2. **Implementing the AI Pipelines** - - Identify the relevant AI algorithms and techniques that can be used to analyze patient data, such as machine learning or natural language processing. - - Integrate these AI pipelines into the telemedicine system to provide decision support for healthcare providers during consultations. - - Ensure that the AI algorithms are accurate and reliable by testing them on a large and diverse set of patient data. - -3. **Deploying the Telemedicine System** - - Deploy the telemedicine system in smart communities, ensuring that it is easily accessible and user-friendly for patients and healthcare providers. - - Train healthcare providers on how to use the system effectively and provide ongoing support and feedback to optimize its use. - - Continuously monitor and evaluate the system's performance, making improvements and updates as needed to ensure that it remains effective and efficient in meeting the needs of smart communities. - -**__Asynchronous Telemedicine:__ A Solution to Address Provider Shortages by Offering Remote Care Services.** -([Wikipedia](https://en.wikipedia.org/wiki/Telemedicine)) - - -# 2023's Top 7 Breakthroughs in Medical Technology -1. __Asynchronous Telemedicine:__ A Solution to Address Provider Shortages by Offering Remote Care Services. ([Wikipedia](https://en.wikipedia.org/wiki/Telemedicine)) -2. __Ambient and Emotion AI:__ Empowering Patients with Artificial Intelligence That Shows Empathy and Compassion. ([Wikipedia](https://en.wikipedia.org/wiki/Ambient_intelligence)) -3. __Skin Patch Technology:__ A Convenient Way to Measure Vital Signals such as Blood Pressure and Glucose Levels. ([Wikipedia](https://en.wikipedia.org/wiki/Skin_patch)) -4. __Affordable Vein Scanner:__ A Revolutionary Tool to View Veins Through the Skin. ([Wikipedia](https://en.wikipedia.org/wiki/Vein_matching)) -5. __Synthetic Medical Records:__ Creating Reliable Medical Records Using Generative Adversarial Networks. ([Wikipedia](https://en.wikipedia.org/wiki/Synthetic_data)) -6. __Blood Draw Devices for Clinical Trials:__ Facilitating Remote Participation in Trials with Innovative Technology. ([Wikipedia](https://en.wikipedia.org/wiki/Blood_sampling)) -7. __Smart TVs for Remote Care:__ Enhancing Remote Care Consultations with Video Chat and Recordings. ([Wikipedia](https://en.wikipedia.org/wiki/Smart_television)) - -Reference: [The Medical Futurist](https://www.youtube.com/watch?v=_9DpLD4S2AY&list=PLHgX2IExbFotoMt32SrT3Xynt5BXTGnEP&index=2) - - """) - -demo.launch() \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/model_zoo/__init__.py b/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/model_zoo/__init__.py deleted file mode 100644 index 78901ad4f67e152933af8bb56c5478e3d561f30d..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/model_zoo/__init__.py +++ /dev/null @@ -1,42 +0,0 @@ -weight_urls = { - "DKMv3": { - "outdoor": "https://github.com/Parskatt/storage/releases/download/dkmv3/DKMv3_outdoor.pth", - "indoor": "https://github.com/Parskatt/storage/releases/download/dkmv3/DKMv3_indoor.pth", - }, -} -import torch -from .DKMv3 import DKMv3 - - -def DKMv3_outdoor(path_to_weights=None, device=None): - """ - Loads DKMv3 outdoor weights, uses internal resolution of (540, 720) by default - resolution can be changed by setting model.h_resized, model.w_resized later. - Additionally upsamples preds to fixed resolution of (864, 1152), - can be turned off by model.upsample_preds = False - """ - if device is None: - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - if path_to_weights is not None: - weights = torch.load(path_to_weights, map_location="cpu") - else: - weights = torch.hub.load_state_dict_from_url( - weight_urls["DKMv3"]["outdoor"], map_location="cpu" - ) - return DKMv3(weights, 540, 720, upsample_preds=True, device=device) - - -def DKMv3_indoor(path_to_weights=None, device=None): - """ - Loads DKMv3 indoor weights, uses internal resolution of (480, 640) by default - Resolution can be changed by setting model.h_resized, model.w_resized later. - """ - if device is None: - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - if path_to_weights is not None: - weights = torch.load(path_to_weights, map_location=device) - else: - weights = torch.hub.load_state_dict_from_url( - weight_urls["DKMv3"]["indoor"], map_location=device - ) - return DKMv3(weights, 480, 640, upsample_preds=False, device=device) diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/optimizers/__init__.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/optimizers/__init__.py deleted file mode 100644 index e4e36c22e00217deccacd589f8924b2f74589456..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/optimizers/__init__.py +++ /dev/null @@ -1,55 +0,0 @@ -import torch -from torch.optim.lr_scheduler import MultiStepLR, CosineAnnealingLR, ExponentialLR - - -def build_optimizer(model, config): - name = config.TRAINER.OPTIMIZER - lr = config.TRAINER.TRUE_LR - - if name == "adam": - return torch.optim.Adam( - model.parameters(), lr=lr, weight_decay=config.TRAINER.ADAM_DECAY - ) - elif name == "adamw": - return torch.optim.AdamW( - model.parameters(), lr=lr, weight_decay=config.TRAINER.ADAMW_DECAY - ) - else: - raise ValueError(f"TRAINER.OPTIMIZER = {name} is not a valid optimizer!") - - -def build_scheduler(config, optimizer): - """ - Returns: - scheduler (dict):{ - 'scheduler': lr_scheduler, - 'interval': 'step', # or 'epoch' - 'monitor': 'val_f1', (optional) - 'frequency': x, (optional) - } - """ - scheduler = {"interval": config.TRAINER.SCHEDULER_INTERVAL} - name = config.TRAINER.SCHEDULER - - if name == "MultiStepLR": - scheduler.update( - { - "scheduler": MultiStepLR( - optimizer, - config.TRAINER.MSLR_MILESTONES, - gamma=config.TRAINER.MSLR_GAMMA, - ) - } - ) - elif name == "CosineAnnealing": - scheduler.update( - {"scheduler": CosineAnnealingLR(optimizer, config.TRAINER.COSA_TMAX)} - ) - elif name == "ExponentialLR": - scheduler.update( - {"scheduler": ExponentialLR(optimizer, config.TRAINER.ELR_GAMMA)} - ) - else: - raise NotImplementedError() - - return scheduler diff --git a/spaces/Redgon/bingo/src/components/ui/textarea.tsx b/spaces/Redgon/bingo/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/Redgon/bingo/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( - -
        -
    - - - - - - - - - diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/sep_fcn_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/sep_fcn_head.py deleted file mode 100644 index 3ea198ab8a96919dfb6974fd73b1476aa488aef2..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/sep_fcn_head.py +++ /dev/null @@ -1,51 +0,0 @@ -from annotator.mmpkg.mmcv.cnn import DepthwiseSeparableConvModule - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class DepthwiseSeparableFCNHead(FCNHead): - """Depthwise-Separable Fully Convolutional Network for Semantic - Segmentation. - - This head is implemented according to Fast-SCNN paper. - Args: - in_channels(int): Number of output channels of FFM. - channels(int): Number of middle-stage channels in the decode head. - concat_input(bool): Whether to concatenate original decode input into - the result of several consecutive convolution layers. - Default: True. - num_classes(int): Used to determine the dimension of - final prediction tensor. - in_index(int): Correspond with 'out_indices' in FastSCNN backbone. - norm_cfg (dict | None): Config of norm layers. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - loss_decode(dict): Config of loss type and some - relevant additional options. - """ - - def __init__(self, **kwargs): - super(DepthwiseSeparableFCNHead, self).__init__(**kwargs) - self.convs[0] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - for i in range(1, self.num_convs): - self.convs[i] = DepthwiseSeparableConvModule( - self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - - if self.concat_input: - self.conv_cat = DepthwiseSeparableConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) diff --git a/spaces/cozyanduofen/bingo/src/lib/hooks/use-bing.ts b/spaces/cozyanduofen/bingo/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? `https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/utils/img_util.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/utils/img_util.py deleted file mode 100644 index d409a132ff216e6943a276fb5d8cd5f410824883..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/utils/img_util.py +++ /dev/null @@ -1,170 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import torch -from torchvision.utils import make_grid - - -def img2tensor(imgs, bgr2rgb=True, float32=True): - """Numpy array to tensor. - - Args: - imgs (list[ndarray] | ndarray): Input images. - bgr2rgb (bool): Whether to change bgr to rgb. - float32 (bool): Whether to change to float32. - - Returns: - list[tensor] | tensor: Tensor images. If returned results only have - one element, just return tensor. - """ - - def _totensor(img, bgr2rgb, float32): - if img.shape[2] == 3 and bgr2rgb: - if img.dtype == 'float64': - img = img.astype('float32') - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = torch.from_numpy(img.transpose(2, 0, 1)) - if float32: - img = img.float() - return img - - if isinstance(imgs, list): - return [_totensor(img, bgr2rgb, float32) for img in imgs] - else: - return _totensor(imgs, bgr2rgb, float32) - - -def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)): - """Convert torch Tensors into image numpy arrays. - - After clamping to [min, max], values will be normalized to [0, 1]. - - Args: - tensor (Tensor or list[Tensor]): Accept shapes: - 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); - 2) 3D Tensor of shape (3/1 x H x W); - 3) 2D Tensor of shape (H x W). - Tensor channel should be in RGB order. - rgb2bgr (bool): Whether to change rgb to bgr. - out_type (numpy type): output types. If ``np.uint8``, transform outputs - to uint8 type with range [0, 255]; otherwise, float type with - range [0, 1]. Default: ``np.uint8``. - min_max (tuple[int]): min and max values for clamp. - - Returns: - (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of - shape (H x W). The channel order is BGR. - """ - if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): - raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}') - - if torch.is_tensor(tensor): - tensor = [tensor] - result = [] - for _tensor in tensor: - _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max) - _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) - - n_dim = _tensor.dim() - if n_dim == 4: - img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy() - img_np = img_np.transpose(1, 2, 0) - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 3: - img_np = _tensor.numpy() - img_np = img_np.transpose(1, 2, 0) - if img_np.shape[2] == 1: # gray image - img_np = np.squeeze(img_np, axis=2) - else: - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 2: - img_np = _tensor.numpy() - else: - raise TypeError('Only support 4D, 3D or 2D tensor. ' f'But received with dimension: {n_dim}') - if out_type == np.uint8: - # Unlike MATLAB, numpy.unit8() WILL NOT round by default. - img_np = (img_np * 255.0).round() - img_np = img_np.astype(out_type) - result.append(img_np) - if len(result) == 1: - result = result[0] - return result - - -def tensor2img_fast(tensor, rgb2bgr=True, min_max=(0, 1)): - """This implementation is slightly faster than tensor2img. - It now only supports torch tensor with shape (1, c, h, w). - - Args: - tensor (Tensor): Now only support torch tensor with (1, c, h, w). - rgb2bgr (bool): Whether to change rgb to bgr. Default: True. - min_max (tuple[int]): min and max values for clamp. - """ - output = tensor.squeeze(0).detach().clamp_(*min_max).permute(1, 2, 0) - output = (output - min_max[0]) / (min_max[1] - min_max[0]) * 255 - output = output.type(torch.uint8).cpu().numpy() - if rgb2bgr: - output = cv2.cvtColor(output, cv2.COLOR_RGB2BGR) - return output - - -def imfrombytes(content, flag='color', float32=False): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale` and `unchanged`. - float32 (bool): Whether to change to float32., If True, will also norm - to [0, 1]. Default: False. - - Returns: - ndarray: Loaded image array. - """ - img_np = np.frombuffer(content, np.uint8) - imread_flags = {'color': cv2.IMREAD_COLOR, 'grayscale': cv2.IMREAD_GRAYSCALE, 'unchanged': cv2.IMREAD_UNCHANGED} - img = cv2.imdecode(img_np, imread_flags[flag]) - if float32: - img = img.astype(np.float32) / 255. - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv's :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. - """ - if auto_mkdir: - dir_name = os.path.abspath(os.path.dirname(file_path)) - os.makedirs(dir_name, exist_ok=True) - return cv2.imwrite(file_path, img, params) - - -def crop_border(imgs, crop_border): - """Crop borders of images. - - Args: - imgs (list[ndarray] | ndarray): Images with shape (h, w, c). - crop_border (int): Crop border for each end of height and weight. - - Returns: - list[ndarray]: Cropped images. - """ - if crop_border == 0: - return imgs - else: - if isinstance(imgs, list): - return [v[crop_border:-crop_border, crop_border:-crop_border, ...] for v in imgs] - else: - return imgs[crop_border:-crop_border, crop_border:-crop_border, ...] diff --git a/spaces/cvlab/zero123-live/taming-transformers/taming/modules/losses/__init__.py b/spaces/cvlab/zero123-live/taming-transformers/taming/modules/losses/__init__.py deleted file mode 100644 index d09caf9eb805f849a517f1b23503e1a4d6ea1ec5..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/taming-transformers/taming/modules/losses/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from taming.modules.losses.vqperceptual import DummyLoss - diff --git a/spaces/cymic/VITS-Tokaiteio/train.py b/spaces/cymic/VITS-Tokaiteio/train.py deleted file mode 100644 index 703d30cf9ef2c414d9b35fe65545cc8fefad8821..0000000000000000000000000000000000000000 --- a/spaces/cymic/VITS-Tokaiteio/train.py +++ /dev/null @@ -1,290 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import commons -import utils -from data_utils import ( - TextAudioLoader, - TextAudioCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '80000' - - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32,300,400,500,600,700,800,900,1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioCollate() - train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=8, shuffle=False, - batch_size=hps.train.batch_size, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank==0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(train_loader): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask,\ - (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths) - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank==0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, max_len=1000) - y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = { - "gen/audio": y_hat[0,:,:y_hat_lengths[0]] - } - if global_step == 0: - image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py deleted file mode 100644 index eb4e0d31f1aedf4590628d394e1606920fefb5c9..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r18" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 25 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/danielpedriniportfolio/AutoDA/pages/02-Data_Type.py b/spaces/danielpedriniportfolio/AutoDA/pages/02-Data_Type.py deleted file mode 100644 index ebb108e9de86c1818bdce56ef498e915c66c364a..0000000000000000000000000000000000000000 --- a/spaces/danielpedriniportfolio/AutoDA/pages/02-Data_Type.py +++ /dev/null @@ -1,46 +0,0 @@ -import pandas as pd -import streamlit as st - -def change_data_type(df, column, data_type): - df[column] = df[column].astype(data_type) - return df - -def reload_data(): - st.write("Reloading data...") - df_original = st.session_state["df_original"] - df = df_original.copy() - st.session_state.df = df - del st.session_state['df_target'] - del st.session_state['best'] - st.experimental_rerun() - -st.set_page_config(layout='wide') -col1, col2, col3 = st.columns([15, 70, 15]) - -with col1: - st.write('') -with col2: - if 'df' not in st.session_state: - st.warning('Please upload a CSV file') - else: - st.header('Data Type') - if st.button('Reload data'): - reload_data() - - df = st.session_state['df'] - data_type = df.dtypes.to_frame(name='Data Type').astype(object) - st.dataframe(data_type, height=38*len(data_type)) - st.subheader('Change data type') - st.write('Select the column and the data type') - column = st.selectbox('Select the column', data_type.index) - data_type_list = ['int64', 'float64', 'object', 'bool', 'datetime64[ns]'] - data_type_list.remove(str(data_type.loc[column, 'Data Type'])) - data_type = st.selectbox('Select the data type', data_type_list) - if st.button('Change data type'): - df = change_data_type(df, column, data_type) - st.dataframe(df.head()) - st.session_state.df = df - st.success('Data type changed') - st.experimental_rerun() -with col3: - st.write('') \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/Jpeg2KImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/Jpeg2KImagePlugin.py deleted file mode 100644 index 9309768bacffcf071dcc3db764285db911d38323..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/Jpeg2KImagePlugin.py +++ /dev/null @@ -1,399 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# JPEG2000 file handling -# -# History: -# 2014-03-12 ajh Created -# 2021-06-30 rogermb Extract dpi information from the 'resc' header box -# -# Copyright (c) 2014 Coriolis Systems Limited -# Copyright (c) 2014 Alastair Houghton -# -# See the README file for information on usage and redistribution. -# -import io -import os -import struct - -from . import Image, ImageFile, _binary - - -class BoxReader: - """ - A small helper class to read fields stored in JPEG2000 header boxes - and to easily step into and read sub-boxes. - """ - - def __init__(self, fp, length=-1): - self.fp = fp - self.has_length = length >= 0 - self.length = length - self.remaining_in_box = -1 - - def _can_read(self, num_bytes): - if self.has_length and self.fp.tell() + num_bytes > self.length: - # Outside box: ensure we don't read past the known file length - return False - if self.remaining_in_box >= 0: - # Inside box contents: ensure read does not go past box boundaries - return num_bytes <= self.remaining_in_box - else: - return True # No length known, just read - - def _read_bytes(self, num_bytes): - if not self._can_read(num_bytes): - msg = "Not enough data in header" - raise SyntaxError(msg) - - data = self.fp.read(num_bytes) - if len(data) < num_bytes: - msg = f"Expected to read {num_bytes} bytes but only got {len(data)}." - raise OSError(msg) - - if self.remaining_in_box > 0: - self.remaining_in_box -= num_bytes - return data - - def read_fields(self, field_format): - size = struct.calcsize(field_format) - data = self._read_bytes(size) - return struct.unpack(field_format, data) - - def read_boxes(self): - size = self.remaining_in_box - data = self._read_bytes(size) - return BoxReader(io.BytesIO(data), size) - - def has_next_box(self): - if self.has_length: - return self.fp.tell() + self.remaining_in_box < self.length - else: - return True - - def next_box_type(self): - # Skip the rest of the box if it has not been read - if self.remaining_in_box > 0: - self.fp.seek(self.remaining_in_box, os.SEEK_CUR) - self.remaining_in_box = -1 - - # Read the length and type of the next box - lbox, tbox = self.read_fields(">I4s") - if lbox == 1: - lbox = self.read_fields(">Q")[0] - hlen = 16 - else: - hlen = 8 - - if lbox < hlen or not self._can_read(lbox - hlen): - msg = "Invalid header length" - raise SyntaxError(msg) - - self.remaining_in_box = lbox - hlen - return tbox - - -def _parse_codestream(fp): - """Parse the JPEG 2000 codestream to extract the size and component - count from the SIZ marker segment, returning a PIL (size, mode) tuple.""" - - hdr = fp.read(2) - lsiz = _binary.i16be(hdr) - siz = hdr + fp.read(lsiz - 2) - lsiz, rsiz, xsiz, ysiz, xosiz, yosiz, _, _, _, _, csiz = struct.unpack_from( - ">HHIIIIIIIIH", siz - ) - ssiz = [None] * csiz - xrsiz = [None] * csiz - yrsiz = [None] * csiz - for i in range(csiz): - ssiz[i], xrsiz[i], yrsiz[i] = struct.unpack_from(">BBB", siz, 36 + 3 * i) - - size = (xsiz - xosiz, ysiz - yosiz) - if csiz == 1: - if (yrsiz[0] & 0x7F) > 8: - mode = "I;16" - else: - mode = "L" - elif csiz == 2: - mode = "LA" - elif csiz == 3: - mode = "RGB" - elif csiz == 4: - mode = "RGBA" - else: - mode = None - - return size, mode - - -def _res_to_dpi(num, denom, exp): - """Convert JPEG2000's (numerator, denominator, exponent-base-10) resolution, - calculated as (num / denom) * 10^exp and stored in dots per meter, - to floating-point dots per inch.""" - if denom != 0: - return (254 * num * (10**exp)) / (10000 * denom) - - -def _parse_jp2_header(fp): - """Parse the JP2 header box to extract size, component count, - color space information, and optionally DPI information, - returning a (size, mode, mimetype, dpi) tuple.""" - - # Find the JP2 header box - reader = BoxReader(fp) - header = None - mimetype = None - while reader.has_next_box(): - tbox = reader.next_box_type() - - if tbox == b"jp2h": - header = reader.read_boxes() - break - elif tbox == b"ftyp": - if reader.read_fields(">4s")[0] == b"jpx ": - mimetype = "image/jpx" - - size = None - mode = None - bpc = None - nc = None - dpi = None # 2-tuple of DPI info, or None - - while header.has_next_box(): - tbox = header.next_box_type() - - if tbox == b"ihdr": - height, width, nc, bpc = header.read_fields(">IIHB") - size = (width, height) - if nc == 1 and (bpc & 0x7F) > 8: - mode = "I;16" - elif nc == 1: - mode = "L" - elif nc == 2: - mode = "LA" - elif nc == 3: - mode = "RGB" - elif nc == 4: - mode = "RGBA" - elif tbox == b"res ": - res = header.read_boxes() - while res.has_next_box(): - tres = res.next_box_type() - if tres == b"resc": - vrcn, vrcd, hrcn, hrcd, vrce, hrce = res.read_fields(">HHHHBB") - hres = _res_to_dpi(hrcn, hrcd, hrce) - vres = _res_to_dpi(vrcn, vrcd, vrce) - if hres is not None and vres is not None: - dpi = (hres, vres) - break - - if size is None or mode is None: - msg = "Malformed JP2 header" - raise SyntaxError(msg) - - return size, mode, mimetype, dpi - - -## -# Image plugin for JPEG2000 images. - - -class Jpeg2KImageFile(ImageFile.ImageFile): - format = "JPEG2000" - format_description = "JPEG 2000 (ISO 15444)" - - def _open(self): - sig = self.fp.read(4) - if sig == b"\xff\x4f\xff\x51": - self.codec = "j2k" - self._size, self.mode = _parse_codestream(self.fp) - else: - sig = sig + self.fp.read(8) - - if sig == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a": - self.codec = "jp2" - header = _parse_jp2_header(self.fp) - self._size, self.mode, self.custom_mimetype, dpi = header - if dpi is not None: - self.info["dpi"] = dpi - if self.fp.read(12).endswith(b"jp2c\xff\x4f\xff\x51"): - self._parse_comment() - else: - msg = "not a JPEG 2000 file" - raise SyntaxError(msg) - - if self.size is None or self.mode is None: - msg = "unable to determine size/mode" - raise SyntaxError(msg) - - self._reduce = 0 - self.layers = 0 - - fd = -1 - length = -1 - - try: - fd = self.fp.fileno() - length = os.fstat(fd).st_size - except Exception: - fd = -1 - try: - pos = self.fp.tell() - self.fp.seek(0, io.SEEK_END) - length = self.fp.tell() - self.fp.seek(pos) - except Exception: - length = -1 - - self.tile = [ - ( - "jpeg2k", - (0, 0) + self.size, - 0, - (self.codec, self._reduce, self.layers, fd, length), - ) - ] - - def _parse_comment(self): - hdr = self.fp.read(2) - length = _binary.i16be(hdr) - self.fp.seek(length - 2, os.SEEK_CUR) - - while True: - marker = self.fp.read(2) - if not marker: - break - typ = marker[1] - if typ in (0x90, 0xD9): - # Start of tile or end of codestream - break - hdr = self.fp.read(2) - length = _binary.i16be(hdr) - if typ == 0x64: - # Comment - self.info["comment"] = self.fp.read(length - 2)[2:] - break - else: - self.fp.seek(length - 2, os.SEEK_CUR) - - @property - def reduce(self): - # https://github.com/python-pillow/Pillow/issues/4343 found that the - # new Image 'reduce' method was shadowed by this plugin's 'reduce' - # property. This attempts to allow for both scenarios - return self._reduce or super().reduce - - @reduce.setter - def reduce(self, value): - self._reduce = value - - def load(self): - if self.tile and self._reduce: - power = 1 << self._reduce - adjust = power >> 1 - self._size = ( - int((self.size[0] + adjust) / power), - int((self.size[1] + adjust) / power), - ) - - # Update the reduce and layers settings - t = self.tile[0] - t3 = (t[3][0], self._reduce, self.layers, t[3][3], t[3][4]) - self.tile = [(t[0], (0, 0) + self.size, t[2], t3)] - - return ImageFile.ImageFile.load(self) - - -def _accept(prefix): - return ( - prefix[:4] == b"\xff\x4f\xff\x51" - or prefix[:12] == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a" - ) - - -# ------------------------------------------------------------ -# Save support - - -def _save(im, fp, filename): - # Get the keyword arguments - info = im.encoderinfo - - if filename.endswith(".j2k") or info.get("no_jp2", False): - kind = "j2k" - else: - kind = "jp2" - - offset = info.get("offset", None) - tile_offset = info.get("tile_offset", None) - tile_size = info.get("tile_size", None) - quality_mode = info.get("quality_mode", "rates") - quality_layers = info.get("quality_layers", None) - if quality_layers is not None and not ( - isinstance(quality_layers, (list, tuple)) - and all( - [ - isinstance(quality_layer, (int, float)) - for quality_layer in quality_layers - ] - ) - ): - msg = "quality_layers must be a sequence of numbers" - raise ValueError(msg) - - num_resolutions = info.get("num_resolutions", 0) - cblk_size = info.get("codeblock_size", None) - precinct_size = info.get("precinct_size", None) - irreversible = info.get("irreversible", False) - progression = info.get("progression", "LRCP") - cinema_mode = info.get("cinema_mode", "no") - mct = info.get("mct", 0) - signed = info.get("signed", False) - comment = info.get("comment") - if isinstance(comment, str): - comment = comment.encode() - plt = info.get("plt", False) - - fd = -1 - if hasattr(fp, "fileno"): - try: - fd = fp.fileno() - except Exception: - fd = -1 - - im.encoderconfig = ( - offset, - tile_offset, - tile_size, - quality_mode, - quality_layers, - num_resolutions, - cblk_size, - precinct_size, - irreversible, - progression, - cinema_mode, - mct, - signed, - fd, - comment, - plt, - ) - - ImageFile._save(im, fp, [("jpeg2k", (0, 0) + im.size, 0, kind)]) - - -# ------------------------------------------------------------ -# Registry stuff - - -Image.register_open(Jpeg2KImageFile.format, Jpeg2KImageFile, _accept) -Image.register_save(Jpeg2KImageFile.format, _save) - -Image.register_extensions( - Jpeg2KImageFile.format, [".jp2", ".j2k", ".jpc", ".jpf", ".jpx", ".j2c"] -) - -Image.register_mime(Jpeg2KImageFile.format, "image/jp2") diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/helpers.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/helpers.py deleted file mode 100644 index 571be44461b0847c9edb8654c9d528abed0b7800..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/helpers.py +++ /dev/null @@ -1,101 +0,0 @@ -from typing import cast, List, Type, Union, ValuesView - -from .._connection import Connection, NEED_DATA, PAUSED -from .._events import ( - ConnectionClosed, - Data, - EndOfMessage, - Event, - InformationalResponse, - Request, - Response, -) -from .._state import CLIENT, CLOSED, DONE, MUST_CLOSE, SERVER -from .._util import Sentinel - -try: - from typing import Literal -except ImportError: - from typing_extensions import Literal # type: ignore - - -def get_all_events(conn: Connection) -> List[Event]: - got_events = [] - while True: - event = conn.next_event() - if event in (NEED_DATA, PAUSED): - break - event = cast(Event, event) - got_events.append(event) - if type(event) is ConnectionClosed: - break - return got_events - - -def receive_and_get(conn: Connection, data: bytes) -> List[Event]: - conn.receive_data(data) - return get_all_events(conn) - - -# Merges adjacent Data events, converts payloads to bytestrings, and removes -# chunk boundaries. -def normalize_data_events(in_events: List[Event]) -> List[Event]: - out_events: List[Event] = [] - for event in in_events: - if type(event) is Data: - event = Data(data=bytes(event.data), chunk_start=False, chunk_end=False) - if out_events and type(out_events[-1]) is type(event) is Data: - out_events[-1] = Data( - data=out_events[-1].data + event.data, - chunk_start=out_events[-1].chunk_start, - chunk_end=out_events[-1].chunk_end, - ) - else: - out_events.append(event) - return out_events - - -# Given that we want to write tests that push some events through a Connection -# and check that its state updates appropriately... we might as make a habit -# of pushing them through two Connections with a fake network link in -# between. -class ConnectionPair: - def __init__(self) -> None: - self.conn = {CLIENT: Connection(CLIENT), SERVER: Connection(SERVER)} - self.other = {CLIENT: SERVER, SERVER: CLIENT} - - @property - def conns(self) -> ValuesView[Connection]: - return self.conn.values() - - # expect="match" if expect=send_events; expect=[...] to say what expected - def send( - self, - role: Type[Sentinel], - send_events: Union[List[Event], Event], - expect: Union[List[Event], Event, Literal["match"]] = "match", - ) -> bytes: - if not isinstance(send_events, list): - send_events = [send_events] - data = b"" - closed = False - for send_event in send_events: - new_data = self.conn[role].send(send_event) - if new_data is None: - closed = True - else: - data += new_data - # send uses b"" to mean b"", and None to mean closed - # receive uses b"" to mean closed, and None to mean "try again" - # so we have to translate between the two conventions - if data: - self.conn[self.other[role]].receive_data(data) - if closed: - self.conn[self.other[role]].receive_data(b"") - got_events = get_all_events(self.conn[self.other[role]]) - if expect == "match": - expect = send_events - if not isinstance(expect, list): - expect = [expect] - assert got_events == expect - return data diff --git a/spaces/dddmiku/vits-uma-genshin-honkai/models.py b/spaces/dddmiku/vits-uma-genshin-honkai/models.py deleted file mode 100644 index 52e15d1b9775038fd6e82b2efe6f95f51c66802d..0000000000000000000000000000000000000000 --- a/spaces/dddmiku/vits-uma-genshin-honkai/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - device = next(self.parameters()).device # 获取模型所在的设备 - x, m_p, logs_p, x_mask = self.enc_p(x.to(device), x_lengths.to(device)) - if self.n_speakers > 0: - g = self.emb_g(sid.to(device)).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/util/visualizer.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/util/visualizer.py deleted file mode 100644 index 4023a6d4086acba9bc88e079f625194d324d7c9e..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/util/visualizer.py +++ /dev/null @@ -1,227 +0,0 @@ -"""This script defines the visualizer for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import os -import sys -import ntpath -import time -from . import util, html -from subprocess import Popen, PIPE -from torch.utils.tensorboard import SummaryWriter - -def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256): - """Save images to the disk. - - Parameters: - webpage (the HTML class) -- the HTML webpage class that stores these imaegs (see html.py for more details) - visuals (OrderedDict) -- an ordered dictionary that stores (name, images (either tensor or numpy) ) pairs - image_path (str) -- the string is used to create image paths - aspect_ratio (float) -- the aspect ratio of saved images - width (int) -- the images will be resized to width x width - - This function will save images stored in 'visuals' to the HTML file specified by 'webpage'. - """ - image_dir = webpage.get_image_dir() - short_path = ntpath.basename(image_path[0]) - name = os.path.splitext(short_path)[0] - - webpage.add_header(name) - ims, txts, links = [], [], [] - - for label, im_data in visuals.items(): - im = util.tensor2im(im_data) - image_name = '%s/%s.png' % (label, name) - os.makedirs(os.path.join(image_dir, label), exist_ok=True) - save_path = os.path.join(image_dir, image_name) - util.save_image(im, save_path, aspect_ratio=aspect_ratio) - ims.append(image_name) - txts.append(label) - links.append(image_name) - webpage.add_images(ims, txts, links, width=width) - - -class Visualizer(): - """This class includes several functions that can display/save images and print/save logging information. - - It uses a Python library tensprboardX for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with images. - """ - - def __init__(self, opt): - """Initialize the Visualizer class - - Parameters: - opt -- stores all the experiment flags; needs to be a subclass of BaseOptions - Step 1: Cache the training/test options - Step 2: create a tensorboard writer - Step 3: create an HTML object for saveing HTML filters - Step 4: create a logging file to store training losses - """ - self.opt = opt # cache the option - self.use_html = opt.isTrain and not opt.no_html - self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, 'logs', opt.name)) - self.win_size = opt.display_winsize - self.name = opt.name - self.saved = False - if self.use_html: # create an HTML object at /web/; images will be saved under /web/images/ - self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web') - self.img_dir = os.path.join(self.web_dir, 'images') - print('create web directory %s...' % self.web_dir) - util.mkdirs([self.web_dir, self.img_dir]) - # create a logging file to store training losses - self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt') - with open(self.log_name, "a") as log_file: - now = time.strftime("%c") - log_file.write('================ Training Loss (%s) ================\n' % now) - - def reset(self): - """Reset the self.saved status""" - self.saved = False - - - def display_current_results(self, visuals, total_iters, epoch, save_result): - """Display current results on tensorboad; save current results to an HTML file. - - Parameters: - visuals (OrderedDict) - - dictionary of images to display or save - total_iters (int) -- total iterations - epoch (int) - - the current epoch - save_result (bool) - - if save the current results to an HTML file - """ - for label, image in visuals.items(): - self.writer.add_image(label, util.tensor2im(image), total_iters, dataformats='HWC') - - if self.use_html and (save_result or not self.saved): # save images to an HTML file if they haven't been saved. - self.saved = True - # save images to the disk - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label)) - util.save_image(image_numpy, img_path) - - # update website - webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=0) - for n in range(epoch, 0, -1): - webpage.add_header('epoch [%d]' % n) - ims, txts, links = [], [], [] - - for label, image_numpy in visuals.items(): - image_numpy = util.tensor2im(image) - img_path = 'epoch%.3d_%s.png' % (n, label) - ims.append(img_path) - txts.append(label) - links.append(img_path) - webpage.add_images(ims, txts, links, width=self.win_size) - webpage.save() - - def plot_current_losses(self, total_iters, losses): - # G_loss_collection = {} - # D_loss_collection = {} - # for name, value in losses.items(): - # if 'G' in name or 'NCE' in name or 'idt' in name: - # G_loss_collection[name] = value - # else: - # D_loss_collection[name] = value - # self.writer.add_scalars('G_collec', G_loss_collection, total_iters) - # self.writer.add_scalars('D_collec', D_loss_collection, total_iters) - for name, value in losses.items(): - self.writer.add_scalar(name, value, total_iters) - - # losses: same format as |losses| of plot_current_losses - def print_current_losses(self, epoch, iters, losses, t_comp, t_data): - """print current losses on console; also save the losses to the disk - - Parameters: - epoch (int) -- current epoch - iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch) - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - t_comp (float) -- computational time per data point (normalized by batch_size) - t_data (float) -- data loading time per data point (normalized by batch_size) - """ - message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data) - for k, v in losses.items(): - message += '%s: %.3f ' % (k, v) - - print(message) # print the message - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) # save the message - - -class MyVisualizer: - def __init__(self, opt): - """Initialize the Visualizer class - - Parameters: - opt -- stores all the experiment flags; needs to be a subclass of BaseOptions - Step 1: Cache the training/test options - Step 2: create a tensorboard writer - Step 3: create an HTML object for saveing HTML filters - Step 4: create a logging file to store training losses - """ - self.opt = opt # cache the optio - self.name = opt.name - self.img_dir = os.path.join(opt.checkpoints_dir, opt.name, 'results') - - if opt.phase != 'test': - self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, opt.name, 'logs')) - # create a logging file to store training losses - self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt') - with open(self.log_name, "a") as log_file: - now = time.strftime("%c") - log_file.write('================ Training Loss (%s) ================\n' % now) - - - def display_current_results(self, visuals, total_iters, epoch, dataset='train', save_results=False, count=0, name=None, - add_image=True): - """Display current results on tensorboad; save current results to an HTML file. - - Parameters: - visuals (OrderedDict) - - dictionary of images to display or save - total_iters (int) -- total iterations - epoch (int) - - the current epoch - dataset (str) - - 'train' or 'val' or 'test' - """ - # if (not add_image) and (not save_results): return - - for label, image in visuals.items(): - for i in range(image.shape[0]): - image_numpy = util.tensor2im(image[i]) - if add_image: - self.writer.add_image(label + '%s_%02d'%(dataset, i + count), - image_numpy, total_iters, dataformats='HWC') - - if save_results: - save_path = os.path.join(self.img_dir, dataset, 'epoch_%s_%06d'%(epoch, total_iters)) - if not os.path.isdir(save_path): - os.makedirs(save_path) - - if name is not None: - img_path = os.path.join(save_path, '%s.png' % name) - else: - img_path = os.path.join(save_path, '%s_%03d.png' % (label, i + count)) - util.save_image(image_numpy, img_path) - - - def plot_current_losses(self, total_iters, losses, dataset='train'): - for name, value in losses.items(): - self.writer.add_scalar(name + '/%s'%dataset, value, total_iters) - - # losses: same format as |losses| of plot_current_losses - def print_current_losses(self, epoch, iters, losses, t_comp, t_data, dataset='train'): - """print current losses on console; also save the losses to the disk - - Parameters: - epoch (int) -- current epoch - iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch) - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - t_comp (float) -- computational time per data point (normalized by batch_size) - t_data (float) -- data loading time per data point (normalized by batch_size) - """ - message = '(dataset: %s, epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % ( - dataset, epoch, iters, t_comp, t_data) - for k, v in losses.items(): - message += '%s: %.3f ' % (k, v) - - print(message) # print the message - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) # save the message diff --git a/spaces/deprem-ml/deprem_satellite_test/dataloader.py b/spaces/deprem-ml/deprem_satellite_test/dataloader.py deleted file mode 100644 index 2a300b6444c3439dd8368566e229947ec789a069..0000000000000000000000000000000000000000 --- a/spaces/deprem-ml/deprem_satellite_test/dataloader.py +++ /dev/null @@ -1,55 +0,0 @@ -import albumentations as albu -import numpy as np -import cv2 -import os -os.environ['CUDA_VISIBLE_DEVICES'] = '0' - - -class Dataset: - def __init__( - self, - image_path, - augmentation=None, - preprocessing=None, - ): - self.pil_image = image_path - self.augmentation = augmentation - self.preprocessing = preprocessing - - def get(self): - # pil image > numpy array - image = np.array(self.pil_image) - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - - # apply augmentations - if self.augmentation: - sample = self.augmentation(image=image) - image = sample['image'] - - # apply preprocessing - if self.preprocessing: - sample = self.preprocessing(image=image) - image = sample['image'] - - return image - - -def get_validation_augmentation(): - """Add paddings to make image shape divisible by 32""" - test_transform = [ - albu.PadIfNeeded(384, 480) - ] - return albu.Compose(test_transform) - - -def to_tensor(x, **kwargs): - return x.transpose(2, 0, 1).astype('float32') - - -def get_preprocessing(preprocessing_fn): - - _transform = [ - albu.Lambda(image=preprocessing_fn), - albu.Lambda(image=to_tensor), - ] - return albu.Compose(_transform) diff --git a/spaces/diacanFperku/AutoGPT/BonziBUDDY (The Gorilla) Serial Key.md b/spaces/diacanFperku/AutoGPT/BonziBUDDY (The Gorilla) Serial Key.md deleted file mode 100644 index 1a297c505e24b79b97ca2c8f6c0e5ea492103ad1..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/BonziBUDDY (The Gorilla) Serial Key.md +++ /dev/null @@ -1,20 +0,0 @@ -

    BonziBUDDY (The Gorilla) Serial Key


    Download Ziphttps://gohhs.com/2uFUbO



    - -The City of Richardson's Zoning Board of Adjustment (ZBA) unanimously approved a conditional use permit for Tenants in Common at the corner of Cass and Morris Road, back on Tuesday, February 13. It's a huge win for the new neighbors at this neighborhood on the edge of the Village, as well as a larger victory for all homebuyers in the future. - -According to the ZBA's decision, the proposed use of the property will consist of "an 11,000 square foot multi-family residence with 48 units and retail sales to the public." The requested height limit for the structure is 14 stories tall. You can read the board's full decision here. - -It's a big step in the right direction for this particular development, and a win for the people who will live there as well as the business owners and neighbors who came to testify before the ZBA. This is a step in the right direction as more housing options are welcomed in North Richland Hills. - -Image courtesy Tenants in Common - -For more great content, please become a subscriber to stay up to date with all the Acute Living news. - -Subscribe to the Acute Living newsletter to stay up to date on the latest health care news, trends and developments.'Tis the season of holiday parties, and that means it’s time to start planning your New Year’s Eve bash. And with New Year’s Eve falling on a Thursday this year, there are plenty of big parties around the city (as well as a few smaller ones) that will make for a memorable night. Here are the top five best New Year’s Eve parties in Boston, based on a combination of ticket prices, lines, and who’s on the guest list. - -This year’s Night of 100 Santas is back at the Museum of Fine Arts with hundreds of hand-painted Santas. It’s the perfect backdrop for an afternoon of shopping and selfies, and admission is $25 per person. The event also includes a free yoga class and complimentary hors d’oeuvres. - -This Black Friday, Best Buy is throwing its 6th annual “Gift of Light” holiday event at the Coolidge Corner Theater. At the party, visitors can buy up to $1,000 in gift cards for Best Buy, enjoy free in-store shopping and gift wrapping, take a behind-the-scenes tour of the store, and see free performances 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/ESQUEMA DA TV SAMSUNG LN40D550.md b/spaces/diacanFperku/AutoGPT/ESQUEMA DA TV SAMSUNG LN40D550.md deleted file mode 100644 index a38291ab0823180ebc06c6e8e022f2ce0f7536c7..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/ESQUEMA DA TV SAMSUNG LN40D550.md +++ /dev/null @@ -1,6 +0,0 @@ -

    ESQUEMA DA TV SAMSUNG LN40D550


    Download File ☆☆☆☆☆ https://gohhs.com/2uFTrC



    -
    -Encontre Esquema Tv Led Un40d5500 - Eletrnicos, . Placa Da Fonte Tv Led Samsung Un40d5500 Bn44-00458a . Usado - So Paulo . R$ 170.. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/text/__init__.py b/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/dolceschokolade/chatbot-mini/types/storage.ts b/spaces/dolceschokolade/chatbot-mini/types/storage.ts deleted file mode 100644 index 1b93e8bfe5de7259e707bdafae3055e5f0181711..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/types/storage.ts +++ /dev/null @@ -1,21 +0,0 @@ -import { Conversation } from './chat'; -import { FolderInterface } from './folder'; -import { PluginKey } from './plugin'; -import { Prompt } from './prompt'; - -// keep track of local storage schema -export interface LocalStorage { - apiKey: string; - conversationHistory: Conversation[]; - selectedConversation: Conversation; - theme: 'light' | 'dark'; - // added folders (3/23/23) - folders: FolderInterface[]; - // added prompts (3/26/23) - prompts: Prompt[]; - // added showChatbar and showPromptbar (3/26/23) - showChatbar: boolean; - showPromptbar: boolean; - // added plugin keys (4/3/23) - pluginKeys: PluginKey[]; -} diff --git a/spaces/dorkai/singpt/modules/callbacks.py b/spaces/dorkai/singpt/modules/callbacks.py deleted file mode 100644 index faa4a5e9991e1ae711589fed61e7d1f48e28fed3..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt/modules/callbacks.py +++ /dev/null @@ -1,98 +0,0 @@ -import gc -from queue import Queue -from threading import Thread - -import torch -import transformers - -import modules.shared as shared - -# Copied from https://github.com/PygmalionAI/gradio-ui/ -class _SentinelTokenStoppingCriteria(transformers.StoppingCriteria): - - def __init__(self, sentinel_token_ids: torch.LongTensor, - starting_idx: int): - transformers.StoppingCriteria.__init__(self) - self.sentinel_token_ids = sentinel_token_ids - self.starting_idx = starting_idx - - def __call__(self, input_ids: torch.LongTensor, - _scores: torch.FloatTensor) -> bool: - for sample in input_ids: - trimmed_sample = sample[self.starting_idx:] - # Can't unfold, output is still too tiny. Skip. - if trimmed_sample.shape[-1] < self.sentinel_token_ids.shape[-1]: - continue - - for window in trimmed_sample.unfold( - 0, self.sentinel_token_ids.shape[-1], 1): - if torch.all(torch.eq(self.sentinel_token_ids, window)): - return True - return False - -class Stream(transformers.StoppingCriteria): - def __init__(self, callback_func=None): - self.callback_func = callback_func - - def __call__(self, input_ids, scores) -> bool: - if self.callback_func is not None: - self.callback_func(input_ids[0]) - return False - -class Iteratorize: - - """ - Transforms a function that takes a callback - into a lazy iterator (generator). - """ - - def __init__(self, func, kwargs={}, callback=None): - self.mfunc=func - self.c_callback=callback - self.q = Queue() - self.sentinel = object() - self.kwargs = kwargs - self.stop_now = False - - def _callback(val): - if self.stop_now: - raise ValueError - self.q.put(val) - - def gentask(): - try: - ret = self.mfunc(callback=_callback, **self.kwargs) - except ValueError: - pass - clear_torch_cache() - self.q.put(self.sentinel) - if self.c_callback: - self.c_callback(ret) - - self.thread = Thread(target=gentask) - self.thread.start() - - def __iter__(self): - return self - - def __next__(self): - obj = self.q.get(True,None) - if obj is self.sentinel: - raise StopIteration - else: - return obj - - def __del__(self): - clear_torch_cache() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.stop_now = True - clear_torch_cache() - -def clear_torch_cache(): - gc.collect() - if not shared.args.cpu: - torch.cuda.empty_cache() diff --git a/spaces/eatcosmos/hackaprompt/hackaprompt/README.md b/spaces/eatcosmos/hackaprompt/hackaprompt/README.md deleted file mode 100644 index bd2850a0809dee3a95ccaead7b816060c85259d7..0000000000000000000000000000000000000000 --- a/spaces/eatcosmos/hackaprompt/hackaprompt/README.md +++ /dev/null @@ -1 +0,0 @@ -Execute `gradio_app.py` to launch the Gradio space. diff --git a/spaces/elina12/asr_arabic/app.py b/spaces/elina12/asr_arabic/app.py deleted file mode 100644 index 94253449c39fd3bb36dab1f1b497ef4be9bf7f55..0000000000000000000000000000000000000000 --- a/spaces/elina12/asr_arabic/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from transformers import pipeline -import gradio as gr -import os -#add - -pipe = pipeline(model="jonatasgrosman/wav2vec2-large-xlsr-53-arabic") - -def transcribe(audio): - text = pipe(audio)["text"] - return text - -iface = gr.Interface( - fn=transcribe, - inputs=gr.Audio(source="upload", type="filepath"), - outputs="text", - -) -iface.launch() \ No newline at end of file diff --git a/spaces/epochs-demos/MedicalImagingApp/README.md b/spaces/epochs-demos/MedicalImagingApp/README.md deleted file mode 100644 index 21bca40fbdf0081ac122ccc23e97f6c004ea29d3..0000000000000000000000000000000000000000 --- a/spaces/epochs-demos/MedicalImagingApp/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MedicalImagingApplication -emoji: 🦀 -colorFrom: pink -colorTo: red -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -Credits: VikramSingh178/MedicalImagingApplication ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/facebook/StyleNeRF/gui_utils/imgui_window.py b/spaces/facebook/StyleNeRF/gui_utils/imgui_window.py deleted file mode 100644 index 30d539a1382def526050c83978d1118348ac77ad..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/gui_utils/imgui_window.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import imgui -import imgui.integrations.glfw - -from . import glfw_window -from . import imgui_utils -from . import text_utils - -#---------------------------------------------------------------------------- - -class ImguiWindow(glfw_window.GlfwWindow): - def __init__(self, *, title='ImguiWindow', font=None, font_sizes=range(14,24), **glfw_kwargs): - if font is None: - font = text_utils.get_default_font() - font_sizes = {int(size) for size in font_sizes} - super().__init__(title=title, **glfw_kwargs) - - # Init fields. - self._imgui_context = None - self._imgui_renderer = None - self._imgui_fonts = None - self._cur_font_size = max(font_sizes) - - # Delete leftover imgui.ini to avoid unexpected behavior. - if os.path.isfile('imgui.ini'): - os.remove('imgui.ini') - - # Init ImGui. - self._imgui_context = imgui.create_context() - self._imgui_renderer = _GlfwRenderer(self._glfw_window) - self._attach_glfw_callbacks() - imgui.get_io().ini_saving_rate = 0 # Disable creating imgui.ini at runtime. - imgui.get_io().mouse_drag_threshold = 0 # Improve behavior with imgui_utils.drag_custom(). - self._imgui_fonts = {size: imgui.get_io().fonts.add_font_from_file_ttf(font, size) for size in font_sizes} - self._imgui_renderer.refresh_font_texture() - - def close(self): - self.make_context_current() - self._imgui_fonts = None - if self._imgui_renderer is not None: - self._imgui_renderer.shutdown() - self._imgui_renderer = None - if self._imgui_context is not None: - #imgui.destroy_context(self._imgui_context) # Commented out to avoid creating imgui.ini at the end. - self._imgui_context = None - super().close() - - def _glfw_key_callback(self, *args): - super()._glfw_key_callback(*args) - self._imgui_renderer.keyboard_callback(*args) - - @property - def font_size(self): - return self._cur_font_size - - @property - def spacing(self): - return round(self._cur_font_size * 0.4) - - def set_font_size(self, target): # Applied on next frame. - self._cur_font_size = min((abs(key - target), key) for key in self._imgui_fonts.keys())[1] - - def begin_frame(self): - # Begin glfw frame. - super().begin_frame() - - # Process imgui events. - self._imgui_renderer.mouse_wheel_multiplier = self._cur_font_size / 10 - if self.content_width > 0 and self.content_height > 0: - self._imgui_renderer.process_inputs() - - # Begin imgui frame. - imgui.new_frame() - imgui.push_font(self._imgui_fonts[self._cur_font_size]) - imgui_utils.set_default_style(spacing=self.spacing, indent=self.font_size, scrollbar=self.font_size+4) - - def end_frame(self): - imgui.pop_font() - imgui.render() - imgui.end_frame() - self._imgui_renderer.render(imgui.get_draw_data()) - super().end_frame() - -#---------------------------------------------------------------------------- -# Wrapper class for GlfwRenderer to fix a mouse wheel bug on Linux. - -class _GlfwRenderer(imgui.integrations.glfw.GlfwRenderer): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.mouse_wheel_multiplier = 1 - - def scroll_callback(self, window, x_offset, y_offset): - self.io.mouse_wheel += y_offset * self.mouse_wheel_multiplier - -#---------------------------------------------------------------------------- diff --git a/spaces/falterWliame/Face_Mask_Detection/Fcs Express 4 Flow Cytometry Crack 19 !!INSTALL!!.md b/spaces/falterWliame/Face_Mask_Detection/Fcs Express 4 Flow Cytometry Crack 19 !!INSTALL!!.md deleted file mode 100644 index 6bc43297c41fb4c7ea1c9fd129de414b745cb9a6..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Fcs Express 4 Flow Cytometry Crack 19 !!INSTALL!!.md +++ /dev/null @@ -1,60 +0,0 @@ -

    Fcs Express 4 Flow Cytometry Crack 19


    Download ✵✵✵ https://urlca.com/2uDdL5



    -
    -Getting Started with FC Express - -FC Express is a fully integrated solution for analysis of flow cytometry data. You can simply import your flow data, analyze, and export results to Excel or other formats. FC Express is also compatible with a wide range of FlowJo® software. See below for a list of compatible software products. - -To analyze your data, you can choose to use: - -Single- or Multi-color Analysis - -The choices to select single or multi-color analysis are shown above. This dialog box allows you to select the color scheme to use for your analysis. - -Measurement Type - -FC Express has many measurement types available, including: - -Percent - -Percent of Events: Percentage of events in a population of cells - -Phenotypic - -Phenotype/Cell Population: Evaluation of selected markers in a population of cells (scalar analysis) - -Scatterplot - -Scatterplot: display of two parameters plotted against each other. You can create a scatterplot of different markers of one cell against markers of another cell - -Scatter Plot with Bivariate Data - -A scatter plot of two parameters plotted against each other. You can create a scatterplot of different markers of one cell against markers of another cell. FC Express also has a dialog box that lets you create a bivariate dot plot. - -Summary Plot - -A summary plot is a graphic representation of the distribution of one or more parameters for cells in a population. - -This dialog box allows you to view a summary plot of one or more parameters for cells in a population. You can set the plot type (Histogram, Scatter Plot, Bivariate Plot, or Plot Variance) and the color scheme for the plot. - -Histogram - -A histogram is a graphical representation of the distribution of one or more parameters for cells in a population. - -Histograms can be created using the following three parameters: - -Parameter - -Description - -Number - -Minimum number of events to display on histogram (0 = all events) - -Range - -Range of values for the data. For example, if the data are the fluorescence intensity values of a population of cells, the values for the range can be set to 1-2000. If the data are the frequency of cells that have a particular fluorescent signal, the values for the range can be set to 0-100. - -Automatic 4fefd39f24
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Globesurfer X 1 Firmware Download.rar [CRACKED].md b/spaces/falterWliame/Face_Mask_Detection/Globesurfer X 1 Firmware Download.rar [CRACKED].md deleted file mode 100644 index 674fab03035a833a884ad8440f18136909dc8bd1..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Globesurfer X 1 Firmware Download.rar [CRACKED].md +++ /dev/null @@ -1,12 +0,0 @@ -

    globesurfer x 1 firmware download.rar


    Download Zip ……… https://urlca.com/2uDcYa



    -
    -clarmae ​​b7f02f1a74 martpriy 01/31/2022. -430 00:24:39,560 00:24:41,929 That could be invented by someone -431 00:24:41,929 00:24:43,863 who does not care what the other teams do. -432 00:24:43,863 00:24:45,481 Oh yes! -433 00:24:56,630 00:24:59,265 It's time to eat. -434 00:25:00,835 00:25:03,421 Oh, it's not that quick. -435 00:25:04,506 00:25:07,408 Well, it's never easy to eat the same food 8a78ff9644
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Pavtube Blu Ray Video Converter Ulti) UPD.md b/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Pavtube Blu Ray Video Converter Ulti) UPD.md deleted file mode 100644 index dd00703e46fba7a28ea3ef94e066abbc372c4e96..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Pavtube Blu Ray Video Converter Ulti) UPD.md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (Pavtube Blu Ray Video Converter Ulti)


    DOWNLOADhttps://urlca.com/2uDcbt



    -
    -Comparison between Pavtube Video Converter Ultimate & Wondershare Video Converter Ultimate ... With the help of Pavtube Video Converter Ultimate, you can convert your Blu-rays/DVDs/videos to almost any video ... HD video formats ... install the Wondershare Player on your device which will occupy your device space. 1fdad05405
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/Aproveite o Traffic Rider Dinheiro Infinito e Corra em Pistas Movimentadas com Motos Reais.md b/spaces/fatiXbelha/sd/Aproveite o Traffic Rider Dinheiro Infinito e Corra em Pistas Movimentadas com Motos Reais.md deleted file mode 100644 index 4d1e50232fc5d6d05c40306452ab30a70a25f1b9..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Aproveite o Traffic Rider Dinheiro Infinito e Corra em Pistas Movimentadas com Motos Reais.md +++ /dev/null @@ -1,112 +0,0 @@ - -

    Traffic Rider Dinheiro Infinito APK Download: Como Baixar e Jogar o Jogo de Moto Mais Emocionante do Android e IOS

    -

    Você gosta de jogos de corrida de moto? Você quer sentir a adrenalina de pilotar uma moto potente em pistas movimentadas, ultrapassando carros e evitando acidentes? Você quer ter acesso a todas as motos do jogo, sem precisar gastar dinheiro real? Então você precisa conhecer o Traffic Rider Dinheiro Infinito APK, um mod do jogo original que oferece dinheiro ilimitado para você comprar e melhorar todas as motos que quiser. Neste artigo, vamos explicar o que é Traffic Rider, o que é Traffic Rider Dinheiro Infinito APK, como baixar e instalar o mod, e como jogar o jogo de moto mais emocionante do Android e IOS. Fique ligado!

    -

    O que é Traffic Rider?

    -

    Traffic Rider é um jogo para Android e IOS que foi lançado em 2016 pela Soner Kara, uma desenvolvedora turca de jogos. O jogo é um dos mais populares do gênero de corrida de moto, tendo mais de 100 milhões de downloads na Google Play Store e na App Store. O jogo se destaca por vários motivos, como:

    -

    traffic rider dinheiro infinito apk download


    Download ••• https://urllie.com/2uNAFF



    -

    Um jogo de corrida de moto em primeira pessoa

    -

    Traffic Rider é um dos poucos jogos de corrida de moto que usa a câmera em primeira pessoa, ou seja, você vê a pista pelos olhos do piloto. Isso aumenta a sensação de imersão e realismo do jogo, fazendo você se sentir como se estivesse realmente pilotando uma moto.

    -

    Um jogo com gráficos e sons realistas

    -

    Traffic Rider tem gráficos muito detalhados e bem desenhados, que mostram as pistas, os carros, as motos, e os cenários com muita qualidade. O jogo também tem um som detalhado do motor das motos, que foi gravado a partir de modelos da vida real. Além disso, o jogo tem uma mudança de hora do dia, que altera a iluminação e o clima das pistas.

    -

    Um jogo com vários modos e missões

    -

    Traffic Rider tem um modo carreira com mais de 40 missões, que vão desde corridas simples até desafios mais complexos. O jogo também tem outros modos, como o modo livre, o modo sem fim, o modo contra o tempo, e o modo contra o tráfego. Cada modo tem suas próprias características e objetivos, tornando o jogo mais variado e divertido.

    -

    O que é Traffic Rider Dinheiro Infinito APK?

    -

    T

    Traffic Rider Dinheiro Infinito APK é um mod do jogo original que oferece dinheiro ilimitado para os jogadores. Um mod é uma modificação feita por terceiros que altera alguns aspectos do jogo, como recursos, gráficos, sons, etc. O mod de Traffic Rider Dinheiro Infinito APK tem as seguintes vantagens:

    -

    Um mod que oferece dinheiro ilimitado

    -

    O mod de Traffic Rider Dinheiro Infinito APK permite que você tenha dinheiro infinito no jogo, ou seja, você pode comprar e melhorar todas as motos que quiser, sem se preocupar com o custo. Isso torna o jogo mais fácil e divertido, pois você pode experimentar todas as motos disponíveis, desde as mais simples até as mais potentes e velozes.

    -

    Um mod que permite comprar e melhorar todas as motos

    -

    O mod de Traffic Rider Dinheiro Infinito APK também permite que você compre e melhore todas as motos do jogo, sem precisar cumprir os requisitos mínimos. Isso significa que você pode ter acesso a motos que só seriam liberadas depois de completar certas missões ou alcançar certos níveis. Além disso, você pode melhorar os atributos das motos, como velocidade, aceleração, freio, e manuseio, tornando-as mais eficientes e competitivas.

    -

    Um mod que funciona em dispositivos Android e IOS

    -

    O mod de Traffic Rider Dinheiro Infinito APK funciona tanto em dispositivos Android quanto em IOS, ou seja, você pode baixar e instalar o mod em qualquer smartphone ou tablet que tenha o sistema operacional Android ou IOS. Isso é uma vantagem, pois muitos mods só funcionam em um tipo de dispositivo ou sistema operacional.

    -

    Como baixar e instalar Traffic Rider Dinheiro Infinito APK?

    -

    Para baixar e instalar o mod de Traffic Rider Dinheiro Infinito APK, você precisa seguir alguns passos simples, mas também tomar algumas precauções para evitar problemas com o mod. Veja a seguir:

    -

    traffic rider mod apk dinheiro infinito
    -baixar traffic rider dinheiro infinito
    -traffic rider hack dinheiro infinito
    -traffic rider apk mod dinheiro infinito 2023
    -como ter dinheiro infinito no traffic rider
    -traffic rider dinheiro infinito atualizado
    -traffic rider dinheiro infinito e chaves
    -download traffic rider dinheiro infinito
    -traffic rider apk dinheiro infinito 2022
    -traffic rider com dinheiro infinito
    -traffic rider dinheiro infinito e desbloqueado
    -instalar traffic rider dinheiro infinito
    -traffic rider dinheiro infinito ios
    -jogar traffic rider com dinheiro infinito
    -traffic rider dinheiro infinito mediafıre
    -traffic rider dinheiro infinito para android
    -traffic rider dinheiro infinito sem root
    -traffic rider apk mod dinheiro infinito e chaves
    -baixar jogo traffic rider dinheiro infinito
    -como baixar traffic rider com dinheiro infinito
    -como instalar traffic rider com dinheiro infinito
    -como jogar traffic rider com dinheiro infinito
    -download do traffic rider com dinheiro infinito
    -hack para traffic rider dinheiro infinito
    -jogo de moto traffic rider dinheiro infinito
    -link para baixar traffic rider dinheiro infinito
    -melhor site para baixar traffic rider dinheiro infinito
    -novo traffic rider com dinheiro infinito
    -onde baixar traffic rider com dinheiro infinito
    -site para baixar traffic rider dinheiro infinito
    -tutorial de como baixar traffic rider dinheiro infinito
    -ultima versão do traffic rider com dinheiro infinito
    -versão atualizada do traffic rider com dinheiro infinito
    -versão mais recente do traffic rider com dinheiro infinito
    -como atualizar o traffic rider com dinheiro infinito
    -como conseguir chaves no traffic rider com dinheiro infinito
    -como desbloquear todas as motos no traffic rider com dinheiro infinito
    -como fazer download do traffic rider com dinheiro infinito
    -como ganhar mais pontos no traffic rider com dinheiro infinito
    -como passar de fase no traffic rider com dinheiro infinito

    -

    Os requisitos para baixar o mod

    -

    Para baixar o mod de Traffic Rider Dinheiro Infinito APK, você precisa ter um dispositivo Android ou IOS com pelo menos 2 GB de memória RAM e 100 MB de espaço livre. Você também precisa ter uma conexão estável com a internet para fazer o download do arquivo do mod. Além disso, você precisa ter o jogo original instalado no seu dispositivo, pois o mod é uma extensão do jogo original.

    -

    Os passos para baixar e instalar o mod

    -

    Para baixar e instalar o mod de Traffic Rider Dinheiro Infinito APK, você precisa seguir os seguintes passos:

    -
      -
    1. Acesse um site confiável que ofereça o download do mod de Traffic Rider Dinheiro Infinito APK. Você pode pesquisar na internet por sites que tenham boas avaliações e comentários dos usuários.
    2. -
    3. Clique no botão de download do mod e aguarde até que o arquivo seja baixado no seu dispositivo.
    4. -
    5. Antes de instalar o mod, você precisa habilitar a opção de fontes desconhecidas no seu dispositivo. Isso permite que você instale aplicativos que não são da Google Play Store ou da App Store. Para fazer isso, vá em Configurações > Segurança > Fontes desconhecidas e ative a opção.
    6. -
    7. Agora, abra o arquivo do mod que você baixou e clique em instalar. Aguarde até que o processo seja concluído.
    8. -
    9. Pronto! Agora você pode abrir o jogo e aproveitar o mod de Traffic Rider Dinheiro Infinito APK.
    10. -
    -

    As precauções para evitar problemas com o mod

    -

    Para evitar problemas com o mod de Traffic Rider Dinheiro Infinito APK, você precisa tomar algumas precauções, como:

    -
      -
    • Baixe o mod apenas de sites confiáveis e seguros, pois alguns sites podem conter vírus ou malware que podem danificar o seu dispositivo ou roubar os seus dados.
    • -
    • Não atualize o jogo original depois de instalar o mod, pois isso pode fazer com que o mod pare de funcionar ou cause conflitos com o jogo original.
    • -
    • Não use o mod para jogar online ou competir com outros jogadores, pois isso pode ser considerado uma trapaça e pode resultar em banimento ou penalidades.
    • -
    • Faça um backup dos seus dados do jogo original antes de instalar o mod, pois isso pode evitar que você perca o seu progresso caso algo dê errado com o mod.
    • -
    -

    Como jogar Traffic Rider Dinheiro Infinito APK?

    -

    Depois de baixar e instalar o mod de Traffic Rider Dinheiro Infinito APK, você pode jogar o jogo normalmente, mas com algumas vantagens. Veja a seguir:

    -

    As opções de controle e configuração do jogo

    -

    Traffic Rider Dinheiro Infinito APK tem as mesmas opções de controle e configuração do jogo original. Você pode escolher entre dois tipos de controle: o acelerômetro, que usa o sensor de movimento do seu dispositivo para inclinar a moto, ou o toque, que usa botões virtuais na tela para controlar a moto. Você também pode ajustar a sensibilidade do controle, o volume do som, a qualidade dos gráficos, e outras opções nas configurações do jogo.

    -

    Os modos e missões disponíveis no jogo

    -

    Traffic Rider Dinheiro Infinito APK tem os mesmos modos e missões do jogo original. Você pode jogar o modo carreira, que tem mais de 40 missões com diferentes objetivos e dificuldades. Você também pode jogar os outros modos, como o modo livre, o modo sem fim, o modo contra o tempo, e o modo contra o tráfego. Cada modo tem suas próprias regras e desafios, que podem variar de acordo com a pista, a moto, o tráfego, e o clima.

    -

    As dicas e truques para se dar bem no jogo

    -

    Traffic Rider Dinheiro Infinito APK é um jogo divertido e emocionante, mas também requer habilidade e atenção para se dar bem. Aqui vão algumas dicas e truques para você se tornar um mestre das motos:

    -
      -
    • Aproveite o dinheiro infinito para comprar e melhorar as melhores motos do jogo. Quanto mais rápida e potente for a sua moto, mais fácil será ultrapassar os carros e completar as missões.
    • -
    • Use o freio com moderação e evite bater nos carros. Se você bater nos carros ou nas laterais da pista, você vai perder velocidade e tempo. Além disso, se você bater muito forte, você vai cair da moto e perder a missão.
    • -
    • Ultrapasse os carros pelo lado correto. Se você ultrapassar os carros pelo lado esquerdo, você vai ganhar mais pontos e dinheiro. Se você ultrapassar os carros pelo lado direito, você vai ganhar menos pontos e dinheiro.
    • -
    • Faça manobras arriscadas para aumentar o seu multiplicador de pontos. Se você dirigir em alta velocidade, ultrapassar os carros bem de perto, dirigir na contramão, ou fazer wheelies (levantar a roda dianteira da moto), você vai aumentar o seu multiplicador de pontos, que vai multiplicar os seus pontos e dinheiro ao final da missão.
    • -
    • Use o nitro para ganhar um impulso de velocidade. O nitro é um recurso que permite que você acelere mais rápido por um curto período de tempo. Você pode usar o nitro quando quiser, mas ele se recarrega mais rápido quando você dirige em alta velocidade ou faz manobras arriscadas.
    • -
    -

    Conclusão

    -

    Traffic Rider é um dos melhores jogos de corrida de moto para Android e IOS, que oferece uma experiência realista e emocionante de pilotar uma moto em pistas movimentadas. Traffic Rider Dinheiro Infinito APK é um mod do jogo original que oferece dinheiro ilimitado para os jogadores, permitindo que eles comprem e melhorem todas as motos do jogo. Para baixar e instalar o mod, é preciso seguir alguns passos simples, mas também tomar algumas precauções para evitar problemas com o mod. Para jogar o mod, é preciso seguir as mesmas regras do jogo original, mas também aproveitar as vantagens do dinheiro infinito e seguir algumas dicas e truques para se dar bem no jogo.

    Se você é um fã de jogos de corrida de moto, você não pode deixar de baixar e instalar o mod de Traffic Rider Dinheiro Infinito APK, que vai tornar o seu jogo mais fácil e divertido. Você vai poder pilotar as motos mais incríveis do jogo, sem se preocupar com o dinheiro. Você também vai poder desfrutar de um jogo com gráficos e sons realistas, vários modos e missões, e uma jogabilidade viciante. Não perca tempo e baixe já o mod de Traffic Rider Dinheiro Infinito APK!

    -

    Perguntas Frequentes

    -

    Aqui estão algumas perguntas frequentes sobre o mod de Traffic Rider Dinheiro Infinito APK:

    -

    O mod de Traffic Rider Dinheiro Infinito APK é seguro?

    -

    Sim, o mod de Traffic Rider Dinheiro Infinito APK é seguro, desde que você baixe o arquivo do mod de um site confiável e seguro, que não contenha vírus ou malware. Você também deve tomar as precauções necessárias para evitar problemas com o mod, como não atualizar o jogo original, não usar o mod para jogar online, e fazer um backup dos seus dados do jogo original.

    -

    O mod de Traffic Rider Dinheiro Infinito APK é legal?

    -

    Não, o mod de Traffic Rider Dinheiro Infinito APK não é legal, pois ele viola os termos e condições do jogo original. O mod é uma modificação feita por terceiros que altera alguns aspectos do jogo, como recursos, gráficos, sons, etc. Isso pode ser considerado uma trapaça ou uma pirataria, que pode resultar em banimento ou penalidades. Portanto, use o mod por sua própria conta e risco.

    -

    O mod de Traffic Rider Dinheiro Infinito APK precisa de root ou jailbreak?

    -

    Não, o mod de Traffic Rider Dinheiro Infinito APK não precisa de root ou jailbreak para funcionar. O root ou jailbreak são processos que permitem que você tenha acesso total ao sistema operacional do seu dispositivo, podendo modificar ou remover aplicativos e arquivos. No entanto, esses processos podem danificar o seu dispositivo ou invalidar a sua garantia. Por isso, o mod de Traffic Rider Dinheiro Infinito APK não requer root ou jailbreak para funcionar.

    -

    O mod de Traffic Rider Dinheiro Infinito APK funciona em todos os dispositivos?

    -

    Sim, o mod de Traffic Rider Dinheiro Infinito APK funciona em todos os dispositivos que tenham o sistema operacional Android ou IOS. No entanto, você precisa ter um dispositivo com pelo menos 2 GB de memória RAM e 100 MB de espaço livre para baixar e instalar o mod. Você também precisa ter uma conexão estável com a internet para fazer o download do arquivo do mod.

    -

    Onde posso baixar o mod de Traffic Rider Dinheiro Infinito APK?

    -

    Você pode baixar o mod de Traffic Rider Dinheiro Infinito APK em vários sites da internet que oferecem o download do arquivo do mod. No entanto, você deve escolher um site confiável e seguro, que não contenha vírus ou malware. Você pode pesquisar na internet por sites que tenham boas avaliações e comentários dos usuários.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Score! Hero 2 APK and Enjoy the New Infinite Hero Mode.md b/spaces/fatiXbelha/sd/Download Score! Hero 2 APK and Enjoy the New Infinite Hero Mode.md deleted file mode 100644 index 9e8b0cd81fee34e020b329ab74aad84f594dc923..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Score! Hero 2 APK and Enjoy the New Infinite Hero Mode.md +++ /dev/null @@ -1,74 +0,0 @@ - - - -
    -

    Score! Hero 2 Review: A Fun but Challenging Soccer Game

    -

    Introduction

    -

    If you are a fan of soccer games on mobile devices, you might have heard of Score! Hero, a popular game that lets you control a single player's career from his humble beginnings to his rise to fame. Now, there is a sequel to this game, called Score! Hero 2, which promises to offer more fun and excitement for soccer lovers.

    -

    Score! Hero 2 is a free-to-play soccer simulation game that is available for Android and iOS devices. The game features over 90 licensed teams from around the world, realistic graphics and animations, and a new endless hero mode where you can test your skills in unlimited scenarios. The game also allows you to customize your hero's appearance and sync your progress across devices via Facebook.

    -

    score hero 2 apk download uptodown


    Downloadhttps://urllie.com/2uNIja



    -

    But is Score! Hero 2 worth playing? How does it compare to its predecessor? And what are some of the pros and cons of this game? In this review, we will try to answer these questions and more. We will look at the game's gameplay, graphics, sound, difficulty, replay value, and more. We will also provide some tips and tricks for playing the game better.

    Gameplay -

    The gameplay of Score! Hero 2 is similar to that of Score! Hero, but with some new additions and improvements. The game consists of levels where you have to score goals or assist your teammates, using simple swipe controls to draw paths for the ball. You can also curve the ball, chip it over the defenders, or lob it to your teammates. The game gives you a limited number of moves per level, and you have to complete the level's objectives within those moves. If you fail, you can either restart the level or use a rewind, which costs money or watching an ad.

    -

    The game's gameplay is fun and engaging, as it lets you control your own hero and his destiny. You can choose which team to play for, which position to play in, and how to score or assist. The game also has a realistic physics engine that makes the ball's movement and interaction with the players and the environment believable and satisfying. The game also has a variety of scenarios and challenges that keep you on your toes and test your skills.

    -

    However, the game's gameplay also has some flaws and difficulties that might frustrate some players. For one thing, the game's AI players are not very consistent, and they can move either too slow or too fast, making it hard to pass or shoot accurately. Sometimes, they can also make illogical decisions or ignore your commands. Another thing is that the game requires a lot of precision and patience, as you will have to repeat the entire level if you make a mistake. This can be annoying and tedious, especially if the level is long or complex. The game can also be quite hard to earn money for rewinds, which allow you to undo your last move. You will either have to watch ads, complete offers, or buy them with real money.

    -

    Score! Hero 2 is similar to Score! Hero in terms of gameplay, but it also has some new additions and improvements. For example, the game now has real-life teams and leagues, such as the Premier League, La Liga, Bundesliga, Serie A, and more. This adds variety and authenticity to the game, as you can play with or against your favorite teams and players. You can also customize your hero's appearance, such as his hair style, skin color, facial features, and clothing. You can also sync your progress across devices via Facebook, so you don't lose your data.

    Graphics and Sound

    -

    The graphics and sound of Score! Hero 2 are impressive and immersive, as they create a realistic and thrilling soccer experience. The game's graphics and animations are detailed and smooth, and they capture the emotions and expressions of the players and the crowd. The game also has different weather effects, such as rain, snow, fog, and wind, that affect the gameplay and the visuals. The game also has different camera angles and slow-motion replays that enhance the drama and excitement of the game.

    -

    Score! Hero 2 free mobile soccer game
    -Score! Hero 2023 new story and career mode
    -Score! Hero 2 APK download for Android
    -Score! Hero 2 APKCombo free download
    -Score! Hero 2 latest version and update
    -Score! Hero 2 official licensed teams and leagues
    -Score! Hero 2 amazing goals and passes
    -Score! Hero 2 infinite hero mode and customization
    -Score! Hero 2 Softonic review and rating
    -Score! Hero 2 install from Google Play
    -Score! Hero 2 alternatives and similar games
    -Score! Hero 2 gameplay and features
    -Score! Hero 2 tips and tricks
    -Score! Hero 2 mod apk unlimited money
    -Score! Hero 2 hack apk no root
    -Score! Hero 2 cheats and codes
    -Score! Hero 2 online and offline mode
    -Score! Hero 2 multiplayer and social mode
    -Score! Hero 2 best teams and players
    -Score! Hero 2 graphics and animation
    -Score! Hero 2 performance and optimization
    -Score! Hero 2 bugs and issues
    -Score! Hero 2 support and feedback
    -Score! Hero 2 Arlo White commentary
    -Score! Hero 2 First Touch Games developer
    -Score! Hero 2023 vs Score! Hero comparison
    -Score! Hero 2023 for iPhone and iPad
    -Score! Hero 2023 for Windows PC and Mac
    -Score! Hero 2023 for other platforms and devices
    -Score! Hero 2023 in other languages and regions

    -

    The game's sound effects and commentaries are also well-done and atmospheric, as they create a sense of being in a real soccer match. The game features the voice of Arlo White, a top soccer commentator who provides insightful and lively commentary on the game's events. The game also has realistic sound effects of the ball, the players, the crowd, and the referee. The game also has a catchy and upbeat soundtrack that matches the mood and tempo of the game.

    -

    Difficulty and Replay Value

    -

    The difficulty and replay value of Score! Hero 2 are high and challenging, as they offer a lot of fun and satisfaction for soccer fans. The game's difficulty level is not easy, as you will have to master the swipe controls and the ball physics to score or assist. You will also have to deal with the game's AI players, who can be unpredictable and frustrating. You will also have to complete the level's objectives within a limited number of moves, which can be hard to achieve. The game also has a new scoring system that rewards you for completing levels with style and flair. You can earn up to three stars per level, depending on how well you perform. You can also earn bonus points for scoring spectacular goals, such as volleys, headers, free kicks, or long shots.

    -

    The game's replay value is also high, as you can play the game over and over again to improve your skills and your score. The game has over 90 levels to complete, each with different scenarios and challenges. You can also play the game's endless hero mode, where you can create your own levels and play them in an infinite loop. You can also compete with your friends and other players online via leaderboards and achievements.

    Conclusion

    -

    Score! Hero 2 is a fun and engaging soccer game that lets you control your own hero and his destiny. The game has realistic graphics and sound, licensed teams and leagues, and a new endless hero mode. The game also allows you to customize your hero's appearance and sync your progress across devices. However, the game also has some flaws and difficulties that might turn off some players. The game's AI players are not very consistent, the game requires a lot of precision and patience, and the game can be hard to earn money for rewinds. The game is also similar to its predecessor, Score! Hero, in terms of gameplay, but with some new additions and improvements.

    -

    Overall, Score! Hero 2 is a quality and appealing soccer game that will appeal to soccer fans and casual gamers alike. The game is free to download and play, but it also offers in-app purchases for extra features and benefits. If you are looking for a soccer game that lets you control your own hero, Score! Hero 2 might be the game for you. But be prepared to face some challenges and frustrations along the way.

    -

    Here are some tips and tricks for playing Score! Hero 2 better:

    -
      -
    • Practice your swipe controls and learn how to curve, chip, or lob the ball.
    • -
    • Pay attention to the level's objectives and try to complete them within the given moves.
    • -
    • Use rewinds wisely and only when necessary. You can earn more rewinds by watching ads, completing offers, or buying them with real money.
    • -
    • Try to score or assist with style and flair to earn more stars and bonus points.
    • -
    • Customize your hero's appearance and choose a team that suits your preferences.
    • -
    -

    FAQs

    -
      -
    1. What is Score! Hero 2?
    2. -

      Score! Hero 2 is a free-to-play soccer simulation game that is available for Android and iOS devices. The game lets you control a single player's career from his beginnings as a young prospect to his rise to stardom in the world of soccer.

      -
    3. How do I play Score! Hero 2?
    4. -

      You play Score! Hero 2 by completing levels where you have to score goals or assist your teammates, using simple swipe controls to draw paths for the ball. You can also curve the ball, chip it over the defenders, or lob it to your teammates. You have a limited number of moves per level, and you have to complete the level's objectives within those moves.

      -
    5. What are the features of Score! Hero 2?
    6. -

      Score! Hero 2 features over 90 licensed teams from around the world, realistic graphics and animations, and a new endless hero mode where you can test your skills in unlimited scenarios. The game also allows you to customize your hero's appearance and sync your progress across devices via Facebook.

      -
    7. What are the pros and cons of Score! Hero 2?
    8. -

      The pros of Score! Hero 2 are that it is fun and engaging, it has realistic graphics and sound, it has licensed teams and leagues, and it has a new endless hero mode. The cons of Score! Hero 2 are that it has inconsistent AI players, it requires a lot of precision and patience, and it can be hard to earn money for rewinds.

      -
    9. How does Score! Hero 2 compare to Score! Hero?
    10. -

      Score! Hero 2 is similar to Score! Hero in terms of gameplay, but it also has some new additions and improvements. For example, the game now has real-life teams and leagues, as well as commentaries from Arlo White, a top soccer commentator. You can also customize your hero's appearance and sync your progress across devices via Facebook. The game also has a new scoring system that rewards you for completing levels with style and flair.

      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/models.py b/spaces/fb700/chatglm-fitness-RLHF/models.py deleted file mode 100644 index 46b8aacb1bef18f6fad4c20c968b19125626799c..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/models.py +++ /dev/null @@ -1,351 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - use_spk, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.use_spk = use_spk - - self.enc_p = Encoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if not self.use_spk: - self.enc_spk = SpeakerEncoder(model_hidden_size=gin_channels, model_embedding_size=gin_channels) - - def forward(self, c, spec, g=None, mel=None, c_lengths=None, spec_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - if spec_lengths == None: - spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device) - - if not self.use_spk: - g = self.enc_spk(mel.transpose(1,2)) - g = g.unsqueeze(-1) - - _, m_p, logs_p, _ = self.enc_p(c, c_lengths) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - z_p = self.flow(z, spec_mask, g=g) - - z_slice, ids_slice = commons.rand_slice_segments(z, spec_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, c, g=None, mel=None, c_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - if not self.use_spk: - g = self.enc_spk.embed_utterance(mel.transpose(1,2)) - g = g.unsqueeze(-1) - - z_p, m_p, logs_p, c_mask = self.enc_p(c, c_lengths) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g) - - return o diff --git a/spaces/fclong/summary/fengshen/examples/classification/finetune_classification_zen1-base_tnews.sh b/spaces/fclong/summary/fengshen/examples/classification/finetune_classification_zen1-base_tnews.sh deleted file mode 100644 index eaa50ddac4376c8e86000852da138d0d4779126d..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/classification/finetune_classification_zen1-base_tnews.sh +++ /dev/null @@ -1,150 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=afqmc-bart-base # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=2 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:2 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id) - -export CUDA_VISIBLE_DEVICES='5' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=fengshen-zen1 - -TASK=tnews -TEXTA_NAME=sentence -LABEL_NAME=label -ID_NAME=id - - -BATCH_SIZE=8 -VAL_BATCH_SIZE=32 -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/classification_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -DATA_DIR=/cognitive_comp/yangping/data/ChineseCLUE_DATA/${TASK}_public/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/ZEN_pretrain_base_v0.1.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - - -config_json="${ROOT_DIR}/ds_config.json" -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -# reduce_bucket_size: hidden_size*hidden_size -# stage3_prefetch_bucket_size: 0.9 * hidden_size * hidden_size -# stage3_param_persistence_threshold: 10 * hidden_size - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": $BATCH_SIZE, - "steps_per_print": 100, - "gradient_clipping": 0.1, - "zero_optimization": { - "stage": ${ZERO_STAGE} - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 2e-5, - "eps": 1e-12, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "type": "WarmupLR", - "params":{ - "warmup_min_lr": 2e-8, - "warmup_max_lr": 2e-5, - "warmup_num_steps": 400, - "warmup_type": "linear" - } - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json - - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.json \ - --valid_data dev.json \ - --test_data test1.1.json \ - --train_batchsize $BATCH_SIZE \ - --valid_batchsize $VAL_BATCH_SIZE \ - --max_length 128 \ - --texta_name $TEXTA_NAME \ - --label_name $LABEL_NAME \ - --id_name $ID_NAME \ - " - -MODEL_ARGS="\ - --learning_rate 1e-5 \ - --weight_decay 1e-2 \ - --warmup 0.01 \ - --num_labels 15 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_acc \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 200 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_acc:.4f} \ - " - - -TRAINER_ARGS="\ - --max_epochs 7 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy $STRATEGY \ - --gradient_clip_val 1.0 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 1.0 \ - --default_root_dir $ROOT_DIR \ - " - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --output_save_path $OUTPUT_PATH \ - --model_type $MODEL_NAME \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " - -SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/classification/finetune_classification.py - -# python3 $SCRIPT_PATH $options -source activate base -singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Badminton League Versi Lama APK Game Bulutangkis Seru dan Realistis.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Badminton League Versi Lama APK Game Bulutangkis Seru dan Realistis.md deleted file mode 100644 index adcb43b2452fc72fdf6dadffe87ca2b449c30c78..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Badminton League Versi Lama APK Game Bulutangkis Seru dan Realistis.md +++ /dev/null @@ -1,119 +0,0 @@ - -

    How to Download Badminton League Versi Lama

    -

    Badminton League is one of the most competitive and realistic badminton games ever. You can create your own character, customize your outfit, and level up your skills to smash, hit, and jump like a badminton star. You can also play with your friends in local mode, or challenge players from around the world in tournament mode.

    -

    download badminton league versi lama


    Download Zip 🌟 https://gohhs.com/2uPu2A



    -

    But what if you want to play an older version of the game, such as Badminton League Versi Lama? This is a modified version of the game that has some advantages over the official one, such as unlimited coins, unlocked outfits, and more. In this article, we will show you how to download and install Badminton League Versi Lama on your Android device, as well as some tips and tricks to help you win more matches.

    -

    What is Badminton League Versi Lama?

    -

    Badminton League Versi Lama is a modified version of the original Badminton League game that was released by RedFish Games. It has some features that are not available in the official game, such as:

    -
      -
    • Unlimited coins that you can use to buy new gear and upgrade your character.
    • -
    • Unlocked outfits that you can wear to customize your appearance.
    • -
    • No ads that interrupt your gameplay.
    • -
    • No need to connect to the internet to play.
    • -
    -

    However, there are also some drawbacks of playing Badminton League Versi Lama, such as:

    -
      -
    • Possible security risks from downloading an unofficial APK file.
    • -
    • Possible compatibility issues with newer Android devices and updates.
    • -
    • No support or updates from the developer.
    • -
    • No access to online features such as multiplayer mode and leaderboards.
    • -
    -

    Why Download Badminton League Versi Lama?

    -

    If you are a fan of badminton games, you might want to download Badminton League Versi Lama for several reasons, such as:

    -
      -
    • You want to enjoy the game without spending any money on coins or outfits.
    • -
    • You want to try out different outfits and styles that are not available in the official game.
    • -
    • You want to play offline without worrying about internet connection or data usage.
    • -
    • You want to experience the game as it was before it was updated or changed by the developer.
    • -
    -

    How to Download Badminton League Versi Lama?

    -

    If you have decided to download Badminton League Versi Lama, you will need to follow these steps:

    -

    Download Bulutangkis Liga MOD APK Versi Terbaru dan Lama
    -Badminton League MOD APK Unlimited Money and Gems
    -Cara Download Badminton League Versi Lama di Android
    -Game Bulutangkis Offline Terbaik 2023
    -Badminton League 5.51.5081.0 APK Download for Android
    -How to Install Badminton League MOD APK on PC
    -Badminton League Versi Lama Mod Apk No Root
    -Tips and Tricks to Win Badminton League Tournaments
    -Download Game Badminton League 3D Offline
    -Review Game Badminton League Versi Lama dan Baru
    -Download Badminton League Hack Apk Latest Version
    -Badminton League Mod Apk Unlock All Rackets and Costumes
    -Download Badminton League Versi Lama Tanpa Iklan
    -Game Badminton League Versi Lama vs Versi Baru
    -Download Badminton League Mod Apk Anti Banned
    -Badminton League Gameplay and Features
    -Download Badminton League Versi Lama Gratis
    -Badminton League Mod Apk Free Shopping and Upgrade
    -Download Badminton League Versi Lama Full Version
    -Game Badminton League Versi Lama Online atau Offline
    -Download Badminton League Mod Apk Unlimited Coins and Diamonds
    -Badminton League Mod Apk Support Multiplayer Mode
    -Download Badminton League Versi Lama APKPure
    -Game Badminton League Versi Lama HD Graphics
    -Download Badminton League Mod Apk High Damage and Speed
    -Badminton League Mod Apk Easy Win and Level Up
    -Download Badminton League Versi Lama Uptodown
    -Game Badminton League Versi Lama Realistic Physics
    -Download Badminton League Mod Apk All Characters Unlocked
    -Badminton League Mod Apk No Ads and No Lag
    -Download Badminton League Versi Lama APKMirror
    -Game Badminton League Versi Lama Fun and Addictive
    -Download Badminton League Mod Apk Unlimited Stamina and Energy
    -Badminton League Mod Apk Customizable Characters and Rackets
    -Download Badminton League Versi Lama MediaFire
    -Game Badminton Liga Versi Lama Bahasa Indonesia
    -Download Badminton League Mod Apk Unlimited Skills and Power-ups
    -Badminton League Mod Apk Challenge Your Friends and Players Worldwide
    -Download Badminton League Versi Lama Mega.nz
    -Game Badminton Liga Versi Lama Dua Dimensi dan Kontrol Intuitif

    -

    Step 1: Find a Reliable Source

    -

    The first thing you need to do is find a reliable source where you can download the APK file of Badminton League Versi Lama. An APK file is an Android application package that contains all the files and data needed to install an app on your device. However, not all APK files are safe or trustworthy, so you need to be careful where you get them from.

    -

    One way to find a reliable source is to use a search engine such as Bing. You can type in keywords such as "badminton league versi lama apk" or "badminton league mod apk" and look for results that have positive reviews and ratings from other users. You

    You can also use a trusted website that offers APK files for various apps and games, such as APKPure, APKMirror, or APKMonk. These websites usually have a large collection of APK files that are verified and updated regularly. You can browse through their categories or search for the app you want to download.

    -

    Step 2: Download the APK File

    -

    Once you have found a reliable source, you can download the APK file of Badminton League Versi Lama to your device. To do this, you need to follow these steps:

    -
      -
    1. Tap on the download link or button on the website.
    2. -
    3. Wait for the download to start and finish.
    4. -
    5. Check the download folder on your device to find the APK file.
    6. -
    -

    Note: You might need to enable the option to install apps from unknown sources on your device settings. This will allow you to install apps that are not from the official Google Play Store. To do this, you need to follow these steps:

    -
      -
    1. Go to your device settings and look for security or privacy options.
    2. -
    3. Find the option to allow installation of apps from unknown sources and toggle it on.
    4. -
    5. Confirm your choice if prompted.
    6. -
    -

    Step 3: Install the APK File

    -

    After you have downloaded the APK file, you can install it on your device. To do this, you need to follow these steps:

    -
      -
    1. Tap on the APK file in your download folder or file manager.
    2. -
    3. Tap on install and wait for the installation to complete.
    4. -
    5. Tap on open or launch to start the app.
    6. -
    -

    Step 4: Enjoy the Game

    -

    Congratulations! You have successfully installed Badminton League Versi Lama on your device. You can now enjoy playing the game with unlimited coins, unlocked outfits, and no ads. You can also play offline without needing an internet connection. Have fun smashing, hitting, and jumping like a badminton star!

    -

    Tips and Tricks for Playing Badminton League Versi Lama

    -

    If you want to improve your skills and win more matches in Badminton League Versi Lama, here are some tips and tricks that you can use:

    -

    Tip 1: Keep It Simple

    -

    One of the most important things to remember when playing badminton is to keep it simple. Don't try to do fancy shots or moves that are beyond your skill level. Instead, focus on the basics such as timing, positioning, and accuracy. Try to hit the shuttlecock as close to the net as possible, and aim for the corners or edges of the court. Avoid hitting the shuttlecock too high or too low, as this will give your opponent an easy chance to smash or drop it back.

    -

    Tip 2: Upgrade Your Character

    -

    Another way to improve your performance in Badminton League Versi Lama is to upgrade your character. You can use the coins that you earn from playing matches or get from Badminton League Versi Lama mod apk to buy new gear and level up your skills. You can choose from different categories such as racket, shoes, clothes, hair, and face. Each category has different effects on your character's attributes such as power, speed, stamina, and agility. You can also upgrade your skills such as smash, hit, jump, serve, and net play. The higher your skills are, the more powerful and accurate your shots will be.

    -

    Tip 3: Get New Special Skills

    -

    Besides upgrading your character's skills, you can also get new special skills that will give you an edge over your opponents. These special skills are unlocked by completing certain achievements or tasks in the game. For example, you can unlock the fireball skill by winning 10 matches in a row, or the lightning skill by winning 20 matches in a row. These special skills will allow you to unleash powerful shots that will stun or damage your opponent's racket. However, you can only use them once per match, so use them wisely.

    -

    Conclusion

    -

    In conclusion, Badminton League Versi Lama is a fun and addictive badminton game that you can play on your Android device. It has some features that are not available in the official game, such as unlimited coins, unlocked outfits, and no ads. However, it also has some drawbacks such as possible security risks, compatibility issues, and no online features. If you want to download Badminton League Versi Lama mod apk, you need to find a reliable source, download the APK file, install it on your device, and enjoy the game.

    We hope that this article has helped you learn how to download Badminton League Versi Lama and enjoy playing it on your device. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Badminton League Versi Lama:

    -

    Q: Is Badminton League Versi Lama safe to download and install?

    -

    A: Badminton League Versi Lama is not an official app from the developer, so there is always a risk of downloading and installing an unofficial APK file. However, if you follow the steps in this article and use a reliable source, you can minimize the risk of getting malware or viruses on your device. You should also scan the APK file with an antivirus app before installing it.

    -

    Q: Is Badminton League Versi Lama compatible with my device?

    -

    A: Badminton League Versi Lama is designed for Android devices that run on Android 4.0.3 or higher. However, since it is a modified version of the original game, it may not work well on newer devices or updates. You may experience some glitches, crashes, or errors while playing the game. If this happens, you can try to uninstall and reinstall the app, or look for a newer version of the mod apk.

    -

    Q: Can I play Badminton League Versi Lama online with other players?

    -

    A: No, you cannot play Badminton League Versi Lama online with other players. The mod apk does not support online features such as multiplayer mode and leaderboards. You can only play offline with your friends in local mode, or challenge the AI in tournament mode.

    -

    Q: Can I update Badminton League Versi Lama to the latest version?

    -

    A: No, you cannot update Badminton League Versi Lama to the latest version. The mod apk is based on an older version of the game, so it will not receive any updates or patches from the developer. If you want to play the latest version of the game, you will need to download and install the official app from the Google Play Store.

    -

    Q: Can I transfer my progress and data from Badminton League Versi Lama to the official game?

    -

    A: No, you cannot transfer your progress and data from Badminton League Versi Lama to the official game. The mod apk has a different data structure and format than the official game, so they are not compatible with each other. You will need to start from scratch if you switch to the official game.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Messenger Today and Join the Millions of Users on Google Play.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Messenger Today and Join the Millions of Users on Google Play.md deleted file mode 100644 index 9edf96977dc0ff1b62b067f01d4f27e74885b5e8..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Messenger Today and Join the Millions of Users on Google Play.md +++ /dev/null @@ -1,118 +0,0 @@ - -

    How to Download Messenger from Google Play

    -

    Messenger is a free all-in-one communication app that allows you to send text, voice, and video messages, make voice and video calls, share files and photos, watch videos together, and chat with businesses. It is one of the most popular messaging apps in the world, with over 5 billion downloads on Google Play. If you want to stay in touch with your friends and family across different apps and devices, Messenger is a great choice for you.

    -

    In this article, we will show you how to download Messenger from Google Play and what features and benefits it offers. We will also compare it with some of the best alternatives to Messenger that you can try if you are looking for more privacy, security, or functionality.

    -

    download messenger google play


    Download Zip ✔✔✔ https://gohhs.com/2uPtNJ



    -

    How to Download Messenger from Google Play

    -

    Downloading Messenger from Google Play is very easy and fast. Just follow these simple steps:

    -
      -
    1. Open the Google Play app on your Android device.
    2. -
    3. Search for "Messenger" in the search bar.
    4. -
    5. Tap on the app icon that says "Messenger" by Meta Platforms, Inc.
    6. -
    7. Tap on the green "Install" button.
    8. -
    9. Wait for the app to download and install on your device.
    10. -
    11. Open the app and sign in with your Facebook account or phone number.
    12. -
    -

    Congratulations! You have successfully downloaded Messenger from Google Play. Now you can enjoy all the features and benefits that it offers.

    -

    Messenger Features and Benefits

    -

    Messenger has a lot of features and benefits that make it a great communication app. Here are some of them:

    -

    download messenger google play store
    -download messenger google play app
    -download messenger google play for pc
    -download messenger google play apk
    -download messenger google play on laptop
    -download messenger google play for android
    -download messenger google play for windows 10
    -download messenger google play for mac
    -download messenger google play latest version
    -download messenger google play update
    -how to download messenger google play
    -how to download messenger google play on iphone
    -how to download messenger google play on ipad
    -how to download messenger google play on chromebook
    -how to download messenger google play on fire tablet
    -why can't i download messenger google play
    -why won't messenger download google play
    -why is messenger not downloading google play
    -why does messenger take so long to download google play
    -why does messenger need to update google play services
    -download facebook messenger google play
    -download whatsapp messenger google play
    -download telegram messenger google play
    -download signal messenger google play
    -download viber messenger google play
    -download facebook messenger lite google play
    -download whatsapp messenger business google play
    -download telegram x messenger google play
    -download signal private messenger google play
    -download viber lite chat and video call google play
    -free download messenger google play
    -free download facebook messenger google play
    -free download whatsapp messenger google play
    -free download telegram messenger google play
    -free download signal messenger google play
    -free download viber messenger google play
    -free download facebook lite and facebook messenger lite from the official site of the developer or from the official page in the Google Play Store.
    -free download whatsapp business and whatsapp business api from the official site of the developer or from the official page in the Google Play Store.
    -free download telegram x and telegram desktop from the official site of the developer or from the official page in the Google Play Store.
    -free download signal private and signal desktop from the official site of the developer or from the official page in the Google Play Store.

    -

    Cross-app Messaging and Calling

    -

    You can connect with your Instagram friends right from Messenger. Simply search for them by name or username to message or call them. You can also chat with your Facebook friends without switching apps.

    -

    Privacy Settings and Custom Reactions

    -

    You can choose who can reach you and where your messages are delivered. You can also block or mute unwanted contacts or conversations. You can customize your reactions with lots more emojis to choose from, such as ? or ?.

    -

    Chat Themes and Watch Together

    -

    You can choose from fun themes and colors, such as Tie-Dye or Love, to make your chats more personal. You can also watch videos, TV shows, and movies with your friends over Messenger Video Chat and Rooms when you can't be together.

    -

    Free Video Calls and Text Messages

    -

    You can keep your friends and family close with unlimited live video chatting. You can host group video calls with up to 8 people, with high-quality audio, high definition video, and interactive video features like face filters. You can also skip exchanging phone numbers and simply send a message to your Facebook friends, even if they are across the world.

    -

    Dark Mode and Voice and Video Messages

    -

    You can give your eyes some rest with a sleek new look that darkens the colors of the chat interface. You can also record and send voice and video messages when text just won't cut it.

    -

    Stickers, GIFs, and Emojis

    -

    You can express yourself with custom stickers, GIFs, and emojis. You can even add effects and filters to video calls.

    -

    File Sharing and Group Chats

    -

    You can share any number of files, photos, and videos with your friends. You can also create group chats to stay in touch with your friends and family.

    -

    Money Transfer and Business Chat

    -

    You

    You can send and receive money securely and easily with Facebook Pay, right in the app. You can also chat with businesses to get customer service, find deals, and more.

    -

    Messenger Alternatives and Comparison

    -

    While Messenger is a great app, it is not the only one. There are many other messaging apps that you can try if you are looking for more privacy, security, or functionality. Here are some of the best alternatives to Messenger and how they compare:

    - - - - - - - - - - - - - - - - - - - - - - - - - - -
    AppProsCons
    Signal- Open source and end-to-end encrypted
    - No ads or trackers
    - Supports voice and video calls, group chats, stickers, and disappearing messages
    - Allows you to blur faces and lock the app with a passcode or biometric authentication
    - Less popular than Messenger
    - Fewer features and customization options
    - No file sharing or money transfer
    Telegram- Fast and cloud-based
    - Supports voice and video calls, group chats, bots, channels, stickers, GIFs, and polls
    - Allows you to edit and delete messages, create secret chats, and use multiple accounts
    - Has a desktop version and a web version
    - Not fully end-to-end encrypted by default
    - No video chat or watch together feature
    - No money transfer or business chat
    Friendly- Battery-saving and ad-free
    - Supports multiple accounts for Facebook, Instagram, Twitter, and more
    - Allows you to download videos and photos from Facebook and Instagram
    - Has a night mode and a fingerprint lock
    - Not a standalone messaging app
    - Requires Messenger to be installed for some features
    - No cross-app messaging or calling feature
    Metal- Lightweight and efficient
    - Supports Facebook and Twitter in one app
    - Allows you to access Facebook messages without Messenger
    - Has a dark mode and a material design
    - Not a standalone messaging app
    - Requires Messenger to be installed for some features
    - No cross-app messaging or calling feature
    -

    Conclusion

    -

    Messenger is a free all-in-one communication app that allows you to send text, voice, and video messages, make voice and video calls, share files and photos, watch videos together, and chat with businesses. It is one of the most popular messaging apps in the world, with over 5 billion downloads on Google Play. You can download it easily from Google Play and enjoy all the features and benefits that it offers.

    -

    However, if you are looking for more privacy, security, or functionality, you can also try some of the best alternatives to Messenger, such as Signal, Telegram, Friendly, or Metal. Each of them has its own pros and cons that you should consider before choosing the best app for you.

    -

    We hope this article has helped you learn how to download Messenger from Google Play and what features and benefits it offers. We also hope you have found some of the best alternatives to Messenger that you can try if you are looking for more privacy, security, or functionality. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Messenger and its alternatives:

    -
      -
    1. Is Messenger safe?
      Messenger is safe as long as you use it responsibly. You should always check your privacy settings, block or mute unwanted contacts or conversations, and avoid clicking on suspicious links or attachments. You should also use a strong password and enable two-factor authentication for your Facebook account.
    2. -
    3. How do I update Messenger?
      You can update Messenger from Google Play by following these steps:
      - Open the Google Play app on your Android device.
      - Tap on the menu icon (three horizontal lines) on the top left corner.
      - Tap on "My apps & games".
      - Find "Messenger" in the list of apps.
      - Tap on the "Update" button.
      - Wait for the app to update and install on your device.
    4. -
    5. How do I delete Messenger?
      You can delete Messenger from your Android device by following these steps:
      - Go to your device's settings.
      - Tap on "Apps" or "Application manager".
      - Find "Messenger" in the list of apps.
      - Tap on "Uninstall".
      - Confirm your action.
    6. -
    7. What is the difference between Messenger and WhatsApp?
      Messenger and WhatsApp are both messaging apps that allow you to send text, voice, and video messages, make voice and video calls, share files and photos, and chat with groups. However, there are some differences between them, such as:
      - Messenger is owned by Meta Platforms, Inc. (formerly Facebook), while WhatsApp is owned by Facebook, Inc.
      - Messenger requires a Facebook account or a phone number to sign up, while WhatsApp requires only a phone number.
      - Messenger has more features and customization options, such as cross-app messaging and calling, chat themes and watch together, stickers, GIFs, and emojis, money transfer and business chat, etc., while WhatsApp has fewer features and customization options.
      - WhatsApp is fully end-to-end encrypted by default, meaning that only the sender and the receiver can read the messages, while Messenger is not fully end-to-end encrypted by default, meaning that Meta Platforms, Inc. can access the messages unless you enable the secret conversations feature.
    8. -
    9. Which is the best alternative to Messenger?
      There is no definitive answer to this question, as different apps have different advantages and disadvantages. The best alternative to Messenger depends on your personal preferences and needs. However, some of the most popular and highly rated alternatives to Messenger are Signal, Telegram, Friendly, and Metal. You can try them out and see which one suits you best.
    10. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/Image-Caption-2-Shap-E/app_image_to_3d.py b/spaces/fffiloni/Image-Caption-2-Shap-E/app_image_to_3d.py deleted file mode 100644 index 4e6891f811f3bbbb3cb4a8837ff9f0f46c8921fb..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-Caption-2-Shap-E/app_image_to_3d.py +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env python - -import pathlib -import shlex -import subprocess - -import gradio as gr - -from model import Model -from settings import CACHE_EXAMPLES, MAX_SEED -from utils import randomize_seed_fn - - -def create_demo(model: Model) -> gr.Blocks: - if not pathlib.Path('corgi.png').exists(): - subprocess.run( - shlex.split( - 'wget https://raw.githubusercontent.com/openai/shap-e/d99cedaea18e0989e340163dbaeb4b109fa9e8ec/shap_e/examples/example_data/corgi.png -O corgi.png' - )) - examples = ['corgi.png'] - - def process_example_fn(image_path: str) -> str: - return model.run_image(image_path) - - with gr.Blocks() as demo: - with gr.Box(): - image = gr.Image(label='Input image', - show_label=False, - type='filepath') - run_button = gr.Button('Run') - result = gr.Model3D(label='Result', show_label=False) - with gr.Accordion('Advanced options', open=False): - seed = gr.Slider(label='Seed', - minimum=0, - maximum=MAX_SEED, - step=1, - value=0) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=1, - maximum=20, - step=0.1, - value=3.0) - num_inference_steps = gr.Slider( - label='Number of inference steps', - minimum=1, - maximum=100, - step=1, - value=64) - - gr.Examples(examples=examples, - inputs=image, - outputs=result, - fn=process_example_fn, - cache_examples=CACHE_EXAMPLES) - - inputs = [ - image, - seed, - guidance_scale, - num_inference_steps, - ] - - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=model.run_image, - inputs=inputs, - outputs=result, - api_name='image-to-3d', - ) - return demo diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/client-dist/socket.io.min.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/client-dist/socket.io.min.js deleted file mode 100644 index f65edd256e855441e2de1662bf6690335d551a1b..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/client-dist/socket.io.min.js +++ /dev/null @@ -1,7 +0,0 @@ -/*! - * Socket.IO v4.6.1 - * (c) 2014-2023 Guillermo Rauch - * Released under the MIT License. - */ -!function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e():"function"==typeof define&&define.amd?define(e):(t="undefined"!=typeof globalThis?globalThis:t||self).io=e()}(this,(function(){"use strict";function t(e){return t="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t},t(e)}function e(t,e){if(!(t instanceof e))throw new TypeError("Cannot call a class as a function")}function n(t,e){for(var n=0;nt.length)&&(e=t.length);for(var n=0,r=new Array(e);n=t.length?{done:!0}:{done:!1,value:t[r++]}},e:function(t){throw t},f:i}}throw new TypeError("Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}var o,s=!0,a=!1;return{s:function(){n=n.call(t)},n:function(){var t=n.next();return s=t.done,t},e:function(t){a=!0,o=t},f:function(){try{s||null==n.return||n.return()}finally{if(a)throw o}}}}var m=Object.create(null);m.open="0",m.close="1",m.ping="2",m.pong="3",m.message="4",m.upgrade="5",m.noop="6";var k=Object.create(null);Object.keys(m).forEach((function(t){k[m[t]]=t}));for(var b={type:"error",data:"parser error"},w="function"==typeof Blob||"undefined"!=typeof Blob&&"[object BlobConstructor]"===Object.prototype.toString.call(Blob),_="function"==typeof ArrayBuffer,E=function(t,e,n){var r,i=t.type,o=t.data;return w&&o instanceof Blob?e?n(o):O(o,n):_&&(o instanceof ArrayBuffer||(r=o,"function"==typeof ArrayBuffer.isView?ArrayBuffer.isView(r):r&&r.buffer instanceof ArrayBuffer))?e?n(o):O(new Blob([o]),n):n(m[i]+(o||""))},O=function(t,e){var n=new FileReader;return n.onload=function(){var t=n.result.split(",")[1];e("b"+t)},n.readAsDataURL(t)},A="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",R="undefined"==typeof Uint8Array?[]:new Uint8Array(256),T=0;T1?{type:k[n],data:t.substring(1)}:{type:k[n]}:b},S=function(t,e){if(C){var n=function(t){var e,n,r,i,o,s=.75*t.length,a=t.length,c=0;"="===t[t.length-1]&&(s--,"="===t[t.length-2]&&s--);var u=new ArrayBuffer(s),h=new Uint8Array(u);for(e=0;e>4,h[c++]=(15&r)<<4|i>>2,h[c++]=(3&i)<<6|63&o;return u}(t);return N(n,e)}return{base64:!0,data:t}},N=function(t,e){return"blob"===e&&t instanceof ArrayBuffer?new Blob([t]):t},x=String.fromCharCode(30);function L(t){if(t)return function(t){for(var e in L.prototype)t[e]=L.prototype[e];return t}(t)}L.prototype.on=L.prototype.addEventListener=function(t,e){return this._callbacks=this._callbacks||{},(this._callbacks["$"+t]=this._callbacks["$"+t]||[]).push(e),this},L.prototype.once=function(t,e){function n(){this.off(t,n),e.apply(this,arguments)}return n.fn=e,this.on(t,n),this},L.prototype.off=L.prototype.removeListener=L.prototype.removeAllListeners=L.prototype.removeEventListener=function(t,e){if(this._callbacks=this._callbacks||{},0==arguments.length)return this._callbacks={},this;var n,r=this._callbacks["$"+t];if(!r)return this;if(1==arguments.length)return delete this._callbacks["$"+t],this;for(var i=0;i1?e-1:0),r=1;r0);return e}function W(){var t=z(+new Date);return t!==F?(K=0,F=t):t+"."+z(K++)}for(;Y<64;Y++)H[V[Y]]=Y;function $(t){var e="";for(var n in t)t.hasOwnProperty(n)&&(e.length&&(e+="&"),e+=encodeURIComponent(n)+"="+encodeURIComponent(t[n]));return e}function J(t){for(var e={},n=t.split("&"),r=0,i=n.length;r0&&void 0!==arguments[0]?arguments[0]:{};return i(t,{xd:this.xd,xs:this.xs},this.opts),new nt(this.uri(),t)}},{key:"doWrite",value:function(t,e){var n=this,r=this.request({method:"POST",data:t});r.on("success",e),r.on("error",(function(t,e){n.onError("xhr post error",t,e)}))}},{key:"doPoll",value:function(){var t=this,e=this.request();e.on("data",this.onData.bind(this)),e.on("error",(function(e,n){t.onError("xhr poll error",e,n)})),this.pollXhr=e}}]),s}(U),nt=function(t){o(i,t);var n=p(i);function i(t,r){var o;return e(this,i),D(f(o=n.call(this)),r),o.opts=r,o.method=r.method||"GET",o.uri=t,o.async=!1!==r.async,o.data=void 0!==r.data?r.data:null,o.create(),o}return r(i,[{key:"create",value:function(){var t=this,e=j(this.opts,"agent","pfx","key","passphrase","cert","ca","ciphers","rejectUnauthorized","autoUnref");e.xdomain=!!this.opts.xd,e.xscheme=!!this.opts.xs;var n=this.xhr=new G(e);try{n.open(this.method,this.uri,this.async);try{if(this.opts.extraHeaders)for(var r in n.setDisableHeaderCheck&&n.setDisableHeaderCheck(!0),this.opts.extraHeaders)this.opts.extraHeaders.hasOwnProperty(r)&&n.setRequestHeader(r,this.opts.extraHeaders[r])}catch(t){}if("POST"===this.method)try{n.setRequestHeader("Content-type","text/plain;charset=UTF-8")}catch(t){}try{n.setRequestHeader("Accept","*/*")}catch(t){}"withCredentials"in n&&(n.withCredentials=this.opts.withCredentials),this.opts.requestTimeout&&(n.timeout=this.opts.requestTimeout),n.onreadystatechange=function(){4===n.readyState&&(200===n.status||1223===n.status?t.onLoad():t.setTimeoutFn((function(){t.onError("number"==typeof n.status?n.status:0)}),0))},n.send(this.data)}catch(e){return void this.setTimeoutFn((function(){t.onError(e)}),0)}"undefined"!=typeof document&&(this.index=i.requestsCount++,i.requests[this.index]=this)}},{key:"onError",value:function(t){this.emitReserved("error",t,this.xhr),this.cleanup(!0)}},{key:"cleanup",value:function(t){if(void 0!==this.xhr&&null!==this.xhr){if(this.xhr.onreadystatechange=Z,t)try{this.xhr.abort()}catch(t){}"undefined"!=typeof document&&delete i.requests[this.index],this.xhr=null}}},{key:"onLoad",value:function(){var t=this.xhr.responseText;null!==t&&(this.emitReserved("data",t),this.emitReserved("success"),this.cleanup())}},{key:"abort",value:function(){this.cleanup()}}]),i}(L);if(nt.requestsCount=0,nt.requests={},"undefined"!=typeof document)if("function"==typeof attachEvent)attachEvent("onunload",rt);else if("function"==typeof addEventListener){addEventListener("onpagehide"in P?"pagehide":"unload",rt,!1)}function rt(){for(var t in nt.requests)nt.requests.hasOwnProperty(t)&&nt.requests[t].abort()}var it="function"==typeof Promise&&"function"==typeof Promise.resolve?function(t){return Promise.resolve().then(t)}:function(t,e){return e(t,0)},ot=P.WebSocket||P.MozWebSocket,st="undefined"!=typeof navigator&&"string"==typeof navigator.product&&"reactnative"===navigator.product.toLowerCase(),at=function(t){o(i,t);var n=p(i);function i(t){var r;return e(this,i),(r=n.call(this,t)).supportsBinary=!t.forceBase64,r}return r(i,[{key:"name",get:function(){return"websocket"}},{key:"doOpen",value:function(){if(this.check()){var t=this.uri(),e=this.opts.protocols,n=st?{}:j(this.opts,"agent","perMessageDeflate","pfx","key","passphrase","cert","ca","ciphers","rejectUnauthorized","localAddress","protocolVersion","origin","maxPayload","family","checkServerIdentity");this.opts.extraHeaders&&(n.headers=this.opts.extraHeaders);try{this.ws=st?new ot(t,e,n):e?new ot(t,e):new ot(t)}catch(t){return this.emitReserved("error",t)}this.ws.binaryType=this.socket.binaryType||"arraybuffer",this.addEventListeners()}}},{key:"addEventListeners",value:function(){var t=this;this.ws.onopen=function(){t.opts.autoUnref&&t.ws._socket.unref(),t.onOpen()},this.ws.onclose=function(e){return t.onClose({description:"websocket connection closed",context:e})},this.ws.onmessage=function(e){return t.onData(e.data)},this.ws.onerror=function(e){return t.onError("websocket error",e)}}},{key:"write",value:function(t){var e=this;this.writable=!1;for(var n=function(n){var r=t[n],i=n===t.length-1;E(r,e.supportsBinary,(function(t){try{e.ws.send(t)}catch(t){}i&&it((function(){e.writable=!0,e.emitReserved("drain")}),e.setTimeoutFn)}))},r=0;r1&&void 0!==arguments[1]?arguments[1]:{};return e(this,a),(r=s.call(this)).writeBuffer=[],n&&"object"===t(n)&&(o=n,n=null),n?(n=ft(n),o.hostname=n.host,o.secure="https"===n.protocol||"wss"===n.protocol,o.port=n.port,n.query&&(o.query=n.query)):o.host&&(o.hostname=ft(o.host).host),D(f(r),o),r.secure=null!=o.secure?o.secure:"undefined"!=typeof location&&"https:"===location.protocol,o.hostname&&!o.port&&(o.port=r.secure?"443":"80"),r.hostname=o.hostname||("undefined"!=typeof location?location.hostname:"localhost"),r.port=o.port||("undefined"!=typeof location&&location.port?location.port:r.secure?"443":"80"),r.transports=o.transports||["polling","websocket"],r.writeBuffer=[],r.prevBufferLen=0,r.opts=i({path:"/engine.io",agent:!1,withCredentials:!1,upgrade:!0,timestampParam:"t",rememberUpgrade:!1,addTrailingSlash:!0,rejectUnauthorized:!0,perMessageDeflate:{threshold:1024},transportOptions:{},closeOnBeforeunload:!0},o),r.opts.path=r.opts.path.replace(/\/$/,"")+(r.opts.addTrailingSlash?"/":""),"string"==typeof r.opts.query&&(r.opts.query=J(r.opts.query)),r.id=null,r.upgrades=null,r.pingInterval=null,r.pingTimeout=null,r.pingTimeoutTimer=null,"function"==typeof addEventListener&&(r.opts.closeOnBeforeunload&&(r.beforeunloadEventListener=function(){r.transport&&(r.transport.removeAllListeners(),r.transport.close())},addEventListener("beforeunload",r.beforeunloadEventListener,!1)),"localhost"!==r.hostname&&(r.offlineEventListener=function(){r.onClose("transport close",{description:"network connection lost"})},addEventListener("offline",r.offlineEventListener,!1))),r.open(),r}return r(a,[{key:"createTransport",value:function(t){var e=i({},this.opts.query);e.EIO=4,e.transport=t,this.id&&(e.sid=this.id);var n=i({},this.opts.transportOptions[t],this.opts,{query:e,socket:this,hostname:this.hostname,secure:this.secure,port:this.port});return new ct[t](n)}},{key:"open",value:function(){var t,e=this;if(this.opts.rememberUpgrade&&a.priorWebsocketSuccess&&-1!==this.transports.indexOf("websocket"))t="websocket";else{if(0===this.transports.length)return void this.setTimeoutFn((function(){e.emitReserved("error","No transports available")}),0);t=this.transports[0]}this.readyState="opening";try{t=this.createTransport(t)}catch(t){return this.transports.shift(),void this.open()}t.open(),this.setTransport(t)}},{key:"setTransport",value:function(t){var e=this;this.transport&&this.transport.removeAllListeners(),this.transport=t,t.on("drain",this.onDrain.bind(this)).on("packet",this.onPacket.bind(this)).on("error",this.onError.bind(this)).on("close",(function(t){return e.onClose("transport close",t)}))}},{key:"probe",value:function(t){var e=this,n=this.createTransport(t),r=!1;a.priorWebsocketSuccess=!1;var i=function(){r||(n.send([{type:"ping",data:"probe"}]),n.once("packet",(function(t){if(!r)if("pong"===t.type&&"probe"===t.data){if(e.upgrading=!0,e.emitReserved("upgrading",n),!n)return;a.priorWebsocketSuccess="websocket"===n.name,e.transport.pause((function(){r||"closed"!==e.readyState&&(f(),e.setTransport(n),n.send([{type:"upgrade"}]),e.emitReserved("upgrade",n),n=null,e.upgrading=!1,e.flush())}))}else{var i=new Error("probe error");i.transport=n.name,e.emitReserved("upgradeError",i)}})))};function o(){r||(r=!0,f(),n.close(),n=null)}var s=function(t){var r=new Error("probe error: "+t);r.transport=n.name,o(),e.emitReserved("upgradeError",r)};function c(){s("transport closed")}function u(){s("socket closed")}function h(t){n&&t.name!==n.name&&o()}var f=function(){n.removeListener("open",i),n.removeListener("error",s),n.removeListener("close",c),e.off("close",u),e.off("upgrading",h)};n.once("open",i),n.once("error",s),n.once("close",c),this.once("close",u),this.once("upgrading",h),n.open()}},{key:"onOpen",value:function(){if(this.readyState="open",a.priorWebsocketSuccess="websocket"===this.transport.name,this.emitReserved("open"),this.flush(),"open"===this.readyState&&this.opts.upgrade)for(var t=0,e=this.upgrades.length;t1))return this.writeBuffer;for(var t,e=1,n=0;n=57344?n+=3:(r++,n+=4);return n}(t):Math.ceil(1.33*(t.byteLength||t.size))),n>0&&e>this.maxPayload)return this.writeBuffer.slice(0,n);e+=2}return this.writeBuffer}},{key:"write",value:function(t,e,n){return this.sendPacket("message",t,e,n),this}},{key:"send",value:function(t,e,n){return this.sendPacket("message",t,e,n),this}},{key:"sendPacket",value:function(t,e,n,r){if("function"==typeof e&&(r=e,e=void 0),"function"==typeof n&&(r=n,n=null),"closing"!==this.readyState&&"closed"!==this.readyState){(n=n||{}).compress=!1!==n.compress;var i={type:t,data:e,options:n};this.emitReserved("packetCreate",i),this.writeBuffer.push(i),r&&this.once("flush",r),this.flush()}}},{key:"close",value:function(){var t=this,e=function(){t.onClose("forced close"),t.transport.close()},n=function n(){t.off("upgrade",n),t.off("upgradeError",n),e()},r=function(){t.once("upgrade",n),t.once("upgradeError",n)};return"opening"!==this.readyState&&"open"!==this.readyState||(this.readyState="closing",this.writeBuffer.length?this.once("drain",(function(){t.upgrading?r():e()})):this.upgrading?r():e()),this}},{key:"onError",value:function(t){a.priorWebsocketSuccess=!1,this.emitReserved("error",t),this.onClose("transport error",t)}},{key:"onClose",value:function(t,e){"opening"!==this.readyState&&"open"!==this.readyState&&"closing"!==this.readyState||(this.clearTimeoutFn(this.pingTimeoutTimer),this.transport.removeAllListeners("close"),this.transport.close(),this.transport.removeAllListeners(),"function"==typeof removeEventListener&&(removeEventListener("beforeunload",this.beforeunloadEventListener,!1),removeEventListener("offline",this.offlineEventListener,!1)),this.readyState="closed",this.id=null,this.emitReserved("close",t,e),this.writeBuffer=[],this.prevBufferLen=0)}},{key:"filterUpgrades",value:function(t){for(var e=[],n=0,r=t.length;n=0&&e.num0;case Et.ACK:case Et.BINARY_ACK:return Array.isArray(n)}}}]),a}(L),Rt=function(){function t(n){e(this,t),this.packet=n,this.buffers=[],this.reconPack=n}return r(t,[{key:"takeBinaryData",value:function(t){if(this.buffers.push(t),this.buffers.length===this.reconPack.attachments){var e=wt(this.reconPack,this.buffers);return this.finishedReconstruction(),e}return null}},{key:"finishedReconstruction",value:function(){this.reconPack=null,this.buffers=[]}}]),t}(),Tt=Object.freeze({__proto__:null,protocol:5,get PacketType(){return Et},Encoder:Ot,Decoder:At});function Ct(t,e,n){return t.on(e,n),function(){t.off(e,n)}}var Bt=Object.freeze({connect:1,connect_error:1,disconnect:1,disconnecting:1,newListener:1,removeListener:1}),St=function(t){o(a,t);var n=p(a);function a(t,r,o){var s;return e(this,a),(s=n.call(this)).connected=!1,s.recovered=!1,s.receiveBuffer=[],s.sendBuffer=[],s._queue=[],s._queueSeq=0,s.ids=0,s.acks={},s.flags={},s.io=t,s.nsp=r,o&&o.auth&&(s.auth=o.auth),s._opts=i({},o),s.io._autoConnect&&s.open(),s}return r(a,[{key:"disconnected",get:function(){return!this.connected}},{key:"subEvents",value:function(){if(!this.subs){var t=this.io;this.subs=[Ct(t,"open",this.onopen.bind(this)),Ct(t,"packet",this.onpacket.bind(this)),Ct(t,"error",this.onerror.bind(this)),Ct(t,"close",this.onclose.bind(this))]}}},{key:"active",get:function(){return!!this.subs}},{key:"connect",value:function(){return this.connected||(this.subEvents(),this.io._reconnecting||this.io.open(),"open"===this.io._readyState&&this.onopen()),this}},{key:"open",value:function(){return this.connect()}},{key:"send",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n1?e-1:0),r=1;r1?n-1:0),i=1;in._opts.retries&&(n._queue.shift(),e&&e(t));else if(n._queue.shift(),e){for(var o=arguments.length,s=new Array(o>1?o-1:0),a=1;a0&&void 0!==arguments[0]&&arguments[0];if(this.connected&&0!==this._queue.length){var e=this._queue[0];e.pending&&!t||(e.pending=!0,e.tryCount++,this.flags=e.flags,this.emit.apply(this,e.args))}}},{key:"packet",value:function(t){t.nsp=this.nsp,this.io._packet(t)}},{key:"onopen",value:function(){var t=this;"function"==typeof this.auth?this.auth((function(e){t._sendConnectPacket(e)})):this._sendConnectPacket(this.auth)}},{key:"_sendConnectPacket",value:function(t){this.packet({type:Et.CONNECT,data:this._pid?i({pid:this._pid,offset:this._lastOffset},t):t})}},{key:"onerror",value:function(t){this.connected||this.emitReserved("connect_error",t)}},{key:"onclose",value:function(t,e){this.connected=!1,delete this.id,this.emitReserved("disconnect",t,e)}},{key:"onpacket",value:function(t){if(t.nsp===this.nsp)switch(t.type){case Et.CONNECT:t.data&&t.data.sid?this.onconnect(t.data.sid,t.data.pid):this.emitReserved("connect_error",new Error("It seems you are trying to reach a Socket.IO server in v2.x with a v3.x client, but they are not compatible (more information here: https://socket.io/docs/v3/migrating-from-2-x-to-3-0/)"));break;case Et.EVENT:case Et.BINARY_EVENT:this.onevent(t);break;case Et.ACK:case Et.BINARY_ACK:this.onack(t);break;case Et.DISCONNECT:this.ondisconnect();break;case Et.CONNECT_ERROR:this.destroy();var e=new Error(t.data.message);e.data=t.data.data,this.emitReserved("connect_error",e)}}},{key:"onevent",value:function(t){var e=t.data||[];null!=t.id&&e.push(this.ack(t.id)),this.connected?this.emitEvent(e):this.receiveBuffer.push(Object.freeze(e))}},{key:"emitEvent",value:function(t){if(this._anyListeners&&this._anyListeners.length){var e,n=g(this._anyListeners.slice());try{for(n.s();!(e=n.n()).done;){e.value.apply(this,t)}}catch(t){n.e(t)}finally{n.f()}}y(s(a.prototype),"emit",this).apply(this,t),this._pid&&t.length&&"string"==typeof t[t.length-1]&&(this._lastOffset=t[t.length-1])}},{key:"ack",value:function(t){var e=this,n=!1;return function(){if(!n){n=!0;for(var r=arguments.length,i=new Array(r),o=0;o0&&t.jitter<=1?t.jitter:0,this.attempts=0}Nt.prototype.duration=function(){var t=this.ms*Math.pow(this.factor,this.attempts++);if(this.jitter){var e=Math.random(),n=Math.floor(e*this.jitter*t);t=0==(1&Math.floor(10*e))?t-n:t+n}return 0|Math.min(t,this.max)},Nt.prototype.reset=function(){this.attempts=0},Nt.prototype.setMin=function(t){this.ms=t},Nt.prototype.setMax=function(t){this.max=t},Nt.prototype.setJitter=function(t){this.jitter=t};var xt=function(n){o(s,n);var i=p(s);function s(n,r){var o,a;e(this,s),(o=i.call(this)).nsps={},o.subs=[],n&&"object"===t(n)&&(r=n,n=void 0),(r=r||{}).path=r.path||"/socket.io",o.opts=r,D(f(o),r),o.reconnection(!1!==r.reconnection),o.reconnectionAttempts(r.reconnectionAttempts||1/0),o.reconnectionDelay(r.reconnectionDelay||1e3),o.reconnectionDelayMax(r.reconnectionDelayMax||5e3),o.randomizationFactor(null!==(a=r.randomizationFactor)&&void 0!==a?a:.5),o.backoff=new Nt({min:o.reconnectionDelay(),max:o.reconnectionDelayMax(),jitter:o.randomizationFactor()}),o.timeout(null==r.timeout?2e4:r.timeout),o._readyState="closed",o.uri=n;var c=r.parser||Tt;return o.encoder=new c.Encoder,o.decoder=new c.Decoder,o._autoConnect=!1!==r.autoConnect,o._autoConnect&&o.open(),o}return r(s,[{key:"reconnection",value:function(t){return arguments.length?(this._reconnection=!!t,this):this._reconnection}},{key:"reconnectionAttempts",value:function(t){return void 0===t?this._reconnectionAttempts:(this._reconnectionAttempts=t,this)}},{key:"reconnectionDelay",value:function(t){var e;return void 0===t?this._reconnectionDelay:(this._reconnectionDelay=t,null===(e=this.backoff)||void 0===e||e.setMin(t),this)}},{key:"randomizationFactor",value:function(t){var e;return void 0===t?this._randomizationFactor:(this._randomizationFactor=t,null===(e=this.backoff)||void 0===e||e.setJitter(t),this)}},{key:"reconnectionDelayMax",value:function(t){var e;return void 0===t?this._reconnectionDelayMax:(this._reconnectionDelayMax=t,null===(e=this.backoff)||void 0===e||e.setMax(t),this)}},{key:"timeout",value:function(t){return arguments.length?(this._timeout=t,this):this._timeout}},{key:"maybeReconnectOnOpen",value:function(){!this._reconnecting&&this._reconnection&&0===this.backoff.attempts&&this.reconnect()}},{key:"open",value:function(t){var e=this;if(~this._readyState.indexOf("open"))return this;this.engine=new lt(this.uri,this.opts);var n=this.engine,r=this;this._readyState="opening",this.skipReconnect=!1;var i=Ct(n,"open",(function(){r.onopen(),t&&t()})),o=Ct(n,"error",(function(n){r.cleanup(),r._readyState="closed",e.emitReserved("error",n),t?t(n):r.maybeReconnectOnOpen()}));if(!1!==this._timeout){var s=this._timeout;0===s&&i();var a=this.setTimeoutFn((function(){i(),n.close(),n.emit("error",new Error("timeout"))}),s);this.opts.autoUnref&&a.unref(),this.subs.push((function(){clearTimeout(a)}))}return this.subs.push(i),this.subs.push(o),this}},{key:"connect",value:function(t){return this.open(t)}},{key:"onopen",value:function(){this.cleanup(),this._readyState="open",this.emitReserved("open");var t=this.engine;this.subs.push(Ct(t,"ping",this.onping.bind(this)),Ct(t,"data",this.ondata.bind(this)),Ct(t,"error",this.onerror.bind(this)),Ct(t,"close",this.onclose.bind(this)),Ct(this.decoder,"decoded",this.ondecoded.bind(this)))}},{key:"onping",value:function(){this.emitReserved("ping")}},{key:"ondata",value:function(t){try{this.decoder.add(t)}catch(t){this.onclose("parse error",t)}}},{key:"ondecoded",value:function(t){var e=this;it((function(){e.emitReserved("packet",t)}),this.setTimeoutFn)}},{key:"onerror",value:function(t){this.emitReserved("error",t)}},{key:"socket",value:function(t,e){var n=this.nsps[t];return n?this._autoConnect&&!n.active&&n.connect():(n=new St(this,t,e),this.nsps[t]=n),n}},{key:"_destroy",value:function(t){for(var e=0,n=Object.keys(this.nsps);e=this._reconnectionAttempts)this.backoff.reset(),this.emitReserved("reconnect_failed"),this._reconnecting=!1;else{var n=this.backoff.duration();this._reconnecting=!0;var r=this.setTimeoutFn((function(){e.skipReconnect||(t.emitReserved("reconnect_attempt",e.backoff.attempts),e.skipReconnect||e.open((function(n){n?(e._reconnecting=!1,e.reconnect(),t.emitReserved("reconnect_error",n)):e.onreconnect()})))}),n);this.opts.autoUnref&&r.unref(),this.subs.push((function(){clearTimeout(r)}))}}},{key:"onreconnect",value:function(){var t=this.backoff.attempts;this._reconnecting=!1,this.backoff.reset(),this.emitReserved("reconnect",t)}}]),s}(L),Lt={};function Pt(e,n){"object"===t(e)&&(n=e,e=void 0);var r,i=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"",n=arguments.length>2?arguments[2]:void 0,r=t;n=n||"undefined"!=typeof location&&location,null==t&&(t=n.protocol+"//"+n.host),"string"==typeof t&&("/"===t.charAt(0)&&(t="/"===t.charAt(1)?n.protocol+t:n.host+t),/^(https?|wss?):\/\//.test(t)||(t=void 0!==n?n.protocol+"//"+t:"https://"+t),r=ft(t)),r.port||(/^(http|ws)$/.test(r.protocol)?r.port="80":/^(http|ws)s$/.test(r.protocol)&&(r.port="443")),r.path=r.path||"/";var i=-1!==r.host.indexOf(":")?"["+r.host+"]":r.host;return r.id=r.protocol+"://"+i+":"+r.port+e,r.href=r.protocol+"://"+i+(n&&n.port===r.port?"":":"+r.port),r}(e,(n=n||{}).path||"/socket.io"),o=i.source,s=i.id,a=i.path,c=Lt[s]&&a in Lt[s].nsps;return n.forceNew||n["force new connection"]||!1===n.multiplex||c?r=new xt(o,n):(Lt[s]||(Lt[s]=new xt(o,n)),r=Lt[s]),i.query&&!n.query&&(n.query=i.queryKey),r.socket(i.path,n)}return i(Pt,{Manager:xt,Socket:St,io:Pt,connect:Pt}),Pt})); -//# sourceMappingURL=socket.io.min.js.map diff --git a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/distributed.py b/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/distributed.py deleted file mode 100644 index c3d890e28fd2b9e044bdd9494de4a43ad2471eed..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/distributed.py +++ /dev/null @@ -1,58 +0,0 @@ -import math -import torch -from .sampler import Sampler -from torch.distributed import get_world_size, get_rank - - -class DistributedSampler(Sampler): - """Sampler that restricts data loading to a subset of the dataset. - - It is especially useful in conjunction with - :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each - process can pass a DistributedSampler instance as a DataLoader sampler, - and load a subset of the original dataset that is exclusive to it. - - .. note:: - Dataset is assumed to be of constant size. - - Arguments: - dataset: Dataset used for sampling. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. - """ - - def __init__(self, dataset, num_replicas=None, rank=None): - if num_replicas is None: - num_replicas = get_world_size() - if rank is None: - rank = get_rank() - self.dataset = dataset - self.num_replicas = num_replicas - self.rank = rank - self.epoch = 0 - self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas)) - self.total_size = self.num_samples * self.num_replicas - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - indices = list(torch.randperm(len(self.dataset), generator=g)) - - # add extra samples to make it evenly divisible - indices += indices[:(self.total_size - len(indices))] - assert len(indices) == self.total_size - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch diff --git a/spaces/firsk/ai_otto/preprocess_text.py b/spaces/firsk/ai_otto/preprocess_text.py deleted file mode 100644 index d7b5320c650d7ac1fafaac71ebe2718a115a54a0..0000000000000000000000000000000000000000 --- a/spaces/firsk/ai_otto/preprocess_text.py +++ /dev/null @@ -1,105 +0,0 @@ -import json -from collections import defaultdict -from random import shuffle -from typing import Optional - -from tqdm import tqdm -import click -from text.cleaner import clean_text - - -@click.command() -@click.option( - "--transcription-path", - default="filelists/otto.list", - type=click.Path(exists=True, file_okay=True, dir_okay=False), -) -@click.option("--cleaned-path", default=None) -@click.option("--train-path", default="filelists/train.list") -@click.option("--val-path", default="filelists/val.list") -@click.option( - "--config-path", - default="configs/config.json", - type=click.Path(exists=True, file_okay=True, dir_okay=False), -) -@click.option("--val-per-spk", default=4) -@click.option("--max-val-total", default=8) -@click.option("--clean/--no-clean", default=True) -def main( - transcription_path: str, - cleaned_path: Optional[str], - train_path: str, - val_path: str, - config_path: str, - val_per_spk: int, - max_val_total: int, - clean: bool, -): - if cleaned_path is None: - cleaned_path = transcription_path + ".cleaned" - - if clean: - out_file = open(cleaned_path, "w", encoding="utf-8") - for line in tqdm(open(transcription_path, encoding="utf-8").readlines()): - try: - utt, spk, language, text = line.strip().split("|") - norm_text, phones, tones, word2ph = clean_text(text, language) - out_file.write( - "{}|{}|{}|{}|{}|{}|{}\n".format( - utt, - spk, - language, - norm_text, - " ".join(phones), - " ".join([str(i) for i in tones]), - " ".join([str(i) for i in word2ph]), - ) - ) - except Exception as error: - print("err!", line, error) - - out_file.close() - - transcription_path = cleaned_path - - spk_utt_map = defaultdict(list) - spk_id_map = {} - current_sid = 0 - - with open(transcription_path, encoding="utf-8") as f: - for line in f.readlines(): - utt, spk, language, text, phones, tones, word2ph = line.strip().split("|") - spk_utt_map[spk].append(line) - - if spk not in spk_id_map.keys(): - spk_id_map[spk] = current_sid - current_sid += 1 - - train_list = [] - val_list = [] - - for spk, utts in spk_utt_map.items(): - shuffle(utts) - val_list += utts[:val_per_spk] - train_list += utts[val_per_spk:] - - if len(val_list) > max_val_total: - train_list += val_list[max_val_total:] - val_list = val_list[:max_val_total] - - with open(train_path, "w", encoding="utf-8") as f: - for line in train_list: - f.write(line) - - with open(val_path, "w", encoding="utf-8") as f: - for line in val_list: - f.write(line) - - config = json.load(open(config_path, encoding="utf-8")) - config["data"]["spk2id"] = spk_id_map - with open(config_path, "w", encoding="utf-8") as f: - json.dump(config, f, indent=2, ensure_ascii=False) - - -if __name__ == "__main__": - main() diff --git a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/controlnet_pipeline.py b/spaces/flatindo/generate2/diffusion_webui/diffusion_models/controlnet_pipeline.py deleted file mode 100644 index 12f1f46e5a3f24f36ef3573eeea3ad6c35f947b6..0000000000000000000000000000000000000000 --- a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/controlnet_pipeline.py +++ /dev/null @@ -1,262 +0,0 @@ -import gradio as gr -import torch -import cv2 -from diffusers import ControlNetModel, StableDiffusionControlNetPipeline -from PIL import Image - -from diffusion_webui.diffusion_models.base_controlnet_pipeline import ( - ControlnetPipeline, -) -from diffusion_webui.utils.model_list import ( - controlnet_model_list, - stable_model_list, -) -from diffusion_webui.utils.preprocces_utils import PREPROCCES_DICT -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_MAPPING, - get_scheduler, -) - - -stable_model_list = [ - "runwayml/stable-diffusion-v1-5", - "dreamlike-art/dreamlike-diffusion-1.0", - "kadirnar/maturemalemix_v0", - "kadirnar/DreamShaper_v6" -] - -stable_inpiant_model_list = [ - "stabilityai/stable-diffusion-2-inpainting", - "runwayml/stable-diffusion-inpainting", - "saik0s/realistic_vision_inpainting", -] - -controlnet_model_list = [ - "lllyasviel/control_v11p_sd15_canny", - "lllyasviel/control_v11f1p_sd15_depth", - "lllyasviel/control_v11p_sd15_openpose", - "lllyasviel/control_v11p_sd15_scribble", - "lllyasviel/control_v11p_sd15_mlsd", - "lllyasviel/control_v11e_sd15_shuffle", - "lllyasviel/control_v11e_sd15_ip2p", - "lllyasviel/control_v11p_sd15_lineart", - "lllyasviel/control_v11p_sd15s2_lineart_anime", - "lllyasviel/control_v11p_sd15_softedge", -] - -class StableDiffusionControlNetGenerator(ControlnetPipeline): - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path, controlnet_model_path, scheduler): - if self.pipe is None or self.pipe.model_name != stable_model_path or self.pipe.scheduler_name != scheduler: - controlnet = ControlNetModel.from_pretrained( - controlnet_model_path, torch_dtype=torch.float16 - ) - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - pretrained_model_name_or_path=stable_model_path, - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - self.pipe.model_name = stable_model_path - self.pipe.scheduler_name = scheduler - - self.pipe = get_scheduler(pipe=self.pipe, scheduler=scheduler) - self.pipe.scheduler_name = scheduler - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - - def controlnet_preprocces( - self, - read_image: str, - preprocces_type: str, - ): - processed_image = PREPROCCES_DICT[preprocces_type](read_image) - return processed_image - - def generate_image( - self, - image_path: str, - stable_model_path: str, - controlnet_model_path: str, - height: int, - width: int, - guess_mode: bool, - controlnet_conditioning_scale: int, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - scheduler: str, - seed_generator: int, - preprocces_type: str, - ): - pipe = self.load_model( - stable_model_path=stable_model_path, - controlnet_model_path=controlnet_model_path, - scheduler=scheduler, - ) - if preprocces_type== "ScribbleXDOG": - read_image = cv2.imread(image_path) - controlnet_image = self.controlnet_preprocces(read_image=read_image, preprocces_type=preprocces_type)[0] - controlnet_image = Image.fromarray(controlnet_image) - - elif preprocces_type== "None": - controlnet_image = self.controlnet_preprocces(read_image=image_path, preprocces_type=preprocces_type) - else: - read_image = Image.open(image_path) - controlnet_image = self.controlnet_preprocces(read_image=read_image, preprocces_type=preprocces_type) - - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - - output = pipe( - prompt=prompt, - height=height, - width=width, - controlnet_conditioning_scale=float(controlnet_conditioning_scale), - guess_mode=guess_mode, - image=controlnet_image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - controlnet_image_path = gr.Image( - type="filepath", label="Image" - ).style(height=260) - controlnet_prompt = gr.Textbox( - lines=1, placeholder="Prompt", show_label=False - ) - controlnet_negative_prompt = gr.Textbox( - lines=1, placeholder="Negative Prompt", show_label=False - ) - - with gr.Row(): - with gr.Column(): - controlnet_stable_model_path = gr.Dropdown( - choices=stable_model_list, - value=stable_model_list[0], - label="Stable Model Path", - ) - controlnet_preprocces_type = gr.Dropdown( - choices=list(PREPROCCES_DICT.keys()), - value=list(PREPROCCES_DICT.keys())[0], - label="Preprocess Type", - ) - controlnet_conditioning_scale = gr.Slider( - minimum=0.0, - maximum=1.0, - step=0.1, - value=1.0, - label="ControlNet Conditioning Scale", - ) - controlnet_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - controlnet_height = gr.Slider( - minimum=128, - maximum=1280, - step=32, - value=512, - label="Height", - ) - controlnet_width = gr.Slider( - minimum=128, - maximum=1280, - step=32, - value=512, - label="Width", - ) - - with gr.Row(): - with gr.Column(): - controlnet_model_path = gr.Dropdown( - choices=controlnet_model_list, - value=controlnet_model_list[0], - label="ControlNet Model Path", - ) - controlnet_scheduler = gr.Dropdown( - choices=list(SCHEDULER_MAPPING.keys()), - value=list(SCHEDULER_MAPPING.keys())[0], - label="Scheduler", - ) - controlnet_num_inference_step = gr.Slider( - minimum=1, - maximum=150, - step=1, - value=30, - label="Num Inference Step", - ) - - controlnet_num_images_per_prompt = gr.Slider( - minimum=1, - maximum=4, - step=1, - value=1, - label="Number Of Images", - ) - controlnet_seed_generator = gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed(0 for random)", - ) - controlnet_guess_mode = gr.Checkbox( - label="Guess Mode" - ) - - # Button to generate the image - predict_button = gr.Button(value="Generate Image") - - with gr.Column(): - # Gallery to display the generated images - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - predict_button.click( - fn=StableDiffusionControlNetGenerator().generate_image, - inputs=[ - controlnet_image_path, - controlnet_stable_model_path, - controlnet_model_path, - controlnet_height, - controlnet_width, - controlnet_guess_mode, - controlnet_conditioning_scale, - controlnet_prompt, - controlnet_negative_prompt, - controlnet_num_images_per_prompt, - controlnet_guidance_scale, - controlnet_num_inference_step, - controlnet_scheduler, - controlnet_seed_generator, - controlnet_preprocces_type, - ], - outputs=[output_image], - ) diff --git a/spaces/flatindo/generate2/diffusion_webui/utils/__init__.py b/spaces/flatindo/generate2/diffusion_webui/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fun-research/FC-CLIP/INSTALL.md b/spaces/fun-research/FC-CLIP/INSTALL.md deleted file mode 100644 index e0bbead06e431aca3ce622ffd8a0fa9cc5b7a3a0..0000000000000000000000000000000000000000 --- a/spaces/fun-research/FC-CLIP/INSTALL.md +++ /dev/null @@ -1,48 +0,0 @@ -## Installation - -### Requirements -- Linux or macOS with Python ≥ 3.6 -- PyTorch ≥ 1.9 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. - Install them together at [pytorch.org](https://pytorch.org) to make sure of this. Note, please check - PyTorch version matches that is required by Detectron2. -- Detectron2: follow [Detectron2 installation instructions](https://detectron2.readthedocs.io/tutorials/install.html). -- OpenCV is optional but needed by demo and visualization -- `pip install -r requirements.txt` - -### CUDA kernel for MSDeformAttn -After preparing the required environment, run the following command to compile CUDA kernel for MSDeformAttn: - -`CUDA_HOME` must be defined and points to the directory of the installed CUDA toolkit. - -```bash -cd mask2former/modeling/pixel_decoder/ops -sh make.sh -``` - -#### Building on another system -To build on a system that does not have a GPU device but provide the drivers: -```bash -TORCH_CUDA_ARCH_LIST='8.0' FORCE_CUDA=1 python setup.py build install -``` - -### Example conda environment setup -```bash -conda create --name mask2former python=3.8 -y -conda activate mask2former -conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c nvidia -pip install -U opencv-python - -# under your working directory -git clone git@github.com:facebookresearch/detectron2.git -cd detectron2 -pip install -e . -pip install git+https://github.com/cocodataset/panopticapi.git -pip install git+https://github.com/mcordts/cityscapesScripts.git - -cd .. -git clone git@github.com:facebookresearch/Mask2Former.git -cd Mask2Former -pip install -r requirements.txt -cd mask2former/modeling/pixel_decoder/ops -sh make.sh -``` diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/utils/up_conv_block.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/utils/up_conv_block.py deleted file mode 100644 index 378469da76cb7bff6a639e7877b3c275d50490fb..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/utils/up_conv_block.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, build_upsample_layer - - -class UpConvBlock(nn.Module): - """Upsample convolution block in decoder for UNet. - - This upsample convolution block consists of one upsample module - followed by one convolution block. The upsample module expands the - high-level low-resolution feature map and the convolution block fuses - the upsampled high-level low-resolution feature map and the low-level - high-resolution feature map from encoder. - - Args: - conv_block (nn.Sequential): Sequential of convolutional layers. - in_channels (int): Number of input channels of the high-level - skip_channels (int): Number of input channels of the low-level - high-resolution feature map from encoder. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers in the conv_block. - Default: 2. - stride (int): Stride of convolutional layer in conv_block. Default: 1. - dilation (int): Dilation rate of convolutional layer in conv_block. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). If the size of - high-level feature map is the same as that of skip feature map - (low-level feature map from encoder), it does not need upsample the - high-level feature map and the upsample_cfg is None. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - conv_block, - in_channels, - skip_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - dcn=None, - plugins=None): - super(UpConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.conv_block = conv_block( - in_channels=2 * skip_channels, - out_channels=out_channels, - num_convs=num_convs, - stride=stride, - dilation=dilation, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None) - if upsample_cfg is not None: - self.upsample = build_upsample_layer( - cfg=upsample_cfg, - in_channels=in_channels, - out_channels=skip_channels, - with_cp=with_cp, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.upsample = ConvModule( - in_channels, - skip_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, skip, x): - """Forward function.""" - - x = self.upsample(x) - out = torch.cat([skip, x], dim=1) - out = self.conv_block(out) - - return out diff --git a/spaces/glyszt/vt/vtoonify/model/stylegan/model.py b/spaces/glyszt/vt/vtoonify/model/stylegan/model.py deleted file mode 100644 index 7a4b00e52902d850b78dea3736324198eb32e075..0000000000000000000000000000000000000000 --- a/spaces/glyszt/vt/vtoonify/model/stylegan/model.py +++ /dev/null @@ -1,719 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from model.stylegan.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True, dilation=1 ## modified - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - self.dilation = dilation ## modified - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = conv2d_gradfix.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, ## modified - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding}, dilation={self.dilation})" ## modified - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - ) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - fused=True, - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - self.fused = fused - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style, externalweight=None): - batch, in_channel, height, width = input.shape - - if not self.fused: - weight = self.scale * self.weight.squeeze(0) - style = self.modulation(style) - - if self.demodulate: - w = weight.unsqueeze(0) * style.view(batch, 1, in_channel, 1, 1) - dcoefs = (w.square().sum((2, 3, 4)) + 1e-8).rsqrt() - - input = input * style.reshape(batch, in_channel, 1, 1) - - if self.upsample: - weight = weight.transpose(0, 1) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2 - ) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - out = conv2d_gradfix.conv2d(input, weight, padding=0, stride=2) - - else: - out = conv2d_gradfix.conv2d(input, weight, padding=self.padding) - - if self.demodulate: - out = out * dcoefs.view(batch, -1, 1, 1) - - return out - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - if externalweight is None: - weight = self.scale * self.weight * style - else: - weight = self.scale * (self.weight + externalweight) * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=self.padding, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None, externalweight=None): - out = self.conv(input, style, externalweight) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None, externalweight=None): - out = self.conv(input, style, externalweight) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu" - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f"noise_{layer_idx}", torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - z_plus_latent=False, - return_feature_ind=999, - ): - if not input_is_latent: - if not z_plus_latent: - styles = [self.style(s) for s in styles] - else: - styles_ = [] - for s in styles: - style_ = [] - for i in range(s.shape[1]): - style_.append(self.style(s[:,i]).unsqueeze(1)) - styles_.append(torch.cat(style_,dim=1)) - styles = styles_ - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f"noise_{i}") for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - else: - latent = torch.cat([styles[0][:,0:inject_index], styles[1][:,inject_index:]], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - if i > return_feature_ind: - return out, skip - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - dilation=1, ## modified - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 + dilation-1 ## modified - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - dilation=dilation, ## modified - ) - ) - - if activate: - layers.append(FusedLeakyReLU(out_channel, bias=bias)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/How to get crack ots av dj pro 19 with keygen and license.md b/spaces/gotiQspiryo/whisper-ui/examples/How to get crack ots av dj pro 19 with keygen and license.md deleted file mode 100644 index ee7db7cc2d9382323660a2d8595c9123de94cbbb..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/How to get crack ots av dj pro 19 with keygen and license.md +++ /dev/null @@ -1,6 +0,0 @@ -

    crack ots av dj pro 19


    DOWNLOAD --->>> https://urlgoal.com/2uyLGV



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/gotiQspiryo/whisper-ui/examples/IntelliJ IDEA 2019.3 Crack Activation Key Free Download BEST.md b/spaces/gotiQspiryo/whisper-ui/examples/IntelliJ IDEA 2019.3 Crack Activation Key Free Download BEST.md deleted file mode 100644 index 50ff7217173435cdac78f995a05d1562827cd391..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/IntelliJ IDEA 2019.3 Crack Activation Key Free Download BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

    IntelliJ IDEA 2019.3 Crack Activation Key Free Download


    Downloadhttps://urlgoal.com/2uyMEJ



    -
    -Crack with Serial Keygen Full EAP Windows Mac Free Download 2019. ... IntelliJ. IDEA 2019.3.1 Crack. Activation Code Updated Version License Key . 1fdad05405
    -
    -
    -

    diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Kitab Rohmatul Ummah Pdf Download.md b/spaces/gotiQspiryo/whisper-ui/examples/Kitab Rohmatul Ummah Pdf Download.md deleted file mode 100644 index 51c939f57718c696ac8ccbec3ded7b5a42bc0514..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Kitab Rohmatul Ummah Pdf Download.md +++ /dev/null @@ -1,17 +0,0 @@ - -

    Kitab Rohmatul Ummah: A Book of Fiqh Differences Among the Scholars

    -

    Kitab Rohmatul Ummah is a book written by Sheikh Abu Abdullah Muhammad bin Abdur Rahman al Dimashqi al Uthmani al Shafi'i, a prominent scholar who lived in the eighth century Hijri. The book discusses the differences of opinions among the scholars of fiqh (Islamic jurisprudence) on various issues related to worship, transactions, family, and social matters. The book aims to show that the diversity of views among the scholars is a mercy for the ummah (Muslim community), as it allows them to choose the most suitable opinion according to their circumstances and preferences.

    -

    kitab rohmatul ummah pdf download


    Download Zip ✑ ✑ ✑ https://urlgoal.com/2uyMKo



    -

    The book is divided into 20 chapters, each covering a specific topic of fiqh, such as prayer, fasting, zakat, hajj, marriage, divorce, inheritance, trade, riba (interest), jihad, oaths, vows, etc. In each chapter, the author presents the opinions of the four major schools of fiqh (Hanafi, Maliki, Shafi'i, and Hanbali), as well as some other scholars from different regions and eras. He also provides the evidences and arguments for each opinion from the Quran, the Sunnah (traditions of the Prophet Muhammad), and the consensus of the scholars. He also mentions some of the benefits and wisdoms behind the differences of opinions, and advises the readers to respect and tolerate each other's views.

    -

    Kitab Rohmatul Ummah is a valuable source of knowledge for anyone who wants to learn more about fiqh and its diversity. It is also a useful reference for students and teachers of fiqh who want to compare and contrast the opinions of different scholars on various issues. The book is written in a clear and concise language that is easy to understand and follow. It is also enriched with many examples and anecdotes that illustrate the practical applications of fiqh.

    -

    If you are interested in reading Kitab Rohmatul Ummah, you can download it in PDF format from several websites that offer Islamic books for free. Some of these websites are:

    -

    -
      -
    • Terjemahkitab.com, which provides a translation of Kitab Rohmatul Ummah in Indonesian language[^1^].
    • -
    • Shepangaro Pustaka, which offers a PDF file of Kitab Rohmatul Ummah without any translation or commentary[^2^].
    • -
    • Vacrisevil, which has a ZIP file of Kitab Rohmatul Ummah that can be downloaded and extracted[^3^].
    • -
    • Kalakaary, which has a link to another website that claims to have Kitab Rohmatul Ummah in PDF format[^4^]. However, this link may not be reliable or safe, so proceed with caution.
    • -
    -

    We hope that this article has given you some useful information about Kitab Rohmatul Ummah and how to download it in PDF format. We also hope that you will benefit from reading this book and learning more about fiqh and its diversity. May Allah guide us all to the truth and grant us His mercy.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/scripts/mean_pool.py b/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/scripts/mean_pool.py deleted file mode 100644 index 4eea048ef3455cb3c897e74c18778c78fdc9fcbf..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/scripts/mean_pool.py +++ /dev/null @@ -1,99 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import os.path as osp -import math -import numpy as np -import tqdm -import torch -import torch.nn.functional as F -from shutil import copyfile - -from npy_append_array import NpyAppendArray - - -def get_parser(): - parser = argparse.ArgumentParser( - description="mean pools representations by compressing uniform splits of the data" - ) - # fmt: off - parser.add_argument('source', help='directory with features') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--subsample-rate', type=float, default=0.5, help='size to subsample data to') - - parser.add_argument('--remove-extra', action='store_true', help='if true, removes extra states that cant be pooled, otherwise pads with 0s') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - source_path = osp.join(args.source, args.split) - - print(f"data path: {source_path}") - - features = np.load(source_path + ".npy", mmap_mode="r") - - os.makedirs(args.save_dir, exist_ok=True) - save_path = osp.join(args.save_dir, args.split) - - copyfile(source_path + ".tsv", save_path + ".tsv") - - if os.path.exists(source_path + ".phn"): - copyfile(source_path + ".phn", save_path + ".phn") - if os.path.exists(source_path + ".wrd"): - copyfile(source_path + ".wrd", save_path + ".wrd") - - if os.path.exists(osp.join(args.source, "dict.phn.txt")): - copyfile( - osp.join(args.source, "dict.phn.txt"), - osp.join(args.save_dir, "dict.phn.txt"), - ) - - if osp.exists(save_path + ".npy"): - os.remove(save_path + ".npy") - npaa = NpyAppendArray(save_path + ".npy") - - with open(source_path + ".lengths", "r") as lf: - lengths = lf.readlines() - - fsz = features.shape[-1] - start = 0 - with torch.no_grad(): - with open(save_path + ".lengths", "w") as lengths_out: - for length in tqdm.tqdm(lengths): - length = int(length) - end = start + length - feats = features[start:end] - start += length - x = torch.from_numpy(feats).cuda() - target_num = math.ceil(length * args.subsample_rate) - rem = length % target_num - - if rem > 0: - if args.remove_extra: - to_rem = target_num - rem - target_num -= 1 - x = x[:-to_rem] - else: - to_add = target_num - rem - x = F.pad(x, [0, 0, 0, to_add]) - x[-to_add:] = x[-to_add - 1] - - x = x.view(target_num, -1, fsz) - x = x.mean(dim=-2) - print(target_num, file=lengths_out) - npaa.append(x.cpu().numpy()) - - -if __name__ == "__main__": - main() diff --git a/spaces/gradio/HuBERT/fairseq/modules/same_pad.py b/spaces/gradio/HuBERT/fairseq/modules/same_pad.py deleted file mode 100644 index 4c04990ea6fdb291f162ee8ac3d17a92483daf8e..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/same_pad.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from torch import nn - - -class SamePad(nn.Module): - def __init__(self, kernel_size, causal=False): - super().__init__() - if causal: - self.remove = kernel_size - 1 - else: - self.remove = 1 if kernel_size % 2 == 0 else 0 - - def forward(self, x): - if self.remove > 0: - x = x[:, :, : -self.remove] - return x diff --git a/spaces/gradio/image_segmentation/DESCRIPTION.md b/spaces/gradio/image_segmentation/DESCRIPTION.md deleted file mode 100644 index dbba2ae29e4ed391bfd1681fb2fe7d0efcb34222..0000000000000000000000000000000000000000 --- a/spaces/gradio/image_segmentation/DESCRIPTION.md +++ /dev/null @@ -1 +0,0 @@ -Simple image segmentation using gradio's AnnotatedImage component. \ No newline at end of file diff --git a/spaces/haakohu/deep_privacy2/dp2/detection/base.py b/spaces/haakohu/deep_privacy2/dp2/detection/base.py deleted file mode 100644 index 6ab8b20c9474cbb6074b66a694d1a1a05df0c12c..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/detection/base.py +++ /dev/null @@ -1,42 +0,0 @@ -import pickle -import torch -import lzma -from pathlib import Path -from tops import logger - - -class BaseDetector: - - def __init__(self, cache_directory: str) -> None: - if cache_directory is not None: - self.cache_directory = Path(cache_directory, str(self.__class__.__name__)) - self.cache_directory.mkdir(exist_ok=True, parents=True) - - def save_to_cache(self, detection, cache_path: Path, after_preprocess=True): - logger.log(f"Caching detection to: {cache_path}") - with lzma.open(cache_path, "wb") as fp: - torch.save( - [det.state_dict(after_preprocess=after_preprocess) for det in detection], fp, - pickle_protocol=pickle.HIGHEST_PROTOCOL) - - def load_from_cache(self, cache_path: Path): - logger.log(f"Loading detection from cache path: {cache_path}") - with lzma.open(cache_path, "rb") as fp: - state_dict = torch.load(fp) - return [ - state["cls"].from_state_dict(state_dict=state) for state in state_dict - ] - - def forward_and_cache(self, im: torch.Tensor, cache_id: str, load_cache: bool): - if cache_id is None: - return self.forward(im) - cache_path = self.cache_directory.joinpath(cache_id + ".torch") - if cache_path.is_file() and load_cache: - try: - return self.load_from_cache(cache_path) - except Exception as e: - logger.warn(f"The cache file was corrupted: {cache_path}") - exit() - detections = self.forward(im) - self.save_to_cache(detections, cache_path) - return detections diff --git a/spaces/haakohu/deep_privacy2_face/configs/fdf/stylegan.py b/spaces/haakohu/deep_privacy2_face/configs/fdf/stylegan.py deleted file mode 100644 index a4da2c3ad76d3d1fb6e1d91e832cde5c735bf32a..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/configs/fdf/stylegan.py +++ /dev/null @@ -1,14 +0,0 @@ -from ..generators.stylegan_unet import generator -from ..datasets.fdf256 import data -from ..discriminators.sg2_discriminator import discriminator, G_optim, D_optim, loss_fnc -from ..defaults import train, common, EMA - -train.max_images_to_train = int(35e6) -G_optim.lr = 0.002 -D_optim.lr = 0.002 -generator.input_cse = False -loss_fnc.r1_opts.lambd = 1 -train.ims_per_val = int(2e6) - -common.model_url = "https://api.loke.aws.unit.no/dlr-gui-backend-resources-content/v2/contents/links/89660f04-5c11-4dbf-adac-cbe2f11b0aeea25cbf78-7558-475a-b3c7-03f5c10b7934646b0720-ca0a-4d53-aded-daddbfa45c9e" -common.model_md5sum = "e8e32190528af2ed75f0cb792b7f2b07" \ No newline at end of file diff --git "a/spaces/hands012/gpt-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT.py" "b/spaces/hands012/gpt-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT.py" deleted file mode 100644 index 6a7d118b4439605db6e10b9a416a2e725b99a672..0000000000000000000000000000000000000000 --- "a/spaces/hands012/gpt-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT.py" +++ /dev/null @@ -1,102 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping -import requests -from bs4 import BeautifulSoup -from request_llm.bridge_all import model_info - -def google(query, proxies): - query = query # 在此处替换您要搜索的关键词 - url = f"https://www.google.com/search?q={query}" - headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'} - response = requests.get(url, headers=headers, proxies=proxies) - soup = BeautifulSoup(response.content, 'html.parser') - results = [] - for g in soup.find_all('div', class_='g'): - anchors = g.find_all('a') - if anchors: - link = anchors[0]['href'] - if link.startswith('/url?q='): - link = link[7:] - if not link.startswith('http'): - continue - title = g.find('h3').text - item = {'title': title, 'link': link} - results.append(item) - - for r in results: - print(r['link']) - return results - -def scrape_text(url, proxies) -> str: - """Scrape text from a webpage - - Args: - url (str): The URL to scrape text from - - Returns: - str: The scraped text - """ - headers = { - 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36', - 'Content-Type': 'text/plain', - } - try: - response = requests.get(url, headers=headers, proxies=proxies, timeout=8) - if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding - except: - return "无法连接到该网页" - soup = BeautifulSoup(response.text, "html.parser") - for script in soup(["script", "style"]): - script.extract() - text = soup.get_text() - lines = (line.strip() for line in text.splitlines()) - chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) - text = "\n".join(chunk for chunk in chunks if chunk) - return text - -@CatchException -def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((f"请结合互联网信息回答以下问题:{txt}", - "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR!")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # ------------- < 第1步:爬取搜索引擎的结果 > ------------- - from toolbox import get_conf - proxies, = get_conf('proxies') - urls = google(txt, proxies) - history = [] - - # ------------- < 第2步:依次访问网页 > ------------- - max_search_result = 5 # 最多收纳多少个网页的结果 - for index, url in enumerate(urls[:max_search_result]): - res = scrape_text(url['link'], proxies) - history.extend([f"第{index}份搜索结果:", res]) - chatbot.append([f"第{index}份搜索结果:", res[:500]+"......"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # ------------- < 第3步:ChatGPT综合 > ------------- - i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}" - i_say, history = input_clipping( # 裁剪输入,从最长的条目开始裁剪,防止爆token - inputs=i_say, - history=history, - max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4 - ) - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=i_say, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。" - ) - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - diff --git a/spaces/haofeixu/unimatch/unimatch/geometry.py b/spaces/haofeixu/unimatch/unimatch/geometry.py deleted file mode 100644 index 775a95783aeee66a44e6290525de94909af648df..0000000000000000000000000000000000000000 --- a/spaces/haofeixu/unimatch/unimatch/geometry.py +++ /dev/null @@ -1,195 +0,0 @@ -import torch -import torch.nn.functional as F - - -def coords_grid(b, h, w, homogeneous=False, device=None): - y, x = torch.meshgrid(torch.arange(h), torch.arange(w)) # [H, W] - - stacks = [x, y] - - if homogeneous: - ones = torch.ones_like(x) # [H, W] - stacks.append(ones) - - grid = torch.stack(stacks, dim=0).float() # [2, H, W] or [3, H, W] - - grid = grid[None].repeat(b, 1, 1, 1) # [B, 2, H, W] or [B, 3, H, W] - - if device is not None: - grid = grid.to(device) - - return grid - - -def generate_window_grid(h_min, h_max, w_min, w_max, len_h, len_w, device=None): - assert device is not None - - x, y = torch.meshgrid([torch.linspace(w_min, w_max, len_w, device=device), - torch.linspace(h_min, h_max, len_h, device=device)], - ) - grid = torch.stack((x, y), -1).transpose(0, 1).float() # [H, W, 2] - - return grid - - -def normalize_coords(coords, h, w): - # coords: [B, H, W, 2] - c = torch.Tensor([(w - 1) / 2., (h - 1) / 2.]).float().to(coords.device) - return (coords - c) / c # [-1, 1] - - -def bilinear_sample(img, sample_coords, mode='bilinear', padding_mode='zeros', return_mask=False): - # img: [B, C, H, W] - # sample_coords: [B, 2, H, W] in image scale - if sample_coords.size(1) != 2: # [B, H, W, 2] - sample_coords = sample_coords.permute(0, 3, 1, 2) - - b, _, h, w = sample_coords.shape - - # Normalize to [-1, 1] - x_grid = 2 * sample_coords[:, 0] / (w - 1) - 1 - y_grid = 2 * sample_coords[:, 1] / (h - 1) - 1 - - grid = torch.stack([x_grid, y_grid], dim=-1) # [B, H, W, 2] - - img = F.grid_sample(img, grid, mode=mode, padding_mode=padding_mode, align_corners=True) - - if return_mask: - mask = (x_grid >= -1) & (y_grid >= -1) & (x_grid <= 1) & (y_grid <= 1) # [B, H, W] - - return img, mask - - return img - - -def flow_warp(feature, flow, mask=False, padding_mode='zeros'): - b, c, h, w = feature.size() - assert flow.size(1) == 2 - - grid = coords_grid(b, h, w).to(flow.device) + flow # [B, 2, H, W] - - return bilinear_sample(feature, grid, padding_mode=padding_mode, - return_mask=mask) - - -def forward_backward_consistency_check(fwd_flow, bwd_flow, - alpha=0.01, - beta=0.5 - ): - # fwd_flow, bwd_flow: [B, 2, H, W] - # alpha and beta values are following UnFlow (https://arxiv.org/abs/1711.07837) - assert fwd_flow.dim() == 4 and bwd_flow.dim() == 4 - assert fwd_flow.size(1) == 2 and bwd_flow.size(1) == 2 - flow_mag = torch.norm(fwd_flow, dim=1) + torch.norm(bwd_flow, dim=1) # [B, H, W] - - warped_bwd_flow = flow_warp(bwd_flow, fwd_flow) # [B, 2, H, W] - warped_fwd_flow = flow_warp(fwd_flow, bwd_flow) # [B, 2, H, W] - - diff_fwd = torch.norm(fwd_flow + warped_bwd_flow, dim=1) # [B, H, W] - diff_bwd = torch.norm(bwd_flow + warped_fwd_flow, dim=1) - - threshold = alpha * flow_mag + beta - - fwd_occ = (diff_fwd > threshold).float() # [B, H, W] - bwd_occ = (diff_bwd > threshold).float() - - return fwd_occ, bwd_occ - - -def back_project(depth, intrinsics): - # Back project 2D pixel coords to 3D points - # depth: [B, H, W] - # intrinsics: [B, 3, 3] - b, h, w = depth.shape - grid = coords_grid(b, h, w, homogeneous=True, device=depth.device) # [B, 3, H, W] - - intrinsics_inv = torch.inverse(intrinsics) # [B, 3, 3] - - points = intrinsics_inv.bmm(grid.view(b, 3, -1)).view(b, 3, h, w) * depth.unsqueeze(1) # [B, 3, H, W] - - return points - - -def camera_transform(points_ref, extrinsics_ref=None, extrinsics_tgt=None, extrinsics_rel=None): - # Transform 3D points from reference camera to target camera - # points_ref: [B, 3, H, W] - # extrinsics_ref: [B, 4, 4] - # extrinsics_tgt: [B, 4, 4] - # extrinsics_rel: [B, 4, 4], relative pose transform - b, _, h, w = points_ref.shape - - if extrinsics_rel is None: - extrinsics_rel = torch.bmm(extrinsics_tgt, torch.inverse(extrinsics_ref)) # [B, 4, 4] - - points_tgt = torch.bmm(extrinsics_rel[:, :3, :3], - points_ref.view(b, 3, -1)) + extrinsics_rel[:, :3, -1:] # [B, 3, H*W] - - points_tgt = points_tgt.view(b, 3, h, w) # [B, 3, H, W] - - return points_tgt - - -def reproject(points_tgt, intrinsics, return_mask=False): - # reproject to target view - # points_tgt: [B, 3, H, W] - # intrinsics: [B, 3, 3] - - b, _, h, w = points_tgt.shape - - proj_points = torch.bmm(intrinsics, points_tgt.view(b, 3, -1)).view(b, 3, h, w) # [B, 3, H, W] - - X = proj_points[:, 0] - Y = proj_points[:, 1] - Z = proj_points[:, 2].clamp(min=1e-3) - - pixel_coords = torch.stack([X / Z, Y / Z], dim=1).view(b, 2, h, w) # [B, 2, H, W] in image scale - - if return_mask: - # valid mask in pixel space - mask = (pixel_coords[:, 0] >= 0) & (pixel_coords[:, 0] <= (w - 1)) & ( - pixel_coords[:, 1] >= 0) & (pixel_coords[:, 1] <= (h - 1)) # [B, H, W] - - return pixel_coords, mask - - return pixel_coords - - -def reproject_coords(depth_ref, intrinsics, extrinsics_ref=None, extrinsics_tgt=None, extrinsics_rel=None, - return_mask=False): - # Compute reprojection sample coords - points_ref = back_project(depth_ref, intrinsics) # [B, 3, H, W] - points_tgt = camera_transform(points_ref, extrinsics_ref, extrinsics_tgt, extrinsics_rel=extrinsics_rel) - - if return_mask: - reproj_coords, mask = reproject(points_tgt, intrinsics, - return_mask=return_mask) # [B, 2, H, W] in image scale - - return reproj_coords, mask - - reproj_coords = reproject(points_tgt, intrinsics, - return_mask=return_mask) # [B, 2, H, W] in image scale - - return reproj_coords - - -def compute_flow_with_depth_pose(depth_ref, intrinsics, - extrinsics_ref=None, extrinsics_tgt=None, extrinsics_rel=None, - return_mask=False): - b, h, w = depth_ref.shape - coords_init = coords_grid(b, h, w, device=depth_ref.device) # [B, 2, H, W] - - if return_mask: - reproj_coords, mask = reproject_coords(depth_ref, intrinsics, extrinsics_ref, extrinsics_tgt, - extrinsics_rel=extrinsics_rel, - return_mask=return_mask) # [B, 2, H, W] - rigid_flow = reproj_coords - coords_init - - return rigid_flow, mask - - reproj_coords = reproject_coords(depth_ref, intrinsics, extrinsics_ref, extrinsics_tgt, - extrinsics_rel=extrinsics_rel, - return_mask=return_mask) # [B, 2, H, W] - - rigid_flow = reproj_coords - coords_init - - return rigid_flow diff --git "a/spaces/hbestm/gpt-academic-play/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" "b/spaces/hbestm/gpt-academic-play/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" deleted file mode 100644 index c299e59d3894b7ac2d33df1502746adaef4a47b8..0000000000000000000000000000000000000000 --- "a/spaces/hbestm/gpt-academic-play/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" +++ /dev/null @@ -1,175 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -fast_debug = False - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex") - - print('Segmentation: done') - -def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - - # <-------- 读取Latex文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 定义注释的正则表达式 - comment_pattern = r'%.*' - # 使用正则表达式查找注释,并替换为空字符串 - clean_tex_content = re.sub(comment_pattern, '', file_content) - # 记录删除注释后的文本 - pfg.file_paths.append(fp) - pfg.file_contents.append(clean_tex_content) - - # <-------- 拆分过长的latex文件 ----------> - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - # <-------- 抽取摘要 ----------> - # if language == 'en': - # abs_extract_inputs = f"Please write an abstract for this paper" - - # # 单线,获取文章meta信息 - # paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive( - # inputs=abs_extract_inputs, - # inputs_show_user=f"正在抽取摘要信息。", - # llm_kwargs=llm_kwargs, - # chatbot=chatbot, history=[], - # sys_prompt="Your job is to collect information from materials。", - # ) - - # <-------- 多线程润色开始 ----------> - if language == 'en': - inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)] - elif language == 'zh': - inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag] - sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)] - - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # 并行任务数量限制,最多同时执行5个,其他的排队等待 - scroller_max_len = 80 - ) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name) - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en') - - - - - - -@CatchException -def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh') \ No newline at end of file diff --git a/spaces/hussain-shk/IndiSent/indic_nlp_library/indicnlp/__init__.py b/spaces/hussain-shk/IndiSent/indic_nlp_library/indicnlp/__init__.py deleted file mode 100644 index 1ad075593152cf94d30a903d8add28d8200badbb..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/indic_nlp_library/indicnlp/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import os -import sys - -try: - from .version import __version__ # noqa -except ImportError: - version_txt = os.path.join(os.path.dirname(__file__), "version.txt") - with open(version_txt) as f: - __version__ = f.read().strip() - diff --git a/spaces/hylee/AnimeGANv2/app.py b/spaces/hylee/AnimeGANv2/app.py deleted file mode 100644 index f73255379570c55b4c434e749211f183c1ccfd7d..0000000000000000000000000000000000000000 --- a/spaces/hylee/AnimeGANv2/app.py +++ /dev/null @@ -1,99 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations -import argparse -import functools -import os -import pathlib -import sys -from typing import Callable - - -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image - -from io import BytesIO -sys.path.insert(0, 'animeganv2') - -import test1 as test - -ORIGINAL_REPO_URL = 'https://github.com/TachibanaYoshino/AnimeGANv2' -TITLE = 'TachibanaYoshino/AnimeGANv2' -DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}. - -""" -ARTICLE = """ - -""" - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - - -def run( - image, -) -> tuple[PIL.Image.Image]: - curPath = os.path.abspath(os.path.dirname(__file__)) - out = test.test(checkpoint_dir=os.path.join(curPath,'animeganv2/checkpoint/generator_Shinkai_weight'), - style_name='Shinkai', test_file=image.name, if_adjust_brightness=True) - - return PIL.Image.open(out) - - -def main(): - gr.close_all() - - args = parse_args() - - #curPath = os.path.abspath(os.path.dirname(__file__)) - #init - #shinkai = ImportGraph(checkpoint_dir=os.path.join(curPath,'animeganv2/checkpoint/generator_Shinkai_weight')) - #hayao = ImportGraph(checkpoint_dir=os.path.join(curPath,'animeganv2/checkpoint/generator_Hayao_weight')) - #paprika = ImportGraph(checkpoint_dir=os.path.join(curPath,'animeganv2/checkpoint/generator_Paprika_weight')) - - func = functools.partial(run) - func = functools.update_wrapper(func, run) - - - gr.Interface( - func, - [ - gr.inputs.Image(type='file', label='Input Image'), - ], - [ - gr.outputs.Image( - type='pil', - label='Result'), - - ], - #examples=examples, - theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/hyxue/HiFiFace-inference-demo/app.py b/spaces/hyxue/HiFiFace-inference-demo/app.py deleted file mode 100644 index c907d9aa68ba085cbe831c6e43330f289ae8a907..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/app.py +++ /dev/null @@ -1,134 +0,0 @@ -import argparse - -import gradio as gr - -from benchmark.app_image import ImageSwap -from benchmark.app_video import VideoSwap -from configs.train_config import TrainConfig -from models.model import HifiFace - - -class ConfigPath: - face_detector_weights = "/checkpoints/face_detector/face_detector_scrfd_10g_bnkps.onnx" - model_path = "" - model_idx = 80000 - ffmpeg_device = "cuda" - device = "cuda" - - -def main(): - cfg = ConfigPath() - parser = argparse.ArgumentParser( - prog="benchmark", description="What the program does", epilog="Text at the bottom of help" - ) - parser.add_argument("-m", "--model_path", default="/checkpoints/hififace_pretrained/standard_model") - parser.add_argument("-i", "--model_idx", default="320000") - parser.add_argument("-f", "--ffmpeg_device", default="cpu") - parser.add_argument("-d", "--device", default="cpu") - - args = parser.parse_args() - - cfg.model_path = args.model_path - cfg.model_idx = int(args.model_idx) - cfg.ffmpeg_device = args.ffmpeg_device - cfg.device = args.device - opt = TrainConfig() - checkpoint = (cfg.model_path, cfg.model_idx) - model_path_1 = "/checkpoints/hififace_pretrained/with_gaze_and_mouth" - checkpoint1 = ("/checkpoints/hififace_pretrained/with_gaze_and_mouth", "190000") - model = HifiFace(opt.identity_extractor_config, is_training=False, device=cfg.device, load_checkpoint=checkpoint) - - model1 = HifiFace(opt.identity_extractor_config, is_training=False, device=cfg.device, load_checkpoint=checkpoint1) - image_infer = ImageSwap(cfg, model) - image_infer1 = ImageSwap(cfg, model1) - def inference_image(source_face, target_face, shape_rate, id_rate, iterations): - return image_infer.inference(source_face, target_face, shape_rate, id_rate, int(iterations)) - - def inference_image1(source_face, target_face, shape_rate, id_rate, iterations): - return image_infer1.inference(source_face, target_face, shape_rate, id_rate, int(iterations)) - - model_name = cfg.model_path.split("/")[-1] + ":" + f"{cfg.model_idx}" - model_name1 = model_path_1.split("/")[-1] + ":" + "190000" - with gr.Blocks(title="FaceSwap") as demo: - gr.Markdown( - f""" - ### standard model: {model_name} \n - ### model with eye and mouth hm loss: {model_name1} - """ - ) - with gr.Tab("Image swap with standard model"): - with gr.Row(): - source_image = gr.Image(shape=None, label="source image") - target_image = gr.Image(shape=None, label="target image") - with gr.Row(): - with gr.Column(): - structure_sim = gr.Slider( - minimum=0.0, - maximum=1.0, - value=1.0, - step=0.1, - label="3d similarity", - ) - id_sim = gr.Slider( - minimum=0.0, - maximum=1.0, - value=1.0, - step=0.1, - label="id similarity", - ) - iters = gr.Slider( - minimum=1, - maximum=10, - value=1, - step=1, - label="iters", - ) - image_btn = gr.Button("image swap") - output_image = gr.Image(shape=None, label="Result") - - image_btn.click( - fn=inference_image, - inputs=[source_image, target_image, structure_sim, id_sim, iters], - outputs=output_image, - ) - - with gr.Tab("Image swap with eye&mouth hm loss model"): - with gr.Row(): - source_image = gr.Image(shape=None, label="source image") - target_image = gr.Image(shape=None, label="target image") - with gr.Row(): - with gr.Column(): - structure_sim = gr.Slider( - minimum=0.0, - maximum=1.0, - value=1.0, - step=0.1, - label="3d similarity", - ) - id_sim = gr.Slider( - minimum=0.0, - maximum=1.0, - value=1.0, - step=0.1, - label="id similarity", - ) - iters = gr.Slider( - minimum=1, - maximum=10, - value=1, - step=1, - label="iters", - ) - image_btn = gr.Button("image swap") - output_image = gr.Image(shape=None, label="Result") - - image_btn.click( - fn=inference_image1, - inputs=[source_image, target_image, structure_sim, id_sim, iters], - outputs=output_image, - ) - demo.launch(server_name="0.0.0.0", server_port=7860) - - -if __name__ == "__main__": - main() diff --git a/spaces/hzwluoye/gpt4/client/css/options.css b/spaces/hzwluoye/gpt4/client/css/options.css deleted file mode 100644 index fb015a54e0a7f7ac521517357d812c994621592e..0000000000000000000000000000000000000000 --- a/spaces/hzwluoye/gpt4/client/css/options.css +++ /dev/null @@ -1,10 +0,0 @@ -.options-container { - display: flex; - flex-wrap: wrap; -} - -@media screen and (max-width: 990px) { - .options-container { - justify-content: space-between; - } -} diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (kal Ho Na Ho 720p Kickass Torrents).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (kal Ho Na Ho 720p Kickass Torrents).md deleted file mode 100644 index 9045fbb830c801c4a2e4dacee7c4cbe3ba146503..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (kal Ho Na Ho 720p Kickass Torrents).md +++ /dev/null @@ -1,10 +0,0 @@ -

    HD Online Player (kal ho na ho 720p kickass torrents)


    Download ⚹⚹⚹ https://urlin.us/2uEwiG



    - -Aashiqui 2; Daddy Ki Lagi; Rockstar: Baar Baar Dekho; Khoobsurat; Humko Dev ChandAashiqui 3; Mere Baap Pehle; Daddy Ki Lagi; Rockstar: Ek Villain; Om Shanti OmYeh Jawaani Hai Deewani; Taxi No. 9211, Yaar Mera Dil; Rockstar: Hum Tum; Raavan; Taare Zameen Par; Jawani Diwani; AashiquiDil Dhadakne Do; Baar Baar Dekho; Om Shanti Om - -Lucky Ali Shah; Dostana; UncleGandhigiri; Rockstar; Kyaa Kool Hain Hum; Rockstar: Hazaar Ek Aarambh Hai; Rockstar: Rockstar Dhaakad; Om Shanti OmCulcutta Express; Rockstar; Dhadkan; Kabhi Khushi Kabhie Gham; Rockstar; Khoobsurat; Rockstar: Raja Ko Rani; Rockstar: Om Shanti Om - -Naayaab; K.A.D.; Rockstar: Kyaa Kool Hai Hum; Rockstar: Rockstar; Rockstar: Rockstar The Show; Rockstar: Jab We MetMere Pandey Dutt Ki Kahaniyaan; Rockstar; Kyaa Kool Hain Hum; Rockstar; Om Shanti Om; Kabhi Khushi Kabhie Gham; Rockstar; Yuddh vihi khadki hain hum; Rockstar: Shaan; Kyaa kool Hain Hum; Rockstar: Raja Ko Rani; Kabhi Khushi Kabhie Gham; Rockstar: Jab We Met; Rockstar: Om Shanti Om; Aashiqui; Kaun Hai Toh Milaahi; Rockstar: Bobby; Kabhi Khushi Kabhie Gham; Rockstar: Hum Tum; Kabhi Khushi Kabhie Gham; Kabhi Khushi Kabhie Gham; Rockstar: O Hum Tum; Rockstar: Jo Hum Tumhe; Kabhi Khushi Kabhie Gham; Rockstar: Mere Brother Ki Dulhan; Rockstar: Yuddh; Rockstar: Yaar Mera Dil; Rockstar: Om Shanti Om; Rockstar: Rockstar The Show; Rockstar: Dhadkan; 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Depocam 11 Crack.md b/spaces/inreVtussa/clothingai/Examples/Depocam 11 Crack.md deleted file mode 100644 index 6f0b3c63e7b05b51c040d5db50b82ce6f79d7b70..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Depocam 11 Crack.md +++ /dev/null @@ -1,23 +0,0 @@ - -

    Depocam 11: A Powerful and User-Friendly CAM Software for Mold and Die Making

    -

    Depocam 11 is a CAM software that specializes in mold and die making. It is designed to help you create high-quality toolpaths for complex geometries and surfaces, with features such as:

    -
      -
    • Automatic feature recognition and machining strategies
    • -
    • Advanced 3D simulation and verification
    • -
    • Integrated CAD functionality and data exchange
    • -
    • Customizable postprocessors and macros
    • -
    • Optimized machining cycles and tool management
    • -
    -

    Depocam 11 is compatible with Windows 10 64-bit and requires a user ID to download[^1^]. You can get a free trial version or purchase a license from the official website of NC Graphics, the developer of Depocam. If you have a current maintenance contract, you will receive the user ID automatically by email[^1^].

    -

    Depocam 11 Crack


    Download File ⚹⚹⚹ https://tiurll.com/2uCiX6



    -

    Depocam 11 is a reliable and efficient solution for mold and die making, with a user-friendly interface and a comprehensive support system. Whether you are a beginner or an expert, Depocam 11 can help you achieve your machining goals with ease and accuracy.

    - -

    If you are looking for a CAM software that can handle complex mold and die making projects, you might want to consider Depocam 11. Depocam 11 is a software that has been developed by NC Graphics, a company that has over 30 years of experience in the CAM industry. Depocam 11 is trusted by many customers around the world who appreciate its quality and performance.

    -

    One of the advantages of Depocam 11 is that it has a simple and intuitive user interface that allows you to create and edit toolpaths with ease. You can also customize the interface to suit your preferences and workflow. Depocam 11 also has a comprehensive help system that provides you with tutorials, videos, manuals, and tips to help you get started and master the software.

    -

    Another benefit of Depocam 11 is that it has a powerful and flexible 3D simulation and verification module that lets you check your toolpaths for errors, collisions, gouges, and excess material. You can also view the machining process from different angles and perspectives, and generate reports and statistics. Depocam 11 also supports cutter animation for several toolpaths at once, which can help you optimize your machining cycles and tool management.

    - -

    Before you decide to purchase or download Depocam 11, you should make sure that your PC meets the minimum system requirements for running the software. According to the official website of NC Graphics, the system requirement for Depocam 11 is Windows 10 64-bit[^1^]. Older systems are not supported since version 18. You also need a user ID to download Depocam 11, which you can get from NC Graphics if you have a current maintenance contract[^1^].

    -

    Depocam 11 does not specify any other hardware requirements, such as processor, memory, or storage. However, you should consider that CAM software in general can be demanding on your PC resources, especially when working with large and complex models and toolpaths. Therefore, it is recommended that you have a PC that has a fast and multi-core processor, at least 4 GB of RAM (or more if possible), and enough storage space for your projects and files.

    -

    If you are not sure if your PC can run Depocam 11 smoothly, you can always try the free trial version first and see how it performs on your system. The free trial version is available for download from the NC Graphics website and it allows you to use all the features of Depocam 11 for a limited time. You can also contact NC Graphics for any technical support or questions you may have about Depocam 11.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Diablo III V1.0.2.9991 Client Server Emulator REVOLT [CRACKED].md b/spaces/inreVtussa/clothingai/Examples/Diablo III V1.0.2.9991 Client Server Emulator REVOLT [CRACKED].md deleted file mode 100644 index 51445ed82cb31028d8c26d13f9ca8371845c7eb0..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Diablo III V1.0.2.9991 Client Server Emulator REVOLT [CRACKED].md +++ /dev/null @@ -1,13 +0,0 @@ -

    Diablo III v1.0.2.9991 Client Server Emulator REVOLT


    DOWNLOAD ✫✫✫ https://tiurll.com/2uCiLe



    - -Knock Out Movie Download In Hindi Hd 1080p Diablo III V1.0.2.9991 Client Server Emulator-REVOLT Team Mooege PC ENG 2012 The Legend Of Bhagat Singh Download. (Read More) -Diablo III Download In Hindi Full Movie Free Online Hd 1080p Diablo III V1.0.2.9991 Client Server Emulator-REVOLT Team Mooege PC ENG 2012 The Legend of Bhagat Singh Download... -(Read More) -Immortal Kombat 3 Game Download In Hindi Game Download In Hindi Gamers' Choice New Edition Full Action Movie Download (Read More) -Diablo III - Official Trailer -(Read More -Knock Out Movie Download In Hindi Download Knock Out Movie Online Full Mp4 Hd 1080p Diablo III V1.0.2.9991 Client Server Emulator-REVOLT Team (Read More -(Read More 8a78ff9644
    -
    -
    -

    diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/input.tsx b/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/input.tsx deleted file mode 100644 index 0757ddebdca3800bbd4a46fe1c2c17dff86c5e2f..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from "react" - -import { cn } from "@/lib/utils" - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = "Input" - -export { Input } diff --git a/spaces/jennysun/jwsun-multisubject-render-model/dataset/catalog.py b/spaces/jennysun/jwsun-multisubject-render-model/dataset/catalog.py deleted file mode 100644 index b622e477dae7cb4ba5c599fa7d2f7220b4311885..0000000000000000000000000000000000000000 --- a/spaces/jennysun/jwsun-multisubject-render-model/dataset/catalog.py +++ /dev/null @@ -1,72 +0,0 @@ -import os - -class DatasetCatalog: - def __init__(self, ROOT, which_embedder): - assert which_embedder in ['clip', 'bert'] - - # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # - - - self.VGGrounding = { - "target": "dataset.tsv_dataset.TSVDataset", - "train_params": dict( - tsv_path=os.path.join(ROOT,'GROUNDING/gqa/tsv/train-00.tsv'), - ) - } - - - # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # - - - self.FlickrGrounding = { - "target": "dataset.tsv_dataset.TSVDataset", - "train_params":dict( - tsv_path=os.path.join(ROOT,'GROUNDING/flickr30k/tsv/train-00.tsv'), - ) - } - - # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # - - self.SBUGrounding = { - "target": "dataset.tsv_dataset.TSVDataset", - "train_params":dict( - tsv_path=os.path.join(ROOT,'GROUNDING/SBU/tsv/train-00.tsv'), - ) - } - - - # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # - - - self.CC3MGrounding = { - "target": "dataset.tsv_dataset.TSVDataset", - "train_params":dict( - tsv_path=os.path.join(ROOT,'GROUNDING/CC3M/tsv/train-00.tsv'), - ) - } - - - # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # - - - self.CC12MGrounding = { - "target": "dataset.tsv_dataset.TSVDataset", - "train_params":dict( - tsv_path=os.path.join(ROOT,'GROUNDING/CC12M/tsv/train-00.tsv'), - ) - } - - - # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # - - # temp = 'category_embedding_clip.pth' if which_embedder == 'clip' else 'category_embedding_bert.pth' - # obj365_category_embedding_path = os.path.join(ROOT, 'OBJECTS365', temp) - - self.Obj365Detection = { - "target": "dataset.tsv_dataset.TSVDataset", - "train_params":dict( - tsv_path=os.path.join(ROOT,'OBJECTS365/tsv/train-00.tsv'), - ), - } - - diff --git a/spaces/jeonchangbin49/De-limiter/utils/loudness_utils.py b/spaces/jeonchangbin49/De-limiter/utils/loudness_utils.py deleted file mode 100644 index faf6fcbe34ca631709b5ca482e1f869132a22b27..0000000000000000000000000000000000000000 --- a/spaces/jeonchangbin49/De-limiter/utils/loudness_utils.py +++ /dev/null @@ -1,71 +0,0 @@ -import random - -import numpy as np -import torch - - -def linear2db(x, eps=1e-5, scale=20): - return scale * np.log10(x + eps) - - -def db2linear(x, eps=1e-5, scale=20): - return 10 ** (x / scale) - eps - - -def normalize_mag_spec(S, min_level_db=-100.0): - return torch.clamp((S - min_level_db) / -min_level_db, min=0.0, max=1.0) - - -def denormalize_mag_spec(S, min_level_db=-100.0): - return torch.clamp(S, min=0.0, max=1.0) * -min_level_db + min_level_db - - -def loudness_match_and_norm(audio1, audio2, meter): - lufs_1 = meter.integrated_loudness(audio1) - lufs_2 = meter.integrated_loudness(audio2) - - if np.isinf(lufs_1) or np.isinf(lufs_2): - return audio1, audio2 - else: - audio2 = audio2 * db2linear(lufs_1 - lufs_2) - - return audio1, audio2 - - -def loudness_normal_match_and_norm(audio1, audio2, meter): - lufs_1 = meter.integrated_loudness(audio1) - lufs_2 = meter.integrated_loudness(audio2) - - if np.isinf(lufs_1) or np.isinf(lufs_2): - return audio1, audio2 - else: - target_lufs = random.normalvariate(lufs_1, 6.0) - audio2 = audio2 * db2linear(target_lufs - lufs_2) - - return audio1, audio2 - - -def loudness_normal_match_and_norm_output_louder_first(audio1, audio2, meter): - lufs_1 = meter.integrated_loudness(audio1) - lufs_2 = meter.integrated_loudness(audio2) - - if np.isinf(lufs_1) or np.isinf(lufs_2): - return audio1, audio2 - else: - target_lufs = random.normalvariate( - lufs_1 - 2.0, 2.0 - ) # we want audio1 to be louder than audio2 about target_lufs_diff - audio2 = audio2 * db2linear(target_lufs - lufs_2) - - return audio1, audio2 - - -def loudnorm(audio, target_lufs, meter, eps=1e-5): - lufs = meter.integrated_loudness(audio) - if np.isinf(lufs): - return audio, 0.0 - else: - adjusted_gain = target_lufs - lufs - audio = audio * db2linear(adjusted_gain, eps) - - return audio, adjusted_gain diff --git a/spaces/jerpint/RAGTheDocs/app.py b/spaces/jerpint/RAGTheDocs/app.py deleted file mode 100644 index 8533b2de8aec626f2a94d6db71f5740c09d2d513..0000000000000000000000000000000000000000 --- a/spaces/jerpint/RAGTheDocs/app.py +++ /dev/null @@ -1,187 +0,0 @@ -import os -from typing import Optional, Tuple - -import gradio as gr -import pandas as pd -from buster.completers import Completion - -# from embed_docs import embed_rtd_website -# from rtd_scraper.scrape_rtd import scrape_rtd -from embed_docs import embed_documents -import cfg -from cfg import setup_buster - -# Typehint for chatbot history -ChatHistory = list[list[Optional[str], Optional[str]]] - - -# Because this is a one-click deploy app, we will be relying on env. variables being set -openai_api_key = os.getenv("OPENAI_API_KEY") # Mandatory for app to work -readthedocs_url = os.getenv("READTHEDOCS_URL") # Mandatory for app to work as intended -readthedocs_version = os.getenv("READTHEDOCS_VERSION") - -if openai_api_key is None: - print( - "Warning: No OPENAI_API_KEY detected. Set it with 'export OPENAI_API_KEY=sk-...'." - ) - -if readthedocs_url is None: - raise ValueError( - "No READTHEDOCS_URL detected. Set it with e.g. 'export READTHEDOCS_URL=https://orion.readthedocs.io/'" - ) - -if readthedocs_version is None: - print( - """ - Warning: No READTHEDOCS_VERSION detected. If multiple versions of the docs exist, they will all be scraped. - Set it with e.g. 'export READTHEDOCS_VERSION=en/stable' - """ - ) - - -# Override to put it anywhere -save_directory = "outputs/" - -# scrape and embed content from readthedocs website -# You only need to embed the first time the app runs, comment it out to skip -embed_documents( - homepage_url=readthedocs_url, - save_directory=save_directory, - target_version=readthedocs_version, -) - -# Setup RAG agent -buster = setup_buster(cfg.buster_cfg) - - -# Setup Gradio app -def add_user_question( - user_question: str, chat_history: Optional[ChatHistory] = None -) -> ChatHistory: - """Adds a user's question to the chat history. - - If no history is provided, the first element of the history will be the user conversation. - """ - if chat_history is None: - chat_history = [] - chat_history.append([user_question, None]) - return chat_history - - -def format_sources(matched_documents: pd.DataFrame) -> str: - if len(matched_documents) == 0: - return "" - - matched_documents.similarity_to_answer = ( - matched_documents.similarity_to_answer * 100 - ) - - # drop duplicate pages (by title), keep highest ranking ones - matched_documents = matched_documents.sort_values( - "similarity_to_answer", ascending=False - ).drop_duplicates("title", keep="first") - - documents_answer_template: str = "📝 Here are the sources I used to answer your question:\n\n{documents}\n\n{footnote}" - document_template: str = "[🔗 {document.title}]({document.url}), relevance: {document.similarity_to_answer:2.1f} %" - - documents = "\n".join( - [ - document_template.format(document=document) - for _, document in matched_documents.iterrows() - ] - ) - footnote: str = "I'm a bot 🤖 and not always perfect." - - return documents_answer_template.format(documents=documents, footnote=footnote) - - -def add_sources(history, completion): - if completion.answer_relevant: - formatted_sources = format_sources(completion.matched_documents) - history.append([None, formatted_sources]) - - return history - - -def chat(chat_history: ChatHistory) -> Tuple[ChatHistory, Completion]: - """Answer a user's question using retrieval augmented generation.""" - - # We assume that the question is the user's last interaction - user_input = chat_history[-1][0] - - # Do retrieval + augmented generation with buster - completion = buster.process_input(user_input) - - # Stream tokens one at a time to the user - chat_history[-1][1] = "" - for token in completion.answer_generator: - chat_history[-1][1] += token - - yield chat_history, completion - - -demo = gr.Blocks() -with demo: - with gr.Row(): - gr.Markdown("

    RAGTheDocs

    ") - - gr.Markdown( - """ - ## About - [RAGTheDocs](https://github.com/jerpint/RAGTheDocs) allows you to ask questions about any documentation hosted on readthedocs. - Simply clone this space and set the environment variables: - - * `OPENAI_API_KEY` (required): Needed for the app to work, e.g. `sk-...` - * `READTHEDOCS_URL` (required): The url of the website you are interested in scraping (must be built with - sphinx/readthedocs). e.g. `https://orion.readthedocs.io` - * `READTHEDOCS_VERSION` (optional): This is important if there exist multiple versions of the docs (e.g. `en/v0.2.7` or `en/latest`). If left empty, it will scrape all available versions (there can be many for open-source projects!). - - Try it out by asking a question below 👇 about [orion](https://orion.readthedocs.io/), an open-source hyperparameter optimization library. - - ## How it works - This app uses [Buster 🤖](https://github.com/jerpint/buster) and ChatGPT to search the docs for relevant info and - answer questions. - View the code on the [project homepage](https://github.com/jerpint/RAGTheDocs) - """ - ) - - chatbot = gr.Chatbot() - - with gr.Row(): - question = gr.Textbox( - label="What's your question?", - placeholder="Type your question here...", - lines=1, - ) - submit = gr.Button(value="Send", variant="secondary") - - examples = gr.Examples( - examples=[ - "How can I install the library?", - "What dependencies are required?", - "Give a brief overview of the library.", - ], - inputs=question, - ) - - response = gr.State() - - # fmt: off - gr.on( - triggers=[submit.click, question.submit], - fn=add_user_question, - inputs=[question], - outputs=[chatbot] - ).then( - chat, - inputs=[chatbot], - outputs=[chatbot, response] - ).then( - add_sources, - inputs=[chatbot, response], - outputs=[chatbot] - ) - - -demo.queue(concurrency_count=8) -demo.launch(share=False) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_exceptions.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_exceptions.py deleted file mode 100644 index 92ccd77a2de2e865e92c5e6943a66bdaff91f840..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_exceptions.py +++ /dev/null @@ -1,94 +0,0 @@ -from __future__ import annotations - -from traceback import format_exception - - -class BrokenResourceError(Exception): - """ - Raised when trying to use a resource that has been rendered unusable due to external causes - (e.g. a send stream whose peer has disconnected). - """ - - -class BrokenWorkerProcess(Exception): - """ - Raised by :func:`run_sync_in_process` if the worker process terminates abruptly or otherwise - misbehaves. - """ - - -class BusyResourceError(Exception): - """Raised when two tasks are trying to read from or write to the same resource concurrently.""" - - def __init__(self, action: str): - super().__init__(f"Another task is already {action} this resource") - - -class ClosedResourceError(Exception): - """Raised when trying to use a resource that has been closed.""" - - -class DelimiterNotFound(Exception): - """ - Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the - maximum number of bytes has been read without the delimiter being found. - """ - - def __init__(self, max_bytes: int) -> None: - super().__init__( - f"The delimiter was not found among the first {max_bytes} bytes" - ) - - -class EndOfStream(Exception): - """Raised when trying to read from a stream that has been closed from the other end.""" - - -class ExceptionGroup(BaseException): - """ - Raised when multiple exceptions have been raised in a task group. - - :var ~typing.Sequence[BaseException] exceptions: the sequence of exceptions raised together - """ - - SEPARATOR = "----------------------------\n" - - exceptions: list[BaseException] - - def __str__(self) -> str: - tracebacks = [ - "".join(format_exception(type(exc), exc, exc.__traceback__)) - for exc in self.exceptions - ] - return ( - f"{len(self.exceptions)} exceptions were raised in the task group:\n" - f"{self.SEPARATOR}{self.SEPARATOR.join(tracebacks)}" - ) - - def __repr__(self) -> str: - exception_reprs = ", ".join(repr(exc) for exc in self.exceptions) - return f"<{self.__class__.__name__}: {exception_reprs}>" - - -class IncompleteRead(Exception): - """ - Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_exactly` or - :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the - connection is closed before the requested amount of bytes has been read. - """ - - def __init__(self) -> None: - super().__init__( - "The stream was closed before the read operation could be completed" - ) - - -class TypedAttributeLookupError(LookupError): - """ - Raised by :meth:`~anyio.TypedAttributeProvider.extra` when the given typed attribute is not - found and no default value has been given. - """ - - -class WouldBlock(Exception): - """Raised by ``X_nowait`` functions if ``X()`` would block.""" diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/node.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/node.py deleted file mode 100644 index c670243c527a9aa3da4e33e1ef7185658c3f8d52..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/node.py +++ /dev/null @@ -1,360 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2001-2017 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -"""DNS nodes. A node is a set of rdatasets.""" - -import enum -import io -from typing import Any, Dict, Optional - -import dns.immutable -import dns.name -import dns.rdataclass -import dns.rdataset -import dns.rdatatype -import dns.renderer -import dns.rrset - -_cname_types = { - dns.rdatatype.CNAME, -} - -# "neutral" types can coexist with a CNAME and thus are not "other data" -_neutral_types = { - dns.rdatatype.NSEC, # RFC 4035 section 2.5 - dns.rdatatype.NSEC3, # This is not likely to happen, but not impossible! - dns.rdatatype.KEY, # RFC 4035 section 2.5, RFC 3007 -} - - -def _matches_type_or_its_signature(rdtypes, rdtype, covers): - return rdtype in rdtypes or (rdtype == dns.rdatatype.RRSIG and covers in rdtypes) - - -@enum.unique -class NodeKind(enum.Enum): - """Rdatasets in nodes""" - - REGULAR = 0 # a.k.a "other data" - NEUTRAL = 1 - CNAME = 2 - - @classmethod - def classify( - cls, rdtype: dns.rdatatype.RdataType, covers: dns.rdatatype.RdataType - ) -> "NodeKind": - if _matches_type_or_its_signature(_cname_types, rdtype, covers): - return NodeKind.CNAME - elif _matches_type_or_its_signature(_neutral_types, rdtype, covers): - return NodeKind.NEUTRAL - else: - return NodeKind.REGULAR - - @classmethod - def classify_rdataset(cls, rdataset: dns.rdataset.Rdataset) -> "NodeKind": - return cls.classify(rdataset.rdtype, rdataset.covers) - - -class Node: - - """A Node is a set of rdatasets. - - A node is either a CNAME node or an "other data" node. A CNAME - node contains only CNAME, KEY, NSEC, and NSEC3 rdatasets along with their - covering RRSIG rdatasets. An "other data" node contains any - rdataset other than a CNAME or RRSIG(CNAME) rdataset. When - changes are made to a node, the CNAME or "other data" state is - always consistent with the update, i.e. the most recent change - wins. For example, if you have a node which contains a CNAME - rdataset, and then add an MX rdataset to it, then the CNAME - rdataset will be deleted. Likewise if you have a node containing - an MX rdataset and add a CNAME rdataset, the MX rdataset will be - deleted. - """ - - __slots__ = ["rdatasets"] - - def __init__(self): - # the set of rdatasets, represented as a list. - self.rdatasets = [] - - def to_text(self, name: dns.name.Name, **kw: Dict[str, Any]) -> str: - """Convert a node to text format. - - Each rdataset at the node is printed. Any keyword arguments - to this method are passed on to the rdataset's to_text() method. - - *name*, a ``dns.name.Name``, the owner name of the - rdatasets. - - Returns a ``str``. - - """ - - s = io.StringIO() - for rds in self.rdatasets: - if len(rds) > 0: - s.write(rds.to_text(name, **kw)) # type: ignore[arg-type] - s.write("\n") - return s.getvalue()[:-1] - - def __repr__(self): - return "" - - def __eq__(self, other): - # - # This is inefficient. Good thing we don't need to do it much. - # - for rd in self.rdatasets: - if rd not in other.rdatasets: - return False - for rd in other.rdatasets: - if rd not in self.rdatasets: - return False - return True - - def __ne__(self, other): - return not self.__eq__(other) - - def __len__(self): - return len(self.rdatasets) - - def __iter__(self): - return iter(self.rdatasets) - - def _append_rdataset(self, rdataset): - """Append rdataset to the node with special handling for CNAME and - other data conditions. - - Specifically, if the rdataset being appended has ``NodeKind.CNAME``, - then all rdatasets other than KEY, NSEC, NSEC3, and their covering - RRSIGs are deleted. If the rdataset being appended has - ``NodeKind.REGULAR`` then CNAME and RRSIG(CNAME) are deleted. - """ - # Make having just one rdataset at the node fast. - if len(self.rdatasets) > 0: - kind = NodeKind.classify_rdataset(rdataset) - if kind == NodeKind.CNAME: - self.rdatasets = [ - rds - for rds in self.rdatasets - if NodeKind.classify_rdataset(rds) != NodeKind.REGULAR - ] - elif kind == NodeKind.REGULAR: - self.rdatasets = [ - rds - for rds in self.rdatasets - if NodeKind.classify_rdataset(rds) != NodeKind.CNAME - ] - # Otherwise the rdataset is NodeKind.NEUTRAL and we do not need to - # edit self.rdatasets. - self.rdatasets.append(rdataset) - - def find_rdataset( - self, - rdclass: dns.rdataclass.RdataClass, - rdtype: dns.rdatatype.RdataType, - covers: dns.rdatatype.RdataType = dns.rdatatype.NONE, - create: bool = False, - ) -> dns.rdataset.Rdataset: - """Find an rdataset matching the specified properties in the - current node. - - *rdclass*, a ``dns.rdataclass.RdataClass``, the class of the rdataset. - - *rdtype*, a ``dns.rdatatype.RdataType``, the type of the rdataset. - - *covers*, a ``dns.rdatatype.RdataType``, the covered type. - Usually this value is ``dns.rdatatype.NONE``, but if the - rdtype is ``dns.rdatatype.SIG`` or ``dns.rdatatype.RRSIG``, - then the covers value will be the rdata type the SIG/RRSIG - covers. The library treats the SIG and RRSIG types as if they - were a family of types, e.g. RRSIG(A), RRSIG(NS), RRSIG(SOA). - This makes RRSIGs much easier to work with than if RRSIGs - covering different rdata types were aggregated into a single - RRSIG rdataset. - - *create*, a ``bool``. If True, create the rdataset if it is not found. - - Raises ``KeyError`` if an rdataset of the desired type and class does - not exist and *create* is not ``True``. - - Returns a ``dns.rdataset.Rdataset``. - """ - - for rds in self.rdatasets: - if rds.match(rdclass, rdtype, covers): - return rds - if not create: - raise KeyError - rds = dns.rdataset.Rdataset(rdclass, rdtype, covers) - self._append_rdataset(rds) - return rds - - def get_rdataset( - self, - rdclass: dns.rdataclass.RdataClass, - rdtype: dns.rdatatype.RdataType, - covers: dns.rdatatype.RdataType = dns.rdatatype.NONE, - create: bool = False, - ) -> Optional[dns.rdataset.Rdataset]: - """Get an rdataset matching the specified properties in the - current node. - - None is returned if an rdataset of the specified type and - class does not exist and *create* is not ``True``. - - *rdclass*, an ``int``, the class of the rdataset. - - *rdtype*, an ``int``, the type of the rdataset. - - *covers*, an ``int``, the covered type. Usually this value is - dns.rdatatype.NONE, but if the rdtype is dns.rdatatype.SIG or - dns.rdatatype.RRSIG, then the covers value will be the rdata - type the SIG/RRSIG covers. The library treats the SIG and RRSIG - types as if they were a family of - types, e.g. RRSIG(A), RRSIG(NS), RRSIG(SOA). This makes RRSIGs much - easier to work with than if RRSIGs covering different rdata - types were aggregated into a single RRSIG rdataset. - - *create*, a ``bool``. If True, create the rdataset if it is not found. - - Returns a ``dns.rdataset.Rdataset`` or ``None``. - """ - - try: - rds = self.find_rdataset(rdclass, rdtype, covers, create) - except KeyError: - rds = None - return rds - - def delete_rdataset( - self, - rdclass: dns.rdataclass.RdataClass, - rdtype: dns.rdatatype.RdataType, - covers: dns.rdatatype.RdataType = dns.rdatatype.NONE, - ) -> None: - """Delete the rdataset matching the specified properties in the - current node. - - If a matching rdataset does not exist, it is not an error. - - *rdclass*, an ``int``, the class of the rdataset. - - *rdtype*, an ``int``, the type of the rdataset. - - *covers*, an ``int``, the covered type. - """ - - rds = self.get_rdataset(rdclass, rdtype, covers) - if rds is not None: - self.rdatasets.remove(rds) - - def replace_rdataset(self, replacement: dns.rdataset.Rdataset) -> None: - """Replace an rdataset. - - It is not an error if there is no rdataset matching *replacement*. - - Ownership of the *replacement* object is transferred to the node; - in other words, this method does not store a copy of *replacement* - at the node, it stores *replacement* itself. - - *replacement*, a ``dns.rdataset.Rdataset``. - - Raises ``ValueError`` if *replacement* is not a - ``dns.rdataset.Rdataset``. - """ - - if not isinstance(replacement, dns.rdataset.Rdataset): - raise ValueError("replacement is not an rdataset") - if isinstance(replacement, dns.rrset.RRset): - # RRsets are not good replacements as the match() method - # is not compatible. - replacement = replacement.to_rdataset() - self.delete_rdataset( - replacement.rdclass, replacement.rdtype, replacement.covers - ) - self._append_rdataset(replacement) - - def classify(self) -> NodeKind: - """Classify a node. - - A node which contains a CNAME or RRSIG(CNAME) is a - ``NodeKind.CNAME`` node. - - A node which contains only "neutral" types, i.e. types allowed to - co-exist with a CNAME, is a ``NodeKind.NEUTRAL`` node. The neutral - types are NSEC, NSEC3, KEY, and their associated RRSIGS. An empty node - is also considered neutral. - - A node which contains some rdataset which is not a CNAME, RRSIG(CNAME), - or a neutral type is a a ``NodeKind.REGULAR`` node. Regular nodes are - also commonly referred to as "other data". - """ - for rdataset in self.rdatasets: - kind = NodeKind.classify(rdataset.rdtype, rdataset.covers) - if kind != NodeKind.NEUTRAL: - return kind - return NodeKind.NEUTRAL - - def is_immutable(self) -> bool: - return False - - -@dns.immutable.immutable -class ImmutableNode(Node): - def __init__(self, node): - super().__init__() - self.rdatasets = tuple( - [dns.rdataset.ImmutableRdataset(rds) for rds in node.rdatasets] - ) - - def find_rdataset( - self, - rdclass: dns.rdataclass.RdataClass, - rdtype: dns.rdatatype.RdataType, - covers: dns.rdatatype.RdataType = dns.rdatatype.NONE, - create: bool = False, - ) -> dns.rdataset.Rdataset: - if create: - raise TypeError("immutable") - return super().find_rdataset(rdclass, rdtype, covers, False) - - def get_rdataset( - self, - rdclass: dns.rdataclass.RdataClass, - rdtype: dns.rdatatype.RdataType, - covers: dns.rdatatype.RdataType = dns.rdatatype.NONE, - create: bool = False, - ) -> Optional[dns.rdataset.Rdataset]: - if create: - raise TypeError("immutable") - return super().get_rdataset(rdclass, rdtype, covers, False) - - def delete_rdataset( - self, - rdclass: dns.rdataclass.RdataClass, - rdtype: dns.rdatatype.RdataType, - covers: dns.rdatatype.RdataType = dns.rdatatype.NONE, - ) -> None: - raise TypeError("immutable") - - def replace_rdataset(self, replacement: dns.rdataset.Rdataset) -> None: - raise TypeError("immutable") - - def is_immutable(self) -> bool: - return True diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_T_A_T_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_T_A_T_.py deleted file mode 100644 index 1769de91b5f0416354e040b52e3615c6824fd2f9..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_T_A_T_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .otBase import BaseTTXConverter - - -class table_S_T_A_T_(BaseTTXConverter): - pass diff --git a/spaces/jonatanklosko/chai/assets/vendor/topbar.js b/spaces/jonatanklosko/chai/assets/vendor/topbar.js deleted file mode 100644 index 41957274d71b29628e6aabe7ca9fd8750eff8a3e..0000000000000000000000000000000000000000 --- a/spaces/jonatanklosko/chai/assets/vendor/topbar.js +++ /dev/null @@ -1,165 +0,0 @@ -/** - * @license MIT - * topbar 2.0.0, 2023-02-04 - * https://buunguyen.github.io/topbar - * Copyright (c) 2021 Buu Nguyen - */ -(function (window, document) { - "use strict"; - - // https://gist.github.com/paulirish/1579671 - (function () { - var lastTime = 0; - var vendors = ["ms", "moz", "webkit", "o"]; - for (var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) { - window.requestAnimationFrame = - window[vendors[x] + "RequestAnimationFrame"]; - window.cancelAnimationFrame = - window[vendors[x] + "CancelAnimationFrame"] || - window[vendors[x] + "CancelRequestAnimationFrame"]; - } - if (!window.requestAnimationFrame) - window.requestAnimationFrame = function (callback, element) { - var currTime = new Date().getTime(); - var timeToCall = Math.max(0, 16 - (currTime - lastTime)); - var id = window.setTimeout(function () { - callback(currTime + timeToCall); - }, timeToCall); - lastTime = currTime + timeToCall; - return id; - }; - if (!window.cancelAnimationFrame) - window.cancelAnimationFrame = function (id) { - clearTimeout(id); - }; - })(); - - var canvas, - currentProgress, - showing, - progressTimerId = null, - fadeTimerId = null, - delayTimerId = null, - addEvent = function (elem, type, handler) { - if (elem.addEventListener) elem.addEventListener(type, handler, false); - else if (elem.attachEvent) elem.attachEvent("on" + type, handler); - else elem["on" + type] = handler; - }, - options = { - autoRun: true, - barThickness: 3, - barColors: { - 0: "rgba(26, 188, 156, .9)", - ".25": "rgba(52, 152, 219, .9)", - ".50": "rgba(241, 196, 15, .9)", - ".75": "rgba(230, 126, 34, .9)", - "1.0": "rgba(211, 84, 0, .9)", - }, - shadowBlur: 10, - shadowColor: "rgba(0, 0, 0, .6)", - className: null, - }, - repaint = function () { - canvas.width = window.innerWidth; - canvas.height = options.barThickness * 5; // need space for shadow - - var ctx = canvas.getContext("2d"); - ctx.shadowBlur = options.shadowBlur; - ctx.shadowColor = options.shadowColor; - - var lineGradient = ctx.createLinearGradient(0, 0, canvas.width, 0); - for (var stop in options.barColors) - lineGradient.addColorStop(stop, options.barColors[stop]); - ctx.lineWidth = options.barThickness; - ctx.beginPath(); - ctx.moveTo(0, options.barThickness / 2); - ctx.lineTo( - Math.ceil(currentProgress * canvas.width), - options.barThickness / 2 - ); - ctx.strokeStyle = lineGradient; - ctx.stroke(); - }, - createCanvas = function () { - canvas = document.createElement("canvas"); - var style = canvas.style; - style.position = "fixed"; - style.top = style.left = style.right = style.margin = style.padding = 0; - style.zIndex = 100001; - style.display = "none"; - if (options.className) canvas.classList.add(options.className); - document.body.appendChild(canvas); - addEvent(window, "resize", repaint); - }, - topbar = { - config: function (opts) { - for (var key in opts) - if (options.hasOwnProperty(key)) options[key] = opts[key]; - }, - show: function (delay) { - if (showing) return; - if (delay) { - if (delayTimerId) return; - delayTimerId = setTimeout(() => topbar.show(), delay); - } else { - showing = true; - if (fadeTimerId !== null) window.cancelAnimationFrame(fadeTimerId); - if (!canvas) createCanvas(); - canvas.style.opacity = 1; - canvas.style.display = "block"; - topbar.progress(0); - if (options.autoRun) { - (function loop() { - progressTimerId = window.requestAnimationFrame(loop); - topbar.progress( - "+" + 0.05 * Math.pow(1 - Math.sqrt(currentProgress), 2) - ); - })(); - } - } - }, - progress: function (to) { - if (typeof to === "undefined") return currentProgress; - if (typeof to === "string") { - to = - (to.indexOf("+") >= 0 || to.indexOf("-") >= 0 - ? currentProgress - : 0) + parseFloat(to); - } - currentProgress = to > 1 ? 1 : to; - repaint(); - return currentProgress; - }, - hide: function () { - clearTimeout(delayTimerId); - delayTimerId = null; - if (!showing) return; - showing = false; - if (progressTimerId != null) { - window.cancelAnimationFrame(progressTimerId); - progressTimerId = null; - } - (function loop() { - if (topbar.progress("+.1") >= 1) { - canvas.style.opacity -= 0.05; - if (canvas.style.opacity <= 0.05) { - canvas.style.display = "none"; - fadeTimerId = null; - return; - } - } - fadeTimerId = window.requestAnimationFrame(loop); - })(); - }, - }; - - if (typeof module === "object" && typeof module.exports === "object") { - module.exports = topbar; - } else if (typeof define === "function" && define.amd) { - define(function () { - return topbar; - }); - } else { - this.topbar = topbar; - } -}.call(this, window, document)); diff --git "a/spaces/joshen/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/joshen/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" deleted file mode 100644 index b0cef10a55493704e016ea115c7e9635e35f1269..0000000000000000000000000000000000000000 --- "a/spaces/joshen/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" +++ /dev/null @@ -1,186 +0,0 @@ -from predict import predict_no_ui -from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down, get_conf -import re, requests, unicodedata, os - -def download_arxiv_(url_pdf): - if 'arxiv.org' not in url_pdf: - if ('.' in url_pdf) and ('/' not in url_pdf): - new_url = 'https://arxiv.org/abs/'+url_pdf - print('下载编号:', url_pdf, '自动定位:', new_url) - # download_arxiv_(new_url) - return download_arxiv_(new_url) - else: - print('不能识别的URL!') - return None - if 'abs' in url_pdf: - url_pdf = url_pdf.replace('abs', 'pdf') - url_pdf = url_pdf + '.pdf' - - url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs') - title, other_info = get_name(_url_=url_abs) - - paper_id = title.split()[0] # '[1712.00559]' - if '2' in other_info['year']: - title = other_info['year'] + ' ' + title - - known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI'] - for k in known_conf: - if k in other_info['comment']: - title = k + ' ' + title - - download_dir = './gpt_log/arxiv/' - os.makedirs(download_dir, exist_ok=True) - - title_str = title.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - - requests_pdf_url = url_pdf - file_path = download_dir+title_str - # if os.path.exists(file_path): - # print('返回缓存文件') - # return './gpt_log/arxiv/'+title_str - - print('下载中') - proxies, = get_conf('proxies') - r = requests.get(requests_pdf_url, proxies=proxies) - with open(file_path, 'wb+') as f: - f.write(r.content) - print('下载完成') - - # print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf)) - # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True) - - x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors']) - x = x.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - return './gpt_log/arxiv/'+title_str, other_info - - -def get_name(_url_): - import os - from bs4 import BeautifulSoup - print('正在获取文献名!') - print(_url_) - - # arxiv_recall = {} - # if os.path.exists('./arxiv_recall.pkl'): - # with open('./arxiv_recall.pkl', 'rb') as f: - # arxiv_recall = pickle.load(f) - - # if _url_ in arxiv_recall: - # print('在缓存中') - # return arxiv_recall[_url_] - - proxies, = get_conf('proxies') - res = requests.get(_url_, proxies=proxies) - - bs = BeautifulSoup(res.text, 'html.parser') - other_details = {} - - # get year - try: - year = bs.find_all(class_='dateline')[0].text - year = re.search(r'(\d{4})', year, re.M | re.I).group(1) - other_details['year'] = year - abstract = bs.find_all(class_='abstract mathjax')[0].text - other_details['abstract'] = abstract - except: - other_details['year'] = '' - print('年份获取失败') - - # get author - try: - authors = bs.find_all(class_='authors')[0].text - authors = authors.split('Authors:')[1] - other_details['authors'] = authors - except: - other_details['authors'] = '' - print('authors获取失败') - - # get comment - try: - comment = bs.find_all(class_='metatable')[0].text - real_comment = None - for item in comment.replace('\n', ' ').split(' '): - if 'Comments' in item: - real_comment = item - if real_comment is not None: - other_details['comment'] = real_comment - else: - other_details['comment'] = '' - except: - other_details['comment'] = '' - print('年份获取失败') - - title_str = BeautifulSoup( - res.text, 'html.parser').find('title').contents[0] - print('获取成功:', title_str) - # arxiv_recall[_url_] = (title_str+'.pdf', other_details) - # with open('./arxiv_recall.pkl', 'wb') as f: - # pickle.dump(arxiv_recall, f) - - return title_str+'.pdf', other_details - - - -@CatchException -def 下载arxiv论文并翻译摘要(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - - CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……" - import glob - import os - - # 基本信息:功能、贡献者 - chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO]) - yield chatbot, history, '正常' - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield chatbot, history, '正常' - return - - # 清空历史,以免输入溢出 - history = [] - - # 提取摘要,下载PDF文档 - try: - pdf_path, info = download_arxiv_(txt) - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"下载pdf文件未成功") - yield chatbot, history, '正常' - return - - # 翻译摘要等 - i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}" - i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - msg = '正常' - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, api_key, temperature, history=[]) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield chatbot, history, msg - # 写入文件 - import shutil - # 重置文件的创建时间 - shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path) - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载")) - yield chatbot, history, msg - diff --git a/spaces/joushe/moe-tts/monotonic_align/core.py b/spaces/joushe/moe-tts/monotonic_align/core.py deleted file mode 100644 index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000 --- a/spaces/joushe/moe-tts/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 diff --git a/spaces/juliensimon/keyword-spotting/README.md b/spaces/juliensimon/keyword-spotting/README.md deleted file mode 100644 index adcc2eeab79ca6ffdbacdb119e06104d7dcd1b39..0000000000000000000000000000000000000000 --- a/spaces/juliensimon/keyword-spotting/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Keyword Spotting -emoji: 🏢 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.0.20 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jurgendn/table-extraction/models/modules/sample_torch_module.py b/spaces/jurgendn/table-extraction/models/modules/sample_torch_module.py deleted file mode 100644 index 99555837126d78ff0f6fe0e0b532134f5e10abd3..0000000000000000000000000000000000000000 --- a/spaces/jurgendn/table-extraction/models/modules/sample_torch_module.py +++ /dev/null @@ -1,12 +0,0 @@ -from torch import Tensor, nn - - -class UselessLayer(nn.Module): - - def __init__(self) -> None: - super(UselessLayer, self).__init__() - self.seq = nn.Identity() - - def forward(self, x: Tensor) -> Tensor: - x = self.seq(x) - return x diff --git a/spaces/jw2yang/unicl-img-recog-demo/config.py b/spaces/jw2yang/unicl-img-recog-demo/config.py deleted file mode 100644 index f17536ee6d5e9b2f87af6435d2dc6a38d5aa16d9..0000000000000000000000000000000000000000 --- a/spaces/jw2yang/unicl-img-recog-demo/config.py +++ /dev/null @@ -1,245 +0,0 @@ -# -------------------------------------------------------- -# Unified Contrastive Learning (UniCL) -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Jianwei Yang (jianwyan@microsoft.com) -# Based on Swin Transformer written by Zhe Liu -# -------------------------------------------------------- - -import os -import yaml -from yacs.config import CfgNode as CN - -_C = CN() -_C.VERBOSE = False - -# Base config files -_C.BASE = [''] - -# ----------------------------------------------------------------------------- -# Data settings -# ----------------------------------------------------------------------------- -_C.DATA = CN() -# Batch size for a single GPU, could be overwritten by command line argument -_C.DATA.BATCH_SIZE = 128 -# Path to dataset, could be overwritten by command line argument -_C.DATA.DATA_PATH = '' -# Dataset name -_C.DATA.DATASET = 'imagenet' -# Input image size -_C.DATA.IMG_SIZE = 224 -# Interpolation to resize image (random, bilinear, bicubic) -_C.DATA.INTERPOLATION = 'bicubic' -# Use zipped dataset instead of folder dataset -# could be overwritten by command line argument -_C.DATA.ZIP_MODE = False -# Cache Data in Memory, could be overwritten by command line argument -_C.DATA.CACHE_MODE = 'part' -# Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU. -_C.DATA.PIN_MEMORY = True -# Number of data loading threads -_C.DATA.NUM_WORKERS = 8 - -# ----------------------------------------------------------------------------- -# Model settings -# ----------------------------------------------------------------------------- -_C.MODEL = CN() -# Model name -_C.MODEL.NAME = '' -# Checkpoint to resume, could be overwritten by command line argument -_C.MODEL.RESUME = '' -# Number of classes, overwritten in data preparation -_C.MODEL.NUM_CLASSES = 0 -# Label Smoothing -_C.MODEL.LABEL_SMOOTHING = 0.1 -# Whether load pretrained model -_C.MODEL.PRETRAINED = '' -# Projection dimension -_C.MODEL.DIM_PROJECTION = 512 -# Mode specific -_C.MODEL.SPEC = CN(new_allowed=True) -# ----------------------------------------------------------------------------- -# Build Image Encoder -# ----------------------------------------------------------------------------- -_C.MODEL.IMAGE_ENCODER = CN() -# Image encoder type -_C.MODEL.IMAGE_ENCODER.TYPE = 'swin' -# Input image size -_C.MODEL.IMAGE_ENCODER.IMG_SIZE = 224 -# Dropout rate -_C.MODEL.IMAGE_ENCODER.DROP_RATE = 0.0 -# Drop path rate -_C.MODEL.IMAGE_ENCODER.DROP_PATH_RATE = 0.1 - -# Swin Transformer parameters -_C.MODEL.IMAGE_ENCODER.SWIN = CN() -_C.MODEL.IMAGE_ENCODER.SWIN.PATCH_SIZE = 4 -_C.MODEL.IMAGE_ENCODER.SWIN.IN_CHANS = 3 -_C.MODEL.IMAGE_ENCODER.SWIN.EMBED_DIM = 96 -_C.MODEL.IMAGE_ENCODER.SWIN.DEPTHS = [2, 2, 6, 2] -_C.MODEL.IMAGE_ENCODER.SWIN.NUM_HEADS = [3, 6, 12, 24] -_C.MODEL.IMAGE_ENCODER.SWIN.WINDOW_SIZE = 7 -_C.MODEL.IMAGE_ENCODER.SWIN.MLP_RATIO = 4. -_C.MODEL.IMAGE_ENCODER.SWIN.QKV_BIAS = True -_C.MODEL.IMAGE_ENCODER.SWIN.QK_SCALE = None -_C.MODEL.IMAGE_ENCODER.SWIN.APE = False -_C.MODEL.IMAGE_ENCODER.SWIN.PATCH_NORM = True - -# FocalNet parameters -_C.MODEL.IMAGE_ENCODER.FOCAL = CN() -_C.MODEL.IMAGE_ENCODER.FOCAL.PATCH_SIZE = 4 -_C.MODEL.IMAGE_ENCODER.FOCAL.IN_CHANS = 3 -_C.MODEL.IMAGE_ENCODER.FOCAL.EMBED_DIM = 96 -_C.MODEL.IMAGE_ENCODER.FOCAL.DEPTHS = [2, 2, 6, 2] -_C.MODEL.IMAGE_ENCODER.FOCAL.MLP_RATIO = 4. -_C.MODEL.IMAGE_ENCODER.FOCAL.PATCH_NORM = True -_C.MODEL.IMAGE_ENCODER.FOCAL.FOCAL_LEVELS = [2, 2, 2, 2] -_C.MODEL.IMAGE_ENCODER.FOCAL.FOCAL_WINDOWS = [3, 3, 3, 3] -_C.MODEL.IMAGE_ENCODER.FOCAL.FOCAL_FACTORS = [2, 2, 2, 2] -_C.MODEL.IMAGE_ENCODER.FOCAL.USE_CONV_EMBED = False -_C.MODEL.IMAGE_ENCODER.FOCAL.USE_LAYERSCALE = False -_C.MODEL.IMAGE_ENCODER.FOCAL.USE_POSTLN = False - -# ----------------------------------------------------------------------------- -# Build Text Encoder -# ----------------------------------------------------------------------------- -_C.MODEL.TEXT_ENCODER = CN() - -_C.MODEL.TEXT_ENCODER.NAME = 'transformer' -_C.MODEL.TEXT_ENCODER.LOAD_PRETRAINED = False -_C.MODEL.TEXT_ENCODER.PRETRAINED = '' -_C.MODEL.TEXT_ENCODER.TOKENIZER = 'clip' -_C.MODEL.TEXT_ENCODER.CONTEXT_LENGTH = 77 -_C.MODEL.TEXT_ENCODER.WIDTH = 1024 -_C.MODEL.TEXT_ENCODER.HEADS = 16 -_C.MODEL.TEXT_ENCODER.LAYERS = 12 -_C.MODEL.TEXT_ENCODER.AUTOGRESSIVE = True - -# ----------------------------------------------------------------------------- -# Training settings -# ----------------------------------------------------------------------------- -_C.TRAIN = CN() -_C.TRAIN.START_EPOCH = 0 -_C.TRAIN.EPOCHS = 32 -_C.TRAIN.WARMUP_EPOCHS = 5 -_C.TRAIN.WEIGHT_DECAY = 0.1 -_C.TRAIN.BASE_LR = 5e-4 -_C.TRAIN.WARMUP_LR = 5e-7 -_C.TRAIN.MIN_LR = 5e-6 -# Clip gradient norm -_C.TRAIN.CLIP_GRAD = 5.0 -# Auto resume from latest checkpoint -_C.TRAIN.AUTO_RESUME = True -# Gradient accumulation steps -# could be overwritten by command line argument -_C.TRAIN.ACCUMULATION_STEPS = 0 -# Whether to use gradient checkpointing to save memory -# could be overwritten by command line argument -_C.TRAIN.USE_CHECKPOINT = False - -# LR scheduler -_C.TRAIN.LR_SCHEDULER = CN() -_C.TRAIN.LR_SCHEDULER.NAME = 'cosine' -# Epoch interval to decay LR, used in StepLRScheduler -_C.TRAIN.LR_SCHEDULER.DECAY_EPOCHS = 30 -# LR decay rate, used in StepLRScheduler -_C.TRAIN.LR_SCHEDULER.DECAY_RATE = 0.1 - -# Optimizer -_C.TRAIN.OPTIMIZER = CN() -_C.TRAIN.OPTIMIZER.NAME = 'adamw' -# Optimizer Epsilon -_C.TRAIN.OPTIMIZER.EPS = 1e-8 -# Optimizer Betas -_C.TRAIN.OPTIMIZER.BETAS = (0.9, 0.999) -# SGD momentum -_C.TRAIN.OPTIMIZER.MOMENTUM = 0.9 - -# ----------------------------------------------------------------------------- -# Augmentation settings -# ----------------------------------------------------------------------------- -_C.AUG = CN() -# Color jitter factor -_C.AUG.COLOR_JITTER = 0.4 -# Use AutoAugment policy. "v0" or "original" -_C.AUG.AUTO_AUGMENT = 'rand-m9-mstd0.5-inc1' -# Random erase prob -_C.AUG.REPROB = 0.25 -# Random erase mode -_C.AUG.REMODE = 'pixel' -# Random erase count -_C.AUG.RECOUNT = 1 -# Mixup alpha, mixup enabled if > 0 -_C.AUG.MIXUP = 0.8 -# Cutmix alpha, cutmix enabled if > 0 -_C.AUG.CUTMIX = 1.0 -# Cutmix min/max ratio, overrides alpha and enables cutmix if set -_C.AUG.CUTMIX_MINMAX = None -# Probability of performing mixup or cutmix when either/both is enabled -_C.AUG.MIXUP_PROB = 1.0 -# Probability of switching to cutmix when both mixup and cutmix enabled -_C.AUG.MIXUP_SWITCH_PROB = 0.5 -# How to apply mixup/cutmix params. Per "batch", "pair", or "elem" -_C.AUG.MIXUP_MODE = 'batch' - -# ----------------------------------------------------------------------------- -# Testing settings -# ----------------------------------------------------------------------------- -_C.TEST = CN() -# Whether to use center crop when testing -_C.TEST.CROP = True - -# ----------------------------------------------------------------------------- -# Misc -# ----------------------------------------------------------------------------- -# Mixed precision opt level, if O0, no amp is used ('O0', 'O1', 'O2') -# overwritten by command line argument -_C.AMP_OPT_LEVEL = '' -# Path to output folder, overwritten by command line argument -_C.OUTPUT = '' -# Tag of experiment, overwritten by command line argument -_C.TAG = 'default' -# Frequency to save checkpoint -_C.SAVE_FREQ = 1 -# Frequency to logging info -_C.PRINT_FREQ = 100 -# Fixed random seed -_C.SEED = 0 -# Perform evaluation only, overwritten by command line argument -_C.EVAL_MODE = False -# Test throughput only, overwritten by command line argument -_C.THROUGHPUT_MODE = False -# Debug only so that skip dataloader initialization, overwritten by command line argument -_C.DEBUG_MODE = False -# local rank for DistributedDataParallel, given by command line argument -_C.LOCAL_RANK = 0 - - -def _update_config_from_file(config, cfg_file): - config.defrost() - with open(cfg_file, 'r') as f: - yaml_cfg = yaml.load(f, Loader=yaml.FullLoader) - - for cfg in yaml_cfg.setdefault('BASE', ['']): - if cfg: - _update_config_from_file( - config, os.path.join(os.path.dirname(cfg_file), cfg) - ) - print('=> merge config from {}'.format(cfg_file)) - config.merge_from_file(cfg_file) - config.freeze() - - -def update_config(config, args): - _update_config_from_file(config, args.cfg) - config.freeze() - - -def get_config(args): - """Get a yacs CfgNode object with default values.""" - # Return a clone so that the defaults will not be altered - # This is for the "local variable" use pattern - config = _C.clone() - update_config(config, args) - - return config diff --git a/spaces/jyseo/3DFuse/cldm/model.py b/spaces/jyseo/3DFuse/cldm/model.py deleted file mode 100644 index fed3c31ac145b78907c7f771d1d8db6fb32d92ed..0000000000000000000000000000000000000000 --- a/spaces/jyseo/3DFuse/cldm/model.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -import torch - -from omegaconf import OmegaConf -from ldm.util import instantiate_from_config - - -def get_state_dict(d): - return d.get('state_dict', d) - - -def load_state_dict(ckpt_path, location='cpu'): - _, extension = os.path.splitext(ckpt_path) - if extension.lower() == ".safetensors": - import safetensors.torch - state_dict = safetensors.torch.load_file(ckpt_path, device=location) - else: - state_dict = get_state_dict(torch.load(ckpt_path, map_location=torch.device(location))) - state_dict = get_state_dict(state_dict) - print(f'Loaded state_dict from [{ckpt_path}]') - return state_dict - - -def create_model(config_path): - config = OmegaConf.load(config_path) - model = instantiate_from_config(config.model).cpu() - print(f'Loaded model config from [{config_path}]') - return model diff --git a/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/app.py b/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/app.py deleted file mode 100644 index c3c0d69e056470caf3b0565191fb9ab2ed4a518e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/app.py +++ /dev/null @@ -1,854 +0,0 @@ -import os -import sys - -os.system("git clone https://github.com/C0untFloyd/bark-gui.git") -sys.path.append("./bark-gui/") - -from cProfile import label -from distutils.command.check import check -from doctest import Example -import dataclasses -import gradio as gr -import numpy as np -import logging -import torch -import pytorch_seed -import time - -import torchaudio -from speechbrain.pretrained import SpectralMaskEnhancement - -enhance_model = SpectralMaskEnhancement.from_hparams( - source="speechbrain/metricgan-plus-voicebank", - savedir="pretrained_models/metricgan-plus-voicebank", - run_opts={"device":"cuda"}, -) - -from xml.sax import saxutils -from bark.api import generate_with_settings -from bark.api import save_as_prompt -from settings import Settings -#import nltk - -from bark import SAMPLE_RATE -from bark.clonevoice import clone_voice -from bark.generation import SAMPLE_RATE, preload_models, _load_history_prompt, codec_decode -from scipy.io.wavfile import write as write_wav -from parseinput import split_and_recombine_text, build_ssml, is_ssml, create_clips_from_ssml -from datetime import datetime -from tqdm.auto import tqdm -from id3tagging import add_id3_tag - -import shutil - -import string -import argparse -import json - -import gc, copy -from datetime import datetime -from huggingface_hub import hf_hub_download -from pynvml import * -nvmlInit() -gpu_h = nvmlDeviceGetHandleByIndex(0) -ctx_limit = 1536 -title = "RWKV-4-Raven-7B-v12-Eng98%-Other2%-20230521-ctx8192" - -os.environ["RWKV_JIT_ON"] = '1' -os.environ["RWKV_CUDA_ON"] = '1' # if '1' then use CUDA kernel for seq mode (much faster) - -from rwkv.model import RWKV -model_path1 = hf_hub_download(repo_id="BlinkDL/rwkv-4-raven", filename=f"{title}.pth") -model1 = RWKV(model=model_path1, strategy='cuda fp16i8 *8 -> cuda fp16') -from rwkv.utils import PIPELINE, PIPELINE_ARGS -pipeline = PIPELINE(model1, "20B_tokenizer.json") - -def generate_prompt(instruction, input=None): - instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n') - input = input.strip().replace('\r\n','\n').replace('\n\n','\n') - if input: - return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. -# Instruction: -{instruction} -# Input: -{input} -# Response: -""" - else: - return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. -# Instruction: -{instruction} -# Response: -""" - -def evaluate( - instruction, - input=None, - token_count=200, - temperature=1.0, - top_p=0.7, - presencePenalty = 0.1, - countPenalty = 0.1, -): - args = PIPELINE_ARGS(temperature = max(0.2, float(temperature)), top_p = float(top_p), - alpha_frequency = countPenalty, - alpha_presence = presencePenalty, - token_ban = [], # ban the generation of some tokens - token_stop = [0]) # stop generation whenever you see any token here - - instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n') - input = input.strip().replace('\r\n','\n').replace('\n\n','\n') - ctx = generate_prompt(instruction, input) - - all_tokens = [] - out_last = 0 - out_str = '' - occurrence = {} - state = None - for i in range(int(token_count)): - out, state = model1.forward(pipeline.encode(ctx)[-ctx_limit:] if i == 0 else [token], state) - for n in occurrence: - out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency) - - token = pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p) - if token in args.token_stop: - break - all_tokens += [token] - if token not in occurrence: - occurrence[token] = 1 - else: - occurrence[token] += 1 - - tmp = pipeline.decode(all_tokens[out_last:]) - if '\ufffd' not in tmp: - out_str += tmp - yield out_str.strip() - out_last = i + 1 - - gpu_info = nvmlDeviceGetMemoryInfo(gpu_h) - print(f'vram {gpu_info.total} used {gpu_info.used} free {gpu_info.free}') - del out - del state - gc.collect() - torch.cuda.empty_cache() - yield out_str.strip() - -examples = [ - ["Tell me about ravens.", "", 300, 1.2, 0.5, 0.4, 0.4], - ["Write a python function to mine 1 BTC, with details and comments.", "", 300, 1.2, 0.5, 0.4, 0.4], - ["Write a song about ravens.", "", 300, 1.2, 0.5, 0.4, 0.4], - ["Explain the following metaphor: Life is like cats.", "", 300, 1.2, 0.5, 0.4, 0.4], - ["Write a story using the following information", "A man named Alex chops a tree down", 300, 1.2, 0.5, 0.4, 0.4], - ["Generate a list of adjectives that describe a person as brave.", "", 300, 1.2, 0.5, 0.4, 0.4], - ["You have $100, and your goal is to turn that into as much money as possible with AI and Machine Learning. Please respond with detailed plan.", "", 300, 1.2, 0.5, 0.4, 0.4], -] - -########################################################################## - -chat_intro = '''The following is a coherent verbose detailed conversation between <|user|> and an AI girl named <|bot|>. -<|user|>: Hi <|bot|>, Would you like to chat with me for a while? -<|bot|>: Hi <|user|>. Sure. What would you like to talk about? I'm listening. -''' - -def user(message, chatbot): - chatbot = chatbot or [] - # print(f"User: {message}") - return "", chatbot + [[message, None]] - -def alternative(chatbot, history): - if not chatbot or not history: - return chatbot, history - - chatbot[-1][1] = None - history[0] = copy.deepcopy(history[1]) - - return chatbot, history - -def chat( - prompt, - user, - bot, - chatbot, - history, - temperature=1.0, - top_p=0.8, - presence_penalty=0.1, - count_penalty=0.1, -): - args = PIPELINE_ARGS(temperature=max(0.2, float(temperature)), top_p=float(top_p), - alpha_frequency=float(count_penalty), - alpha_presence=float(presence_penalty), - token_ban=[], # ban the generation of some tokens - token_stop=[]) # stop generation whenever you see any token here - - if not chatbot: - return chatbot, history - - message = chatbot[-1][0] - message = message.strip().replace('\r\n','\n').replace('\n\n','\n') - ctx = f"{user}: {message}\n\n{bot}:" - - if not history: - prompt = prompt.replace("<|user|>", user.strip()) - prompt = prompt.replace("<|bot|>", bot.strip()) - prompt = prompt.strip() - prompt = f"\n{prompt}\n\n" - - out, state = model1.forward(pipeline.encode(prompt), None) - history = [state, None, []] # [state, state_pre, tokens] - # print("History reloaded.") - - [state, _, all_tokens] = history - state_pre_0 = copy.deepcopy(state) - - out, state = model1.forward(pipeline.encode(ctx)[-ctx_limit:], state) - state_pre_1 = copy.deepcopy(state) # For recovery - - # print("Bot:", end='') - - begin = len(all_tokens) - out_last = begin - out_str: str = '' - occurrence = {} - for i in range(300): - if i <= 0: - nl_bias = -float('inf') - elif i <= 30: - nl_bias = (i - 30) * 0.1 - elif i <= 130: - nl_bias = 0 - else: - nl_bias = (i - 130) * 0.25 - out[187] += nl_bias - for n in occurrence: - out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency) - - token = pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p) - next_tokens = [token] - if token == 0: - next_tokens = pipeline.encode('\n\n') - all_tokens += next_tokens - - if token not in occurrence: - occurrence[token] = 1 - else: - occurrence[token] += 1 - - out, state = model1.forward(next_tokens, state) - - tmp = pipeline.decode(all_tokens[out_last:]) - if '\ufffd' not in tmp: - # print(tmp, end='', flush=True) - out_last = begin + i + 1 - out_str += tmp - - chatbot[-1][1] = out_str.strip() - history = [state, all_tokens] - yield chatbot, history - - out_str = pipeline.decode(all_tokens[begin:]) - out_str = out_str.replace("\r\n", '\n').replace('\\n', '\n') - - if '\n\n' in out_str: - break - - # State recovery - if f'{user}:' in out_str or f'{bot}:' in out_str: - idx_user = out_str.find(f'{user}:') - idx_user = len(out_str) if idx_user == -1 else idx_user - idx_bot = out_str.find(f'{bot}:') - idx_bot = len(out_str) if idx_bot == -1 else idx_bot - idx = min(idx_user, idx_bot) - - if idx < len(out_str): - out_str = f" {out_str[:idx].strip()}\n\n" - tokens = pipeline.encode(out_str) - - all_tokens = all_tokens[:begin] + tokens - out, state = model1.forward(tokens, state_pre_1) - break - - gpu_info = nvmlDeviceGetMemoryInfo(gpu_h) - print(f'vram {gpu_info.total} used {gpu_info.used} free {gpu_info.free}') - - gc.collect() - torch.cuda.empty_cache() - - chatbot[-1][1] = out_str.strip() - history = [state, state_pre_0, all_tokens] - yield chatbot, history - -from TTS.tts.utils.synthesis import synthesis -from TTS.tts.utils.text.symbols import make_symbols, phonemes, symbols -try: - from TTS.utils.audio import AudioProcessor -except: - from TTS.utils.audio import AudioProcessor - - -from TTS.tts.models import setup_model -from TTS.config import load_config -from TTS.tts.models.vits import * - -from TTS.tts.utils.speakers import SpeakerManager -from pydub import AudioSegment - -# from google.colab import files -import librosa - -from scipy.io.wavfile import write, read - -import subprocess - - -OUTPUTFOLDER = "Outputs" - -def speechbrain(aud): - # Load and add fake batch dimension - noisy = enhance_model.load_audio( - aud - ).unsqueeze(0) - enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.])) - torchaudio.save('enhanced.wav', enhanced.cpu(), 16000) - return 'enhanced.wav' - -def generate_text_to_speech(text, selected_speaker, text_temp, waveform_temp, eos_prob, quick_generation, complete_settings, seed, batchcount, progress=gr.Progress(track_tqdm=True)): - # Chunk the text into smaller pieces then combine the generated audio - - # generation settings - if selected_speaker == 'None': - selected_speaker = None - - voice_name = selected_speaker - - if text == None or len(text) < 1: - if selected_speaker == None: - raise gr.Error('No text entered!') - - # Extract audio data from speaker if no text and speaker selected - voicedata = _load_history_prompt(voice_name) - audio_arr = codec_decode(voicedata["fine_prompt"]) - result = create_filename(OUTPUTFOLDER, "None", "extract",".wav") - save_wav(audio_arr, result) - return result - - if batchcount < 1: - batchcount = 1 - - - silenceshort = np.zeros(int((float(settings.silence_sentence) / 1000.0) * SAMPLE_RATE), dtype=np.int16) # quarter second of silence - silencelong = np.zeros(int((float(settings.silence_speakers) / 1000.0) * SAMPLE_RATE), dtype=np.float32) # half a second of silence - use_last_generation_as_history = "Use last generation as history" in complete_settings - save_last_generation = "Save generation as Voice" in complete_settings - for l in range(batchcount): - currentseed = seed - if seed != None and seed > 2**32 - 1: - logger.warning(f"Seed {seed} > 2**32 - 1 (max), setting to random") - currentseed = None - if currentseed == None or currentseed <= 0: - currentseed = np.random.default_rng().integers(1, 2**32 - 1) - assert(0 < currentseed and currentseed < 2**32) - - progress(0, desc="Generating") - - full_generation = None - - all_parts = [] - complete_text = "" - text = text.lstrip() - if is_ssml(text): - list_speak = create_clips_from_ssml(text) - prev_speaker = None - for i, clip in tqdm(enumerate(list_speak), total=len(list_speak)): - selected_speaker = clip[0] - # Add pause break between speakers - if i > 0 and selected_speaker != prev_speaker: - all_parts += [silencelong.copy()] - prev_speaker = selected_speaker - text = clip[1] - text = saxutils.unescape(text) - if selected_speaker == "None": - selected_speaker = None - - print(f"\nGenerating Text ({i+1}/{len(list_speak)}) -> {selected_speaker} (Seed {currentseed}):`{text}`") - complete_text += text - with pytorch_seed.SavedRNG(currentseed): - audio_array = generate_with_settings(text_prompt=text, voice_name=selected_speaker, semantic_temp=text_temp, coarse_temp=waveform_temp, eos_p=eos_prob) - currentseed = torch.random.initial_seed() - if len(list_speak) > 1: - filename = create_filename(OUTPUTFOLDER, currentseed, "audioclip",".wav") - save_wav(audio_array, filename) - add_id3_tag(filename, text, selected_speaker, currentseed) - - all_parts += [audio_array] - else: - texts = split_and_recombine_text(text, settings.input_text_desired_length, settings.input_text_max_length) - for i, text in tqdm(enumerate(texts), total=len(texts)): - print(f"\nGenerating Text ({i+1}/{len(texts)}) -> {selected_speaker} (Seed {currentseed}):`{text}`") - complete_text += text - if quick_generation == True: - with pytorch_seed.SavedRNG(currentseed): - audio_array = generate_with_settings(text_prompt=text, voice_name=selected_speaker, semantic_temp=text_temp, coarse_temp=waveform_temp, eos_p=eos_prob) - currentseed = torch.random.initial_seed() - else: - full_output = use_last_generation_as_history or save_last_generation - if full_output: - full_generation, audio_array = generate_with_settings(text_prompt=text, voice_name=voice_name, semantic_temp=text_temp, coarse_temp=waveform_temp, eos_p=eos_prob, output_full=True) - else: - audio_array = generate_with_settings(text_prompt=text, voice_name=voice_name, semantic_temp=text_temp, coarse_temp=waveform_temp, eos_p=eos_prob) - - # Noticed this in the HF Demo - convert to 16bit int -32767/32767 - most used audio format - # audio_array = (audio_array * 32767).astype(np.int16) - - if len(texts) > 1: - filename = create_filename(OUTPUTFOLDER, currentseed, "audioclip",".wav") - save_wav(audio_array, filename) - add_id3_tag(filename, text, selected_speaker, currentseed) - - if quick_generation == False and (save_last_generation == True or use_last_generation_as_history == True): - # save to npz - voice_name = create_filename(OUTPUTFOLDER, seed, "audioclip", ".npz") - save_as_prompt(voice_name, full_generation) - if use_last_generation_as_history: - selected_speaker = voice_name - - all_parts += [audio_array] - # Add short pause between sentences - if text[-1] in "!?.\n" and i > 1: - all_parts += [silenceshort.copy()] - - # save & play audio - result = create_filename(OUTPUTFOLDER, currentseed, "final",".wav") - save_wav(np.concatenate(all_parts), result) - # write id3 tag with text truncated to 60 chars, as a precaution... - add_id3_tag(result, complete_text, selected_speaker, currentseed) - - return result - -def create_filename(path, seed, name, extension): - now = datetime.now() - date_str =now.strftime("%m-%d-%Y") - outputs_folder = os.path.join(os.getcwd(), path) - if not os.path.exists(outputs_folder): - os.makedirs(outputs_folder) - - sub_folder = os.path.join(outputs_folder, date_str) - if not os.path.exists(sub_folder): - os.makedirs(sub_folder) - - time_str = now.strftime("%H-%M-%S") - file_name = f"{name}_{time_str}_s{seed}{extension}" - return os.path.join(sub_folder, file_name) - - -def save_wav(audio_array, filename): - write_wav(filename, SAMPLE_RATE, audio_array) - -def save_voice(filename, semantic_prompt, coarse_prompt, fine_prompt): - np.savez_compressed( - filename, - semantic_prompt=semantic_prompt, - coarse_prompt=coarse_prompt, - fine_prompt=fine_prompt - ) - - -def on_quick_gen_changed(checkbox): - if checkbox == False: - return gr.CheckboxGroup.update(visible=True) - return gr.CheckboxGroup.update(visible=False) - -def delete_output_files(checkbox_state): - if checkbox_state: - outputs_folder = os.path.join(os.getcwd(), OUTPUTFOLDER) - if os.path.exists(outputs_folder): - purgedir(outputs_folder) - return False - - -# https://stackoverflow.com/a/54494779 -def purgedir(parent): - for root, dirs, files in os.walk(parent): - for item in files: - # Delete subordinate files - filespec = os.path.join(root, item) - os.unlink(filespec) - for item in dirs: - # Recursively perform this operation for subordinate directories - purgedir(os.path.join(root, item)) - -def convert_text_to_ssml(text, selected_speaker): - return build_ssml(text, selected_speaker) - - -def apply_settings(themes, input_server_name, input_server_port, input_server_public, input_desired_len, input_max_len, input_silence_break, input_silence_speaker): - settings.selected_theme = themes - settings.server_name = input_server_name - settings.server_port = input_server_port - settings.server_share = input_server_public - settings.input_text_desired_length = input_desired_len - settings.input_text_max_length = input_max_len - settings.silence_sentence = input_silence_break - settings.silence_speaker = input_silence_speaker - settings.save() - -def restart(): - global restart_server - restart_server = True - - -def create_version_html(): - python_version = ".".join([str(x) for x in sys.version_info[0:3]]) - versions_html = f""" -python: {python_version} - •  -torch: {getattr(torch, '__long_version__',torch.__version__)} - •  -gradio: {gr.__version__} -""" - return versions_html - - - -logger = logging.getLogger(__name__) -APPTITLE = "Bark UI Enhanced v0.4.8" - - -autolaunch = False - -if len(sys.argv) > 1: - autolaunch = "-autolaunch" in sys.argv - - -if torch.cuda.is_available() == False: - os.environ['BARK_FORCE_CPU'] = 'True' - logger.warning("No CUDA detected, fallback to CPU!") - -print(f'smallmodels={os.environ.get("SUNO_USE_SMALL_MODELS", False)}') -print(f'enablemps={os.environ.get("SUNO_ENABLE_MPS", False)}') -print(f'offloadcpu={os.environ.get("SUNO_OFFLOAD_CPU", False)}') -print(f'forcecpu={os.environ.get("BARK_FORCE_CPU", False)}') -print(f'autolaunch={autolaunch}\n\n') - -#print("Updating nltk\n") -#nltk.download('punkt') - -print("Preloading Models\n") -preload_models() - -settings = Settings('config.yaml') - -# Collect all existing speakers/voices in dir -speakers_list = [] - -for root, dirs, files in os.walk("./bark/assets/prompts"): - for file in files: - if(file.endswith(".npz")): - pathpart = root.replace("./bark/assets/prompts", "") - name = os.path.join(pathpart, file[:-4]) - if name.startswith("/") or name.startswith("\\"): - name = name[1:] - speakers_list.append(name) - -speakers_list = sorted(speakers_list, key=lambda x: x.lower()) -speakers_list.insert(0, 'None') - -available_themes = ["Default", "gradio/glass", "gradio/monochrome", "gradio/seafoam", "gradio/soft", "gstaff/xkcd", "freddyaboulton/dracula_revamped", "ysharma/steampunk"] - -seed = -1 -server_name = settings.server_name -if len(server_name) < 1: - server_name = None -server_port = settings.server_port -if server_port <= 0: - server_port = None -global run_server -global restart_server - -run_server = True - - - - - -''' -from google.colab import drive -drive.mount('/content/drive') -src_path = os.path.join(os.path.join(os.path.join(os.path.join(os.getcwd(), 'drive'), 'MyDrive'), 'Colab Notebooks'), 'best_model_latest.pth.tar') -dst_path = os.path.join(os.getcwd(), 'best_model.pth.tar') -shutil.copy(src_path, dst_path) -''' - -TTS_PATH = "TTS/" - -# add libraries into environment -sys.path.append(TTS_PATH) # set this if TTS is not installed globally - -# Paths definition - -OUT_PATH = 'out/' - -# create output path -os.makedirs(OUT_PATH, exist_ok=True) - -# model vars -MODEL_PATH = 'best_model.pth.tar' -CONFIG_PATH = 'config.json' -TTS_LANGUAGES = "language_ids.json" -TTS_SPEAKERS = "speakers.json" -USE_CUDA = torch.cuda.is_available() - -# load the config -C = load_config(CONFIG_PATH) - -# load the audio processor -ap = AudioProcessor(**C.audio) - -speaker_embedding = None - -C.model_args['d_vector_file'] = TTS_SPEAKERS -C.model_args['use_speaker_encoder_as_loss'] = False - -model = setup_model(C) -model.language_manager.set_language_ids_from_file(TTS_LANGUAGES) -# print(model.language_manager.num_languages, model.embedded_language_dim) -# print(model.emb_l) -cp = torch.load(MODEL_PATH, map_location=torch.device('cpu')) -# remove speaker encoder -model_weights = cp['model'].copy() -for key in list(model_weights.keys()): - if "speaker_encoder" in key: - del model_weights[key] - -model.load_state_dict(model_weights) - -model.eval() - -if USE_CUDA: - model = model.cuda() - -# synthesize voice -use_griffin_lim = False - -# Paths definition - -CONFIG_SE_PATH = "config_se.json" -CHECKPOINT_SE_PATH = "SE_checkpoint.pth.tar" - -# Load the Speaker encoder - -SE_speaker_manager = SpeakerManager(encoder_model_path=CHECKPOINT_SE_PATH, encoder_config_path=CONFIG_SE_PATH, use_cuda=USE_CUDA) - -# Define helper function - -def compute_spec(ref_file): - y, sr = librosa.load(ref_file, sr=ap.sample_rate) - spec = ap.spectrogram(y) - spec = torch.FloatTensor(spec).unsqueeze(0) - return spec - - -def voice_conversion(ta, ra, da): - - target_audio = 'target.wav' - reference_audio = 'reference.wav' - driving_audio = 'driving.wav' - - write(target_audio, ta[0], ta[1]) - write(reference_audio, ra[0], ra[1]) - write(driving_audio, da[0], da[1]) - - # !ffmpeg-normalize $target_audio -nt rms -t=-27 -o $target_audio -ar 16000 -f - # !ffmpeg-normalize $reference_audio -nt rms -t=-27 -o $reference_audio -ar 16000 -f - # !ffmpeg-normalize $driving_audio -nt rms -t=-27 -o $driving_audio -ar 16000 -f - - files = [target_audio, reference_audio, driving_audio] - - for file in files: - subprocess.run(["ffmpeg-normalize", file, "-nt", "rms", "-t=-27", "-o", file, "-ar", "16000", "-f"]) - - # ta_ = read(target_audio) - - target_emb = SE_speaker_manager.compute_d_vector_from_clip([target_audio]) - target_emb = torch.FloatTensor(target_emb).unsqueeze(0) - - driving_emb = SE_speaker_manager.compute_d_vector_from_clip([reference_audio]) - driving_emb = torch.FloatTensor(driving_emb).unsqueeze(0) - - # Convert the voice - - driving_spec = compute_spec(driving_audio) - y_lengths = torch.tensor([driving_spec.size(-1)]) - if USE_CUDA: - ref_wav_voc, _, _ = model.voice_conversion(driving_spec.cuda(), y_lengths.cuda(), driving_emb.cuda(), target_emb.cuda()) - ref_wav_voc = ref_wav_voc.squeeze().cpu().detach().numpy() - else: - ref_wav_voc, _, _ = model.voice_conversion(driving_spec, y_lengths, driving_emb, target_emb) - ref_wav_voc = ref_wav_voc.squeeze().detach().numpy() - - # print("Reference Audio after decoder:") - # IPython.display.display(Audio(ref_wav_voc, rate=ap.sample_rate)) - - return (ap.sample_rate, ref_wav_voc) - - -while run_server: - print(f'Launching {APPTITLE} Server') - - # Create Gradio Blocks - - with gr.Blocks(title=f"{APPTITLE}", mode=f"{APPTITLE}", theme=settings.selected_theme) as barkgui: - gr.Markdown("#
    🐶🥳🎶 - Bark拟声,开启声音真实复刻的新纪元!
    ") - gr.Markdown("###
    🦄 - [Bark](https://github.com/suno-ai/bark)拟声,能够实现语音、语调及说话情感的真实复刻
    ") - gr.Markdown( - f""" - ###
    🤗 - Powered by [Bark Enhanced](https://github.com/C0untFloyd/bark-gui). Thanks to C0untFloyd.
    - ###
    1. 您可以复制该程序并用GPU运行: Duplicate Space
    - ###
    2. 更多精彩应用,尽在[滔滔AI](http://www.talktalkai.com);滔滔AI,为爱滔滔!💕
    - """ - ) - with gr.Tab("Instruct mode"): - gr.Markdown(f"Raven is [RWKV 7B](https://github.com/BlinkDL/ChatRWKV) 100% RNN [RWKV-LM](https://github.com/BlinkDL/RWKV-LM) finetuned to follow instructions. *** Please try examples first (bottom of page) *** (edit them to use your question). Demo limited to ctxlen {ctx_limit}. Finetuned on alpaca, gpt4all, codealpaca and more. For best results, *** keep you prompt short and clear ***. UPDATE: now with Chat (see above, as a tab) ==> turn off as of now due to VRAM leak caused by buggy code..") - with gr.Row(): - with gr.Column(): - instruction = gr.Textbox(lines=2, label="Instruction", value="Tell me about ravens.") - input = gr.Textbox(lines=2, label="Input", placeholder="none") - token_count = gr.Slider(10, 300, label="Max Tokens", step=10, value=300) - temperature = gr.Slider(0.2, 2.0, label="Temperature", step=0.1, value=1.2) - top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.5) - presence_penalty = gr.Slider(0.0, 1.0, label="Presence Penalty", step=0.1, value=0.4) - count_penalty = gr.Slider(0.0, 1.0, label="Count Penalty", step=0.1, value=0.4) - with gr.Column(): - with gr.Row(): - submit = gr.Button("Submit", variant="primary") - clear = gr.Button("Clear", variant="secondary") - output = gr.Textbox(label="Output", lines=5) - data = gr.Dataset(components=[instruction, input, token_count, temperature, top_p, presence_penalty, count_penalty], samples=examples, label="Example Instructions", headers=["Instruction", "Input", "Max Tokens", "Temperature", "Top P", "Presence Penalty", "Count Penalty"]) - submit.click(evaluate, [instruction, input, token_count, temperature, top_p, presence_penalty, count_penalty], [output]) - clear.click(lambda: None, [], [output]) - data.click(lambda x: x, [data], [instruction, input, token_count, temperature, top_p, presence_penalty, count_penalty]) - - with gr.Tab("🐶 - Bark拟声"): - with gr.Row(): - with gr.Column(): - placeholder = "想让Bark说些什么呢?" - input_text = gr.Textbox(label="用作声音合成的文本", lines=4, placeholder=placeholder) - with gr.Column(): - convert_to_ssml_button = gr.Button("Convert Input Text to SSML") - seedcomponent = gr.Number(label="Seed (default -1 = Random)", precision=0, value=-1) - batchcount = gr.Number(label="Batch count", precision=0, value=1) - - with gr.Row(): - with gr.Column(): - gr.Markdown("查看Bark官方[语言库](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c)") - speaker = gr.Dropdown(speakers_list, value=speakers_list[0], label="中英双语的不同声音供您选择") - with gr.Column(): - text_temp = gr.Slider(0.1, 1.0, value=0.7, label="Generation Temperature", info="1.0 more diverse, 0.1 more conservative") - waveform_temp = gr.Slider(0.1, 1.0, value=0.7, label="Waveform temperature", info="1.0 more diverse, 0.1 more conservative") - - with gr.Row(): - with gr.Column(): - quick_gen_checkbox = gr.Checkbox(label="是否要快速合成语音", value=True) - settings_checkboxes = ["Use last generation as history", "Save generation as Voice"] - complete_settings = gr.CheckboxGroup(choices=settings_checkboxes, value=settings_checkboxes, label="Detailed Generation Settings", type="value", interactive=True, visible=False) - with gr.Column(): - eos_prob = gr.Slider(0.0, 0.5, value=0.05, label="End of sentence probability") - - with gr.Row(): - with gr.Column(): - tts_create_button = gr.Button("开始声音真实复刻吧") - with gr.Column(): - hidden_checkbox = gr.Checkbox(visible=False) - button_stop_generation = gr.Button("停止生成") - with gr.Row(): - output_audio = gr.Audio(label="真实复刻的声音") - - with gr.Row(): - inp1 = gr.Audio(label="请上传您喜欢的声音") - inp2 = output_audio - inp3 = output_audio - btn = gr.Button("开始生成专属声音吧") - out1 = gr.Audio(label="为您生成的专属声音", type="filepath") - btn.click(voice_conversion, [inp1, inp2, inp3], [out1]) - - with gr.Row(): - inp4 = out1 - btn2 = gr.Button("对专属声音降噪吧") - out2 = gr.Audio(label="降噪后的专属声音", type="filepath") - btn2.click(speechbrain, [inp4], [out2]) - - - - with gr.Row(): - with gr.Column(): - examples = [ - "Special meanings: [laughter] [laughs] [sighs] [music] [gasps] [clears throat] MAN: WOMAN:", - "♪ Never gonna make you cry, never gonna say goodbye, never gonna tell a lie and hurt you ♪", - "And now — a picture of a larch [laughter]", - """ - WOMAN: I would like an oatmilk latte please. - MAN: Wow, that's expensive! - """, - """ - - Look at that drunk guy! - Who is he? - WOMAN: [clears throat] 10 years ago, he proposed me and I rejected him. - Oh my God [laughs] he is still celebrating - """ - ] - examples = gr.Examples(examples=examples, inputs=input_text) - - with gr.Tab("🤖 - 设置"): - with gr.Row(): - themes = gr.Dropdown(available_themes, label="Theme", info="Change needs complete restart", value=settings.selected_theme) - with gr.Row(): - input_server_name = gr.Textbox(label="Server Name", lines=1, info="Leave blank to run locally", value=settings.server_name) - input_server_port = gr.Number(label="Server Port", precision=0, info="Leave at 0 to use default", value=settings.server_port) - share_checkbox = gr.Checkbox(label="Public Server", value=settings.server_share) - with gr.Row(): - input_desired_len = gr.Slider(100, 150, value=settings.input_text_desired_length, label="Desired Input Text Length", info="Ideal length to split input sentences") - input_max_len = gr.Slider(150, 256, value=settings.input_text_max_length, label="Max Input Text Length", info="Maximum Input Text Length") - with gr.Row(): - input_silence_break = gr.Slider(1, 1000, value=settings.silence_sentence, label="Sentence Pause Time (ms)", info="Silence between sentences in milliseconds") - input_silence_speakers = gr.Slider(1, 5000, value=settings.silence_speakers, label="Speaker Pause Time (ms)", info="Silence between different speakers in milliseconds") - - with gr.Row(): - button_apply_settings = gr.Button("Apply Settings") - button_apply_restart = gr.Button("Restart Server") - button_delete_files = gr.Button("Clear output folder") - - gr.HTML(''' - - ''') - - quick_gen_checkbox.change(fn=on_quick_gen_changed, inputs=quick_gen_checkbox, outputs=complete_settings) - convert_to_ssml_button.click(convert_text_to_ssml, inputs=[input_text, speaker],outputs=input_text) - gen_click = tts_create_button.click(generate_text_to_speech, inputs=[input_text, speaker, text_temp, waveform_temp, eos_prob, quick_gen_checkbox, complete_settings, seedcomponent, batchcount],outputs=output_audio) - button_stop_generation.click(fn=None, inputs=None, outputs=None, cancels=[gen_click]) - # Javascript hack to display modal confirmation dialog - js = "(x) => confirm('Are you sure? This will remove all files from output folder')" - button_delete_files.click(None, None, hidden_checkbox, _js=js) - hidden_checkbox.change(delete_output_files, [hidden_checkbox], [hidden_checkbox]) - button_apply_settings.click(apply_settings, inputs=[themes, input_server_name, input_server_port, share_checkbox, input_desired_len, input_max_len, input_silence_break, input_silence_speakers]) - button_apply_restart.click(restart) - restart_server = False - try: - barkgui.queue().launch(show_error=True) - except: - restart_server = True - run_server = False - try: - while restart_server == False: - time.sleep(1.0) - except (KeyboardInterrupt, OSError): - print("Keyboard interruption in main thread... closing server.") - run_server = False - barkgui.close() \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/speaker_batch.py b/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/speaker_batch.py deleted file mode 100644 index 4485605e3ece5b491d1e7d0f223c543b6c91eb96..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/speaker_batch.py +++ /dev/null @@ -1,12 +0,0 @@ -import numpy as np -from typing import List -from speaker_encoder.data_objects.speaker import Speaker - -class SpeakerBatch: - def __init__(self, speakers: List[Speaker], utterances_per_speaker: int, n_frames: int): - self.speakers = speakers - self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers} - - # Array of shape (n_speakers * n_utterances, n_frames, mel_n), e.g. for 3 speakers with - # 4 utterances each of 160 frames of 40 mel coefficients: (12, 160, 40) - self.data = np.array([frames for s in speakers for _, frames, _ in self.partials[s]]) diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/eval/verification.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/eval/verification.py deleted file mode 100644 index 253343b83dbf9d1bd154d14ec068e098bf0968db..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/eval/verification.py +++ /dev/null @@ -1,407 +0,0 @@ -"""Helper for evaluation on the Labeled Faces in the Wild dataset -""" - -# MIT License -# -# Copyright (c) 2016 David Sandberg -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - - -import datetime -import os -import pickle - -import mxnet as mx -import numpy as np -import sklearn -import torch -from mxnet import ndarray as nd -from scipy import interpolate -from sklearn.decomposition import PCA -from sklearn.model_selection import KFold - - -class LFold: - def __init__(self, n_splits=2, shuffle=False): - self.n_splits = n_splits - if self.n_splits > 1: - self.k_fold = KFold(n_splits=n_splits, shuffle=shuffle) - - def split(self, indices): - if self.n_splits > 1: - return self.k_fold.split(indices) - else: - return [(indices, indices)] - - -def calculate_roc(thresholds, - embeddings1, - embeddings2, - actual_issame, - nrof_folds=10, - pca=0): - assert (embeddings1.shape[0] == embeddings2.shape[0]) - assert (embeddings1.shape[1] == embeddings2.shape[1]) - nrof_pairs = min(len(actual_issame), embeddings1.shape[0]) - nrof_thresholds = len(thresholds) - k_fold = LFold(n_splits=nrof_folds, shuffle=False) - - tprs = np.zeros((nrof_folds, nrof_thresholds)) - fprs = np.zeros((nrof_folds, nrof_thresholds)) - accuracy = np.zeros((nrof_folds)) - indices = np.arange(nrof_pairs) - - if pca == 0: - diff = np.subtract(embeddings1, embeddings2) - dist = np.sum(np.square(diff), 1) - - for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)): - if pca > 0: - print('doing pca on', fold_idx) - embed1_train = embeddings1[train_set] - embed2_train = embeddings2[train_set] - _embed_train = np.concatenate((embed1_train, embed2_train), axis=0) - pca_model = PCA(n_components=pca) - pca_model.fit(_embed_train) - embed1 = pca_model.transform(embeddings1) - embed2 = pca_model.transform(embeddings2) - embed1 = sklearn.preprocessing.normalize(embed1) - embed2 = sklearn.preprocessing.normalize(embed2) - diff = np.subtract(embed1, embed2) - dist = np.sum(np.square(diff), 1) - - # Find the best threshold for the fold - acc_train = np.zeros((nrof_thresholds)) - for threshold_idx, threshold in enumerate(thresholds): - _, _, acc_train[threshold_idx] = calculate_accuracy( - threshold, dist[train_set], actual_issame[train_set]) - best_threshold_index = np.argmax(acc_train) - for threshold_idx, threshold in enumerate(thresholds): - tprs[fold_idx, threshold_idx], fprs[fold_idx, threshold_idx], _ = calculate_accuracy( - threshold, dist[test_set], - actual_issame[test_set]) - _, _, accuracy[fold_idx] = calculate_accuracy( - thresholds[best_threshold_index], dist[test_set], - actual_issame[test_set]) - - tpr = np.mean(tprs, 0) - fpr = np.mean(fprs, 0) - return tpr, fpr, accuracy - - -def calculate_accuracy(threshold, dist, actual_issame): - predict_issame = np.less(dist, threshold) - tp = np.sum(np.logical_and(predict_issame, actual_issame)) - fp = np.sum(np.logical_and(predict_issame, np.logical_not(actual_issame))) - tn = np.sum( - np.logical_and(np.logical_not(predict_issame), - np.logical_not(actual_issame))) - fn = np.sum(np.logical_and(np.logical_not(predict_issame), actual_issame)) - - tpr = 0 if (tp + fn == 0) else float(tp) / float(tp + fn) - fpr = 0 if (fp + tn == 0) else float(fp) / float(fp + tn) - acc = float(tp + tn) / dist.size - return tpr, fpr, acc - - -def calculate_val(thresholds, - embeddings1, - embeddings2, - actual_issame, - far_target, - nrof_folds=10): - assert (embeddings1.shape[0] == embeddings2.shape[0]) - assert (embeddings1.shape[1] == embeddings2.shape[1]) - nrof_pairs = min(len(actual_issame), embeddings1.shape[0]) - nrof_thresholds = len(thresholds) - k_fold = LFold(n_splits=nrof_folds, shuffle=False) - - val = np.zeros(nrof_folds) - far = np.zeros(nrof_folds) - - diff = np.subtract(embeddings1, embeddings2) - dist = np.sum(np.square(diff), 1) - indices = np.arange(nrof_pairs) - - for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)): - - # Find the threshold that gives FAR = far_target - far_train = np.zeros(nrof_thresholds) - for threshold_idx, threshold in enumerate(thresholds): - _, far_train[threshold_idx] = calculate_val_far( - threshold, dist[train_set], actual_issame[train_set]) - if np.max(far_train) >= far_target: - f = interpolate.interp1d(far_train, thresholds, kind='slinear') - threshold = f(far_target) - else: - threshold = 0.0 - - val[fold_idx], far[fold_idx] = calculate_val_far( - threshold, dist[test_set], actual_issame[test_set]) - - val_mean = np.mean(val) - far_mean = np.mean(far) - val_std = np.std(val) - return val_mean, val_std, far_mean - - -def calculate_val_far(threshold, dist, actual_issame): - predict_issame = np.less(dist, threshold) - true_accept = np.sum(np.logical_and(predict_issame, actual_issame)) - false_accept = np.sum( - np.logical_and(predict_issame, np.logical_not(actual_issame))) - n_same = np.sum(actual_issame) - n_diff = np.sum(np.logical_not(actual_issame)) - # print(true_accept, false_accept) - # print(n_same, n_diff) - val = float(true_accept) / float(n_same) - far = float(false_accept) / float(n_diff) - return val, far - - -def evaluate(embeddings, actual_issame, nrof_folds=10, pca=0): - # Calculate evaluation metrics - thresholds = np.arange(0, 4, 0.01) - embeddings1 = embeddings[0::2] - embeddings2 = embeddings[1::2] - tpr, fpr, accuracy = calculate_roc(thresholds, - embeddings1, - embeddings2, - np.asarray(actual_issame), - nrof_folds=nrof_folds, - pca=pca) - thresholds = np.arange(0, 4, 0.001) - val, val_std, far = calculate_val(thresholds, - embeddings1, - embeddings2, - np.asarray(actual_issame), - 1e-3, - nrof_folds=nrof_folds) - return tpr, fpr, accuracy, val, val_std, far - -@torch.no_grad() -def load_bin(path, image_size): - try: - with open(path, 'rb') as f: - bins, issame_list = pickle.load(f) # py2 - except UnicodeDecodeError as e: - with open(path, 'rb') as f: - bins, issame_list = pickle.load(f, encoding='bytes') # py3 - data_list = [] - for flip in [0, 1]: - data = torch.empty((len(issame_list) * 2, 3, image_size[0], image_size[1])) - data_list.append(data) - for idx in range(len(issame_list) * 2): - _bin = bins[idx] - img = mx.image.imdecode(_bin) - if img.shape[1] != image_size[0]: - img = mx.image.resize_short(img, image_size[0]) - img = nd.transpose(img, axes=(2, 0, 1)) - for flip in [0, 1]: - if flip == 1: - img = mx.ndarray.flip(data=img, axis=2) - data_list[flip][idx][:] = torch.from_numpy(img.asnumpy()) - if idx % 1000 == 0: - print('loading bin', idx) - print(data_list[0].shape) - return data_list, issame_list - -@torch.no_grad() -def test(data_set, backbone, batch_size, nfolds=10): - print('testing verification..') - data_list = data_set[0] - issame_list = data_set[1] - embeddings_list = [] - time_consumed = 0.0 - for i in range(len(data_list)): - data = data_list[i] - embeddings = None - ba = 0 - while ba < data.shape[0]: - bb = min(ba + batch_size, data.shape[0]) - count = bb - ba - _data = data[bb - batch_size: bb] - time0 = datetime.datetime.now() - img = ((_data / 255) - 0.5) / 0.5 - net_out: torch.Tensor = backbone(img) - _embeddings = net_out.detach().cpu().numpy() - time_now = datetime.datetime.now() - diff = time_now - time0 - time_consumed += diff.total_seconds() - if embeddings is None: - embeddings = np.zeros((data.shape[0], _embeddings.shape[1])) - embeddings[ba:bb, :] = _embeddings[(batch_size - count):, :] - ba = bb - embeddings_list.append(embeddings) - - _xnorm = 0.0 - _xnorm_cnt = 0 - for embed in embeddings_list: - for i in range(embed.shape[0]): - _em = embed[i] - _norm = np.linalg.norm(_em) - _xnorm += _norm - _xnorm_cnt += 1 - _xnorm /= _xnorm_cnt - - acc1 = 0.0 - std1 = 0.0 - embeddings = embeddings_list[0] + embeddings_list[1] - embeddings = sklearn.preprocessing.normalize(embeddings) - print(embeddings.shape) - print('infer time', time_consumed) - _, _, accuracy, val, val_std, far = evaluate(embeddings, issame_list, nrof_folds=nfolds) - acc2, std2 = np.mean(accuracy), np.std(accuracy) - return acc1, std1, acc2, std2, _xnorm, embeddings_list - - -def dumpR(data_set, - backbone, - batch_size, - name='', - data_extra=None, - label_shape=None): - print('dump verification embedding..') - data_list = data_set[0] - issame_list = data_set[1] - embeddings_list = [] - time_consumed = 0.0 - for i in range(len(data_list)): - data = data_list[i] - embeddings = None - ba = 0 - while ba < data.shape[0]: - bb = min(ba + batch_size, data.shape[0]) - count = bb - ba - - _data = nd.slice_axis(data, axis=0, begin=bb - batch_size, end=bb) - time0 = datetime.datetime.now() - if data_extra is None: - db = mx.io.DataBatch(data=(_data,), label=(_label,)) - else: - db = mx.io.DataBatch(data=(_data, _data_extra), - label=(_label,)) - model.forward(db, is_train=False) - net_out = model.get_outputs() - _embeddings = net_out[0].asnumpy() - time_now = datetime.datetime.now() - diff = time_now - time0 - time_consumed += diff.total_seconds() - if embeddings is None: - embeddings = np.zeros((data.shape[0], _embeddings.shape[1])) - embeddings[ba:bb, :] = _embeddings[(batch_size - count):, :] - ba = bb - embeddings_list.append(embeddings) - embeddings = embeddings_list[0] + embeddings_list[1] - embeddings = sklearn.preprocessing.normalize(embeddings) - actual_issame = np.asarray(issame_list) - outname = os.path.join('temp.bin') - with open(outname, 'wb') as f: - pickle.dump((embeddings, issame_list), - f, - protocol=pickle.HIGHEST_PROTOCOL) - - -# if __name__ == '__main__': -# -# parser = argparse.ArgumentParser(description='do verification') -# # general -# parser.add_argument('--data-dir', default='', help='') -# parser.add_argument('--model', -# default='../model/softmax,50', -# help='path to load model.') -# parser.add_argument('--target', -# default='lfw,cfp_ff,cfp_fp,agedb_30', -# help='test targets.') -# parser.add_argument('--gpu', default=0, type=int, help='gpu id') -# parser.add_argument('--batch-size', default=32, type=int, help='') -# parser.add_argument('--max', default='', type=str, help='') -# parser.add_argument('--mode', default=0, type=int, help='') -# parser.add_argument('--nfolds', default=10, type=int, help='') -# args = parser.parse_args() -# image_size = [112, 112] -# print('image_size', image_size) -# ctx = mx.gpu(args.gpu) -# nets = [] -# vec = args.model.split(',') -# prefix = args.model.split(',')[0] -# epochs = [] -# if len(vec) == 1: -# pdir = os.path.dirname(prefix) -# for fname in os.listdir(pdir): -# if not fname.endswith('.params'): -# continue -# _file = os.path.join(pdir, fname) -# if _file.startswith(prefix): -# epoch = int(fname.split('.')[0].split('-')[1]) -# epochs.append(epoch) -# epochs = sorted(epochs, reverse=True) -# if len(args.max) > 0: -# _max = [int(x) for x in args.max.split(',')] -# assert len(_max) == 2 -# if len(epochs) > _max[1]: -# epochs = epochs[_max[0]:_max[1]] -# -# else: -# epochs = [int(x) for x in vec[1].split('|')] -# print('model number', len(epochs)) -# time0 = datetime.datetime.now() -# for epoch in epochs: -# print('loading', prefix, epoch) -# sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch) -# # arg_params, aux_params = ch_dev(arg_params, aux_params, ctx) -# all_layers = sym.get_internals() -# sym = all_layers['fc1_output'] -# model = mx.mod.Module(symbol=sym, context=ctx, label_names=None) -# # model.bind(data_shapes=[('data', (args.batch_size, 3, image_size[0], image_size[1]))], label_shapes=[('softmax_label', (args.batch_size,))]) -# model.bind(data_shapes=[('data', (args.batch_size, 3, image_size[0], -# image_size[1]))]) -# model.set_params(arg_params, aux_params) -# nets.append(model) -# time_now = datetime.datetime.now() -# diff = time_now - time0 -# print('model loading time', diff.total_seconds()) -# -# ver_list = [] -# ver_name_list = [] -# for name in args.target.split(','): -# path = os.path.join(args.data_dir, name + ".bin") -# if os.path.exists(path): -# print('loading.. ', name) -# data_set = load_bin(path, image_size) -# ver_list.append(data_set) -# ver_name_list.append(name) -# -# if args.mode == 0: -# for i in range(len(ver_list)): -# results = [] -# for model in nets: -# acc1, std1, acc2, std2, xnorm, embeddings_list = test( -# ver_list[i], model, args.batch_size, args.nfolds) -# print('[%s]XNorm: %f' % (ver_name_list[i], xnorm)) -# print('[%s]Accuracy: %1.5f+-%1.5f' % (ver_name_list[i], acc1, std1)) -# print('[%s]Accuracy-Flip: %1.5f+-%1.5f' % (ver_name_list[i], acc2, std2)) -# results.append(acc2) -# print('Max of [%s] is %1.5f' % (ver_name_list[i], np.max(results))) -# elif args.mode == 1: -# raise ValueError -# else: -# model = nets[0] -# dumpR(ver_list[0], model, args.batch_size, args.target) diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/mobilenet_v2.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/mobilenet_v2.py deleted file mode 100644 index ab6b3791692a0d1b5da3601875711710b7bd01ba..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/mobilenet_v2.py +++ /dev/null @@ -1,180 +0,0 @@ -import logging - -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, constant_init, kaiming_init -from annotator.uniformer.mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidual, make_divisible - - -@BACKBONES.register_module() -class MobileNetV2(nn.Module): - """MobileNetV2 backbone. - - Args: - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - strides (Sequence[int], optional): Strides of the first block of each - layer. If not specified, default config in ``arch_setting`` will - be used. - dilations (Sequence[int]): Dilation of each layer. - out_indices (None or Sequence[int]): Output from which stages. - Default: (7, ). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - # Parameters to build layers. 3 parameters are needed to construct a - # layer, from left to right: expand_ratio, channel, num_blocks. - arch_settings = [[1, 16, 1], [6, 24, 2], [6, 32, 3], [6, 64, 4], - [6, 96, 3], [6, 160, 3], [6, 320, 1]] - - def __init__(self, - widen_factor=1., - strides=(1, 2, 2, 2, 1, 2, 1), - dilations=(1, 1, 1, 1, 1, 1, 1), - out_indices=(1, 2, 4, 6), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - norm_eval=False, - with_cp=False): - super(MobileNetV2, self).__init__() - self.widen_factor = widen_factor - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == len(self.arch_settings) - self.out_indices = out_indices - for index in out_indices: - if index not in range(0, 7): - raise ValueError('the item in out_indices must in ' - f'range(0, 8). But received {index}') - - if frozen_stages not in range(-1, 7): - raise ValueError('frozen_stages must be in range(-1, 7). ' - f'But received {frozen_stages}') - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.in_channels = make_divisible(32 * widen_factor, 8) - - self.conv1 = ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.layers = [] - - for i, layer_cfg in enumerate(self.arch_settings): - expand_ratio, channel, num_blocks = layer_cfg - stride = self.strides[i] - dilation = self.dilations[i] - out_channels = make_divisible(channel * widen_factor, 8) - inverted_res_layer = self.make_layer( - out_channels=out_channels, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - expand_ratio=expand_ratio) - layer_name = f'layer{i + 1}' - self.add_module(layer_name, inverted_res_layer) - self.layers.append(layer_name) - - def make_layer(self, out_channels, num_blocks, stride, dilation, - expand_ratio): - """Stack InvertedResidual blocks to build a layer for MobileNetV2. - - Args: - out_channels (int): out_channels of block. - num_blocks (int): Number of blocks. - stride (int): Stride of the first block. - dilation (int): Dilation of the first block. - expand_ratio (int): Expand the number of channels of the - hidden layer in InvertedResidual by this ratio. - """ - layers = [] - for i in range(num_blocks): - layers.append( - InvertedResidual( - self.in_channels, - out_channels, - stride if i == 0 else 1, - expand_ratio=expand_ratio, - dilation=dilation if i == 0 else 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - with_cp=self.with_cp)) - self.in_channels = out_channels - - return nn.Sequential(*layers) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - x = self.conv1(x) - - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for i in range(1, self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(MobileNetV2, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/rxf/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/rxf/README.md deleted file mode 100644 index 22a1cc47df23c7e0ebbf0ad805031478d1b4a95e..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/rxf/README.md +++ /dev/null @@ -1,52 +0,0 @@ -[Better Fine-Tuning by Reducing Representational Collapse](https://arxiv.org/abs/2008.03156) -===================== -This repo contains the code to replicate all experiments from the _Better Fine-Tuning by Reducing Representational Collapse_ paper excluding the probing results. - -The R3F sentence prediction criterion is registered as `sentence_prediction_r3f` while the label smoothing version of it is implemented as `label_smoothed_cross_entropy_r3f`. The R4F version of the sentence prediction criterion can be achieved by applying spectral norm to the classification head via the `--spectral-norm-classification-head` parameter. - -## Hyper-parameters -Our methods introduce 3 new hyper-parameters; `--eps` which sets the standard deviation or range of the distribution we're sampling from, `--r3f-lambda` which controls the combining of logistic loss and noisy KL loss and `--noise-type` which controls which parametric distribution we use ('normal', 'uniform'). - -For example to run R3F on RTE from GLUE - -``` -TOTAL_NUM_UPDATES=3120 -WARMUP_UPDATES=187 -LR=1e-05 -NUM_CLASSES=2 -MAX_SENTENCES=8 # Batch size. -ROBERTA_PATH=/path/to/roberta/model.pt - -CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin \ - --restore-file $ROBERTA_PATH \ - --max-positions 512 \ - --max-sentences $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction_r3f \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ - --noise-type uniform --r3f-lambda 0.7 \ - --user-dir examples/rxf/rxf_src -``` - -## Citation -```bibtex -@article{aghajanyan2020better, - title={Better Fine-Tuning by Reducing Representational Collapse}, - author={Aghajanyan, Armen and Shrivastava, Akshat and Gupta, Anchit and Goyal, Naman and Zettlemoyer, Luke and Gupta, Sonal}, - journal={arXiv preprint arXiv:2008.03156}, - year={2020} -} -``` diff --git a/spaces/kokofixcomputers/chat-ui/src/lib/server/models.ts b/spaces/kokofixcomputers/chat-ui/src/lib/server/models.ts deleted file mode 100644 index 6946f40623098176a3d262d08ef60961eeceb650..0000000000000000000000000000000000000000 --- a/spaces/kokofixcomputers/chat-ui/src/lib/server/models.ts +++ /dev/null @@ -1,80 +0,0 @@ -import { HF_ACCESS_TOKEN, MODELS, OLD_MODELS } from "$env/static/private"; -import { z } from "zod"; - -const modelsRaw = z - .array( - z.object({ - /** Used as an identifier in DB */ - id: z.string().optional(), - /** Used to link to the model page, and for inference */ - name: z.string().min(1), - displayName: z.string().min(1).optional(), - description: z.string().min(1).optional(), - websiteUrl: z.string().url().optional(), - datasetName: z.string().min(1).optional(), - userMessageToken: z.string().min(1), - assistantMessageToken: z.string().min(1), - messageEndToken: z.string().min(1).optional(), - preprompt: z.string().default(""), - prepromptUrl: z.string().url().optional(), - promptExamples: z - .array( - z.object({ - title: z.string().min(1), - prompt: z.string().min(1), - }) - ) - .optional(), - endpoints: z - .array( - z.object({ - url: z.string().url(), - authorization: z.string().min(1).default(`Bearer ${HF_ACCESS_TOKEN}`), - weight: z.number().int().positive().default(1), - }) - ) - .optional(), - parameters: z - .object({ - temperature: z.number().min(0).max(1), - truncate: z.number().int().positive(), - max_new_tokens: z.number().int().positive(), - stop: z.array(z.string()).optional(), - }) - .passthrough() - .optional(), - }) - ) - .parse(JSON.parse(MODELS)); - -export const models = await Promise.all( - modelsRaw.map(async (m) => ({ - ...m, - id: m.id || m.name, - displayName: m.displayName || m.name, - preprompt: m.prepromptUrl ? await fetch(m.prepromptUrl).then((r) => r.text()) : m.preprompt, - })) -); - -// Models that have been deprecated -export const oldModels = OLD_MODELS - ? z - .array( - z.object({ - id: z.string().optional(), - name: z.string().min(1), - displayName: z.string().min(1).optional(), - }) - ) - .parse(JSON.parse(OLD_MODELS)) - .map((m) => ({ ...m, id: m.id || m.name, displayName: m.displayName || m.name })) - : []; - -export type BackendModel = (typeof models)[0]; - -export const defaultModel = models[0]; - -export const validateModel = (_models: BackendModel[]) => { - // Zod enum function requires 2 parameters - return z.enum([_models[0].id, ..._models.slice(1).map((m) => m.id)]); -}; diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py deleted file mode 100644 index afd0c2664b295c62b29e0d258c1908e1937dac50..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py +++ /dev/null @@ -1,862 +0,0 @@ -from __future__ import absolute_import, division, print_function - -import asyncio -import io -import logging -import re -import weakref -from copy import copy -from urllib.parse import urlparse - -import aiohttp -import requests -import yarl - -from fsspec.asyn import AbstractAsyncStreamedFile, AsyncFileSystem, sync, sync_wrapper -from fsspec.callbacks import _DEFAULT_CALLBACK -from fsspec.exceptions import FSTimeoutError -from fsspec.spec import AbstractBufferedFile -from fsspec.utils import DEFAULT_BLOCK_SIZE, isfilelike, nullcontext, tokenize - -from ..caching import AllBytes - -# https://stackoverflow.com/a/15926317/3821154 -ex = re.compile(r"""<(a|A)\s+(?:[^>]*?\s+)?(href|HREF)=["'](?P[^"']+)""") -ex2 = re.compile(r"""(?Phttp[s]?://[-a-zA-Z0-9@:%_+.~#?&/=]+)""") -logger = logging.getLogger("fsspec.http") - - -async def get_client(**kwargs): - return aiohttp.ClientSession(**kwargs) - - -class HTTPFileSystem(AsyncFileSystem): - """ - Simple File-System for fetching data via HTTP(S) - - ``ls()`` is implemented by loading the parent page and doing a regex - match on the result. If simple_link=True, anything of the form - "http(s)://server.com/stuff?thing=other"; otherwise only links within - HTML href tags will be used. - """ - - sep = "/" - - def __init__( - self, - simple_links=True, - block_size=None, - same_scheme=True, - size_policy=None, - cache_type="bytes", - cache_options=None, - asynchronous=False, - loop=None, - client_kwargs=None, - get_client=get_client, - encoded=False, - **storage_options, - ): - """ - NB: if this is called async, you must await set_client - - Parameters - ---------- - block_size: int - Blocks to read bytes; if 0, will default to raw requests file-like - objects instead of HTTPFile instances - simple_links: bool - If True, will consider both HTML tags and anything that looks - like a URL; if False, will consider only the former. - same_scheme: True - When doing ls/glob, if this is True, only consider paths that have - http/https matching the input URLs. - size_policy: this argument is deprecated - client_kwargs: dict - Passed to aiohttp.ClientSession, see - https://docs.aiohttp.org/en/stable/client_reference.html - For example, ``{'auth': aiohttp.BasicAuth('user', 'pass')}`` - get_client: Callable[..., aiohttp.ClientSession] - A callable which takes keyword arguments and constructs - an aiohttp.ClientSession. It's state will be managed by - the HTTPFileSystem class. - storage_options: key-value - Any other parameters passed on to requests - cache_type, cache_options: defaults used in open - """ - super().__init__(self, asynchronous=asynchronous, loop=loop, **storage_options) - self.block_size = block_size if block_size is not None else DEFAULT_BLOCK_SIZE - self.simple_links = simple_links - self.same_schema = same_scheme - self.cache_type = cache_type - self.cache_options = cache_options - self.client_kwargs = client_kwargs or {} - self.get_client = get_client - self.encoded = encoded - self.kwargs = storage_options - self._session = None - - # Clean caching-related parameters from `storage_options` - # before propagating them as `request_options` through `self.kwargs`. - # TODO: Maybe rename `self.kwargs` to `self.request_options` to make - # it clearer. - request_options = copy(storage_options) - self.use_listings_cache = request_options.pop("use_listings_cache", False) - request_options.pop("listings_expiry_time", None) - request_options.pop("max_paths", None) - request_options.pop("skip_instance_cache", None) - self.kwargs = request_options - - @property - def fsid(self): - return "http" - - def encode_url(self, url): - return yarl.URL(url, encoded=self.encoded) - - @staticmethod - def close_session(loop, session): - if loop is not None and loop.is_running(): - try: - sync(loop, session.close, timeout=0.1) - return - except (TimeoutError, FSTimeoutError): - pass - connector = getattr(session, "_connector", None) - if connector is not None: - # close after loop is dead - connector._close() - - async def set_session(self): - if self._session is None: - self._session = await self.get_client(loop=self.loop, **self.client_kwargs) - if not self.asynchronous: - weakref.finalize(self, self.close_session, self.loop, self._session) - return self._session - - @classmethod - def _strip_protocol(cls, path): - """For HTTP, we always want to keep the full URL""" - return path - - @classmethod - def _parent(cls, path): - # override, since _strip_protocol is different for URLs - par = super()._parent(path) - if len(par) > 7: # "http://..." - return par - return "" - - async def _ls_real(self, url, detail=True, **kwargs): - # ignoring URL-encoded arguments - kw = self.kwargs.copy() - kw.update(kwargs) - logger.debug(url) - session = await self.set_session() - async with session.get(self.encode_url(url), **self.kwargs) as r: - self._raise_not_found_for_status(r, url) - text = await r.text() - if self.simple_links: - links = ex2.findall(text) + [u[2] for u in ex.findall(text)] - else: - links = [u[2] for u in ex.findall(text)] - out = set() - parts = urlparse(url) - for l in links: - if isinstance(l, tuple): - l = l[1] - if l.startswith("/") and len(l) > 1: - # absolute URL on this server - l = parts.scheme + "://" + parts.netloc + l - if l.startswith("http"): - if self.same_schema and l.startswith(url.rstrip("/") + "/"): - out.add(l) - elif l.replace("https", "http").startswith( - url.replace("https", "http").rstrip("/") + "/" - ): - # allowed to cross http <-> https - out.add(l) - else: - if l not in ["..", "../"]: - # Ignore FTP-like "parent" - out.add("/".join([url.rstrip("/"), l.lstrip("/")])) - if not out and url.endswith("/"): - out = await self._ls_real(url.rstrip("/"), detail=False) - if detail: - return [ - { - "name": u, - "size": None, - "type": "directory" if u.endswith("/") else "file", - } - for u in out - ] - else: - return list(sorted(out)) - - async def _ls(self, url, detail=True, **kwargs): - - if self.use_listings_cache and url in self.dircache: - out = self.dircache[url] - else: - out = await self._ls_real(url, detail=detail, **kwargs) - self.dircache[url] = out - return out - - ls = sync_wrapper(_ls) - - def _raise_not_found_for_status(self, response, url): - """ - Raises FileNotFoundError for 404s, otherwise uses raise_for_status. - """ - if response.status == 404: - raise FileNotFoundError(url) - response.raise_for_status() - - async def _cat_file(self, url, start=None, end=None, **kwargs): - kw = self.kwargs.copy() - kw.update(kwargs) - logger.debug(url) - - if start is not None or end is not None: - if start == end: - return b"" - headers = kw.pop("headers", {}).copy() - - headers["Range"] = await self._process_limits(url, start, end) - kw["headers"] = headers - session = await self.set_session() - async with session.get(self.encode_url(url), **kw) as r: - out = await r.read() - self._raise_not_found_for_status(r, url) - return out - - async def _get_file( - self, rpath, lpath, chunk_size=5 * 2**20, callback=_DEFAULT_CALLBACK, **kwargs - ): - kw = self.kwargs.copy() - kw.update(kwargs) - logger.debug(rpath) - session = await self.set_session() - async with session.get(self.encode_url(rpath), **kw) as r: - try: - size = int(r.headers["content-length"]) - except (ValueError, KeyError): - size = None - - callback.set_size(size) - self._raise_not_found_for_status(r, rpath) - if isfilelike(lpath): - outfile = lpath - else: - outfile = open(lpath, "wb") - - try: - chunk = True - while chunk: - chunk = await r.content.read(chunk_size) - outfile.write(chunk) - callback.relative_update(len(chunk)) - finally: - if not isfilelike(lpath): - outfile.close() - - async def _put_file( - self, - lpath, - rpath, - chunk_size=5 * 2**20, - callback=_DEFAULT_CALLBACK, - method="post", - **kwargs, - ): - async def gen_chunks(): - # Support passing arbitrary file-like objects - # and use them instead of streams. - if isinstance(lpath, io.IOBase): - context = nullcontext(lpath) - use_seek = False # might not support seeking - else: - context = open(lpath, "rb") - use_seek = True - - with context as f: - if use_seek: - callback.set_size(f.seek(0, 2)) - f.seek(0) - else: - callback.set_size(getattr(f, "size", None)) - - chunk = f.read(chunk_size) - while chunk: - yield chunk - callback.relative_update(len(chunk)) - chunk = f.read(chunk_size) - - kw = self.kwargs.copy() - kw.update(kwargs) - session = await self.set_session() - - method = method.lower() - if method not in ("post", "put"): - raise ValueError( - f"method has to be either 'post' or 'put', not: {method!r}" - ) - - meth = getattr(session, method) - async with meth(rpath, data=gen_chunks(), **kw) as resp: - self._raise_not_found_for_status(resp, rpath) - - async def _exists(self, path, **kwargs): - kw = self.kwargs.copy() - kw.update(kwargs) - try: - logger.debug(path) - session = await self.set_session() - r = await session.get(self.encode_url(path), **kw) - async with r: - return r.status < 400 - except (requests.HTTPError, aiohttp.ClientError): - return False - - async def _isfile(self, path, **kwargs): - return await self._exists(path, **kwargs) - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=None, # XXX: This differs from the base class. - cache_type=None, - cache_options=None, - size=None, - **kwargs, - ): - """Make a file-like object - - Parameters - ---------- - path: str - Full URL with protocol - mode: string - must be "rb" - block_size: int or None - Bytes to download in one request; use instance value if None. If - zero, will return a streaming Requests file-like instance. - kwargs: key-value - Any other parameters, passed to requests calls - """ - if mode != "rb": - raise NotImplementedError - block_size = block_size if block_size is not None else self.block_size - kw = self.kwargs.copy() - kw["asynchronous"] = self.asynchronous - kw.update(kwargs) - size = size or self.info(path, **kwargs)["size"] - session = sync(self.loop, self.set_session) - if block_size and size: - return HTTPFile( - self, - path, - session=session, - block_size=block_size, - mode=mode, - size=size, - cache_type=cache_type or self.cache_type, - cache_options=cache_options or self.cache_options, - loop=self.loop, - **kw, - ) - else: - return HTTPStreamFile( - self, - path, - mode=mode, - loop=self.loop, - session=session, - **kw, - ) - - async def open_async(self, path, mode="rb", size=None, **kwargs): - session = await self.set_session() - if size is None: - try: - size = (await self._info(path, **kwargs))["size"] - except FileNotFoundError: - pass - return AsyncStreamFile( - self, - path, - loop=self.loop, - session=session, - size=size, - **kwargs, - ) - - def ukey(self, url): - """Unique identifier; assume HTTP files are static, unchanging""" - return tokenize(url, self.kwargs, self.protocol) - - async def _info(self, url, **kwargs): - """Get info of URL - - Tries to access location via HEAD, and then GET methods, but does - not fetch the data. - - It is possible that the server does not supply any size information, in - which case size will be given as None (and certain operations on the - corresponding file will not work). - """ - info = {} - session = await self.set_session() - - for policy in ["head", "get"]: - try: - info.update( - await _file_info( - self.encode_url(url), - size_policy=policy, - session=session, - **self.kwargs, - **kwargs, - ) - ) - if info.get("size") is not None: - break - except Exception as exc: - if policy == "get": - # If get failed, then raise a FileNotFoundError - raise FileNotFoundError(url) from exc - logger.debug(str(exc)) - - return {"name": url, "size": None, **info, "type": "file"} - - async def _glob(self, path, **kwargs): - """ - Find files by glob-matching. - - This implementation is idntical to the one in AbstractFileSystem, - but "?" is not considered as a character for globbing, because it is - so common in URLs, often identifying the "query" part. - """ - import re - - ends = path.endswith("/") - path = self._strip_protocol(path) - indstar = path.find("*") if path.find("*") >= 0 else len(path) - indbrace = path.find("[") if path.find("[") >= 0 else len(path) - - ind = min(indstar, indbrace) - - detail = kwargs.pop("detail", False) - - if not has_magic(path): - root = path - depth = 1 - if ends: - path += "/*" - elif await self._exists(path): - if not detail: - return [path] - else: - return {path: await self._info(path)} - else: - if not detail: - return [] # glob of non-existent returns empty - else: - return {} - elif "/" in path[:ind]: - ind2 = path[:ind].rindex("/") - root = path[: ind2 + 1] - depth = None if "**" in path else path[ind2 + 1 :].count("/") + 1 - else: - root = "" - depth = None if "**" in path else path[ind + 1 :].count("/") + 1 - - allpaths = await self._find( - root, maxdepth=depth, withdirs=True, detail=True, **kwargs - ) - # Escape characters special to python regex, leaving our supported - # special characters in place. - # See https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html - # for shell globbing details. - pattern = ( - "^" - + ( - path.replace("\\", r"\\") - .replace(".", r"\.") - .replace("+", r"\+") - .replace("//", "/") - .replace("(", r"\(") - .replace(")", r"\)") - .replace("|", r"\|") - .replace("^", r"\^") - .replace("$", r"\$") - .replace("{", r"\{") - .replace("}", r"\}") - .rstrip("/") - ) - + "$" - ) - pattern = re.sub("[*]{2}", "=PLACEHOLDER=", pattern) - pattern = re.sub("[*]", "[^/]*", pattern) - pattern = re.compile(pattern.replace("=PLACEHOLDER=", ".*")) - out = { - p: allpaths[p] - for p in sorted(allpaths) - if pattern.match(p.replace("//", "/").rstrip("/")) - } - if detail: - return out - else: - return list(out) - - async def _isdir(self, path): - # override, since all URLs are (also) files - try: - return bool(await self._ls(path)) - except (FileNotFoundError, ValueError): - return False - - -class HTTPFile(AbstractBufferedFile): - """ - A file-like object pointing to a remove HTTP(S) resource - - Supports only reading, with read-ahead of a predermined block-size. - - In the case that the server does not supply the filesize, only reading of - the complete file in one go is supported. - - Parameters - ---------- - url: str - Full URL of the remote resource, including the protocol - session: requests.Session or None - All calls will be made within this session, to avoid restarting - connections where the server allows this - block_size: int or None - The amount of read-ahead to do, in bytes. Default is 5MB, or the value - configured for the FileSystem creating this file - size: None or int - If given, this is the size of the file in bytes, and we don't attempt - to call the server to find the value. - kwargs: all other key-values are passed to requests calls. - """ - - def __init__( - self, - fs, - url, - session=None, - block_size=None, - mode="rb", - cache_type="bytes", - cache_options=None, - size=None, - loop=None, - asynchronous=False, - **kwargs, - ): - if mode != "rb": - raise NotImplementedError("File mode not supported") - self.asynchronous = asynchronous - self.url = url - self.session = session - self.details = {"name": url, "size": size, "type": "file"} - super().__init__( - fs=fs, - path=url, - mode=mode, - block_size=block_size, - cache_type=cache_type, - cache_options=cache_options, - **kwargs, - ) - self.loop = loop - - def read(self, length=-1): - """Read bytes from file - - Parameters - ---------- - length: int - Read up to this many bytes. If negative, read all content to end of - file. If the server has not supplied the filesize, attempting to - read only part of the data will raise a ValueError. - """ - if ( - (length < 0 and self.loc == 0) # explicit read all - # but not when the size is known and fits into a block anyways - and not (self.size is not None and self.size <= self.blocksize) - ): - self._fetch_all() - if self.size is None: - if length < 0: - self._fetch_all() - else: - length = min(self.size - self.loc, length) - return super().read(length) - - async def async_fetch_all(self): - """Read whole file in one shot, without caching - - This is only called when position is still at zero, - and read() is called without a byte-count. - """ - logger.debug(f"Fetch all for {self}") - if not isinstance(self.cache, AllBytes): - r = await self.session.get(self.fs.encode_url(self.url), **self.kwargs) - async with r: - r.raise_for_status() - out = await r.read() - self.cache = AllBytes( - size=len(out), fetcher=None, blocksize=None, data=out - ) - self.size = len(out) - - _fetch_all = sync_wrapper(async_fetch_all) - - def _parse_content_range(self, headers): - """Parse the Content-Range header""" - s = headers.get("Content-Range", "") - m = re.match(r"bytes (\d+-\d+|\*)/(\d+|\*)", s) - if not m: - return None, None, None - - if m[1] == "*": - start = end = None - else: - start, end = [int(x) for x in m[1].split("-")] - total = None if m[2] == "*" else int(m[2]) - return start, end, total - - async def async_fetch_range(self, start, end): - """Download a block of data - - The expectation is that the server returns only the requested bytes, - with HTTP code 206. If this is not the case, we first check the headers, - and then stream the output - if the data size is bigger than we - requested, an exception is raised. - """ - logger.debug(f"Fetch range for {self}: {start}-{end}") - kwargs = self.kwargs.copy() - headers = kwargs.pop("headers", {}).copy() - headers["Range"] = "bytes=%i-%i" % (start, end - 1) - logger.debug(str(self.url) + " : " + headers["Range"]) - r = await self.session.get( - self.fs.encode_url(self.url), headers=headers, **kwargs - ) - async with r: - if r.status == 416: - # range request outside file - return b"" - r.raise_for_status() - - # If the server has handled the range request, it should reply - # with status 206 (partial content). But we'll guess that a suitable - # Content-Range header or a Content-Length no more than the - # requested range also mean we have got the desired range. - response_is_range = ( - r.status == 206 - or self._parse_content_range(r.headers)[0] == start - or int(r.headers.get("Content-Length", end + 1)) <= end - start - ) - - if response_is_range: - # partial content, as expected - out = await r.read() - elif start > 0: - raise ValueError( - "The HTTP server doesn't appear to support range requests. " - "Only reading this file from the beginning is supported. " - "Open with block_size=0 for a streaming file interface." - ) - else: - # Response is not a range, but we want the start of the file, - # so we can read the required amount anyway. - cl = 0 - out = [] - while True: - chunk = await r.content.read(2**20) - # data size unknown, let's read until we have enough - if chunk: - out.append(chunk) - cl += len(chunk) - if cl > end - start: - break - else: - break - out = b"".join(out)[: end - start] - return out - - _fetch_range = sync_wrapper(async_fetch_range) - - def __reduce__(self): - return ( - reopen, - ( - self.fs, - self.url, - self.mode, - self.blocksize, - self.cache.name if self.cache else "none", - self.size, - ), - ) - - -def reopen(fs, url, mode, blocksize, cache_type, size=None): - return fs.open( - url, mode=mode, block_size=blocksize, cache_type=cache_type, size=size - ) - - -magic_check = re.compile("([*[])") - - -def has_magic(s): - match = magic_check.search(s) - return match is not None - - -class HTTPStreamFile(AbstractBufferedFile): - def __init__(self, fs, url, mode="rb", loop=None, session=None, **kwargs): - self.asynchronous = kwargs.pop("asynchronous", False) - self.url = url - self.loop = loop - self.session = session - if mode != "rb": - raise ValueError - self.details = {"name": url, "size": None} - super().__init__(fs=fs, path=url, mode=mode, cache_type="none", **kwargs) - - async def cor(): - r = await self.session.get(self.fs.encode_url(url), **kwargs).__aenter__() - self.fs._raise_not_found_for_status(r, url) - return r - - self.r = sync(self.loop, cor) - - def seek(self, loc, whence=0): - if loc == 0 and whence == 1: - return - if loc == self.loc and whence == 0: - return - raise ValueError("Cannot seek streaming HTTP file") - - async def _read(self, num=-1): - out = await self.r.content.read(num) - self.loc += len(out) - return out - - read = sync_wrapper(_read) - - async def _close(self): - self.r.close() - - def close(self): - asyncio.run_coroutine_threadsafe(self._close(), self.loop) - super().close() - - def __reduce__(self): - return reopen, (self.fs, self.url, self.mode, self.blocksize, self.cache.name) - - -class AsyncStreamFile(AbstractAsyncStreamedFile): - def __init__( - self, fs, url, mode="rb", loop=None, session=None, size=None, **kwargs - ): - self.url = url - self.session = session - self.r = None - if mode != "rb": - raise ValueError - self.details = {"name": url, "size": None} - self.kwargs = kwargs - super().__init__(fs=fs, path=url, mode=mode, cache_type="none") - self.size = size - - async def read(self, num=-1): - if self.r is None: - r = await self.session.get( - self.fs.encode_url(self.url), **self.kwargs - ).__aenter__() - self.fs._raise_not_found_for_status(r, self.url) - self.r = r - out = await self.r.content.read(num) - self.loc += len(out) - return out - - async def close(self): - if self.r is not None: - self.r.close() - self.r = None - await super().close() - - -async def get_range(session, url, start, end, file=None, **kwargs): - # explicit get a range when we know it must be safe - kwargs = kwargs.copy() - headers = kwargs.pop("headers", {}).copy() - headers["Range"] = "bytes=%i-%i" % (start, end - 1) - r = await session.get(url, headers=headers, **kwargs) - r.raise_for_status() - async with r: - out = await r.read() - if file: - with open(file, "rb+") as f: - f.seek(start) - f.write(out) - else: - return out - - -async def _file_info(url, session, size_policy="head", **kwargs): - """Call HEAD on the server to get details about the file (size/checksum etc.) - - Default operation is to explicitly allow redirects and use encoding - 'identity' (no compression) to get the true size of the target. - """ - logger.debug("Retrieve file size for %s" % url) - kwargs = kwargs.copy() - ar = kwargs.pop("allow_redirects", True) - head = kwargs.get("headers", {}).copy() - head["Accept-Encoding"] = "identity" - kwargs["headers"] = head - - info = {} - if size_policy == "head": - r = await session.head(url, allow_redirects=ar, **kwargs) - elif size_policy == "get": - r = await session.get(url, allow_redirects=ar, **kwargs) - else: - raise TypeError('size_policy must be "head" or "get", got %s' "" % size_policy) - async with r: - r.raise_for_status() - - # TODO: - # recognise lack of 'Accept-Ranges', - # or 'Accept-Ranges': 'none' (not 'bytes') - # to mean streaming only, no random access => return None - if "Content-Length" in r.headers: - info["size"] = int(r.headers["Content-Length"]) - elif "Content-Range" in r.headers: - info["size"] = int(r.headers["Content-Range"].split("/")[1]) - - for checksum_field in ["ETag", "Content-MD5", "Digest"]: - if r.headers.get(checksum_field): - info[checksum_field] = r.headers[checksum_field] - - return info - - -async def _file_size(url, session=None, *args, **kwargs): - if session is None: - session = await get_client() - info = await _file_info(url, session=session, *args, **kwargs) - return info.get("size") - - -file_size = sync_wrapper(_file_size) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/tests/abstract/get.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/tests/abstract/get.py deleted file mode 100644 index 992e9c6ae54822a76212ff68a8ea604121c71628..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/tests/abstract/get.py +++ /dev/null @@ -1,36 +0,0 @@ -class AbstractGetTests: - def test_get_directory_recursive( - self, fs, fs_join, fs_path, local_fs, local_join, local_path - ): - # https://github.com/fsspec/filesystem_spec/issues/1062 - # Recursive cp/get/put of source directory into non-existent target directory. - src = fs_join(fs_path, "src") - src_file = fs_join(src, "file") - fs.mkdir(src) - fs.touch(src_file) - - target = local_join(local_path, "target") - - # get without slash - assert not local_fs.exists(target) - for loop in range(2): - fs.get(src, target, recursive=True) - assert local_fs.isdir(target) - - if loop == 0: - assert local_fs.isfile(local_join(target, "file")) - assert not local_fs.exists(local_join(target, "src")) - else: - assert local_fs.isfile(local_join(target, "file")) - assert local_fs.isdir(local_join(target, "src")) - assert local_fs.isfile(local_join(target, "src", "file")) - - local_fs.rm(target, recursive=True) - - # get with slash - assert not local_fs.exists(target) - for loop in range(2): - fs.get(src + "/", target, recursive=True) - assert local_fs.isdir(target) - assert local_fs.isfile(local_join(target, "file")) - assert not local_fs.exists(local_join(target, "src")) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/image.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/image.py deleted file mode 100644 index 51db3fa5c3d488372ee7a76fbd9be2fb6e246c24..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/image.py +++ /dev/null @@ -1,1818 +0,0 @@ -""" -The image module supports basic image loading, rescaling and display -operations. -""" - -import math -import os -import logging -from pathlib import Path -import warnings - -import numpy as np -import PIL.PngImagePlugin - -import matplotlib as mpl -from matplotlib import _api, cbook, cm -# For clarity, names from _image are given explicitly in this module -from matplotlib import _image -# For user convenience, the names from _image are also imported into -# the image namespace -from matplotlib._image import * -import matplotlib.artist as martist -from matplotlib.backend_bases import FigureCanvasBase -import matplotlib.colors as mcolors -from matplotlib.transforms import ( - Affine2D, BboxBase, Bbox, BboxTransform, BboxTransformTo, - IdentityTransform, TransformedBbox) - -_log = logging.getLogger(__name__) - -# map interpolation strings to module constants -_interpd_ = { - 'antialiased': _image.NEAREST, # this will use nearest or Hanning... - 'none': _image.NEAREST, # fall back to nearest when not supported - 'nearest': _image.NEAREST, - 'bilinear': _image.BILINEAR, - 'bicubic': _image.BICUBIC, - 'spline16': _image.SPLINE16, - 'spline36': _image.SPLINE36, - 'hanning': _image.HANNING, - 'hamming': _image.HAMMING, - 'hermite': _image.HERMITE, - 'kaiser': _image.KAISER, - 'quadric': _image.QUADRIC, - 'catrom': _image.CATROM, - 'gaussian': _image.GAUSSIAN, - 'bessel': _image.BESSEL, - 'mitchell': _image.MITCHELL, - 'sinc': _image.SINC, - 'lanczos': _image.LANCZOS, - 'blackman': _image.BLACKMAN, -} - -interpolations_names = set(_interpd_) - - -def composite_images(images, renderer, magnification=1.0): - """ - Composite a number of RGBA images into one. The images are - composited in the order in which they appear in the *images* list. - - Parameters - ---------- - images : list of Images - Each must have a `make_image` method. For each image, - `can_composite` should return `True`, though this is not - enforced by this function. Each image must have a purely - affine transformation with no shear. - - renderer : `.RendererBase` - - magnification : float, default: 1 - The additional magnification to apply for the renderer in use. - - Returns - ------- - image : uint8 array (M, N, 4) - The composited RGBA image. - offset_x, offset_y : float - The (left, bottom) offset where the composited image should be placed - in the output figure. - """ - if len(images) == 0: - return np.empty((0, 0, 4), dtype=np.uint8), 0, 0 - - parts = [] - bboxes = [] - for image in images: - data, x, y, trans = image.make_image(renderer, magnification) - if data is not None: - x *= magnification - y *= magnification - parts.append((data, x, y, image._get_scalar_alpha())) - bboxes.append( - Bbox([[x, y], [x + data.shape[1], y + data.shape[0]]])) - - if len(parts) == 0: - return np.empty((0, 0, 4), dtype=np.uint8), 0, 0 - - bbox = Bbox.union(bboxes) - - output = np.zeros( - (int(bbox.height), int(bbox.width), 4), dtype=np.uint8) - - for data, x, y, alpha in parts: - trans = Affine2D().translate(x - bbox.x0, y - bbox.y0) - _image.resample(data, output, trans, _image.NEAREST, - resample=False, alpha=alpha) - - return output, bbox.x0 / magnification, bbox.y0 / magnification - - -def _draw_list_compositing_images( - renderer, parent, artists, suppress_composite=None): - """ - Draw a sorted list of artists, compositing images into a single - image where possible. - - For internal Matplotlib use only: It is here to reduce duplication - between `Figure.draw` and `Axes.draw`, but otherwise should not be - generally useful. - """ - has_images = any(isinstance(x, _ImageBase) for x in artists) - - # override the renderer default if suppressComposite is not None - not_composite = (suppress_composite if suppress_composite is not None - else renderer.option_image_nocomposite()) - - if not_composite or not has_images: - for a in artists: - a.draw(renderer) - else: - # Composite any adjacent images together - image_group = [] - mag = renderer.get_image_magnification() - - def flush_images(): - if len(image_group) == 1: - image_group[0].draw(renderer) - elif len(image_group) > 1: - data, l, b = composite_images(image_group, renderer, mag) - if data.size != 0: - gc = renderer.new_gc() - gc.set_clip_rectangle(parent.bbox) - gc.set_clip_path(parent.get_clip_path()) - renderer.draw_image(gc, round(l), round(b), data) - gc.restore() - del image_group[:] - - for a in artists: - if (isinstance(a, _ImageBase) and a.can_composite() and - a.get_clip_on() and not a.get_clip_path()): - image_group.append(a) - else: - flush_images() - a.draw(renderer) - flush_images() - - -def _resample( - image_obj, data, out_shape, transform, *, resample=None, alpha=1): - """ - Convenience wrapper around `._image.resample` to resample *data* to - *out_shape* (with a third dimension if *data* is RGBA) that takes care of - allocating the output array and fetching the relevant properties from the - Image object *image_obj*. - """ - # AGG can only handle coordinates smaller than 24-bit signed integers, - # so raise errors if the input data is larger than _image.resample can - # handle. - msg = ('Data with more than {n} cannot be accurately displayed. ' - 'Downsampling to less than {n} before displaying. ' - 'To remove this warning, manually downsample your data.') - if data.shape[1] > 2**23: - warnings.warn(msg.format(n='2**23 columns')) - step = int(np.ceil(data.shape[1] / 2**23)) - data = data[:, ::step] - transform = Affine2D().scale(step, 1) + transform - if data.shape[0] > 2**24: - warnings.warn(msg.format(n='2**24 rows')) - step = int(np.ceil(data.shape[0] / 2**24)) - data = data[::step, :] - transform = Affine2D().scale(1, step) + transform - # decide if we need to apply anti-aliasing if the data is upsampled: - # compare the number of displayed pixels to the number of - # the data pixels. - interpolation = image_obj.get_interpolation() - if interpolation == 'antialiased': - # don't antialias if upsampling by an integer number or - # if zooming in more than a factor of 3 - pos = np.array([[0, 0], [data.shape[1], data.shape[0]]]) - disp = transform.transform(pos) - dispx = np.abs(np.diff(disp[:, 0])) - dispy = np.abs(np.diff(disp[:, 1])) - if ((dispx > 3 * data.shape[1] or - dispx == data.shape[1] or - dispx == 2 * data.shape[1]) and - (dispy > 3 * data.shape[0] or - dispy == data.shape[0] or - dispy == 2 * data.shape[0])): - interpolation = 'nearest' - else: - interpolation = 'hanning' - out = np.zeros(out_shape + data.shape[2:], data.dtype) # 2D->2D, 3D->3D. - if resample is None: - resample = image_obj.get_resample() - _image.resample(data, out, transform, - _interpd_[interpolation], - resample, - alpha, - image_obj.get_filternorm(), - image_obj.get_filterrad()) - return out - - -def _rgb_to_rgba(A): - """ - Convert an RGB image to RGBA, as required by the image resample C++ - extension. - """ - rgba = np.zeros((A.shape[0], A.shape[1], 4), dtype=A.dtype) - rgba[:, :, :3] = A - if rgba.dtype == np.uint8: - rgba[:, :, 3] = 255 - else: - rgba[:, :, 3] = 1.0 - return rgba - - -class _ImageBase(martist.Artist, cm.ScalarMappable): - """ - Base class for images. - - interpolation and cmap default to their rc settings - - cmap is a colors.Colormap instance - norm is a colors.Normalize instance to map luminance to 0-1 - - extent is data axes (left, right, bottom, top) for making image plots - registered with data plots. Default is to label the pixel - centers with the zero-based row and column indices. - - Additional kwargs are matplotlib.artist properties - """ - zorder = 0 - - def __init__(self, ax, - cmap=None, - norm=None, - interpolation=None, - origin=None, - filternorm=True, - filterrad=4.0, - resample=False, - *, - interpolation_stage=None, - **kwargs - ): - martist.Artist.__init__(self) - cm.ScalarMappable.__init__(self, norm, cmap) - if origin is None: - origin = mpl.rcParams['image.origin'] - _api.check_in_list(["upper", "lower"], origin=origin) - self.origin = origin - self.set_filternorm(filternorm) - self.set_filterrad(filterrad) - self.set_interpolation(interpolation) - self.set_interpolation_stage(interpolation_stage) - self.set_resample(resample) - self.axes = ax - - self._imcache = None - - self._internal_update(kwargs) - - def __str__(self): - try: - size = self.get_size() - return f"{type(self).__name__}(size={size!r})" - except RuntimeError: - return type(self).__name__ - - def __getstate__(self): - # Save some space on the pickle by not saving the cache. - return {**super().__getstate__(), "_imcache": None} - - def get_size(self): - """Return the size of the image as tuple (numrows, numcols).""" - if self._A is None: - raise RuntimeError('You must first set the image array') - - return self._A.shape[:2] - - def set_alpha(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - Parameters - ---------- - alpha : float or 2D array-like or None - """ - martist.Artist._set_alpha_for_array(self, alpha) - if np.ndim(alpha) not in (0, 2): - raise TypeError('alpha must be a float, two-dimensional ' - 'array, or None') - self._imcache = None - - def _get_scalar_alpha(self): - """ - Get a scalar alpha value to be applied to the artist as a whole. - - If the alpha value is a matrix, the method returns 1.0 because pixels - have individual alpha values (see `~._ImageBase._make_image` for - details). If the alpha value is a scalar, the method returns said value - to be applied to the artist as a whole because pixels do not have - individual alpha values. - """ - return 1.0 if self._alpha is None or np.ndim(self._alpha) > 0 \ - else self._alpha - - def changed(self): - """ - Call this whenever the mappable is changed so observers can update. - """ - self._imcache = None - cm.ScalarMappable.changed(self) - - def _make_image(self, A, in_bbox, out_bbox, clip_bbox, magnification=1.0, - unsampled=False, round_to_pixel_border=True): - """ - Normalize, rescale, and colormap the image *A* from the given *in_bbox* - (in data space), to the given *out_bbox* (in pixel space) clipped to - the given *clip_bbox* (also in pixel space), and magnified by the - *magnification* factor. - - *A* may be a greyscale image (M, N) with a dtype of float32, float64, - float128, uint16 or uint8, or an (M, N, 4) RGBA image with a dtype of - float32, float64, float128, or uint8. - - If *unsampled* is True, the image will not be scaled, but an - appropriate affine transformation will be returned instead. - - If *round_to_pixel_border* is True, the output image size will be - rounded to the nearest pixel boundary. This makes the images align - correctly with the axes. It should not be used if exact scaling is - needed, such as for `FigureImage`. - - Returns - ------- - image : (M, N, 4) uint8 array - The RGBA image, resampled unless *unsampled* is True. - x, y : float - The upper left corner where the image should be drawn, in pixel - space. - trans : Affine2D - The affine transformation from image to pixel space. - """ - if A is None: - raise RuntimeError('You must first set the image ' - 'array or the image attribute') - if A.size == 0: - raise RuntimeError("_make_image must get a non-empty image. " - "Your Artist's draw method must filter before " - "this method is called.") - - clipped_bbox = Bbox.intersection(out_bbox, clip_bbox) - - if clipped_bbox is None: - return None, 0, 0, None - - out_width_base = clipped_bbox.width * magnification - out_height_base = clipped_bbox.height * magnification - - if out_width_base == 0 or out_height_base == 0: - return None, 0, 0, None - - if self.origin == 'upper': - # Flip the input image using a transform. This avoids the - # problem with flipping the array, which results in a copy - # when it is converted to contiguous in the C wrapper - t0 = Affine2D().translate(0, -A.shape[0]).scale(1, -1) - else: - t0 = IdentityTransform() - - t0 += ( - Affine2D() - .scale( - in_bbox.width / A.shape[1], - in_bbox.height / A.shape[0]) - .translate(in_bbox.x0, in_bbox.y0) - + self.get_transform()) - - t = (t0 - + (Affine2D() - .translate(-clipped_bbox.x0, -clipped_bbox.y0) - .scale(magnification))) - - # So that the image is aligned with the edge of the axes, we want to - # round up the output width to the next integer. This also means - # scaling the transform slightly to account for the extra subpixel. - if (t.is_affine and round_to_pixel_border and - (out_width_base % 1.0 != 0.0 or out_height_base % 1.0 != 0.0)): - out_width = math.ceil(out_width_base) - out_height = math.ceil(out_height_base) - extra_width = (out_width - out_width_base) / out_width_base - extra_height = (out_height - out_height_base) / out_height_base - t += Affine2D().scale(1.0 + extra_width, 1.0 + extra_height) - else: - out_width = int(out_width_base) - out_height = int(out_height_base) - out_shape = (out_height, out_width) - - if not unsampled: - if not (A.ndim == 2 or A.ndim == 3 and A.shape[-1] in (3, 4)): - raise ValueError(f"Invalid shape {A.shape} for image data") - if A.ndim == 2 and self._interpolation_stage != 'rgba': - # if we are a 2D array, then we are running through the - # norm + colormap transformation. However, in general the - # input data is not going to match the size on the screen so we - # have to resample to the correct number of pixels - - # TODO slice input array first - a_min = A.min() - a_max = A.max() - if a_min is np.ma.masked: # All masked; values don't matter. - a_min, a_max = np.int32(0), np.int32(1) - if A.dtype.kind == 'f': # Float dtype: scale to same dtype. - scaled_dtype = np.dtype( - np.float64 if A.dtype.itemsize > 4 else np.float32) - if scaled_dtype.itemsize < A.dtype.itemsize: - _api.warn_external(f"Casting input data from {A.dtype}" - f" to {scaled_dtype} for imshow.") - else: # Int dtype, likely. - # Scale to appropriately sized float: use float32 if the - # dynamic range is small, to limit the memory footprint. - da = a_max.astype(np.float64) - a_min.astype(np.float64) - scaled_dtype = np.float64 if da > 1e8 else np.float32 - - # Scale the input data to [.1, .9]. The Agg interpolators clip - # to [0, 1] internally, and we use a smaller input scale to - # identify the interpolated points that need to be flagged as - # over/under. This may introduce numeric instabilities in very - # broadly scaled data. - - # Always copy, and don't allow array subtypes. - A_scaled = np.array(A, dtype=scaled_dtype) - # Clip scaled data around norm if necessary. This is necessary - # for big numbers at the edge of float64's ability to represent - # changes. Applying a norm first would be good, but ruins the - # interpolation of over numbers. - self.norm.autoscale_None(A) - dv = np.float64(self.norm.vmax) - np.float64(self.norm.vmin) - vmid = np.float64(self.norm.vmin) + dv / 2 - fact = 1e7 if scaled_dtype == np.float64 else 1e4 - newmin = vmid - dv * fact - if newmin < a_min: - newmin = None - else: - a_min = np.float64(newmin) - newmax = vmid + dv * fact - if newmax > a_max: - newmax = None - else: - a_max = np.float64(newmax) - if newmax is not None or newmin is not None: - np.clip(A_scaled, newmin, newmax, out=A_scaled) - - # Rescale the raw data to [offset, 1-offset] so that the - # resampling code will run cleanly. Using dyadic numbers here - # could reduce the error, but would not fully eliminate it and - # breaks a number of tests (due to the slightly different - # error bouncing some pixels across a boundary in the (very - # quantized) colormapping step). - offset = .1 - frac = .8 - # Run vmin/vmax through the same rescaling as the raw data; - # otherwise, data values close or equal to the boundaries can - # end up on the wrong side due to floating point error. - vmin, vmax = self.norm.vmin, self.norm.vmax - if vmin is np.ma.masked: - vmin, vmax = a_min, a_max - vrange = np.array([vmin, vmax], dtype=scaled_dtype) - - A_scaled -= a_min - vrange -= a_min - # .item() handles a_min/a_max being ndarray subclasses. - a_min = a_min.astype(scaled_dtype).item() - a_max = a_max.astype(scaled_dtype).item() - - if a_min != a_max: - A_scaled /= ((a_max - a_min) / frac) - vrange /= ((a_max - a_min) / frac) - A_scaled += offset - vrange += offset - # resample the input data to the correct resolution and shape - A_resampled = _resample(self, A_scaled, out_shape, t) - del A_scaled # Make sure we don't use A_scaled anymore! - # Un-scale the resampled data to approximately the original - # range. Things that interpolated to outside the original range - # will still be outside, but possibly clipped in the case of - # higher order interpolation + drastically changing data. - A_resampled -= offset - vrange -= offset - if a_min != a_max: - A_resampled *= ((a_max - a_min) / frac) - vrange *= ((a_max - a_min) / frac) - A_resampled += a_min - vrange += a_min - # if using NoNorm, cast back to the original datatype - if isinstance(self.norm, mcolors.NoNorm): - A_resampled = A_resampled.astype(A.dtype) - - mask = (np.where(A.mask, np.float32(np.nan), np.float32(1)) - if A.mask.shape == A.shape # nontrivial mask - else np.ones_like(A, np.float32)) - # we always have to interpolate the mask to account for - # non-affine transformations - out_alpha = _resample(self, mask, out_shape, t, resample=True) - del mask # Make sure we don't use mask anymore! - # Agg updates out_alpha in place. If the pixel has no image - # data it will not be updated (and still be 0 as we initialized - # it), if input data that would go into that output pixel than - # it will be `nan`, if all the input data for a pixel is good - # it will be 1, and if there is _some_ good data in that output - # pixel it will be between [0, 1] (such as a rotated image). - out_mask = np.isnan(out_alpha) - out_alpha[out_mask] = 1 - # Apply the pixel-by-pixel alpha values if present - alpha = self.get_alpha() - if alpha is not None and np.ndim(alpha) > 0: - out_alpha *= _resample(self, alpha, out_shape, - t, resample=True) - # mask and run through the norm - resampled_masked = np.ma.masked_array(A_resampled, out_mask) - # we have re-set the vmin/vmax to account for small errors - # that may have moved input values in/out of range - s_vmin, s_vmax = vrange - if isinstance(self.norm, mcolors.LogNorm) and s_vmin <= 0: - # Don't give 0 or negative values to LogNorm - s_vmin = np.finfo(scaled_dtype).eps - # Block the norm from sending an update signal during the - # temporary vmin/vmax change - with self.norm.callbacks.blocked(), \ - cbook._setattr_cm(self.norm, vmin=s_vmin, vmax=s_vmax): - output = self.norm(resampled_masked) - else: - if A.ndim == 2: # _interpolation_stage == 'rgba' - self.norm.autoscale_None(A) - A = self.to_rgba(A) - if A.shape[2] == 3: - A = _rgb_to_rgba(A) - alpha = self._get_scalar_alpha() - output_alpha = _resample( # resample alpha channel - self, A[..., 3], out_shape, t, alpha=alpha) - output = _resample( # resample rgb channels - self, _rgb_to_rgba(A[..., :3]), out_shape, t, alpha=alpha) - output[..., 3] = output_alpha # recombine rgb and alpha - - # output is now either a 2D array of normed (int or float) data - # or an RGBA array of re-sampled input - output = self.to_rgba(output, bytes=True, norm=False) - # output is now a correctly sized RGBA array of uint8 - - # Apply alpha *after* if the input was greyscale without a mask - if A.ndim == 2: - alpha = self._get_scalar_alpha() - alpha_channel = output[:, :, 3] - alpha_channel[:] = ( # Assignment will cast to uint8. - alpha_channel.astype(np.float32) * out_alpha * alpha) - - else: - if self._imcache is None: - self._imcache = self.to_rgba(A, bytes=True, norm=(A.ndim == 2)) - output = self._imcache - - # Subset the input image to only the part that will be displayed. - subset = TransformedBbox(clip_bbox, t0.inverted()).frozen() - output = output[ - int(max(subset.ymin, 0)): - int(min(subset.ymax + 1, output.shape[0])), - int(max(subset.xmin, 0)): - int(min(subset.xmax + 1, output.shape[1]))] - - t = Affine2D().translate( - int(max(subset.xmin, 0)), int(max(subset.ymin, 0))) + t - - return output, clipped_bbox.x0, clipped_bbox.y0, t - - def make_image(self, renderer, magnification=1.0, unsampled=False): - """ - Normalize, rescale, and colormap this image's data for rendering using - *renderer*, with the given *magnification*. - - If *unsampled* is True, the image will not be scaled, but an - appropriate affine transformation will be returned instead. - - Returns - ------- - image : (M, N, 4) uint8 array - The RGBA image, resampled unless *unsampled* is True. - x, y : float - The upper left corner where the image should be drawn, in pixel - space. - trans : Affine2D - The affine transformation from image to pixel space. - """ - raise NotImplementedError('The make_image method must be overridden') - - def _check_unsampled_image(self): - """ - Return whether the image is better to be drawn unsampled. - - The derived class needs to override it. - """ - return False - - @martist.allow_rasterization - def draw(self, renderer, *args, **kwargs): - # if not visible, declare victory and return - if not self.get_visible(): - self.stale = False - return - # for empty images, there is nothing to draw! - if self.get_array().size == 0: - self.stale = False - return - # actually render the image. - gc = renderer.new_gc() - self._set_gc_clip(gc) - gc.set_alpha(self._get_scalar_alpha()) - gc.set_url(self.get_url()) - gc.set_gid(self.get_gid()) - if (renderer.option_scale_image() # Renderer supports transform kwarg. - and self._check_unsampled_image() - and self.get_transform().is_affine): - im, l, b, trans = self.make_image(renderer, unsampled=True) - if im is not None: - trans = Affine2D().scale(im.shape[1], im.shape[0]) + trans - renderer.draw_image(gc, l, b, im, trans) - else: - im, l, b, trans = self.make_image( - renderer, renderer.get_image_magnification()) - if im is not None: - renderer.draw_image(gc, l, b, im) - gc.restore() - self.stale = False - - def contains(self, mouseevent): - """Test whether the mouse event occurred within the image.""" - inside, info = self._default_contains(mouseevent) - if inside is not None: - return inside, info - # 1) This doesn't work for figimage; but figimage also needs a fix - # below (as the check cannot use x/ydata and extents). - # 2) As long as the check below uses x/ydata, we need to test axes - # identity instead of `self.axes.contains(event)` because even if - # axes overlap, x/ydata is only valid for event.inaxes anyways. - if self.axes is not mouseevent.inaxes: - return False, {} - # TODO: make sure this is consistent with patch and patch - # collection on nonlinear transformed coordinates. - # TODO: consider returning image coordinates (shouldn't - # be too difficult given that the image is rectilinear - trans = self.get_transform().inverted() - x, y = trans.transform([mouseevent.x, mouseevent.y]) - xmin, xmax, ymin, ymax = self.get_extent() - if xmin > xmax: - xmin, xmax = xmax, xmin - if ymin > ymax: - ymin, ymax = ymax, ymin - - if x is not None and y is not None: - inside = (xmin <= x <= xmax) and (ymin <= y <= ymax) - else: - inside = False - - return inside, {} - - def write_png(self, fname): - """Write the image to png file *fname*.""" - im = self.to_rgba(self._A[::-1] if self.origin == 'lower' else self._A, - bytes=True, norm=True) - PIL.Image.fromarray(im).save(fname, format="png") - - def set_data(self, A): - """ - Set the image array. - - Note that this function does *not* update the normalization used. - - Parameters - ---------- - A : array-like or `PIL.Image.Image` - """ - if isinstance(A, PIL.Image.Image): - A = pil_to_array(A) # Needed e.g. to apply png palette. - self._A = cbook.safe_masked_invalid(A, copy=True) - - if (self._A.dtype != np.uint8 and - not np.can_cast(self._A.dtype, float, "same_kind")): - raise TypeError("Image data of dtype {} cannot be converted to " - "float".format(self._A.dtype)) - - if self._A.ndim == 3 and self._A.shape[-1] == 1: - # If just one dimension assume scalar and apply colormap - self._A = self._A[:, :, 0] - - if not (self._A.ndim == 2 - or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]): - raise TypeError("Invalid shape {} for image data" - .format(self._A.shape)) - - if self._A.ndim == 3: - # If the input data has values outside the valid range (after - # normalisation), we issue a warning and then clip X to the bounds - # - otherwise casting wraps extreme values, hiding outliers and - # making reliable interpretation impossible. - high = 255 if np.issubdtype(self._A.dtype, np.integer) else 1 - if self._A.min() < 0 or high < self._A.max(): - _log.warning( - 'Clipping input data to the valid range for imshow with ' - 'RGB data ([0..1] for floats or [0..255] for integers).' - ) - self._A = np.clip(self._A, 0, high) - # Cast unsupported integer types to uint8 - if self._A.dtype != np.uint8 and np.issubdtype(self._A.dtype, - np.integer): - self._A = self._A.astype(np.uint8) - - self._imcache = None - self.stale = True - - def set_array(self, A): - """ - Retained for backwards compatibility - use set_data instead. - - Parameters - ---------- - A : array-like - """ - # This also needs to be here to override the inherited - # cm.ScalarMappable.set_array method so it is not invoked by mistake. - self.set_data(A) - - def get_interpolation(self): - """ - Return the interpolation method the image uses when resizing. - - One of 'antialiased', 'nearest', 'bilinear', 'bicubic', 'spline16', - 'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', - 'catrom', 'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos', - or 'none'. - """ - return self._interpolation - - def set_interpolation(self, s): - """ - Set the interpolation method the image uses when resizing. - - If None, use :rc:`image.interpolation`. If 'none', the image is - shown as is without interpolating. 'none' is only supported in - agg, ps and pdf backends and will fall back to 'nearest' mode - for other backends. - - Parameters - ---------- - s : {'antialiased', 'nearest', 'bilinear', 'bicubic', 'spline16', \ -'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', 'catrom', \ -'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos', 'none'} or None - """ - if s is None: - s = mpl.rcParams['image.interpolation'] - s = s.lower() - _api.check_in_list(_interpd_, interpolation=s) - self._interpolation = s - self.stale = True - - def set_interpolation_stage(self, s): - """ - Set when interpolation happens during the transform to RGBA. - - Parameters - ---------- - s : {'data', 'rgba'} or None - Whether to apply up/downsampling interpolation in data or rgba - space. - """ - if s is None: - s = "data" # placeholder for maybe having rcParam - _api.check_in_list(['data', 'rgba'], s=s) - self._interpolation_stage = s - self.stale = True - - def can_composite(self): - """Return whether the image can be composited with its neighbors.""" - trans = self.get_transform() - return ( - self._interpolation != 'none' and - trans.is_affine and - trans.is_separable) - - def set_resample(self, v): - """ - Set whether image resampling is used. - - Parameters - ---------- - v : bool or None - If None, use :rc:`image.resample`. - """ - if v is None: - v = mpl.rcParams['image.resample'] - self._resample = v - self.stale = True - - def get_resample(self): - """Return whether image resampling is used.""" - return self._resample - - def set_filternorm(self, filternorm): - """ - Set whether the resize filter normalizes the weights. - - See help for `~.Axes.imshow`. - - Parameters - ---------- - filternorm : bool - """ - self._filternorm = bool(filternorm) - self.stale = True - - def get_filternorm(self): - """Return whether the resize filter normalizes the weights.""" - return self._filternorm - - def set_filterrad(self, filterrad): - """ - Set the resize filter radius only applicable to some - interpolation schemes -- see help for imshow - - Parameters - ---------- - filterrad : positive float - """ - r = float(filterrad) - if r <= 0: - raise ValueError("The filter radius must be a positive number") - self._filterrad = r - self.stale = True - - def get_filterrad(self): - """Return the filterrad setting.""" - return self._filterrad - - -class AxesImage(_ImageBase): - """ - An image attached to an Axes. - - Parameters - ---------- - ax : `~.axes.Axes` - The axes the image will belong to. - cmap : str or `~matplotlib.colors.Colormap`, default: :rc:`image.cmap` - The Colormap instance or registered colormap name used to map scalar - data to colors. - norm : str or `~matplotlib.colors.Normalize` - Maps luminance to 0-1. - interpolation : str, default: :rc:`image.interpolation` - Supported values are 'none', 'antialiased', 'nearest', 'bilinear', - 'bicubic', 'spline16', 'spline36', 'hanning', 'hamming', 'hermite', - 'kaiser', 'quadric', 'catrom', 'gaussian', 'bessel', 'mitchell', - 'sinc', 'lanczos', 'blackman'. - interpolation_stage : {'data', 'rgba'}, default: 'data' - If 'data', interpolation - is carried out on the data provided by the user. If 'rgba', the - interpolation is carried out after the colormapping has been - applied (visual interpolation). - origin : {'upper', 'lower'}, default: :rc:`image.origin` - Place the [0, 0] index of the array in the upper left or lower left - corner of the axes. The convention 'upper' is typically used for - matrices and images. - extent : tuple, optional - The data axes (left, right, bottom, top) for making image plots - registered with data plots. Default is to label the pixel - centers with the zero-based row and column indices. - filternorm : bool, default: True - A parameter for the antigrain image resize filter - (see the antigrain documentation). - If filternorm is set, the filter normalizes integer values and corrects - the rounding errors. It doesn't do anything with the source floating - point values, it corrects only integers according to the rule of 1.0 - which means that any sum of pixel weights must be equal to 1.0. So, - the filter function must produce a graph of the proper shape. - filterrad : float > 0, default: 4 - The filter radius for filters that have a radius parameter, i.e. when - interpolation is one of: 'sinc', 'lanczos' or 'blackman'. - resample : bool, default: False - When True, use a full resampling method. When False, only resample when - the output image is larger than the input image. - **kwargs : `.Artist` properties - """ - - @_api.make_keyword_only("3.6", name="cmap") - def __init__(self, ax, - cmap=None, - norm=None, - interpolation=None, - origin=None, - extent=None, - filternorm=True, - filterrad=4.0, - resample=False, - *, - interpolation_stage=None, - **kwargs - ): - - self._extent = extent - - super().__init__( - ax, - cmap=cmap, - norm=norm, - interpolation=interpolation, - origin=origin, - filternorm=filternorm, - filterrad=filterrad, - resample=resample, - interpolation_stage=interpolation_stage, - **kwargs - ) - - def get_window_extent(self, renderer=None): - x0, x1, y0, y1 = self._extent - bbox = Bbox.from_extents([x0, y0, x1, y1]) - return bbox.transformed(self.get_transform()) - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - trans = self.get_transform() - # image is created in the canvas coordinate. - x1, x2, y1, y2 = self.get_extent() - bbox = Bbox(np.array([[x1, y1], [x2, y2]])) - transformed_bbox = TransformedBbox(bbox, trans) - clip = ((self.get_clip_box() or self.axes.bbox) if self.get_clip_on() - else self.figure.bbox) - return self._make_image(self._A, bbox, transformed_bbox, clip, - magnification, unsampled=unsampled) - - def _check_unsampled_image(self): - """Return whether the image would be better drawn unsampled.""" - return self.get_interpolation() == "none" - - def set_extent(self, extent, **kwargs): - """ - Set the image extent. - - Parameters - ---------- - extent : 4-tuple of float - The position and size of the image as tuple - ``(left, right, bottom, top)`` in data coordinates. - **kwargs - Other parameters from which unit info (i.e., the *xunits*, - *yunits*, *zunits* (for 3D axes), *runits* and *thetaunits* (for - polar axes) entries are applied, if present. - - Notes - ----- - This updates ``ax.dataLim``, and, if autoscaling, sets ``ax.viewLim`` - to tightly fit the image, regardless of ``dataLim``. Autoscaling - state is not changed, so following this with ``ax.autoscale_view()`` - will redo the autoscaling in accord with ``dataLim``. - """ - (xmin, xmax), (ymin, ymax) = self.axes._process_unit_info( - [("x", [extent[0], extent[1]]), - ("y", [extent[2], extent[3]])], - kwargs) - if kwargs: - raise _api.kwarg_error("set_extent", kwargs) - xmin = self.axes._validate_converted_limits( - xmin, self.convert_xunits) - xmax = self.axes._validate_converted_limits( - xmax, self.convert_xunits) - ymin = self.axes._validate_converted_limits( - ymin, self.convert_yunits) - ymax = self.axes._validate_converted_limits( - ymax, self.convert_yunits) - extent = [xmin, xmax, ymin, ymax] - - self._extent = extent - corners = (xmin, ymin), (xmax, ymax) - self.axes.update_datalim(corners) - self.sticky_edges.x[:] = [xmin, xmax] - self.sticky_edges.y[:] = [ymin, ymax] - if self.axes.get_autoscalex_on(): - self.axes.set_xlim((xmin, xmax), auto=None) - if self.axes.get_autoscaley_on(): - self.axes.set_ylim((ymin, ymax), auto=None) - self.stale = True - - def get_extent(self): - """Return the image extent as tuple (left, right, bottom, top).""" - if self._extent is not None: - return self._extent - else: - sz = self.get_size() - numrows, numcols = sz - if self.origin == 'upper': - return (-0.5, numcols-0.5, numrows-0.5, -0.5) - else: - return (-0.5, numcols-0.5, -0.5, numrows-0.5) - - def get_cursor_data(self, event): - """ - Return the image value at the event position or *None* if the event is - outside the image. - - See Also - -------- - matplotlib.artist.Artist.get_cursor_data - """ - xmin, xmax, ymin, ymax = self.get_extent() - if self.origin == 'upper': - ymin, ymax = ymax, ymin - arr = self.get_array() - data_extent = Bbox([[xmin, ymin], [xmax, ymax]]) - array_extent = Bbox([[0, 0], [arr.shape[1], arr.shape[0]]]) - trans = self.get_transform().inverted() - trans += BboxTransform(boxin=data_extent, boxout=array_extent) - point = trans.transform([event.x, event.y]) - if any(np.isnan(point)): - return None - j, i = point.astype(int) - # Clip the coordinates at array bounds - if not (0 <= i < arr.shape[0]) or not (0 <= j < arr.shape[1]): - return None - else: - return arr[i, j] - - -class NonUniformImage(AxesImage): - mouseover = False # This class still needs its own get_cursor_data impl. - - def __init__(self, ax, *, interpolation='nearest', **kwargs): - """ - Parameters - ---------- - ax : `~.axes.Axes` - The axes the image will belong to. - interpolation : {'nearest', 'bilinear'}, default: 'nearest' - The interpolation scheme used in the resampling. - **kwargs - All other keyword arguments are identical to those of `.AxesImage`. - """ - super().__init__(ax, **kwargs) - self.set_interpolation(interpolation) - - def _check_unsampled_image(self): - """Return False. Do not use unsampled image.""" - return False - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - if self._A is None: - raise RuntimeError('You must first set the image array') - if unsampled: - raise ValueError('unsampled not supported on NonUniformImage') - A = self._A - if A.ndim == 2: - if A.dtype != np.uint8: - A = self.to_rgba(A, bytes=True) - else: - A = np.repeat(A[:, :, np.newaxis], 4, 2) - A[:, :, 3] = 255 - else: - if A.dtype != np.uint8: - A = (255*A).astype(np.uint8) - if A.shape[2] == 3: - B = np.zeros(tuple([*A.shape[0:2], 4]), np.uint8) - B[:, :, 0:3] = A - B[:, :, 3] = 255 - A = B - vl = self.axes.viewLim - l, b, r, t = self.axes.bbox.extents - width = int(((round(r) + 0.5) - (round(l) - 0.5)) * magnification) - height = int(((round(t) + 0.5) - (round(b) - 0.5)) * magnification) - x_pix = np.linspace(vl.x0, vl.x1, width) - y_pix = np.linspace(vl.y0, vl.y1, height) - if self._interpolation == "nearest": - x_mid = (self._Ax[:-1] + self._Ax[1:]) / 2 - y_mid = (self._Ay[:-1] + self._Ay[1:]) / 2 - x_int = x_mid.searchsorted(x_pix) - y_int = y_mid.searchsorted(y_pix) - # The following is equal to `A[y_int[:, None], x_int[None, :]]`, - # but many times faster. Both casting to uint32 (to have an - # effectively 1D array) and manual index flattening matter. - im = ( - np.ascontiguousarray(A).view(np.uint32).ravel()[ - np.add.outer(y_int * A.shape[1], x_int)] - .view(np.uint8).reshape((height, width, 4))) - else: # self._interpolation == "bilinear" - # Use np.interp to compute x_int/x_float has similar speed. - x_int = np.clip( - self._Ax.searchsorted(x_pix) - 1, 0, len(self._Ax) - 2) - y_int = np.clip( - self._Ay.searchsorted(y_pix) - 1, 0, len(self._Ay) - 2) - idx_int = np.add.outer(y_int * A.shape[1], x_int) - x_frac = np.clip( - np.divide(x_pix - self._Ax[x_int], np.diff(self._Ax)[x_int], - dtype=np.float32), # Downcasting helps with speed. - 0, 1) - y_frac = np.clip( - np.divide(y_pix - self._Ay[y_int], np.diff(self._Ay)[y_int], - dtype=np.float32), - 0, 1) - f00 = np.outer(1 - y_frac, 1 - x_frac) - f10 = np.outer(y_frac, 1 - x_frac) - f01 = np.outer(1 - y_frac, x_frac) - f11 = np.outer(y_frac, x_frac) - im = np.empty((height, width, 4), np.uint8) - for chan in range(4): - ac = A[:, :, chan].reshape(-1) # reshape(-1) avoids a copy. - # Shifting the buffer start (`ac[offset:]`) avoids an array - # addition (`ac[idx_int + offset]`). - buf = f00 * ac[idx_int] - buf += f10 * ac[A.shape[1]:][idx_int] - buf += f01 * ac[1:][idx_int] - buf += f11 * ac[A.shape[1] + 1:][idx_int] - im[:, :, chan] = buf # Implicitly casts to uint8. - return im, l, b, IdentityTransform() - - def set_data(self, x, y, A): - """ - Set the grid for the pixel centers, and the pixel values. - - Parameters - ---------- - x, y : 1D array-like - Monotonic arrays of shapes (N,) and (M,), respectively, specifying - pixel centers. - A : array-like - (M, N) `~numpy.ndarray` or masked array of values to be - colormapped, or (M, N, 3) RGB array, or (M, N, 4) RGBA array. - """ - x = np.array(x, np.float32) - y = np.array(y, np.float32) - A = cbook.safe_masked_invalid(A, copy=True) - if not (x.ndim == y.ndim == 1 and A.shape[0:2] == y.shape + x.shape): - raise TypeError("Axes don't match array shape") - if A.ndim not in [2, 3]: - raise TypeError("Can only plot 2D or 3D data") - if A.ndim == 3 and A.shape[2] not in [1, 3, 4]: - raise TypeError("3D arrays must have three (RGB) " - "or four (RGBA) color components") - if A.ndim == 3 and A.shape[2] == 1: - A = A.squeeze(axis=-1) - self._A = A - self._Ax = x - self._Ay = y - self._imcache = None - - self.stale = True - - def set_array(self, *args): - raise NotImplementedError('Method not supported') - - def set_interpolation(self, s): - """ - Parameters - ---------- - s : {'nearest', 'bilinear'} or None - If None, use :rc:`image.interpolation`. - """ - if s is not None and s not in ('nearest', 'bilinear'): - raise NotImplementedError('Only nearest neighbor and ' - 'bilinear interpolations are supported') - super().set_interpolation(s) - - def get_extent(self): - if self._A is None: - raise RuntimeError('Must set data first') - return self._Ax[0], self._Ax[-1], self._Ay[0], self._Ay[-1] - - def set_filternorm(self, s): - pass - - def set_filterrad(self, s): - pass - - def set_norm(self, norm): - if self._A is not None: - raise RuntimeError('Cannot change colors after loading data') - super().set_norm(norm) - - def set_cmap(self, cmap): - if self._A is not None: - raise RuntimeError('Cannot change colors after loading data') - super().set_cmap(cmap) - - -class PcolorImage(AxesImage): - """ - Make a pcolor-style plot with an irregular rectangular grid. - - This uses a variation of the original irregular image code, - and it is used by pcolorfast for the corresponding grid type. - """ - - @_api.make_keyword_only("3.6", name="cmap") - def __init__(self, ax, - x=None, - y=None, - A=None, - cmap=None, - norm=None, - **kwargs - ): - """ - Parameters - ---------- - ax : `~.axes.Axes` - The axes the image will belong to. - x, y : 1D array-like, optional - Monotonic arrays of length N+1 and M+1, respectively, specifying - rectangle boundaries. If not given, will default to - ``range(N + 1)`` and ``range(M + 1)``, respectively. - A : array-like - The data to be color-coded. The interpretation depends on the - shape: - - - (M, N) `~numpy.ndarray` or masked array: values to be colormapped - - (M, N, 3): RGB array - - (M, N, 4): RGBA array - - cmap : str or `~matplotlib.colors.Colormap`, default: :rc:`image.cmap` - The Colormap instance or registered colormap name used to map - scalar data to colors. - norm : str or `~matplotlib.colors.Normalize` - Maps luminance to 0-1. - **kwargs : `.Artist` properties - """ - super().__init__(ax, norm=norm, cmap=cmap) - self._internal_update(kwargs) - if A is not None: - self.set_data(x, y, A) - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - if self._A is None: - raise RuntimeError('You must first set the image array') - if unsampled: - raise ValueError('unsampled not supported on PColorImage') - - if self._imcache is None: - A = self.to_rgba(self._A, bytes=True) - self._imcache = np.pad(A, [(1, 1), (1, 1), (0, 0)], "constant") - padded_A = self._imcache - bg = mcolors.to_rgba(self.axes.patch.get_facecolor(), 0) - bg = (np.array(bg) * 255).astype(np.uint8) - if (padded_A[0, 0] != bg).all(): - padded_A[[0, -1], :] = padded_A[:, [0, -1]] = bg - - l, b, r, t = self.axes.bbox.extents - width = (round(r) + 0.5) - (round(l) - 0.5) - height = (round(t) + 0.5) - (round(b) - 0.5) - width = round(width * magnification) - height = round(height * magnification) - vl = self.axes.viewLim - - x_pix = np.linspace(vl.x0, vl.x1, width) - y_pix = np.linspace(vl.y0, vl.y1, height) - x_int = self._Ax.searchsorted(x_pix) - y_int = self._Ay.searchsorted(y_pix) - im = ( # See comment in NonUniformImage.make_image re: performance. - padded_A.view(np.uint32).ravel()[ - np.add.outer(y_int * padded_A.shape[1], x_int)] - .view(np.uint8).reshape((height, width, 4))) - return im, l, b, IdentityTransform() - - def _check_unsampled_image(self): - return False - - def set_data(self, x, y, A): - """ - Set the grid for the rectangle boundaries, and the data values. - - Parameters - ---------- - x, y : 1D array-like, optional - Monotonic arrays of length N+1 and M+1, respectively, specifying - rectangle boundaries. If not given, will default to - ``range(N + 1)`` and ``range(M + 1)``, respectively. - A : array-like - The data to be color-coded. The interpretation depends on the - shape: - - - (M, N) `~numpy.ndarray` or masked array: values to be colormapped - - (M, N, 3): RGB array - - (M, N, 4): RGBA array - """ - A = cbook.safe_masked_invalid(A, copy=True) - if x is None: - x = np.arange(0, A.shape[1]+1, dtype=np.float64) - else: - x = np.array(x, np.float64).ravel() - if y is None: - y = np.arange(0, A.shape[0]+1, dtype=np.float64) - else: - y = np.array(y, np.float64).ravel() - - if A.shape[:2] != (y.size-1, x.size-1): - raise ValueError( - "Axes don't match array shape. Got %s, expected %s." % - (A.shape[:2], (y.size - 1, x.size - 1))) - if A.ndim not in [2, 3]: - raise ValueError("A must be 2D or 3D") - if A.ndim == 3: - if A.shape[2] == 1: - A = A.squeeze(axis=-1) - elif A.shape[2] not in [3, 4]: - raise ValueError("3D arrays must have RGB or RGBA as last dim") - - # For efficient cursor readout, ensure x and y are increasing. - if x[-1] < x[0]: - x = x[::-1] - A = A[:, ::-1] - if y[-1] < y[0]: - y = y[::-1] - A = A[::-1] - - self._A = A - self._Ax = x - self._Ay = y - self._imcache = None - self.stale = True - - def set_array(self, *args): - raise NotImplementedError('Method not supported') - - def get_cursor_data(self, event): - # docstring inherited - x, y = event.xdata, event.ydata - if (x < self._Ax[0] or x > self._Ax[-1] or - y < self._Ay[0] or y > self._Ay[-1]): - return None - j = np.searchsorted(self._Ax, x) - 1 - i = np.searchsorted(self._Ay, y) - 1 - try: - return self._A[i, j] - except IndexError: - return None - - -class FigureImage(_ImageBase): - """An image attached to a figure.""" - - zorder = 0 - - _interpolation = 'nearest' - - @_api.make_keyword_only("3.6", name="cmap") - def __init__(self, fig, - cmap=None, - norm=None, - offsetx=0, - offsety=0, - origin=None, - **kwargs - ): - """ - cmap is a colors.Colormap instance - norm is a colors.Normalize instance to map luminance to 0-1 - - kwargs are an optional list of Artist keyword args - """ - super().__init__( - None, - norm=norm, - cmap=cmap, - origin=origin - ) - self.figure = fig - self.ox = offsetx - self.oy = offsety - self._internal_update(kwargs) - self.magnification = 1.0 - - def get_extent(self): - """Return the image extent as tuple (left, right, bottom, top).""" - numrows, numcols = self.get_size() - return (-0.5 + self.ox, numcols-0.5 + self.ox, - -0.5 + self.oy, numrows-0.5 + self.oy) - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - fac = renderer.dpi/self.figure.dpi - # fac here is to account for pdf, eps, svg backends where - # figure.dpi is set to 72. This means we need to scale the - # image (using magnification) and offset it appropriately. - bbox = Bbox([[self.ox/fac, self.oy/fac], - [(self.ox/fac + self._A.shape[1]), - (self.oy/fac + self._A.shape[0])]]) - width, height = self.figure.get_size_inches() - width *= renderer.dpi - height *= renderer.dpi - clip = Bbox([[0, 0], [width, height]]) - return self._make_image( - self._A, bbox, bbox, clip, magnification=magnification / fac, - unsampled=unsampled, round_to_pixel_border=False) - - def set_data(self, A): - """Set the image array.""" - cm.ScalarMappable.set_array(self, A) - self.stale = True - - -class BboxImage(_ImageBase): - """The Image class whose size is determined by the given bbox.""" - - @_api.make_keyword_only("3.6", name="cmap") - def __init__(self, bbox, - cmap=None, - norm=None, - interpolation=None, - origin=None, - filternorm=True, - filterrad=4.0, - resample=False, - **kwargs - ): - """ - cmap is a colors.Colormap instance - norm is a colors.Normalize instance to map luminance to 0-1 - - kwargs are an optional list of Artist keyword args - """ - super().__init__( - None, - cmap=cmap, - norm=norm, - interpolation=interpolation, - origin=origin, - filternorm=filternorm, - filterrad=filterrad, - resample=resample, - **kwargs - ) - self.bbox = bbox - - def get_window_extent(self, renderer=None): - if renderer is None: - renderer = self.get_figure()._get_renderer() - - if isinstance(self.bbox, BboxBase): - return self.bbox - elif callable(self.bbox): - return self.bbox(renderer) - else: - raise ValueError("Unknown type of bbox") - - def contains(self, mouseevent): - """Test whether the mouse event occurred within the image.""" - inside, info = self._default_contains(mouseevent) - if inside is not None: - return inside, info - - if not self.get_visible(): # or self.get_figure()._renderer is None: - return False, {} - - x, y = mouseevent.x, mouseevent.y - inside = self.get_window_extent().contains(x, y) - - return inside, {} - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - width, height = renderer.get_canvas_width_height() - bbox_in = self.get_window_extent(renderer).frozen() - bbox_in._points /= [width, height] - bbox_out = self.get_window_extent(renderer) - clip = Bbox([[0, 0], [width, height]]) - self._transform = BboxTransformTo(clip) - return self._make_image( - self._A, - bbox_in, bbox_out, clip, magnification, unsampled=unsampled) - - -def imread(fname, format=None): - """ - Read an image from a file into an array. - - .. note:: - - This function exists for historical reasons. It is recommended to - use `PIL.Image.open` instead for loading images. - - Parameters - ---------- - fname : str or file-like - The image file to read: a filename, a URL or a file-like object opened - in read-binary mode. - - Passing a URL is deprecated. Please open the URL - for reading and pass the result to Pillow, e.g. with - ``np.array(PIL.Image.open(urllib.request.urlopen(url)))``. - format : str, optional - The image file format assumed for reading the data. The image is - loaded as a PNG file if *format* is set to "png", if *fname* is a path - or opened file with a ".png" extension, or if it is a URL. In all - other cases, *format* is ignored and the format is auto-detected by - `PIL.Image.open`. - - Returns - ------- - `numpy.array` - The image data. The returned array has shape - - - (M, N) for grayscale images. - - (M, N, 3) for RGB images. - - (M, N, 4) for RGBA images. - - PNG images are returned as float arrays (0-1). All other formats are - returned as int arrays, with a bit depth determined by the file's - contents. - """ - # hide imports to speed initial import on systems with slow linkers - from urllib import parse - - if format is None: - if isinstance(fname, str): - parsed = parse.urlparse(fname) - # If the string is a URL (Windows paths appear as if they have a - # length-1 scheme), assume png. - if len(parsed.scheme) > 1: - ext = 'png' - else: - ext = Path(fname).suffix.lower()[1:] - elif hasattr(fname, 'geturl'): # Returned by urlopen(). - # We could try to parse the url's path and use the extension, but - # returning png is consistent with the block above. Note that this - # if clause has to come before checking for fname.name as - # urlopen("file:///...") also has a name attribute (with the fixed - # value ""). - ext = 'png' - elif hasattr(fname, 'name'): - ext = Path(fname.name).suffix.lower()[1:] - else: - ext = 'png' - else: - ext = format - img_open = ( - PIL.PngImagePlugin.PngImageFile if ext == 'png' else PIL.Image.open) - if isinstance(fname, str) and len(parse.urlparse(fname).scheme) > 1: - # Pillow doesn't handle URLs directly. - raise ValueError( - "Please open the URL for reading and pass the " - "result to Pillow, e.g. with " - "``np.array(PIL.Image.open(urllib.request.urlopen(url)))``." - ) - with img_open(fname) as image: - return (_pil_png_to_float_array(image) - if isinstance(image, PIL.PngImagePlugin.PngImageFile) else - pil_to_array(image)) - - -def imsave(fname, arr, vmin=None, vmax=None, cmap=None, format=None, - origin=None, dpi=100, *, metadata=None, pil_kwargs=None): - """ - Colormap and save an array as an image file. - - RGB(A) images are passed through. Single channel images will be - colormapped according to *cmap* and *norm*. - - .. note:: - - If you want to save a single channel image as gray scale please use an - image I/O library (such as pillow, tifffile, or imageio) directly. - - Parameters - ---------- - fname : str or path-like or file-like - A path or a file-like object to store the image in. - If *format* is not set, then the output format is inferred from the - extension of *fname*, if any, and from :rc:`savefig.format` otherwise. - If *format* is set, it determines the output format. - arr : array-like - The image data. The shape can be one of - MxN (luminance), MxNx3 (RGB) or MxNx4 (RGBA). - vmin, vmax : float, optional - *vmin* and *vmax* set the color scaling for the image by fixing the - values that map to the colormap color limits. If either *vmin* - or *vmax* is None, that limit is determined from the *arr* - min/max value. - cmap : str or `~matplotlib.colors.Colormap`, default: :rc:`image.cmap` - A Colormap instance or registered colormap name. The colormap - maps scalar data to colors. It is ignored for RGB(A) data. - format : str, optional - The file format, e.g. 'png', 'pdf', 'svg', ... The behavior when this - is unset is documented under *fname*. - origin : {'upper', 'lower'}, default: :rc:`image.origin` - Indicates whether the ``(0, 0)`` index of the array is in the upper - left or lower left corner of the axes. - dpi : float - The DPI to store in the metadata of the file. This does not affect the - resolution of the output image. Depending on file format, this may be - rounded to the nearest integer. - metadata : dict, optional - Metadata in the image file. The supported keys depend on the output - format, see the documentation of the respective backends for more - information. - pil_kwargs : dict, optional - Keyword arguments passed to `PIL.Image.Image.save`. If the 'pnginfo' - key is present, it completely overrides *metadata*, including the - default 'Software' key. - """ - from matplotlib.figure import Figure - if isinstance(fname, os.PathLike): - fname = os.fspath(fname) - if format is None: - format = (Path(fname).suffix[1:] if isinstance(fname, str) - else mpl.rcParams["savefig.format"]).lower() - if format in ["pdf", "ps", "eps", "svg"]: - # Vector formats that are not handled by PIL. - if pil_kwargs is not None: - raise ValueError( - f"Cannot use 'pil_kwargs' when saving to {format}") - fig = Figure(dpi=dpi, frameon=False) - fig.figimage(arr, cmap=cmap, vmin=vmin, vmax=vmax, origin=origin, - resize=True) - fig.savefig(fname, dpi=dpi, format=format, transparent=True, - metadata=metadata) - else: - # Don't bother creating an image; this avoids rounding errors on the - # size when dividing and then multiplying by dpi. - if origin is None: - origin = mpl.rcParams["image.origin"] - if origin == "lower": - arr = arr[::-1] - if (isinstance(arr, memoryview) and arr.format == "B" - and arr.ndim == 3 and arr.shape[-1] == 4): - # Such an ``arr`` would also be handled fine by sm.to_rgba below - # (after casting with asarray), but it is useful to special-case it - # because that's what backend_agg passes, and can be in fact used - # as is, saving a few operations. - rgba = arr - else: - sm = cm.ScalarMappable(cmap=cmap) - sm.set_clim(vmin, vmax) - rgba = sm.to_rgba(arr, bytes=True) - if pil_kwargs is None: - pil_kwargs = {} - else: - # we modify this below, so make a copy (don't modify caller's dict) - pil_kwargs = pil_kwargs.copy() - pil_shape = (rgba.shape[1], rgba.shape[0]) - image = PIL.Image.frombuffer( - "RGBA", pil_shape, rgba, "raw", "RGBA", 0, 1) - if format == "png": - # Only use the metadata kwarg if pnginfo is not set, because the - # semantics of duplicate keys in pnginfo is unclear. - if "pnginfo" in pil_kwargs: - if metadata: - _api.warn_external("'metadata' is overridden by the " - "'pnginfo' entry in 'pil_kwargs'.") - else: - metadata = { - "Software": (f"Matplotlib version{mpl.__version__}, " - f"https://matplotlib.org/"), - **(metadata if metadata is not None else {}), - } - pil_kwargs["pnginfo"] = pnginfo = PIL.PngImagePlugin.PngInfo() - for k, v in metadata.items(): - if v is not None: - pnginfo.add_text(k, v) - if format in ["jpg", "jpeg"]: - format = "jpeg" # Pillow doesn't recognize "jpg". - facecolor = mpl.rcParams["savefig.facecolor"] - if cbook._str_equal(facecolor, "auto"): - facecolor = mpl.rcParams["figure.facecolor"] - color = tuple(int(x * 255) for x in mcolors.to_rgb(facecolor)) - background = PIL.Image.new("RGB", pil_shape, color) - background.paste(image, image) - image = background - pil_kwargs.setdefault("format", format) - pil_kwargs.setdefault("dpi", (dpi, dpi)) - image.save(fname, **pil_kwargs) - - -def pil_to_array(pilImage): - """ - Load a `PIL image`_ and return it as a numpy int array. - - .. _PIL image: https://pillow.readthedocs.io/en/latest/reference/Image.html - - Returns - ------- - numpy.array - - The array shape depends on the image type: - - - (M, N) for grayscale images. - - (M, N, 3) for RGB images. - - (M, N, 4) for RGBA images. - """ - if pilImage.mode in ['RGBA', 'RGBX', 'RGB', 'L']: - # return MxNx4 RGBA, MxNx3 RBA, or MxN luminance array - return np.asarray(pilImage) - elif pilImage.mode.startswith('I;16'): - # return MxN luminance array of uint16 - raw = pilImage.tobytes('raw', pilImage.mode) - if pilImage.mode.endswith('B'): - x = np.frombuffer(raw, '>u2') - else: - x = np.frombuffer(raw, 'Multiple Skype Loader (from daofile.com). This one actually works. https://daofile.com/9ugtots73q40/DJ12409.rar. Sxrce6k94.rar. DJ12678.rar. There is also a Skype Sling Loader (available. https://daofile.com/5jdmwwr1em2h/DJ12678.rar. 1_share_win.rar.

    -

    2.. Fsdfsdfsdfsd Sdfsd, profile picture. Fsdfsdfsdfsd Sdfsd. https://storex.cc/giwpvevmhbxl/DJ12678.rar Download DJ12678 rar. 2... Fsdfsdfsdfsd Sdfsd, profile picture. Fsdfsdfsdfsd Sdfsd. https://storex.cc/giwpvevmhbxl/DJ12678.rar Download DJ12678 rar. 14.. Fsdfsdfsdfsd Sdfsd, profile picture. Fsdfsdfsdfsd Sdfsd. https://storex.cc/giwpvevmhbxl/DJ12678.rar Download DJ12678 rar.

    -

    DJ12678.rar


    Download · https://bytlly.com/2uGvAc



    -

    DJ12678.rar. Image with caption: DOWNLOAD: https://urloso.com/2fhqet. 372a6038bc. Related links: danganronpa 2 goodbye despair psp english download zip. DJ12678.rar > bit.ly/1bRsLoi Lockwood Mansion Residentials.

    -

    7 mos Report. Fsdfsdfsdfsd Sdfsd, profile picture. Fsdfsdfsdfsd Sdfsd. https://storex.cc/giwpvevmhbxl/DJ12678.rar Download DJ12678 rar. Galactik Football Season 2 English Subtitles.rar >>> DOWNLOAD.. DJ12678.rar https://bltlly.com/1ik6xt DJ12678.rar ibn sirin book of.

    -

    Multiple Skype Loader by Laurynas (for Skype 3.8.0.154).rar. WINDOWS 8.1 PRO x64-ACTIVATED.rar. DJ12678.rar. There is also a Skype Sling Loader (available. https://daofile.com/9ugtots73q40/DJ12409.rar https://daofile.com/5jdmwwr1em2h/DJ12678.rar https://daofile.com/w3kx8t8doyma/DJ18213.rar.

    -

    7Mes Fsdfsdfsdfsd Sdfsd, profile picture. Fsdfsdfsdfsd Sdfsd. https://storex.cc/giwpvevmhbxl/DJ12678.rar Download DJ12678 rar. DJ12678.rar. Adde, imgur.com/7.. f5zpp9.. Message... 5. DJ12678.rar http://pemucam.co.il/uploads/2u.htm. 7. DJ12678.rar azfvajjk.ru/club_file_sharing/1/9/11/7495110bfc2d9cc1f045f1a79e70e/DJ12678.rar. CUPIR EMAIL 2.7, lahore.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Dropbox 2020 Crack License Key [PATCHED].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Dropbox 2020 Crack License Key [PATCHED].md deleted file mode 100644 index 3f711ef8e86c020a4c6f404df81033532eccbda5..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Dropbox 2020 Crack License Key [PATCHED].md +++ /dev/null @@ -1,96 +0,0 @@ - -

    Dropbox 2020 Crack License Key: How to Get It for Free

    - -

    If you are looking for a way to access all the features of Dropbox 2020 without paying a dime, then you might be interested in getting a crack license key. A crack license key is a code that bypasses the security of a software and allows you to use it as if you have purchased it legally. However, finding a working crack license key for Dropbox 2020 is not an easy task, as there are many fake and malicious websites that claim to offer them. In this article, we will show you how to find and use a crack license key for Dropbox 2020 safely and effectively.

    -

    Dropbox 2020 Crack License Key


    Download Zip ○○○ https://bytlly.com/2uGvBg



    - -

    What is Dropbox 2020 and Why Do You Need a Crack License Key?

    - -

    Dropbox 2020 is the latest version of the popular cloud storage service that lets you store, sync and share your files online. With Dropbox 2020, you can access your files from any device, collaborate with others in real time, and enjoy advanced features such as smart sync, file requests, selective sync, and more. Dropbox 2020 offers different plans for personal and business use, ranging from 2 GB to unlimited storage space. However, these plans are not cheap, and you might not want to spend money on something that you can get for free.

    - -

    A crack license key for Dropbox 2020 is a code that allows you to activate the software without paying anything. By using a crack license key, you can enjoy all the benefits of Dropbox 2020 without any limitations or restrictions. You can store as much data as you want, sync your files across multiple devices, and share them with anyone you want. A crack license key for Dropbox 2020 can save you a lot of money and hassle.

    - -

    How to Find a Crack License Key for Dropbox 2020?

    - -

    There are many websites that claim to offer crack license keys for Dropbox 2020, but not all of them are reliable or trustworthy. Some of them might contain viruses, malware, or spyware that can harm your computer or steal your personal information. Some of them might not work at all or might expire after a short period of time. Some of them might even get you into legal trouble if you are caught using them.

    - -

    To avoid these risks, you need to find a crack license key for Dropbox 2020 from a reputable and verified source. One of the best ways to do that is to use a serial key site. A serial key site is a website that collects and provides serial keys for various software programs, including Dropbox 2020. A serial key site usually updates its database regularly and offers user ratings and feedback to help you choose the best serial key for your needs.

    -

    - -

    Some of the top serial key sites for Dropbox 2020 are:

    - -
      -
    • Jiho: This site offers free serial keys for over 120,000 software programs, including Dropbox 2020. It has a simple and informative web interface that lets you search by keywords or browse by categories. It also shows the update date and user rating for each serial key.
    • -
    • FreeProSoftz: This site provides both crack files and serial keys for Dropbox 2020. It has a detailed description of the software features and installation instructions. It also has a download link for the official package or cracked copy of Dropbox 2020.
    • -
    • Docker: This site offers a crack file for Dropbox 2020 that can be downloaded from its repository. It has a brief overview of the software functionality and requirements. It also has a link to the official website of Dropbox 2020.
    • -
    - -

    How to Use a Crack License Key for Dropbox 2020?

    - -

    Once you have found a crack license key for Dropbox 2020 from one of the serial key sites mentioned above, you can use it to activate the software on your computer. Here are the steps to follow:

    - -
      -
    1. Download and install the official package or cracked copy of Dropbox 2020 from the link provided by the serial key site.
    2. -
    3. Launch the software and enter the crack license key when prompted.
    4. -
    5. Enjoy using Dropbox 2020 with all its features unlocked.
    6. -
    - -

    Note: Some crack license keys might require you to disable your antivirus or firewall before using them. Some crack license keys might also expire after a certain period of time or stop working due to updates or patches from the software developer. In that case, you might need to find another crack license key or use another method to activate Dropbox 2020.

    - -

    Conclusion

    - -

    Dropbox 2020 is a powerful and convenient cloud storage service that can help you store, sync and share your files online. However, it can also be expensive and restrictive if you want to use all its features. That is why many people look for a crack license key for Dropbox 2020 that can give them access to the software for free.

    - -

    A crack license key for Dropbox 2020 is a code that bypasses the security of the software and allows you to use it as if you have purchased it legally. You can find a crack license key for Dropbox 2020 from various serial key sites on the internet, but you need to be careful about their reliability and safety. You also need to follow the instructions on how to use the crack license key correctly and avoid any legal or technical issues.

    - -

    We hope this article has helped you understand how to find and use a crack license key for Dropbox 2020 safely and effectively. If you have any questions or suggestions, please feel free to leave a comment below.

    -

    What are the Benefits of Using a Crack License Key for Dropbox 2020?

    - -

    Using a crack license key for Dropbox 2020 can bring you many benefits, such as:

    - -
      -
    • Cost savings: You can save a lot of money by using a crack license key for Dropbox 2020 instead of paying for a subscription plan. Depending on the plan you choose, you might have to pay from $9.99 to $20 per month or more for Dropbox 2020. With a crack license key, you can get all the features of Dropbox 2020 for free.
    • -
    • Unlimited storage: You can store as much data as you want on Dropbox 2020 with a crack license key. You don't have to worry about running out of space or deleting your files to free up some room. You can also sync your files across multiple devices and access them anytime, anywhere.
    • -
    • Easy sharing: You can share your files with anyone you want with a crack license key for Dropbox 2020. You don't have to create an account or sign in to Dropbox 2020 to share your files. You can simply send a link or an invitation to anyone you want, whether they are your friends, family, colleagues, or clients.
    • -
    • Advanced features: You can enjoy all the advanced features of Dropbox 2020 with a crack license key. Some of these features include smart sync, file requests, selective sync, team folders, file locking, watermarking, and more. These features can help you manage your files more efficiently and securely.
    • -
    - -

    What are the Risks of Using a Crack License Key for Dropbox 2020?

    - -

    While using a crack license key for Dropbox 2020 can bring you many benefits, it also comes with some risks that you should be aware of, such as:

    - -
      -
    • Legal issues: Using a crack license key for Dropbox 2020 is illegal and violates the terms of service of the software developer. You might face legal consequences if you are caught using a crack license key for Dropbox 2020. You might also be liable for damages or compensation if you infringe the intellectual property rights of the software developer or other parties.
    • -
    • Security threats: Using a crack license key for Dropbox 2020 might expose your computer or mobile phone to viruses, malware, or spyware that can harm your device or steal your personal information. You might also lose your data or compromise your privacy if you use a crack license key for Dropbox 2020 from an untrusted source.
    • -
    • Performance issues: Using a crack license key for Dropbox 2020 might affect the performance of the software or your device. You might experience crashes, errors, glitches, or slow speed when using a crack license key for Dropbox 2020. You might also miss out on updates or patches from the software developer that can fix bugs or improve features.
    • -
    - -

    How to Use Dropbox 2020 Safely and Legally?

    - -

    If you want to use Dropbox 2020 safely and legally, you should avoid using a crack license key and opt for other alternatives instead. Some of these alternatives are:

    - -
      -
    • Free trial: You can try out Dropbox 2020 for free for a limited period of time before deciding whether to buy it or not. You can sign up for a free trial on the official website of Dropbox 2020 and enjoy all the features of the software without any risk or obligation.
    • -
    • Free plan: You can use Dropbox 2020 for free with a basic plan that offers 2 GB of storage space and some essential features. You can create an account on the official website of Dropbox 2020 and use it as long as you want without paying anything.
    • -
    • Discounts and coupons: You can save money by using discounts and coupons that are offered by the software developer or other sources. You can look for discounts and coupons on the official website of Dropbox 2020, social media platforms, newsletters, blogs, forums, or other websites that provide deals and offers.
    • -
    • Alternative software: You can use alternative software that provides similar or better features than Dropbox 2020 at a lower price or for free. Some of these alternative software are Google Drive, OneDrive, iCloud, Box, Mega, and more. You can compare their features, prices, reviews, and ratings before choosing the best one for your needs.
    • -
    - -

    Conclusion

    - -

    Dropbox 2020 is a powerful and convenient cloud storage service that can help you store, sync and share your files online. However, it can also be expensive and restrictive if you want to use all its features. That is why many people look for a crack license key for Dropbox 2020 that can give them access to the software for free.

    - -

    A crack license key for Dropbox 2020 is a code that bypasses the security of the software and allows you to use it as if you have purchased it legally. You can find a crack license key for Dropbox 2020 from various serial key sites on the internet, but you need to be careful about their reliability and safety. You also need to follow the instructions on how to use the crack license key correctly and avoid any legal or technical issues.

    - -

    We hope this article has helped you understand how to find and use a crack license key for Dropbox 2020 safely and effectively. If you have any questions or suggestions, please feel free to leave a comment below.

    -

    The conclusion of the article is:

    - -

    In conclusion, Dropbox 2020 is a great cloud storage service that can help you store, sync and share your files online. However, it can also be costly and limiting if you want to use all its features. That is why some people look for a crack license key for Dropbox 2020 that can give them access to the software for free.

    - -

    However, using a crack license key for Dropbox 2020 is not a good idea, as it can bring you many risks and problems. You might face legal issues, security threats, or performance issues if you use a crack license key for Dropbox 2020. You might also miss out on updates or patches that can improve the software.

    - -

    Therefore, we recommend you to use Dropbox 2020 safely and legally by using other alternatives instead. You can use a free trial, a free plan, discounts and coupons, or alternative software that can provide similar or better features than Dropbox 2020 at a lower price or for free. You can compare their features, prices, reviews, and ratings before choosing the best one for your needs.

    - -

    We hope this article has helped you understand how to use Dropbox 2020 safely and legally. If you have any questions or suggestions, please feel free to leave a comment below.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/lindeberg/whisper-webui/LICENSE.md b/spaces/lindeberg/whisper-webui/LICENSE.md deleted file mode 100644 index f5f4b8b5ecd27c09e4ef16e9662bcb7bb2bfc76f..0000000000000000000000000000000000000000 --- a/spaces/lindeberg/whisper-webui/LICENSE.md +++ /dev/null @@ -1,195 +0,0 @@ -Apache License -============== - -_Version 2.0, January 2004_ -_<>_ - -### Terms and Conditions for use, reproduction, and distribution - -#### 1. Definitions - -“License” shall mean the terms and conditions for use, reproduction, and -distribution as defined by Sections 1 through 9 of this document. - -“Licensor” shall mean the copyright owner or entity authorized by the copyright -owner that is granting the License. - -“Legal Entity” shall mean the union of the acting entity and all other entities -that control, are controlled by, or are under common control with that entity. -For the purposes of this definition, “control” means **(i)** the power, direct or -indirect, to cause the direction or management of such entity, whether by -contract or otherwise, or **(ii)** ownership of fifty percent (50%) or more of the -outstanding shares, or **(iii)** beneficial ownership of such entity. - -“You” (or “Your”) shall mean an individual or Legal Entity exercising -permissions granted by this License. - -“Source” form shall mean the preferred form for making modifications, including -but not limited to software source code, documentation source, and configuration -files. - -“Object” form shall mean any form resulting from mechanical transformation or -translation of a Source form, including but not limited to compiled object code, -generated documentation, and conversions to other media types. - -“Work” shall mean the work of authorship, whether in Source or Object form, made -available under the License, as indicated by a copyright notice that is included -in or attached to the work (an example is provided in the Appendix below). - -“Derivative Works” shall mean any work, whether in Source or Object form, that -is based on (or derived from) the Work and for which the editorial revisions, -annotations, elaborations, or other modifications represent, as a whole, an -original work of authorship. For the purposes of this License, Derivative Works -shall not include works that remain separable from, or merely link (or bind by -name) to the interfaces of, the Work and Derivative Works thereof. - -“Contribution” shall mean any work of authorship, including the original version -of the Work and any modifications or additions to that Work or Derivative Works -thereof, that is intentionally submitted to Licensor for inclusion in the Work -by the copyright owner or by an individual or Legal Entity authorized to submit -on behalf of the copyright owner. For the purposes of this definition, -“submitted” means any form of electronic, verbal, or written communication sent -to the Licensor or its representatives, including but not limited to -communication on electronic mailing lists, source code control systems, and -issue tracking systems that are managed by, or on behalf of, the Licensor for -the purpose of discussing and improving the Work, but excluding communication -that is conspicuously marked or otherwise designated in writing by the copyright -owner as “Not a Contribution.” - -“Contributor” shall mean Licensor and any individual or Legal Entity on behalf -of whom a Contribution has been received by Licensor and subsequently -incorporated within the Work. - -#### 2. Grant of Copyright License - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable copyright license to reproduce, prepare Derivative Works of, -publicly display, publicly perform, sublicense, and distribute the Work and such -Derivative Works in Source or Object form. - -#### 3. Grant of Patent License - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable (except as stated in this section) patent license to make, have -made, use, offer to sell, sell, import, and otherwise transfer the Work, where -such license applies only to those patent claims licensable by such Contributor -that are necessarily infringed by their Contribution(s) alone or by combination -of their Contribution(s) with the Work to which such Contribution(s) was -submitted. If You institute patent litigation against any entity (including a -cross-claim or counterclaim in a lawsuit) alleging that the Work or a -Contribution incorporated within the Work constitutes direct or contributory -patent infringement, then any patent licenses granted to You under this License -for that Work shall terminate as of the date such litigation is filed. - -#### 4. Redistribution - -You may reproduce and distribute copies of the Work or Derivative Works thereof -in any medium, with or without modifications, and in Source or Object form, -provided that You meet the following conditions: - -* **(a)** You must give any other recipients of the Work or Derivative Works a copy of -this License; and -* **(b)** You must cause any modified files to carry prominent notices stating that You -changed the files; and -* **(c)** You must retain, in the Source form of any Derivative Works that You distribute, -all copyright, patent, trademark, and attribution notices from the Source form -of the Work, excluding those notices that do not pertain to any part of the -Derivative Works; and -* **(d)** If the Work includes a “NOTICE” text file as part of its distribution, then any -Derivative Works that You distribute must include a readable copy of the -attribution notices contained within such NOTICE file, excluding those notices -that do not pertain to any part of the Derivative Works, in at least one of the -following places: within a NOTICE text file distributed as part of the -Derivative Works; within the Source form or documentation, if provided along -with the Derivative Works; or, within a display generated by the Derivative -Works, if and wherever such third-party notices normally appear. The contents of -the NOTICE file are for informational purposes only and do not modify the -License. You may add Your own attribution notices within Derivative Works that -You distribute, alongside or as an addendum to the NOTICE text from the Work, -provided that such additional attribution notices cannot be construed as -modifying the License. - -You may add Your own copyright statement to Your modifications and may provide -additional or different license terms and conditions for use, reproduction, or -distribution of Your modifications, or for any such Derivative Works as a whole, -provided Your use, reproduction, and distribution of the Work otherwise complies -with the conditions stated in this License. - -#### 5. Submission of Contributions - -Unless You explicitly state otherwise, any Contribution intentionally submitted -for inclusion in the Work by You to the Licensor shall be under the terms and -conditions of this License, without any additional terms or conditions. -Notwithstanding the above, nothing herein shall supersede or modify the terms of -any separate license agreement you may have executed with Licensor regarding -such Contributions. - -#### 6. Trademarks - -This License does not grant permission to use the trade names, trademarks, -service marks, or product names of the Licensor, except as required for -reasonable and customary use in describing the origin of the Work and -reproducing the content of the NOTICE file. - -#### 7. Disclaimer of Warranty - -Unless required by applicable law or agreed to in writing, Licensor provides the -Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, -including, without limitation, any warranties or conditions of TITLE, -NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are -solely responsible for determining the appropriateness of using or -redistributing the Work and assume any risks associated with Your exercise of -permissions under this License. - -#### 8. Limitation of Liability - -In no event and under no legal theory, whether in tort (including negligence), -contract, or otherwise, unless required by applicable law (such as deliberate -and grossly negligent acts) or agreed to in writing, shall any Contributor be -liable to You for damages, including any direct, indirect, special, incidental, -or consequential damages of any character arising as a result of this License or -out of the use or inability to use the Work (including but not limited to -damages for loss of goodwill, work stoppage, computer failure or malfunction, or -any and all other commercial damages or losses), even if such Contributor has -been advised of the possibility of such damages. - -#### 9. Accepting Warranty or Additional Liability - -While redistributing the Work or Derivative Works thereof, You may choose to -offer, and charge a fee for, acceptance of support, warranty, indemnity, or -other liability obligations and/or rights consistent with this License. However, -in accepting such obligations, You may act only on Your own behalf and on Your -sole responsibility, not on behalf of any other Contributor, and only if You -agree to indemnify, defend, and hold each Contributor harmless for any liability -incurred by, or claims asserted against, such Contributor by reason of your -accepting any such warranty or additional liability. - -_END OF TERMS AND CONDITIONS_ - -### APPENDIX: How to apply the Apache License to your work - -To apply the Apache License to your work, attach the following boilerplate -notice, with the fields enclosed by brackets `[]` replaced with your own -identifying information. (Don't include the brackets!) The text should be -enclosed in the appropriate comment syntax for the file format. We also -recommend that a file or class name and description of purpose be included on -the same “printed page” as the copyright notice for easier identification within -third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - diff --git a/spaces/linfanluntan/Grounded-SAM/segment_anything/README.md b/spaces/linfanluntan/Grounded-SAM/segment_anything/README.md deleted file mode 100644 index 6256d2b7f5a387988338d538df4e699eb17ba702..0000000000000000000000000000000000000000 --- a/spaces/linfanluntan/Grounded-SAM/segment_anything/README.md +++ /dev/null @@ -1,107 +0,0 @@ -# Segment Anything - -**[Meta AI Research, FAIR](https://ai.facebook.com/research/)** - -[Alexander Kirillov](https://alexander-kirillov.github.io/), [Eric Mintun](https://ericmintun.github.io/), [Nikhila Ravi](https://nikhilaravi.com/), [Hanzi Mao](https://hanzimao.me/), Chloe Rolland, Laura Gustafson, [Tete Xiao](https://tetexiao.com), [Spencer Whitehead](https://www.spencerwhitehead.com/), Alex Berg, Wan-Yen Lo, [Piotr Dollar](https://pdollar.github.io/), [Ross Girshick](https://www.rossgirshick.info/) - -[[`Paper`](https://ai.facebook.com/research/publications/segment-anything/)] [[`Project`](https://segment-anything.com/)] [[`Demo`](https://segment-anything.com/demo)] [[`Dataset`](https://segment-anything.com/dataset/index.html)] [[`Blog`](https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/)] - -![SAM design](assets/model_diagram.png?raw=true) - -The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks. - -

    - - -

    - -## Installation - -The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended. - -Install Segment Anything: - -``` -pip install git+https://github.com/facebookresearch/segment-anything.git -``` - -or clone the repository locally and install with - -``` -git clone git@github.com:facebookresearch/segment-anything.git -cd segment-anything; pip install -e . -``` - -The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. `jupyter` is also required to run the example notebooks. -``` -pip install opencv-python pycocotools matplotlib onnxruntime onnx -``` - - -## Getting Started - -First download a [model checkpoint](#model-checkpoints). Then the model can be used in just a few lines to get masks from a given prompt: - -``` -from segment_anything import build_sam, SamPredictor -predictor = SamPredictor(build_sam(checkpoint="")) -predictor.set_image() -masks, _, _ = predictor.predict() -``` - -or generate masks for an entire image: - -``` -from segment_anything import build_sam, SamAutomaticMaskGenerator -mask_generator = SamAutomaticMaskGenerator(build_sam(checkpoint="")) -masks = mask_generator_generate() -``` - -Additionally, masks can be generated for images from the command line: - -``` -python scripts/amg.py --checkpoint --input --output -``` - -See the examples notebooks on [using SAM with prompts](/notebooks/predictor_example.ipynb) and [automatically generating masks](/notebooks/automatic_mask_generator_example.ipynb) for more details. - -

    - - -

    - -## ONNX Export - -SAM's lightweight mask decoder can be exported to ONNX format so that it can be run in any environment that supports ONNX runtime, such as in-browser as showcased in the [demo](https://segment-anything.com/demo). Export the model with - -``` -python scripts/export_onnx_model.py --checkpoint --output -``` - -See the [example notebook](https://github.com/facebookresearch/segment-anything/blob/main/notebooks/onnx_model_example.ipynb) for details on how to combine image preprocessing via SAM's backbone with mask prediction using the ONNX model. It is recommended to use the latest stable version of PyTorch for ONNX export. - -## Model Checkpoints - -Three model versions of the model are available with different backbone sizes. These models can be instantiated by running -``` -from segment_anything import sam_model_registry -sam = sam_model_registry[""](checkpoint="") -``` -Click the links below to download the checkpoint for the corresponding model name. The default model in bold can also be instantiated with `build_sam`, as in the examples in [Getting Started](#getting-started). - -* **`default` or `vit_h`: [ViT-H SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth)** -* `vit_l`: [ViT-L SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth) -* `vit_b`: [ViT-B SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth) - -## License -The model is licensed under the [Apache 2.0 license](LICENSE). - -## Contributing - -See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md). - -## Contributors - -The Segment Anything project was made possible with the help of many contributors (alphabetical): - -Aaron Adcock, Vaibhav Aggarwal, Morteza Behrooz, Cheng-Yang Fu, Ashley Gabriel, Ahuva Goldstand, Allen Goodman, Sumanth Gurram, Jiabo Hu, Somya Jain, Devansh Kukreja, Robert Kuo, Joshua Lane, Yanghao Li, Lilian Luong, Jitendra Malik, Mallika Malhotra, William Ngan, Omkar Parkhi, Nikhil Raina, Dirk Rowe, Neil Sejoor, Vanessa Stark, Bala Varadarajan, Bram Wasti, Zachary Winstrom diff --git a/spaces/lizhaoyin/newbing/README.md b/spaces/lizhaoyin/newbing/README.md deleted file mode 100644 index f7ad9bbfd3a4d70150cd630edc41ddfcd952e66e..0000000000000000000000000000000000000000 --- a/spaces/lizhaoyin/newbing/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Newbing -emoji: 💻 -colorFrom: yellow -colorTo: indigo -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec768L12_Onnx.py b/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec768L12_Onnx.py deleted file mode 100644 index 8dde0f173ed60169282128cc51eb1c200c5d82c5..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec768L12_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class ContentVec768L12_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-768-layer-12.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 768 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) \ No newline at end of file diff --git a/spaces/lojban/text-to-speech/vits/train.py b/spaces/lojban/text-to-speech/vits/train.py deleted file mode 100644 index ddced38f83e54d459405b21b54f19976a1d00717..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/vits/train.py +++ /dev/null @@ -1,290 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import vits.commons as commons -import vits.utils as utils -from data_utils import ( - TextAudioLoader, - TextAudioCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '80000' - - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32,300,400,500,600,700,800,900,1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioCollate() - train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=8, shuffle=False, - batch_size=hps.train.batch_size, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank==0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(train_loader): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask,\ - (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths) - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank==0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, max_len=1000) - y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = { - "gen/audio": y_hat[0,:,:y_hat_lengths[0]] - } - if global_step == 0: - image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/ltgoslo/ssa-perin/mtool/codec/eds.py b/spaces/ltgoslo/ssa-perin/mtool/codec/eds.py deleted file mode 100644 index 626a732fb20c379a7b9e19a12ab136b2eb2e2bae..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/mtool/codec/eds.py +++ /dev/null @@ -1,95 +0,0 @@ -import os.path; -import re; - -from graph import Graph; - -EDS_MATCHER = re.compile(r'(.+?)(?$"); - -def read_instances(fp): - top_handle, predicates = None, []; - sentence_id = None; - try: - sentence_id = int(os.path.splitext(os.path.basename(fp.name))[0]); - except: - pass; - first_curly = True - for line in fp: - line = line.strip() - if len(line) == 0: - pass - elif line.startswith("#"): - sentence_id = line[1:] - first_curly = True - elif line.startswith("{"): - colon = line.index(":") - assert colon >= 0 - top_handle = line[1:colon].strip() - elif line.endswith("}"): - assert len(line) == 1 - if first_curly: - assert sentence_id is not None - assert top_handle is not None - assert len(predicates) > 0 - yield (sentence_id, top_handle, predicates) - sentence_id, top_handle, predicates = None, None, [] - first_curly = False - else: - match = EDS_MATCHER.match(line) - assert match is not None - node_id, label, arguments = match.groups() - arguments = [tuple(arg.split()) for arg in arguments.split(',') if len(arg) > 0] - predicates.append((node_id, label.strip(), arguments)) - -def instance2graph(instance, reify = False, text = None): - sentence_id, top, predicates = instance; - anchors = None; - graph = Graph(sentence_id, flavor = 1, framework = "eds"); - if text: graph.add_input(text); - handle2node = {}; - for handle, label, _ in predicates: - assert handle not in handle2node - properties = None; - values = None; - match = PROPERTIES_MATCHER.search(label); - if match: - label = label[:match.start()]; - fields = match.group(1).replace(",", "").split(); - properties, values = list(), list(); - for i, field in enumerate(fields[1:]): - if i % 2 == 0: properties.append(field); - else: values.append(field); - carg = None; - match = CARG_MATCHER.search(label); - if match: - label = label[:match.start()]; - if not reify: - properties = ["CARG"] + properties; - values = [match.group(1)] + values; - else: - carg = match.group(1); - anchors = None; - match = LNK_MATCHER.search(label); - if match: - label = label[:match.start()]; - anchors = [{"from": int(match.group(1)), "to": int(match.group(2))}]; - handle2node[handle] = \ - graph.add_node(label = label, properties = properties, values = values, anchors = anchors); - if carg and reify: - carg = graph.add_node(label = carg, anchors = anchors); - source = handle2node[handle].id; - target = carg.id; - graph.add_edge(source, target, "CARG"); - handle2node[top].is_top = True - for src_handle, _, arguments in predicates: - src = handle2node[src_handle].id - for relation, tgt_handle in arguments: - tgt = handle2node[tgt_handle].id - graph.add_edge(src, tgt, relation) - return graph - -def read(fp, reify = False, text = None): - for instance in read_instances(fp): - yield instance2graph(instance, reify, text), None diff --git a/spaces/lyf/faster-whisper-webui/src/whisper/abstractWhisperContainer.py b/spaces/lyf/faster-whisper-webui/src/whisper/abstractWhisperContainer.py deleted file mode 100644 index d14fb23d24256e3f1c12d8ae1db6ece891d49ec8..0000000000000000000000000000000000000000 --- a/spaces/lyf/faster-whisper-webui/src/whisper/abstractWhisperContainer.py +++ /dev/null @@ -1,122 +0,0 @@ -import abc -from typing import List -from src.config import ModelConfig, VadInitialPromptMode - -from src.hooks.progressListener import ProgressListener -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache - -class AbstractWhisperCallback: - @abc.abstractmethod - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. - """ - raise NotImplementedError() - - def _get_initial_prompt(self, initial_prompt: str, initial_prompt_mode: VadInitialPromptMode, - prompt: str, segment_index: int): - if (initial_prompt_mode == VadInitialPromptMode.PREPEND_ALL_SEGMENTS): - return self._concat_prompt(initial_prompt, prompt) - elif (initial_prompt_mode == VadInitialPromptMode.PREPREND_FIRST_SEGMENT): - return self._concat_prompt(initial_prompt, prompt) if segment_index == 0 else prompt - else: - raise ValueError(f"Unknown initial prompt mode {initial_prompt_mode}") - - def _concat_prompt(self, prompt1, prompt2): - if (prompt1 is None): - return prompt2 - elif (prompt2 is None): - return prompt1 - else: - return prompt1 + " " + prompt2 - -class AbstractWhisperContainer: - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - self.model_name = model_name - self.device = device - self.compute_type = compute_type - self.download_root = download_root - self.cache = cache - - # Will be created on demand - self.model = None - - # List of known models - self.models = models - - def get_model(self): - if self.model is None: - - if (self.cache is None): - self.model = self._create_model() - else: - model_key = "WhisperContainer." + self.model_name + ":" + (self.device if self.device else '') - self.model = self.cache.get(model_key, self._create_model) - return self.model - - @abc.abstractmethod - def _create_model(self): - raise NotImplementedError() - - def ensure_downloaded(self): - pass - - @abc.abstractmethod - def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, - initial_prompt_mode: VadInitialPromptMode = VadInitialPromptMode.PREPREND_FIRST_SEGMENT, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - initial_prompt: str - The initial prompt to use for the transcription. - initial_prompt_mode: VadInitialPromptMode - The mode to use for the initial prompt. If set to PREPEND_FIRST_SEGMENT, the initial prompt will be prepended to the first segment of audio. - If set to PREPEND_ALL_SEGMENTS, the initial prompt will be prepended to all segments of audio. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. - """ - raise NotImplementedError() - - # This is required for multiprocessing - def __getstate__(self): - return { - "model_name": self.model_name, - "device": self.device, - "download_root": self.download_root, - "models": self.models, - "compute_type": self.compute_type - } - - def __setstate__(self, state): - self.model_name = state["model_name"] - self.device = state["device"] - self.download_root = state["download_root"] - self.models = state["models"] - self.compute_type = state["compute_type"] - self.model = None - # Depickled objects must use the global cache - self.cache = GLOBAL_MODEL_CACHE \ No newline at end of file diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/esrgan_model.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/esrgan_model.py deleted file mode 100644 index 3d746d0e29418d9e8f35fa9c1e3a315d694075be..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/esrgan_model.py +++ /dev/null @@ -1,83 +0,0 @@ -import torch -from collections import OrderedDict - -from basicsr.utils.registry import MODEL_REGISTRY -from .srgan_model import SRGANModel - - -@MODEL_REGISTRY.register() -class ESRGANModel(SRGANModel): - """ESRGAN model for single image super-resolution.""" - - def optimize_parameters(self, current_iter): - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - - l_g_total = 0 - loss_dict = OrderedDict() - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, self.gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output, self.gt) - if l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - loss_dict['l_g_style'] = l_g_style - # gan loss (relativistic gan) - real_d_pred = self.net_d(self.gt).detach() - fake_g_pred = self.net_d(self.output) - l_g_real = self.cri_gan(real_d_pred - torch.mean(fake_g_pred), False, is_disc=False) - l_g_fake = self.cri_gan(fake_g_pred - torch.mean(real_d_pred), True, is_disc=False) - l_g_gan = (l_g_real + l_g_fake) / 2 - - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - l_g_total.backward() - self.optimizer_g.step() - - # optimize net_d - for p in self.net_d.parameters(): - p.requires_grad = True - - self.optimizer_d.zero_grad() - # gan loss (relativistic gan) - - # In order to avoid the error in distributed training: - # "Error detected in CudnnBatchNormBackward: RuntimeError: one of - # the variables needed for gradient computation has been modified by - # an inplace operation", - # we separate the backwards for real and fake, and also detach the - # tensor for calculating mean. - - # real - fake_d_pred = self.net_d(self.output).detach() - real_d_pred = self.net_d(self.gt) - l_d_real = self.cri_gan(real_d_pred - torch.mean(fake_d_pred), True, is_disc=True) * 0.5 - l_d_real.backward() - # fake - fake_d_pred = self.net_d(self.output.detach()) - l_d_fake = self.cri_gan(fake_d_pred - torch.mean(real_d_pred.detach()), False, is_disc=True) * 0.5 - l_d_fake.backward() - self.optimizer_d.step() - - loss_dict['l_d_real'] = l_d_real - loss_dict['l_d_fake'] = l_d_fake - loss_dict['out_d_real'] = torch.mean(real_d_pred.detach()) - loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach()) - - self.log_dict = self.reduce_loss_dict(loss_dict) - - if self.ema_decay > 0: - self.model_ema(decay=self.ema_decay) diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/adversarial/discriminators/mpd.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/adversarial/discriminators/mpd.py deleted file mode 100644 index 8debd1fa72d77ca03df680facb60bdf79638cade..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/adversarial/discriminators/mpd.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ...modules import NormConv2d -from .base import MultiDiscriminator, MultiDiscriminatorOutputType - - -def get_padding(kernel_size: int, dilation: int = 1) -> int: - return int((kernel_size * dilation - dilation) / 2) - - -class PeriodDiscriminator(nn.Module): - """Period sub-discriminator. - - Args: - period (int): Period between samples of audio. - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - n_layers (int): Number of convolutional layers. - kernel_sizes (list of int): Kernel sizes for convolutions. - stride (int): Stride for convolutions. - filters (int): Initial number of filters in convolutions. - filters_scale (int): Multiplier of number of filters as we increase depth. - max_filters (int): Maximum number of filters. - norm (str): Normalization method. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - """ - def __init__(self, period: int, in_channels: int = 1, out_channels: int = 1, - n_layers: int = 5, kernel_sizes: tp.List[int] = [5, 3], stride: int = 3, - filters: int = 8, filters_scale: int = 4, max_filters: int = 1024, - norm: str = 'weight_norm', activation: str = 'LeakyReLU', - activation_params: dict = {'negative_slope': 0.2}): - super().__init__() - self.period = period - self.n_layers = n_layers - self.activation = getattr(torch.nn, activation)(**activation_params) - self.convs = nn.ModuleList() - in_chs = in_channels - for i in range(self.n_layers): - out_chs = min(filters * (filters_scale ** (i + 1)), max_filters) - eff_stride = 1 if i == self.n_layers - 1 else stride - self.convs.append(NormConv2d(in_chs, out_chs, kernel_size=(kernel_sizes[0], 1), stride=(eff_stride, 1), - padding=((kernel_sizes[0] - 1) // 2, 0), norm=norm)) - in_chs = out_chs - self.conv_post = NormConv2d(in_chs, out_channels, kernel_size=(kernel_sizes[1], 1), stride=1, - padding=((kernel_sizes[1] - 1) // 2, 0), norm=norm) - - def forward(self, x: torch.Tensor): - fmap = [] - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), 'reflect') - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for conv in self.convs: - x = conv(x) - x = self.activation(x) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - # x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(MultiDiscriminator): - """Multi-Period (MPD) Discriminator. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - periods (Sequence[int]): Periods between samples of audio for the sub-discriminators. - **kwargs: Additional args for `PeriodDiscriminator` - """ - def __init__(self, in_channels: int = 1, out_channels: int = 1, - periods: tp.Sequence[int] = [2, 3, 5, 7, 11], **kwargs): - super().__init__() - self.discriminators = nn.ModuleList([ - PeriodDiscriminator(p, in_channels, out_channels, **kwargs) for p in periods - ]) - - @property - def num_discriminators(self): - return len(self.discriminators) - - def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType: - logits = [] - fmaps = [] - for disc in self.discriminators: - logit, fmap = disc(x) - logits.append(logit) - fmaps.append(fmap) - return logits, fmaps diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/train.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/train.py deleted file mode 100644 index 22dd117830bb403829d0a60b1b95e120d1e6978b..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/train.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Entry point for dora to launch solvers for running training loops. -See more info on how to use dora: https://github.com/facebookresearch/dora -""" - -import logging -import multiprocessing -import os -import sys -import typing as tp - -from dora import git_save, hydra_main, XP -import flashy -import hydra -import omegaconf - -from .environment import AudioCraftEnvironment -from .utils.cluster import get_slurm_parameters - -logger = logging.getLogger(__name__) - - -def resolve_config_dset_paths(cfg): - """Enable Dora to load manifest from git clone repository.""" - # manifest files for the different splits - for key, value in cfg.datasource.items(): - if isinstance(value, str): - cfg.datasource[key] = git_save.to_absolute_path(value) - - -def get_solver(cfg): - from . import solvers - # Convert batch size to batch size for each GPU - assert cfg.dataset.batch_size % flashy.distrib.world_size() == 0 - cfg.dataset.batch_size //= flashy.distrib.world_size() - for split in ['train', 'valid', 'evaluate', 'generate']: - if hasattr(cfg.dataset, split) and hasattr(cfg.dataset[split], 'batch_size'): - assert cfg.dataset[split].batch_size % flashy.distrib.world_size() == 0 - cfg.dataset[split].batch_size //= flashy.distrib.world_size() - resolve_config_dset_paths(cfg) - solver = solvers.get_solver(cfg) - return solver - - -def get_solver_from_xp(xp: XP, override_cfg: tp.Optional[tp.Union[dict, omegaconf.DictConfig]] = None, - restore: bool = True, load_best: bool = True, - ignore_state_keys: tp.List[str] = [], disable_fsdp: bool = True): - """Given a XP, return the Solver object. - - Args: - xp (XP): Dora experiment for which to retrieve the solver. - override_cfg (dict or None): If not None, should be a dict used to - override some values in the config of `xp`. This will not impact - the XP signature or folder. The format is different - than the one used in Dora grids, nested keys should actually be nested dicts, - not flattened, e.g. `{'optim': {'batch_size': 32}}`. - restore (bool): If `True` (the default), restore state from the last checkpoint. - load_best (bool): If `True` (the default), load the best state from the checkpoint. - ignore_state_keys (list[str]): List of sources to ignore when loading the state, e.g. `optimizer`. - disable_fsdp (bool): if True, disables FSDP entirely. This will - also automatically skip loading the EMA. For solver specific - state sources, like the optimizer, you might want to - use along `ignore_state_keys=['optimizer']`. Must be used with `load_best=True`. - """ - logger.info(f"Loading solver from XP {xp.sig}. " - f"Overrides used: {xp.argv}") - cfg = xp.cfg - if override_cfg is not None: - cfg = omegaconf.OmegaConf.merge(cfg, omegaconf.DictConfig(override_cfg)) - if disable_fsdp and cfg.fsdp.use: - cfg.fsdp.use = False - assert load_best is True - # ignoring some keys that were FSDP sharded like model, ema, and best_state. - # fsdp_best_state will be used in that case. When using a specific solver, - # one is responsible for adding the relevant keys, e.g. 'optimizer'. - # We could make something to automatically register those inside the solver, but that - # seem overkill at this point. - ignore_state_keys = ignore_state_keys + ['model', 'ema', 'best_state'] - - try: - with xp.enter(): - solver = get_solver(cfg) - if restore: - solver.restore(load_best=load_best, ignore_state_keys=ignore_state_keys) - return solver - finally: - hydra.core.global_hydra.GlobalHydra.instance().clear() - - -def get_solver_from_sig(sig: str, *args, **kwargs): - """Return Solver object from Dora signature, i.e. to play with it from a notebook. - See `get_solver_from_xp` for more information. - """ - xp = main.get_xp_from_sig(sig) - return get_solver_from_xp(xp, *args, **kwargs) - - -def init_seed_and_system(cfg): - import numpy as np - import torch - import random - from audiocraft.modules.transformer import set_efficient_attention_backend - - multiprocessing.set_start_method(cfg.mp_start_method) - logger.debug('Setting mp start method to %s', cfg.mp_start_method) - random.seed(cfg.seed) - np.random.seed(cfg.seed) - # torch also initialize cuda seed if available - torch.manual_seed(cfg.seed) - torch.set_num_threads(cfg.num_threads) - os.environ['MKL_NUM_THREADS'] = str(cfg.num_threads) - os.environ['OMP_NUM_THREADS'] = str(cfg.num_threads) - logger.debug('Setting num threads to %d', cfg.num_threads) - set_efficient_attention_backend(cfg.efficient_attention_backend) - logger.debug('Setting efficient attention backend to %s', cfg.efficient_attention_backend) - - -@hydra_main(config_path='../config', config_name='config', version_base='1.1') -def main(cfg): - init_seed_and_system(cfg) - - # Setup logging both to XP specific folder, and to stderr. - log_name = '%s.log.{rank}' % cfg.execute_only if cfg.execute_only else 'solver.log.{rank}' - flashy.setup_logging(level=str(cfg.logging.level).upper(), log_name=log_name) - # Initialize distributed training, no need to specify anything when using Dora. - flashy.distrib.init() - solver = get_solver(cfg) - if cfg.show: - solver.show() - return - - if cfg.execute_only: - assert cfg.execute_inplace or cfg.continue_from is not None, \ - "Please explicitly specify the checkpoint to continue from with continue_from= " + \ - "when running with execute_only or set execute_inplace to True." - solver.restore(replay_metrics=False) # load checkpoint - solver.run_one_stage(cfg.execute_only) - return - - return solver.run() - - -main.dora.dir = AudioCraftEnvironment.get_dora_dir() -main._base_cfg.slurm = get_slurm_parameters(main._base_cfg.slurm) - -if main.dora.shared is not None and not os.access(main.dora.shared, os.R_OK): - print("No read permission on dora.shared folder, ignoring it.", file=sys.stderr) - main.dora.shared = None - -if __name__ == '__main__': - main() diff --git a/spaces/matthoffner/starchat-ui/types/data.ts b/spaces/matthoffner/starchat-ui/types/data.ts deleted file mode 100644 index d57323721fbbf2ead31fcc33334717d75de1f3f6..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/types/data.ts +++ /dev/null @@ -1,4 +0,0 @@ -export interface KeyValuePair { - key: string; - value: any; -} diff --git a/spaces/mayordp/DeepFakeAI/DeepFakeAI/uis/components/execution.py b/spaces/mayordp/DeepFakeAI/DeepFakeAI/uis/components/execution.py deleted file mode 100644 index 23de9f5d50b365eeeee50db56af8cc78e6eccf73..0000000000000000000000000000000000000000 --- a/spaces/mayordp/DeepFakeAI/DeepFakeAI/uis/components/execution.py +++ /dev/null @@ -1,64 +0,0 @@ -from typing import List, Optional -import gradio -import onnxruntime - -import DeepFakeAI.globals -from DeepFakeAI import wording -from DeepFakeAI.face_analyser import clear_face_analyser -from DeepFakeAI.processors.frame.core import clear_frame_processors_modules -from DeepFakeAI.uis.typing import Update -from DeepFakeAI.utilities import encode_execution_providers, decode_execution_providers - -EXECUTION_PROVIDERS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None -EXECUTION_THREAD_COUNT_SLIDER : Optional[gradio.Slider] = None -EXECUTION_QUEUE_COUNT_SLIDER : Optional[gradio.Slider] = None - - -def render() -> None: - global EXECUTION_PROVIDERS_CHECKBOX_GROUP - global EXECUTION_THREAD_COUNT_SLIDER - global EXECUTION_QUEUE_COUNT_SLIDER - - with gradio.Box(): - EXECUTION_PROVIDERS_CHECKBOX_GROUP = gradio.CheckboxGroup( - label = wording.get('execution_providers_checkbox_group_label'), - choices = encode_execution_providers(onnxruntime.get_available_providers()), - value = encode_execution_providers(DeepFakeAI.globals.execution_providers) - ) - EXECUTION_THREAD_COUNT_SLIDER = gradio.Slider( - label = wording.get('execution_thread_count_slider_label'), - value = DeepFakeAI.globals.execution_thread_count, - step = 1, - minimum = 1, - maximum = 128 - ) - EXECUTION_QUEUE_COUNT_SLIDER = gradio.Slider( - label = wording.get('execution_queue_count_slider_label'), - value = DeepFakeAI.globals.execution_queue_count, - step = 1, - minimum = 1, - maximum = 16 - ) - - -def listen() -> None: - EXECUTION_PROVIDERS_CHECKBOX_GROUP.change(update_execution_providers, inputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP, outputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP) - EXECUTION_THREAD_COUNT_SLIDER.change(update_execution_thread_count, inputs = EXECUTION_THREAD_COUNT_SLIDER, outputs = EXECUTION_THREAD_COUNT_SLIDER) - EXECUTION_QUEUE_COUNT_SLIDER.change(update_execution_queue_count, inputs = EXECUTION_QUEUE_COUNT_SLIDER, outputs = EXECUTION_QUEUE_COUNT_SLIDER) - - -def update_execution_providers(execution_providers : List[str]) -> Update: - clear_face_analyser() - clear_frame_processors_modules() - DeepFakeAI.globals.execution_providers = decode_execution_providers(execution_providers) - return gradio.update(value = execution_providers) - - -def update_execution_thread_count(execution_thread_count : int = 1) -> Update: - DeepFakeAI.globals.execution_thread_count = execution_thread_count - return gradio.update(value = execution_thread_count) - - -def update_execution_queue_count(execution_queue_count : int = 1) -> Update: - DeepFakeAI.globals.execution_queue_count = execution_queue_count - return gradio.update(value = execution_queue_count) diff --git a/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h b/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h deleted file mode 100644 index ad1311a78f61303616504eb991aaa9c4a93d9948..0000000000000000000000000000000000000000 --- a/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h +++ /dev/null @@ -1,33 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor ms_deform_attn_cuda_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector ms_deform_attn_cuda_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/merve/data-leak/public/fill-in-the-blank/init-sent.js b/spaces/merve/data-leak/public/fill-in-the-blank/init-sent.js deleted file mode 100644 index 263a35a62a0fa9f2064834bc78a93222c8040897..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/fill-in-the-blank/init-sent.js +++ /dev/null @@ -1,136 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -window.initSent = async function(sent, sel){ - var isHamlet = sent.class == 'hamlet' - var isMobile = innerWidth < 900 - - var sel = d3.select('.' + sent.class) - .st({opacity: .5, marginBottom: isHamlet ? '' : 40}) - - - // Load completitions - var str = sent.str - while (str.includes('__')) str = str.replace('__', '_') - str = str.replace('_', 'things') - - var tokens = tokenizer.tokenizeCLS(str) - .filter(d => d < 30522) - - var topTokens = await post('embed_group_top', {tokens}) - topTokens.forEach(sent => { - sent.forEach(d => d.str = tokenizer.vocab[d.i]) - }) - - var displayTokens = tokens - .slice(1) - .map((vocabIndex, i) => { - return {i, str: bertLargeVocab[vocabIndex].replace('##', '')} - }) - displayTokens.pop() - - - sel.html('').st({opacity: 1}) - if (!sel.node()) return - - var divSel = sel.append('div') - .st({position: 'relative'}) - var svgSel = divSel.append('svg') - .st({position: 'absolute', top: 0, zIndex: -10}) - - var tokenSel = divSel - .append('div.token-container') - .st({padding: 20, paddingLeft: 0, paddingRight: 0, fontSize: 20}) - .appendMany('button.token', displayTokens) - .text(d => d.str) - .on('click', drawToken) - - var connectionPath = svgSel.append('path').at({fill: 'none', stroke: '#000', strokeWidth: 1}) - - var padding = 5 - var width = divSel.node().offsetWidth - var botWidth = isMobile ? width - padding*2 : 580 - - var botTextSel = divSel.append('div.top-sents') - .translate([width/2 - botWidth/2 - padding + .5, 15]) - .st({ - width: botWidth, - height: 170, - outline: '1px solid #000', - padding: padding, - // position: 'absolute', - background: '#fff', - overflowY: 'scroll', - fontSize: isMobile ? 10 : '', - }) - - if (isHamlet){ - divSel.append('div.caption') - .text(`BERT's predictions for what should fill in the hidden word`) - .st({fontWeight: '', lineHeight: '1.1em', fontSize: 14, textAlign: 'center', width: '100%', marginTop: 20}) - } - - var curIndex = -1 - function drawToken(token){ - var node = tokenSel.filter(d => d == token).node() - var x = node.offsetLeft + node.offsetWidth/2 - var y = node.offsetTop + node.offsetHeight - - var y1 = botTextSel.node().offsetTop - - connectionPath.at({d: ['M', x, y, 'L', width/2, y1 + 15].join(' ')}) - - var completionSel = botTextSel.html('').appendMany('span', topTokens[token.i + 1]) - .st({display: 'inline-block', fontFamily: 'monospace', width: isMobile ? '47%' : '31%', borderBottom: '1px solid #ccc', margin: 4, fontSize: innerWidth < 350 ? 12 : isMobile ? 13 : 14 }) - - completionSel.append('span') - .st({color: '#ccc'}) - .html(d => { - var str = d3.format('.3f')(d.p*100) + '% ' - if (str.length < 8) str = ' ' + str - return str - }) - - completionSel.append('span') - .text(d => d.str.replace('▁', '')) - - - tokenSel - .text(d => d.str) - .classed('active', false) - .filter(d => d == token) - .classed('active', true) - .text(d => d.str.split('').map(d => '_').join('')) - } - - var i = displayTokens.length - (isHamlet ? 2 : 2) - if (tokens.includes(2477)) i = tokens.indexOf(2477) - 1 - drawToken(displayTokens[i]) - - var topTokensSel = sel.append('div.top-tokens') -} - - - - - - - - - - - -if (window.init) init() diff --git a/spaces/mfrashad/ClothingGAN/netdissect/tool/makesample.py b/spaces/mfrashad/ClothingGAN/netdissect/tool/makesample.py deleted file mode 100644 index 36276267677360d8238a8dbf71e9753dcc327681..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/netdissect/tool/makesample.py +++ /dev/null @@ -1,169 +0,0 @@ -''' -A simple tool to generate sample of output of a GAN, -subject to filtering, sorting, or intervention. -''' - -import torch, numpy, os, argparse, numbers, sys, shutil -from PIL import Image -from torch.utils.data import TensorDataset -from netdissect.zdataset import standard_z_sample -from netdissect.progress import default_progress, verbose_progress -from netdissect.autoeval import autoimport_eval -from netdissect.workerpool import WorkerBase, WorkerPool -from netdissect.nethook import edit_layers, retain_layers - -def main(): - parser = argparse.ArgumentParser(description='GAN sample making utility') - parser.add_argument('--model', type=str, default=None, - help='constructor for the model to test') - parser.add_argument('--pthfile', type=str, default=None, - help='filename of .pth file for the model') - parser.add_argument('--outdir', type=str, default='images', - help='directory for image output') - parser.add_argument('--size', type=int, default=100, - help='number of images to output') - parser.add_argument('--test_size', type=int, default=None, - help='number of images to test') - parser.add_argument('--layer', type=str, default=None, - help='layer to inspect') - parser.add_argument('--seed', type=int, default=1, - help='seed') - parser.add_argument('--maximize_units', type=int, nargs='+', default=None, - help='units to maximize') - parser.add_argument('--ablate_units', type=int, nargs='+', default=None, - help='units to ablate') - parser.add_argument('--quiet', action='store_true', default=False, - help='silences console output') - if len(sys.argv) == 1: - parser.print_usage(sys.stderr) - sys.exit(1) - args = parser.parse_args() - verbose_progress(not args.quiet) - - # Instantiate the model - model = autoimport_eval(args.model) - if args.pthfile is not None: - data = torch.load(args.pthfile) - if 'state_dict' in data: - meta = {} - for key in data: - if isinstance(data[key], numbers.Number): - meta[key] = data[key] - data = data['state_dict'] - model.load_state_dict(data) - # Unwrap any DataParallel-wrapped model - if isinstance(model, torch.nn.DataParallel): - model = next(model.children()) - # Examine first conv in model to determine input feature size. - first_layer = [c for c in model.modules() - if isinstance(c, (torch.nn.Conv2d, torch.nn.ConvTranspose2d, - torch.nn.Linear))][0] - # 4d input if convolutional, 2d input if first layer is linear. - if isinstance(first_layer, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)): - z_channels = first_layer.in_channels - spatialdims = (1, 1) - else: - z_channels = first_layer.in_features - spatialdims = () - # Instrument the model if needed - if args.maximize_units is not None: - retain_layers(model, [args.layer]) - model.cuda() - - # Get the sample of z vectors - if args.maximize_units is None: - indexes = torch.arange(args.size) - z_sample = standard_z_sample(args.size, z_channels, seed=args.seed) - z_sample = z_sample.view(tuple(z_sample.shape) + spatialdims) - else: - # By default, if maximizing units, get a 'top 5%' sample. - if args.test_size is None: - args.test_size = args.size * 20 - z_universe = standard_z_sample(args.test_size, z_channels, - seed=args.seed) - z_universe = z_universe.view(tuple(z_universe.shape) + spatialdims) - indexes = get_highest_znums(model, z_universe, args.maximize_units, - args.size, seed=args.seed) - z_sample = z_universe[indexes] - - if args.ablate_units: - edit_layers(model, [args.layer]) - dims = max(2, max(args.ablate_units) + 1) # >=2 to avoid broadcast - model.ablation[args.layer] = torch.zeros(dims) - model.ablation[args.layer][args.ablate_units] = 1 - - save_znum_images(args.outdir, model, z_sample, indexes, - args.layer, args.ablate_units) - copy_lightbox_to(args.outdir) - - -def get_highest_znums(model, z_universe, max_units, size, - batch_size=100, seed=1): - # The model should have been instrumented already - retained_items = list(model.retained.items()) - assert len(retained_items) == 1 - layer = retained_items[0][0] - # By default, a 10% sample - progress = default_progress() - num_units = None - with torch.no_grad(): - # Pass 1: collect max activation stats - z_loader = torch.utils.data.DataLoader(TensorDataset(z_universe), - batch_size=batch_size, num_workers=2, - pin_memory=True) - scores = [] - for [z] in progress(z_loader, desc='Finding max activations'): - z = z.cuda() - model(z) - feature = model.retained[layer] - num_units = feature.shape[1] - max_feature = feature[:, max_units, ...].view( - feature.shape[0], len(max_units), -1).max(2)[0] - total_feature = max_feature.sum(1) - scores.append(total_feature.cpu()) - scores = torch.cat(scores, 0) - highest = (-scores).sort(0)[1][:size].sort(0)[0] - return highest - - -def save_znum_images(dirname, model, z_sample, indexes, layer, ablated_units, - name_template="image_{}.png", lightbox=False, batch_size=100, seed=1): - progress = default_progress() - os.makedirs(dirname, exist_ok=True) - with torch.no_grad(): - # Pass 2: now generate images - z_loader = torch.utils.data.DataLoader(TensorDataset(z_sample), - batch_size=batch_size, num_workers=2, - pin_memory=True) - saver = WorkerPool(SaveImageWorker) - if ablated_units is not None: - dims = max(2, max(ablated_units) + 1) # >=2 to avoid broadcast - mask = torch.zeros(dims) - mask[ablated_units] = 1 - model.ablation[layer] = mask[None,:,None,None].cuda() - for batch_num, [z] in enumerate(progress(z_loader, - desc='Saving images')): - z = z.cuda() - start_index = batch_num * batch_size - im = ((model(z) + 1) / 2 * 255).clamp(0, 255).byte().permute( - 0, 2, 3, 1).cpu() - for i in range(len(im)): - index = i + start_index - if indexes is not None: - index = indexes[index].item() - filename = os.path.join(dirname, name_template.format(index)) - saver.add(im[i].numpy(), filename) - saver.join() - -def copy_lightbox_to(dirname): - srcdir = os.path.realpath( - os.path.join(os.getcwd(), os.path.dirname(__file__))) - shutil.copy(os.path.join(srcdir, 'lightbox.html'), - os.path.join(dirname, '+lightbox.html')) - -class SaveImageWorker(WorkerBase): - def work(self, data, filename): - Image.fromarray(data).save(filename, optimize=True, quality=100) - -if __name__ == '__main__': - main() diff --git a/spaces/mikefish/CharacterMaker/README.md b/spaces/mikefish/CharacterMaker/README.md deleted file mode 100644 index 900a26e4fed995658f61ff6dc73c7df99822b962..0000000000000000000000000000000000000000 --- a/spaces/mikefish/CharacterMaker/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CharacterMaker -emoji: 👀 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/mikonvergence/theaTRON/README.md b/spaces/mikonvergence/theaTRON/README.md deleted file mode 100644 index 9a998b4a8c06eb4954e9ffd55ad0d7f12e328d62..0000000000000000000000000000000000000000 --- a/spaces/mikonvergence/theaTRON/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: theaTRON -emoji: 🎭 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/mkoot007/Text2Image/app.py b/spaces/mkoot007/Text2Image/app.py deleted file mode 100644 index 12867dfe2ae81f36e896ac8d0424fe867da1a775..0000000000000000000000000000000000000000 --- a/spaces/mkoot007/Text2Image/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import gradio as gr -from diffusers import StableDiffusionPipeline -import torch # Import the torch library -model_id = "prompthero/openjourney" -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32) - -def generate_image(prompt): - image = pipe(prompt).images[0] - return image - -iface = gr.Interface( - fn=generate_image, - inputs=gr.Textbox(prompt="Enter a prompt:"), - outputs=gr.Image(), - title="Image Generation Model", - description="Generate images from text prompts using the OpenJourney model.", -) - -iface.launch() diff --git a/spaces/mnauf/detect-bees/utils/aws/__init__.py b/spaces/mnauf/detect-bees/utils/aws/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/hubert_feature_reader.py b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/hubert_feature_reader.py deleted file mode 100644 index 09442206e19abf854f2f02754ec7c6f8bc564200..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/hubert_feature_reader.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import fairseq -import soundfile as sf -import torch.nn.functional as F - - -class HubertFeatureReader: - """ - Wrapper class to run inference on HuBERT model. - Helps extract features for a given audio file. - """ - - def __init__(self, checkpoint_path, layer, max_chunk=1600000): - ( - model, - cfg, - task, - ) = fairseq.checkpoint_utils.load_model_ensemble_and_task( - [checkpoint_path] - ) - self.model = model[0].eval().cuda() - self.task = task - self.layer = layer - self.max_chunk = max_chunk - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - assert sr == self.task.cfg.sample_rate, sr - if ref_len is not None and abs(ref_len - len(wav)) > 160: - print(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, file_path, ref_len=None): - x = self.read_audio(file_path, ref_len) - with torch.no_grad(): - x = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - x = F.layer_norm(x, x.shape) - x = x.view(1, -1) - - feat = [] - for start in range(0, x.size(1), self.max_chunk): - x_chunk = x[:, start: start + self.max_chunk] - feat_chunk, _ = self.model.extract_features( - source=x_chunk, - padding_mask=None, - mask=False, - output_layer=self.layer, - ) - feat.append(feat_chunk) - return torch.cat(feat, 1).squeeze(0) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/indexed_dataset.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/indexed_dataset.py deleted file mode 100644 index 23afb43356557d65c0e8f441ff9cdc890136ddbf..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/indexed_dataset.py +++ /dev/null @@ -1,585 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import shutil -import struct -from functools import lru_cache - -import numpy as np -import torch -from fairseq.dataclass.constants import DATASET_IMPL_CHOICES -from fairseq.data.fasta_dataset import FastaDataset -from fairseq.file_io import PathManager -from fairseq.data.huffman import HuffmanMMapIndexedDataset, HuffmanMMapIndex - -from . import FairseqDataset - -from typing import Union - - -def best_fitting_int_dtype( - max_int_to_represent, -) -> Union[np.uint16, np.uint32, np.int64]: - - if max_int_to_represent is None: - return np.uint32 # Safe guess - elif max_int_to_represent < 65500: - return np.uint16 - elif max_int_to_represent < 4294967295: - return np.uint32 - else: - return np.int64 - # we avoid np.uint64 because it doesn't save space and its type promotion behaves unexpectedly - # https://github.com/numpy/numpy/issues/5745 - - -def get_available_dataset_impl(): - return list(map(str, DATASET_IMPL_CHOICES)) - - -def infer_dataset_impl(path): - if IndexedRawTextDataset.exists(path): - return "raw" - elif IndexedDataset.exists(path): - with open(index_file_path(path), "rb") as f: - magic = f.read(8) - if magic == IndexedDataset._HDR_MAGIC: - return "cached" - elif magic == MMapIndexedDataset.Index._HDR_MAGIC[:8]: - return "mmap" - elif magic == HuffmanMMapIndex._HDR_MAGIC[:8]: - return "huffman" - else: - return None - elif FastaDataset.exists(path): - return "fasta" - else: - return None - - -def make_builder(out_file, impl, vocab_size=None): - if impl == "mmap": - return MMapIndexedDatasetBuilder( - out_file, dtype=best_fitting_int_dtype(vocab_size) - ) - elif impl == "fasta": - raise NotImplementedError - elif impl == "huffman": - raise ValueError("Use HuffmanCodeBuilder directly as it has a different interface.") - else: - return IndexedDatasetBuilder(out_file) - - -def make_dataset(path, impl, fix_lua_indexing=False, dictionary=None): - if impl == "raw" and IndexedRawTextDataset.exists(path): - assert dictionary is not None - return IndexedRawTextDataset(path, dictionary) - elif impl == "lazy" and IndexedDataset.exists(path): - return IndexedDataset(path, fix_lua_indexing=fix_lua_indexing) - elif impl == "cached" and IndexedDataset.exists(path): - return IndexedCachedDataset(path, fix_lua_indexing=fix_lua_indexing) - elif impl == "mmap" and MMapIndexedDataset.exists(path): - return MMapIndexedDataset(path) - elif impl == "fasta" and FastaDataset.exists(path): - from fairseq.data.fasta_dataset import EncodedFastaDataset - - return EncodedFastaDataset(path, dictionary) - elif impl == "huffman" and HuffmanMMapIndexedDataset.exists(path): - return HuffmanMMapIndexedDataset(path) - return None - - -def dataset_exists(path, impl): - if impl == "raw": - return IndexedRawTextDataset.exists(path) - elif impl == "mmap": - return MMapIndexedDataset.exists(path) - elif impl == "huffman": - return HuffmanMMapIndexedDataset.exists(path) - else: - return IndexedDataset.exists(path) - - -def read_longs(f, n): - a = np.empty(n, dtype=np.int64) - f.readinto(a) - return a - - -def write_longs(f, a): - f.write(np.array(a, dtype=np.int64)) - - -_code_to_dtype = { - 1: np.uint8, - 2: np.int8, - 3: np.int16, - 4: np.int32, - 5: np.int64, - 6: np.float, - 7: np.double, - 8: np.uint16, - 9: np.uint32, - 10: np.uint64, -} - - -def _dtype_header_code(dtype) -> int: - for k in _code_to_dtype.keys(): - if _code_to_dtype[k] == dtype: - return k - raise ValueError(dtype) - - -def index_file_path(prefix_path): - return prefix_path + ".idx" - - -def data_file_path(prefix_path): - return prefix_path + ".bin" - - -class IndexedDataset(FairseqDataset): - """Loader for TorchNet IndexedDataset""" - - _HDR_MAGIC = b"TNTIDX\x00\x00" - - def __init__(self, path, fix_lua_indexing=False): - super().__init__() - self.path = path - self.fix_lua_indexing = fix_lua_indexing - self.data_file = None - self.read_index(path) - - def read_index(self, path): - with open(index_file_path(path), "rb") as f: - magic = f.read(8) - assert magic == self._HDR_MAGIC, ( - "Index file doesn't match expected format. " - "Make sure that --dataset-impl is configured properly." - ) - version = f.read(8) - assert struct.unpack("= self._len: - raise IndexError("index out of range") - - def __del__(self): - if self.data_file: - self.data_file.close() - - @lru_cache(maxsize=8) - def __getitem__(self, i) -> torch.Tensor: - if not self.data_file: - self.read_data(self.path) - self.check_index(i) - tensor_size = self.sizes[self.dim_offsets[i] : self.dim_offsets[i + 1]] - a = np.empty(tensor_size, dtype=self.dtype) - self.data_file.seek(self.data_offsets[i] * self.element_size) - self.data_file.readinto(a) - item = torch.from_numpy(a).long() - if self.fix_lua_indexing: - item -= 1 # subtract 1 for 0-based indexing - return item - - def __len__(self): - return self._len - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - @staticmethod - def exists(path): - return PathManager.exists(index_file_path(path)) and PathManager.exists( - data_file_path(path) - ) - - @property - def supports_prefetch(self): - return False # avoid prefetching to save memory - - -class IndexedCachedDataset(IndexedDataset): - def __init__(self, path, fix_lua_indexing=False): - super().__init__(path, fix_lua_indexing=fix_lua_indexing) - self.cache = None - self.cache_index = {} - - @property - def supports_prefetch(self): - return True - - def prefetch(self, indices): - if all(i in self.cache_index for i in indices): - return - if not self.data_file: - self.read_data(self.path) - indices = sorted(set(indices)) - total_size = 0 - for i in indices: - total_size += self.data_offsets[i + 1] - self.data_offsets[i] - self.cache = np.empty(total_size, dtype=self.dtype) - ptx = 0 - self.cache_index.clear() - for i in indices: - self.cache_index[i] = ptx - size = self.data_offsets[i + 1] - self.data_offsets[i] - a = self.cache[ptx : ptx + size] - self.data_file.seek(self.data_offsets[i] * self.element_size) - self.data_file.readinto(a) - ptx += size - if self.data_file: - # close and delete data file after prefetch so we can pickle - self.data_file.close() - self.data_file = None - - @lru_cache(maxsize=8) - def __getitem__(self, i): - self.check_index(i) - tensor_size = self.sizes[self.dim_offsets[i] : self.dim_offsets[i + 1]] - a = np.empty(tensor_size, dtype=self.dtype) - ptx = self.cache_index[i] - np.copyto(a, self.cache[ptx : ptx + a.size]) - item = torch.from_numpy(a).long() - if self.fix_lua_indexing: - item -= 1 # subtract 1 for 0-based indexing - return item - - -class IndexedRawTextDataset(FairseqDataset): - """Takes a text file as input and binarizes it in memory at instantiation. - Original lines are also kept in memory""" - - def __init__(self, path, dictionary, append_eos=True, reverse_order=False): - self.tokens_list = [] - self.lines = [] - self.sizes = [] - self.append_eos = append_eos - self.reverse_order = reverse_order - self.read_data(path, dictionary) - self.size = len(self.tokens_list) - - def read_data(self, path, dictionary): - with open(path, "r", encoding="utf-8") as f: - for line in f: - self.lines.append(line.strip("\n")) - tokens = dictionary.encode_line( - line, - add_if_not_exist=False, - append_eos=self.append_eos, - reverse_order=self.reverse_order, - ).long() - self.tokens_list.append(tokens) - self.sizes.append(len(tokens)) - self.sizes = np.array(self.sizes) - - def check_index(self, i): - if i < 0 or i >= self.size: - raise IndexError("index out of range") - - @lru_cache(maxsize=8) - def __getitem__(self, i): - self.check_index(i) - return self.tokens_list[i] - - def get_original_text(self, i): - self.check_index(i) - return self.lines[i] - - def __del__(self): - pass - - def __len__(self): - return self.size - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - @staticmethod - def exists(path): - return PathManager.exists(path) - - -class IndexedDatasetBuilder: - element_sizes = { - np.uint8: 1, - np.int8: 1, - np.int16: 2, - np.int32: 4, - np.int64: 8, - np.float: 4, - np.double: 8, - } - - def __init__(self, out_file, dtype=np.int32): - self.out_file = open(out_file, "wb") - self.dtype = dtype - self.data_offsets = [0] - self.dim_offsets = [0] - self.sizes = [] - self.element_size = self.element_sizes[self.dtype] - - def add_item(self, tensor): - # +1 for Lua compatibility - bytes = self.out_file.write(np.array(tensor.numpy() + 1, dtype=self.dtype)) - self.data_offsets.append(self.data_offsets[-1] + bytes / self.element_size) - for s in tensor.size(): - self.sizes.append(s) - self.dim_offsets.append(self.dim_offsets[-1] + len(tensor.size())) - - def merge_file_(self, another_file): - index = IndexedDataset(another_file) - assert index.dtype == self.dtype - - begin = self.data_offsets[-1] - for offset in index.data_offsets[1:]: - self.data_offsets.append(begin + offset) - self.sizes.extend(index.sizes) - begin = self.dim_offsets[-1] - for dim_offset in index.dim_offsets[1:]: - self.dim_offsets.append(begin + dim_offset) - - with open(data_file_path(another_file), "rb") as f: - while True: - data = f.read(1024) - if data: - self.out_file.write(data) - else: - break - - def finalize(self, index_file): - self.out_file.close() - index = open(index_file, "wb") - index.write(b"TNTIDX\x00\x00") - index.write(struct.pack(" str: - local_index_path = PathManager.get_local_path(index_file_path(path)) - local_data_path = PathManager.get_local_path(data_file_path(path)) - - assert local_index_path.endswith(".idx") and local_data_path.endswith(".bin"), ( - "PathManager.get_local_path does not return files with expected patterns: " - f"{local_index_path} and {local_data_path}" - ) - - local_path = local_data_path[:-4] # stripping surfix ".bin" - assert local_path == local_index_path[:-4] # stripping surfix ".idx" - return local_path - - -class MMapIndexedDatasetBuilder: - def __init__(self, out_file, dtype=np.int64): - self._data_file = open(out_file, "wb") - self._dtype = dtype - self._sizes = [] - - def add_item(self, tensor): - np_array = np.array(tensor.numpy(), dtype=self._dtype) - self._data_file.write(np_array.tobytes(order="C")) - self._sizes.append(np_array.size) - - def merge_file_(self, another_file): - # Concatenate index - index = MMapIndexedDataset.Index(index_file_path(another_file)) - assert index.dtype == self._dtype - - for size in index.sizes: - self._sizes.append(size) - - # Concatenate data - with open(data_file_path(another_file), "rb") as f: - shutil.copyfileobj(f, self._data_file) - - def finalize(self, index_file): - self._data_file.close() - - with MMapIndexedDataset.Index.writer(index_file, self._dtype) as index: - index.write(self._sizes) diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/memory/redismem.py b/spaces/msmilauer/AutoGPT-duplicated2/autogpt/memory/redismem.py deleted file mode 100644 index 082a812c5362cc9f19e35bf1bb10269b558f7724..0000000000000000000000000000000000000000 --- a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/memory/redismem.py +++ /dev/null @@ -1,156 +0,0 @@ -"""Redis memory provider.""" -from __future__ import annotations - -from typing import Any - -import numpy as np -import redis -from colorama import Fore, Style -from redis.commands.search.field import TextField, VectorField -from redis.commands.search.indexDefinition import IndexDefinition, IndexType -from redis.commands.search.query import Query - -from autogpt.llm_utils import create_embedding_with_ada -from autogpt.logs import logger -from autogpt.memory.base import MemoryProviderSingleton - -SCHEMA = [ - TextField("data"), - VectorField( - "embedding", - "HNSW", - {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}, - ), -] - - -class RedisMemory(MemoryProviderSingleton): - def __init__(self, cfg): - """ - Initializes the Redis memory provider. - - Args: - cfg: The config object. - - Returns: None - """ - redis_host = cfg.redis_host - redis_port = cfg.redis_port - redis_password = cfg.redis_password - self.dimension = 1536 - self.redis = redis.Redis( - host=redis_host, - port=redis_port, - password=redis_password, - db=0, # Cannot be changed - ) - self.cfg = cfg - - # Check redis connection - try: - self.redis.ping() - except redis.ConnectionError as e: - logger.typewriter_log( - "FAILED TO CONNECT TO REDIS", - Fore.RED, - Style.BRIGHT + str(e) + Style.RESET_ALL, - ) - logger.double_check( - "Please ensure you have setup and configured Redis properly for use. " - + f"You can check out {Fore.CYAN + Style.BRIGHT}" - f"https://github.com/Torantulino/Auto-GPT#redis-setup{Style.RESET_ALL}" - " to ensure you've set up everything correctly." - ) - exit(1) - - if cfg.wipe_redis_on_start: - self.redis.flushall() - try: - self.redis.ft(f"{cfg.memory_index}").create_index( - fields=SCHEMA, - definition=IndexDefinition( - prefix=[f"{cfg.memory_index}:"], index_type=IndexType.HASH - ), - ) - except Exception as e: - print("Error creating Redis search index: ", e) - existing_vec_num = self.redis.get(f"{cfg.memory_index}-vec_num") - self.vec_num = int(existing_vec_num.decode("utf-8")) if existing_vec_num else 0 - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. - - Args: - data: The data to add. - - Returns: Message indicating that the data has been added. - """ - if "Command Error:" in data: - return "" - vector = create_embedding_with_ada(data) - vector = np.array(vector).astype(np.float32).tobytes() - data_dict = {b"data": data, "embedding": vector} - pipe = self.redis.pipeline() - pipe.hset(f"{self.cfg.memory_index}:{self.vec_num}", mapping=data_dict) - _text = ( - f"Inserting data into memory at index: {self.vec_num}:\n" f"data: {data}" - ) - self.vec_num += 1 - pipe.set(f"{self.cfg.memory_index}-vec_num", self.vec_num) - pipe.execute() - return _text - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - - Args: - data: The data to compare to. - - Returns: The most relevant data. - """ - return self.get_relevant(data, 1) - - def clear(self) -> str: - """ - Clears the redis server. - - Returns: A message indicating that the memory has been cleared. - """ - self.redis.flushall() - return "Obliviated" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: A list of the most relevant data. - """ - query_embedding = create_embedding_with_ada(data) - base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]" - query = ( - Query(base_query) - .return_fields("data", "vector_score") - .sort_by("vector_score") - .dialect(2) - ) - query_vector = np.array(query_embedding).astype(np.float32).tobytes() - - try: - results = self.redis.ft(f"{self.cfg.memory_index}").search( - query, query_params={"vector": query_vector} - ) - except Exception as e: - print("Error calling Redis search: ", e) - return None - return [result.data for result in results.docs] - - def get_stats(self): - """ - Returns: The stats of the memory index. - """ - return self.redis.ft(f"{self.cfg.memory_index}").info() diff --git a/spaces/multimodalart/redirectme/index.html b/spaces/multimodalart/redirectme/index.html deleted file mode 100644 index cb549687f427ca9b1f1d8983b3b8c13597cec13e..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/redirectme/index.html +++ /dev/null @@ -1,20 +0,0 @@ - - - - - - - My static Space - - - -
    -

    Welcome to your static Space!

    -

    You can modify this app directly by editing index.html in the Files and versions tab.

    -

    - Also don't forget to check the - Spaces documentation. -

    -
    - - diff --git a/spaces/multimodalart/stable-diffusion-inpainting/app.py b/spaces/multimodalart/stable-diffusion-inpainting/app.py deleted file mode 100644 index 877d030b86d89ef36fcb16eb5dbef1581a6481a6..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/stable-diffusion-inpainting/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import gradio as gr -#test -from io import BytesIO -import requests -import PIL -from PIL import Image -import numpy as np -import os -import uuid -import torch -from torch import autocast -import cv2 -from matplotlib import pyplot as plt -from diffusers import DiffusionPipeline -from torchvision import transforms -from clipseg.models.clipseg import CLIPDensePredT - -auth_token = os.environ.get("API_TOKEN") or True - -def download_image(url): - response = requests.get(url) - return PIL.Image.open(BytesIO(response.content)).convert("RGB") - -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = DiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - revision="fp16", - torch_dtype=torch.float16, - use_auth_token=auth_token, -).to(device) - -model = CLIPDensePredT(version='ViT-B/16', reduce_dim=64) -model.eval() -model.load_state_dict(torch.load('./clipseg/weights/rd64-uni.pth', map_location=torch.device('cuda')), strict=False) - -transform = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - transforms.Resize((512, 512)), -]) - -def predict(radio, dict, word_mask, prompt=""): - if(radio == "draw a mask above"): - with autocast("cuda"): - init_image = dict["image"].convert("RGB").resize((512, 512)) - mask = dict["mask"].convert("RGB").resize((512, 512)) - else: - img = transform(dict["image"]).unsqueeze(0) - word_masks = [word_mask] - with torch.no_grad(): - preds = model(img.repeat(len(word_masks),1,1,1), word_masks)[0] - init_image = dict['image'].convert('RGB').resize((512, 512)) - filename = f"{uuid.uuid4()}.png" - plt.imsave(filename,torch.sigmoid(preds[0][0])) - img2 = cv2.imread(filename) - gray_image = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) - (thresh, bw_image) = cv2.threshold(gray_image, 100, 255, cv2.THRESH_BINARY) - cv2.cvtColor(bw_image, cv2.COLOR_BGR2RGB) - mask = Image.fromarray(np.uint8(bw_image)).convert('RGB') - os.remove(filename) - #with autocast("cuda"): - output = pipe(prompt = prompt, image=init_image, mask_image=mask, strength=0.8) - return output.images[0] - -# examples = [[dict(image="init_image.png", mask="mask_image.png"), "A panda sitting on a bench"]] -css = ''' -.container {max-width: 1150px;margin: auto;padding-top: 1.5rem} -#image_upload{min-height:400px} -#image_upload [data-testid="image"], #image_upload [data-testid="image"] > div{min-height: 400px} -#mask_radio .gr-form{background:transparent; border: none} -#word_mask{margin-top: .75em !important} -#word_mask textarea:disabled{opacity: 0.3} -.footer {margin-bottom: 45px;margin-top: 35px;text-align: center;border-bottom: 1px solid #e5e5e5} -.footer>p {font-size: .8rem; display: inline-block; padding: 0 10px;transform: translateY(10px);background: white} -.dark .footer {border-color: #303030} -.dark .footer>p {background: #0b0f19} -.acknowledgments h4{margin: 1.25em 0 .25em 0;font-weight: bold;font-size: 115%} -#image_upload .touch-none{display: flex} -''' -def swap_word_mask(radio_option): - if(radio_option == "type what to mask below"): - return gr.update(interactive=True, placeholder="A cat") - else: - return gr.update(interactive=False, placeholder="Disabled") - -image_blocks = gr.Blocks(css=css) -with image_blocks as demo: - gr.HTML( - """ -
    -
    - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    - Stable Diffusion Multi Inpainting -

    -
    -

    - Inpaint Stable Diffusion by either drawing a mask or typing what to replace -

    -
    - """ - ) - with gr.Row(): - with gr.Column(): - image = gr.Image(source='upload', tool='sketch', elem_id="image_upload", type="pil", label="Upload").style(height=400) - with gr.Box(elem_id="mask_radio").style(border=False): - radio = gr.Radio(["draw a mask above", "type what to mask below"], value="draw a mask above", show_label=False, interactive=True).style(container=False) - word_mask = gr.Textbox(label = "What to find in your image", interactive=False, elem_id="word_mask", placeholder="Disabled").style(container=False) - prompt = gr.Textbox(label = 'Your prompt (what you want to add in place of what you are removing)') - radio.change(fn=swap_word_mask, inputs=radio, outputs=word_mask,show_progress=False) - radio.change(None, inputs=[], outputs=image_blocks, _js = """ - () => { - css_style = document.styleSheets[document.styleSheets.length - 1] - last_item = css_style.cssRules[css_style.cssRules.length - 1] - last_item.style.display = ["flex", ""].includes(last_item.style.display) ? "none" : "flex"; - }""") - btn = gr.Button("Run") - with gr.Column(): - result = gr.Image(label="Result") - btn.click(fn=predict, inputs=[radio, image, word_mask, prompt], outputs=result) - gr.HTML( - """ - -
    -

    LICENSE

    -The model is licensed with a CreativeML Open RAIL-M license. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please read the license

    -

    Biases and content acknowledgment

    -Despite how impressive being able to turn text into image is, beware to the fact that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the LAION-5B dataset, which scraped non-curated image-text-pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the model card

    -
    - """ - ) -demo.launch() \ No newline at end of file diff --git a/spaces/mushroomsolutions/Medical-Image-Classification/README.md b/spaces/mushroomsolutions/Medical-Image-Classification/README.md deleted file mode 100644 index 7d27eb0aabdb716f8e86140416349a31a1952898..0000000000000000000000000000000000000000 --- a/spaces/mushroomsolutions/Medical-Image-Classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Medical Image Classification With MONAI -emoji: 🔥 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -duplicated_from: ClassCat/Medical-Image-Classification-with-MONAI ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/requirements/Dockerfile b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/requirements/Dockerfile deleted file mode 100644 index 5f6eb68cec066e614333abe68a3f5edc4227fea2..0000000000000000000000000000000000000000 --- a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/requirements/Dockerfile +++ /dev/null @@ -1,117 +0,0 @@ -# Arguments to pass to the image -ARG VERSION_DATE=23.01 -ARG FROM_IMAGE=nvcr.io/nvidia/pytorch - -# Import RAPIDS container as the BASE Image (cuda base image) -FROM ${FROM_IMAGE}:${VERSION_DATE}-py3 - -# Ubuntu needs noninteractive to be forced -ENV DEBIAN_FRONTEND noninteractive -ENV PROJ_LIB="/usr/share/proj" -ENV CPLUS_INCLUDE_PATH="/usr/include/gdal" -ENV C_INCLUDE_PATH="/usr/include/gdal" - -# System dependencies -# System dependencies -RUN apt-get update && \ - apt-get -y install software-properties-common && \ - add-apt-repository ppa:ubuntugis/ubuntugis-unstable && \ - curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash && \ - apt-get update && apt-get -y dist-upgrade && \ - apt-get -y install build-essential \ - libsm6 \ - libxext6 \ - libxrender-dev \ - libfontconfig1 \ - bzip2 \ - diffutils \ - file \ - build-essential \ - make \ - swig \ - libnetcdf-dev \ - libacl1-dev \ - libgeos++-dev \ - libgeos-dev \ - libsqlite3-dev \ - libx11-dev \ - libproj-dev \ - proj-data \ - proj-bin \ - libspatialindex-dev \ - wget \ - vim \ - curl \ - git \ - procps \ - gcc \ - g++ \ - bzip2 \ - libssl-dev \ - libzmq3-dev \ - libpng-dev \ - libfreetype6-dev \ - locales \ - git-lfs && \ - apt-get -y install gdal-bin libgdal-dev && \ - apt-get -y autoremove && \ - rm -rf /var/cache/apt /var/lib/apt/lists/* - -# Install shiftc -WORKDIR /app -RUN git clone --single-branch --branch master https://github.com/pkolano/shift.git && \ - cd shift/c && \ - make nolustre && \ - cd ../ && \ - install -m 755 perl/shiftc /usr/local/bin/ && \ - install -m 755 c/shift-bin /usr/local/bin/ && \ - install -m 755 perl/shift-mgr /usr/local/bin/ && \ - install -m 644 etc/shiftrc /etc/ && \ - install -m 755 perl/shift-aux /usr/local/bin/ && \ - install -m 755 c/shift-bin /usr/local/bin/ && \ - export LC_ALL=en_US.UTF-8 && \ - export LANG=en_US.UTF-8 && \ - locale-gen en_US.UTF-8 && \ - rm -rf /app - -# Pip -RUN pip --no-cache-dir install omegaconf \ - pytorch-lightning \ - Lightning \ - transformers \ - datasets \ - webdataset \ - 'huggingface_hub[cli,torch]' \ - torchgeo \ - rasterio \ - rioxarray \ - xarray \ - xarray-spatial \ - geopandas \ - opencv-python \ - opencv-python-headless \ - opencv-contrib-python \ - opencv-contrib-python-headless \ - tifffile \ - webcolors \ - Pillow \ - seaborn \ - xgboost \ - tiler \ - segmentation-models \ - timm \ - supervision \ - pytest \ - coveralls \ - rtree \ - sphinx \ - sphinx_rtd_theme \ - yacs \ - termcolor \ - segmentation-models-pytorch \ - pytorch-caney \ - GDAL==`ogrinfo --version | grep -Eo '[0-9]\.[0-9]\.[0-9]+'` - -HEALTHCHECK NONE -ENTRYPOINT [] -CMD ["/bin/bash"] diff --git "a/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/01_\360\237\217\240_Home.py" "b/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/01_\360\237\217\240_Home.py" deleted file mode 100644 index a34f3f52600e2711be174866fe852927de2017c5..0000000000000000000000000000000000000000 --- "a/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/01_\360\237\217\240_Home.py" +++ /dev/null @@ -1,71 +0,0 @@ -import whisper -import os -import pandas as pd -import plotly_express as px -import nltk -import plotly.graph_objects as go -from optimum.onnxruntime import ORTModelForSequenceClassification -from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification -from sentence_transformers import SentenceTransformer, CrossEncoder, util -import streamlit as st -import en_core_web_lg - -nltk.download('punkt') - -from nltk import sent_tokenize - -auth_token = os.environ.get("auth_token") - -st.sidebar.header("Home") - -asr_model_options = ['tiny.en','base.en','small.en'] - -asr_model_name = st.sidebar.selectbox("Whisper Model Options", options=asr_model_options, key="sbox") - -st.markdown("## Earnings Call Analysis Whisperer") - -twitter_link = """ -[![](https://img.shields.io/twitter/follow/nickmuchi?label=@nickmuchi&style=social)](https://twitter.com/nickmuchi) -""" - -st.markdown(twitter_link) - -st.markdown( - """ - This app assists finance analysts with transcribing and analysis Earnings Calls by carrying out the following tasks: - - Transcribing earnings calls using Open AI's Whisper API, takes approx 3mins to transcribe a 1hr call less than 25mb in size. - - Analysing the sentiment of transcribed text using the quantized version of [FinBert-Tone](https://huggingface.co/nickmuchi/quantized-optimum-finbert-tone). - - Summarization of the call with [philschmid/flan-t5-base-samsum](https://huggingface.co/philschmid/flan-t5-base-samsum) model with entity extraction - - Question Answering Search engine powered by Langchain and [Sentence Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). - - **👇 Enter a YouTube Earnings Call URL below and navigate to the sidebar tabs** - -""" -) - -if 'sbox' not in st.session_state: - st.session_state.sbox = asr_model_name - -if "earnings_passages" not in st.session_state: - st.session_state["earnings_passages"] = '' - -if "sen_df" not in st.session_state: - st.session_state['sen_df'] = '' - -url_input = st.text_input( - label="Enter YouTube URL, example below is McDonalds Earnings Call Q1 2023", - value="https://www.youtube.com/watch?v=4p6o5kkZYyA") - -if 'url' not in st.session_state: - st.session_state['url'] = "" - -st.session_state['url'] = url_input - -st.markdown( - "

    OR

    ", - unsafe_allow_html=True -) - -upload_wav = st.file_uploader("Upload a .wav/.mp3/.mp4 audio file ",key="upload",type=['.wav','.mp3','.mp4']) - -st.markdown("![visitors](https://visitor-badge.glitch.me/badge?page_id=nickmuchi.earnings-call-whisperer)") diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/keypoints.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/keypoints.py deleted file mode 100644 index b93ebed4f6554e67ba9bde8d3af90e8dbb3246b6..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/keypoints.py +++ /dev/null @@ -1,235 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import Any, List, Tuple, Union -import torch -from torch.nn import functional as F - - -class Keypoints: - """ - Stores keypoint **annotation** data. GT Instances have a `gt_keypoints` property - containing the x,y location and visibility flag of each keypoint. This tensor has shape - (N, K, 3) where N is the number of instances and K is the number of keypoints per instance. - - The visibility flag follows the COCO format and must be one of three integers: - - * v=0: not labeled (in which case x=y=0) - * v=1: labeled but not visible - * v=2: labeled and visible - """ - - def __init__(self, keypoints: Union[torch.Tensor, np.ndarray, List[List[float]]]): - """ - Arguments: - keypoints: A Tensor, numpy array, or list of the x, y, and visibility of each keypoint. - The shape should be (N, K, 3) where N is the number of - instances, and K is the number of keypoints per instance. - """ - device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device("cpu") - keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device) - assert keypoints.dim() == 3 and keypoints.shape[2] == 3, keypoints.shape - self.tensor = keypoints - - def __len__(self) -> int: - return self.tensor.size(0) - - def to(self, *args: Any, **kwargs: Any) -> "Keypoints": - return type(self)(self.tensor.to(*args, **kwargs)) - - @property - def device(self) -> torch.device: - return self.tensor.device - - def to_heatmap(self, boxes: torch.Tensor, heatmap_size: int) -> torch.Tensor: - """ - Convert keypoint annotations to a heatmap of one-hot labels for training, - as described in :paper:`Mask R-CNN`. - - Arguments: - boxes: Nx4 tensor, the boxes to draw the keypoints to - - Returns: - heatmaps: - A tensor of shape (N, K), each element is integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: - A tensor of shape (N, K) containing whether each keypoint is in the roi or not. - """ - return _keypoints_to_heatmap(self.tensor, boxes, heatmap_size) - - def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Keypoints": - """ - Create a new `Keypoints` by indexing on this `Keypoints`. - - The following usage are allowed: - - 1. `new_kpts = kpts[3]`: return a `Keypoints` which contains only one instance. - 2. `new_kpts = kpts[2:10]`: return a slice of key points. - 3. `new_kpts = kpts[vector]`, where vector is a torch.ByteTensor - with `length = len(kpts)`. Nonzero elements in the vector will be selected. - - Note that the returned Keypoints might share storage with this Keypoints, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return Keypoints([self.tensor[item]]) - return Keypoints(self.tensor[item]) - - def __repr__(self) -> str: - s = self.__class__.__name__ + "(" - s += "num_instances={})".format(len(self.tensor)) - return s - - @staticmethod - def cat(keypoints_list: List["Keypoints"]) -> "Keypoints": - """ - Concatenates a list of Keypoints into a single Keypoints - - Arguments: - keypoints_list (list[Keypoints]) - - Returns: - Keypoints: the concatenated Keypoints - """ - assert isinstance(keypoints_list, (list, tuple)) - assert len(keypoints_list) > 0 - assert all(isinstance(keypoints, Keypoints) for keypoints in keypoints_list) - - cat_kpts = type(keypoints_list[0])( - torch.cat([kpts.tensor for kpts in keypoints_list], dim=0) - ) - return cat_kpts - - -# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop) -def _keypoints_to_heatmap( - keypoints: torch.Tensor, rois: torch.Tensor, heatmap_size: int -) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Encode keypoint locations into a target heatmap for use in SoftmaxWithLoss across space. - - Maps keypoints from the half-open interval [x1, x2) on continuous image coordinates to the - closed interval [0, heatmap_size - 1] on discrete image coordinates. We use the - continuous-discrete conversion from Heckbert 1990 ("What is the coordinate of a pixel?"): - d = floor(c) and c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - - Arguments: - keypoints: tensor of keypoint locations in of shape (N, K, 3). - rois: Nx4 tensor of rois in xyxy format - heatmap_size: integer side length of square heatmap. - - Returns: - heatmaps: A tensor of shape (N, K) containing an integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: A tensor of shape (N, K) containing whether each keypoint is in - the roi or not. - """ - - if rois.numel() == 0: - return rois.new().long(), rois.new().long() - offset_x = rois[:, 0] - offset_y = rois[:, 1] - scale_x = heatmap_size / (rois[:, 2] - rois[:, 0]) - scale_y = heatmap_size / (rois[:, 3] - rois[:, 1]) - - offset_x = offset_x[:, None] - offset_y = offset_y[:, None] - scale_x = scale_x[:, None] - scale_y = scale_y[:, None] - - x = keypoints[..., 0] - y = keypoints[..., 1] - - x_boundary_inds = x == rois[:, 2][:, None] - y_boundary_inds = y == rois[:, 3][:, None] - - x = (x - offset_x) * scale_x - x = x.floor().long() - y = (y - offset_y) * scale_y - y = y.floor().long() - - x[x_boundary_inds] = heatmap_size - 1 - y[y_boundary_inds] = heatmap_size - 1 - - valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size) - vis = keypoints[..., 2] > 0 - valid = (valid_loc & vis).long() - - lin_ind = y * heatmap_size + x - heatmaps = lin_ind * valid - - return heatmaps, valid - - -@torch.jit.script_if_tracing -def heatmaps_to_keypoints(maps: torch.Tensor, rois: torch.Tensor) -> torch.Tensor: - """ - Extract predicted keypoint locations from heatmaps. - - Args: - maps (Tensor): (#ROIs, #keypoints, POOL_H, POOL_W). The predicted heatmap of logits for - each ROI and each keypoint. - rois (Tensor): (#ROIs, 4). The box of each ROI. - - Returns: - Tensor of shape (#ROIs, #keypoints, 4) with the last dimension corresponding to - (x, y, logit, score) for each keypoint. - - When converting discrete pixel indices in an NxN image to a continuous keypoint coordinate, - we maintain consistency with :meth:`Keypoints.to_heatmap` by using the conversion from - Heckbert 1990: c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - """ - - offset_x = rois[:, 0] - offset_y = rois[:, 1] - - widths = (rois[:, 2] - rois[:, 0]).clamp(min=1) - heights = (rois[:, 3] - rois[:, 1]).clamp(min=1) - widths_ceil = widths.ceil() - heights_ceil = heights.ceil() - - num_rois, num_keypoints = maps.shape[:2] - xy_preds = maps.new_zeros(rois.shape[0], num_keypoints, 4) - - width_corrections = widths / widths_ceil - height_corrections = heights / heights_ceil - - keypoints_idx = torch.arange(num_keypoints, device=maps.device) - - for i in range(num_rois): - outsize = (int(heights_ceil[i]), int(widths_ceil[i])) - roi_map = F.interpolate(maps[[i]], size=outsize, mode="bicubic", align_corners=False) - - # Although semantically equivalent, `reshape` is used instead of `squeeze` due - # to limitation during ONNX export of `squeeze` in scripting mode - roi_map = roi_map.reshape(roi_map.shape[1:]) # keypoints x H x W - - # softmax over the spatial region - max_score, _ = roi_map.view(num_keypoints, -1).max(1) - max_score = max_score.view(num_keypoints, 1, 1) - tmp_full_resolution = (roi_map - max_score).exp_() - tmp_pool_resolution = (maps[i] - max_score).exp_() - # Produce scores over the region H x W, but normalize with POOL_H x POOL_W, - # so that the scores of objects of different absolute sizes will be more comparable - roi_map_scores = tmp_full_resolution / tmp_pool_resolution.sum((1, 2), keepdim=True) - - w = roi_map.shape[2] - pos = roi_map.view(num_keypoints, -1).argmax(1) - - x_int = pos % w - y_int = (pos - x_int) // w - - assert ( - roi_map_scores[keypoints_idx, y_int, x_int] - == roi_map_scores.view(num_keypoints, -1).max(1)[0] - ).all() - - x = (x_int.float() + 0.5) * width_corrections[i] - y = (y_int.float() + 0.5) * height_corrections[i] - - xy_preds[i, :, 0] = x + offset_x[i] - xy_preds[i, :, 1] = y + offset_y[i] - xy_preds[i, :, 2] = roi_map[keypoints_idx, y_int, x_int] - xy_preds[i, :, 3] = roi_map_scores[keypoints_idx, y_int, x_int] - - return xy_preds diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/TOOL_APPLY_NET.md b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/TOOL_APPLY_NET.md deleted file mode 100644 index ca8e1ddafc7b1003ba98cce2826157ab995a2443..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/TOOL_APPLY_NET.md +++ /dev/null @@ -1,203 +0,0 @@ -# Apply Net - -`apply_net` is a tool to print or visualize DensePose results on a set of images. -It has two modes: `dump` to save DensePose model results to a pickle file -and `show` to visualize them on images. - -The `image.jpg` file that is used as an example in this doc can be found [here](http://images.cocodataset.org/train2017/000000117508.jpg) - -## Dump Mode - -The general command form is: -```bash -python apply_net.py dump [-h] [-v] [--output ] -``` - -There are three mandatory arguments: - - ``, configuration file for a given model; - - ``, model file with trained parameters - - ``, input image file name, pattern or folder - -One can additionally provide `--output` argument to define the output file name, -which defaults to `output.pkl`. - - -Examples: - -1. Dump results of the [R_50_FPN_s1x](https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl) DensePose model for images in a folder `images` to file `dump.pkl`: -```bash -python apply_net.py dump configs/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl \ -images --output dump.pkl -v -``` - -2. Dump results of the [R_50_FPN_s1x](https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl) DensePose model for images with file name matching a pattern `image*.jpg` to file `results.pkl`: -```bash -python apply_net.py dump configs/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl \ -"image*.jpg" --output results.pkl -v -``` - -If you want to load the pickle file generated by the above command: -``` -# make sure DensePose is in your PYTHONPATH, or use the following line to add it: -sys.path.append("/your_detectron2_path/detectron2_repo/projects/DensePose/") - -f = open('/your_result_path/results.pkl', 'rb') -data = pickle.load(f) -``` - -The file `results.pkl` contains the list of results per image, for each image the result is a dictionary. - -**If you use a [IUV model](DENSEPOSE_IUV.md#-model-zoo-and-baselines)**, the dumped data will have the following format: - -``` -data: [{'file_name': '/your_path/image1.jpg', - 'scores': tensor([0.9884]), - 'pred_boxes_XYXY': tensor([[ 69.6114, 0.0000, 706.9797, 706.0000]]), - 'pred_densepose': [DensePoseChartResultWithConfidences(labels=tensor(...), uv=tensor(...), sigma_1=None, - sigma_2=None, kappa_u=None, kappa_v=None, fine_segm_confidence=None, coarse_segm_confidence=None), - DensePoseChartResultWithConfidences, ...] - } - {'file_name': '/your_path/image2.jpg', - 'scores': tensor([0.9999, 0.5373, 0.3991]), - 'pred_boxes_XYXY': tensor([[ 59.5734, 7.7535, 579.9311, 932.3619], - [612.9418, 686.1254, 612.9999, 704.6053], - [164.5081, 407.4034, 598.3944, 920.4266]]), - 'pred_densepose': [DensePoseChartResultWithConfidences(labels=tensor(...), uv=tensor(...), sigma_1=None, - sigma_2=None, kappa_u=None, kappa_v=None, fine_segm_confidence=None, coarse_segm_confidence=None), - DensePoseChartResultWithConfidences, ...] - }] -``` - -`DensePoseChartResultWithConfidences` contains the following fields: -- `labels` - a tensor of size `[H, W]` of type `torch.long` which contains fine segmentation labels (previously called `I`) -- `uv` - a tensor of size `[2, H, W]` of type `torch.float` which contains `U` and `V` coordinates -- various optional confidence-related fields (`sigma_1`, `sigma_2`, `kappa_u`, `kappa_v`, `fine_segm_confidence`, `coarse_segm_confidence`) - - -**If you use a [CSE model](DENSEPOSE_CSE.md#-model-zoo-and-baselines)**, the dumped data will have the following format: -``` -data: [{'file_name': '/your_path/image1.jpg', - 'scores': tensor([0.9984, 0.9961]), - 'pred_boxes_XYXY': tensor([[480.0093, 461.0796, 698.3614, 696.1011], - [78.1589, 168.6614, 307.1287, 653.8522]]), - 'pred_densepose': DensePoseEmbeddingPredictorOutput(embedding=tensor(...), coarse_segm=tensor(...))} - {'file_name': '/your_path/image2.jpg', - 'scores': tensor([0.9189, 0.9491]), - 'pred_boxes_XYXY': tensor([[734.9685, 534.2003, 287.3923, 254.8859], - [434.2853, 765.1219, 132.1029, 867.9283]]), - 'pred_densepose': DensePoseEmbeddingPredictorOutput(embedding=tensor(...), coarse_segm=tensor(...))}] -``` - -`DensePoseEmbeddingPredictorOutput` contains the following fields: -- `embedding` - a tensor of size `[N, D, sz, sz]` of type `torch.float`, which contains embeddings of size `D` of the `N` detections in the image -- `coarse_segm` - a tensor of size `[N, 2, sz, sz]` of type `torch.float` which contains segmentation scores of the `N` detections in the image; e.g. a mask can be obtained by `coarse_segm.argmax(dim=1)` - -`sz` is a fixed size for the tensors; you can resize them to the size of the bounding box, if needed - -We can use the following code, to parse the outputs of the first -detected instance on the first image (IUV model). -``` -img_id, instance_id = 0, 0 # Look at the first image and the first detected instance -bbox_xyxy = data[img_id]['pred_boxes_XYXY'][instance_id] -result = data[img_id]['pred_densepose'][instance_id] -uv = result.uv -``` -The array `bbox_xyxy` contains (x0, y0, x1, y1) of the bounding box. - - -## Visualization Mode - -The general command form is: -```bash -python apply_net.py show [-h] [-v] [--min_score ] [--nms_thresh ] [--output ] -``` - -There are four mandatory arguments: - - ``, configuration file for a given model; - - ``, model file with trained parameters - - ``, input image file name, pattern or folder - - ``, visualizations specifier; currently available visualizations are: - * `bbox` - bounding boxes of detected persons; - * `dp_segm` - segmentation masks for detected persons; - * `dp_u` - each body part is colored according to the estimated values of the - U coordinate in part parameterization; - * `dp_v` - each body part is colored according to the estimated values of the - V coordinate in part parameterization; - * `dp_contour` - plots contours with color-coded U and V coordinates; - * `dp_iuv_texture` - transfers the texture from a given texture image file to detected instances, in IUV mode; - * `dp_vertex` - plots the rainbow visualization of the closest vertices prediction for a given mesh, in CSE mode; - * `dp_cse_texture` - transfers the texture from a given list of texture image files (one from each human or animal mesh) to detected instances, in CSE mode - - -One can additionally provide the following optional arguments: - - `--min_score` to only show detections with sufficient scores that are not lower than provided value - - `--nms_thresh` to additionally apply non-maximum suppression to detections at a given threshold - - `--output` to define visualization file name template, which defaults to `output.png`. - To distinguish output file names for different images, the tool appends 1-based entry index, - e.g. output.0001.png, output.0002.png, etc... -- `--texture_atlas` to define the texture atlas image for IUV texture transfer -- `--texture_atlases_map` to define the texture atlas images map (a dictionary `{mesh name: texture atlas image}`) for CSE texture transfer - - -The following examples show how to output results of a DensePose model -with ResNet-50 FPN backbone using different visualizations for image `image.jpg`: - -1. Show bounding box and segmentation: -```bash -python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl \ -image.jpg bbox,dp_segm -v -``` -![Bounding Box + Segmentation Visualization](https://dl.fbaipublicfiles.com/densepose/web/apply_net/res_bbox_dp_segm.jpg) - -2. Show bounding box and estimated U coordinates for body parts: -```bash -python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl \ -image.jpg bbox,dp_u -v -``` -![Bounding Box + U Coordinate Visualization](https://dl.fbaipublicfiles.com/densepose/web/apply_net/res_bbox_dp_u.jpg) - -3. Show bounding box and estimated V coordinates for body parts: -```bash -python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl \ -image.jpg bbox,dp_v -v -``` -![Bounding Box + V Coordinate Visualization](https://dl.fbaipublicfiles.com/densepose/web/apply_net/res_bbox_dp_v.jpg) - -4. Show bounding box and estimated U and V coordinates via contour plots: -```bash -python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl \ -image.jpg dp_contour,bbox -v -``` -![Bounding Box + Contour Visualization](https://dl.fbaipublicfiles.com/densepose/web/apply_net/res_bbox_dp_contour.jpg) - -5. Show bounding box and texture transfer: -```bash -python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl \ -image.jpg dp_iuv_texture,bbox --texture_atlas texture_from_SURREAL.jpg -v -``` -![Bounding Box + IUV Texture Transfer Visualization](https://dl.fbaipublicfiles.com/densepose/web/apply_net/res_bbox_dp_iuv_texture.jpg) - -6. Show bounding box and CSE rainbow visualization: -```bash -python apply_net.py show configs/cse/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/cse/densepose_rcnn_R_50_FPN_s1x/251155172/model_final_c4ea5f.pkl \ -image.jpg dp_vertex,bbox -v -``` -![Bounding Box + CSE Rainbow Visualization](https://dl.fbaipublicfiles.com/densepose/web/apply_net/res_bbox_dp_vertex.jpg) - -7. Show bounding box and CSE texture transfer: -```bash -python apply_net.py show configs/cse/densepose_rcnn_R_50_FPN_s1x.yaml \ -https://dl.fbaipublicfiles.com/densepose/cse/densepose_rcnn_R_50_FPN_s1x/251155172/model_final_c4ea5f.pkl \ -image.jpg dp_cse_texture,bbox --texture_atlases_map '{"smpl_27554": "smpl_uvSnapshot_colors.jpg"}' -v -``` -![Bounding Box + CSE Texture Transfer Visualization](https://dl.fbaipublicfiles.com/densepose/web/apply_net/res_bbox_dp_cse_texture.jpg) - -The texture files can be found in the `doc/images` folder diff --git a/spaces/nomic-ai/Chinese-Vicuna_guanaco_belle_merge_v1.0/style.css b/spaces/nomic-ai/Chinese-Vicuna_guanaco_belle_merge_v1.0/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/Chinese-Vicuna_guanaco_belle_merge_v1.0/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/nomic-ai/tatsu-lab_alpaca/index.html b/spaces/nomic-ai/tatsu-lab_alpaca/index.html deleted file mode 100644 index e51e7fb0c795d1522425894630ee36def924e2ab..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/tatsu-lab_alpaca/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - tatsu-lab/alpaca - - - - -
    - -
    - - - \ No newline at end of file diff --git a/spaces/nomic-ai/wikitext/index.html b/spaces/nomic-ai/wikitext/index.html deleted file mode 100644 index 812a032cc5b0f0075f44774d91df851e45ccc478..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/wikitext/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - wikitext - - - - -
    - -
    - - - \ No newline at end of file diff --git a/spaces/nota-ai/compressed-stable-diffusion/demo.py b/spaces/nota-ai/compressed-stable-diffusion/demo.py deleted file mode 100644 index c6eb56c450bc469453a584383b99468a78543a5f..0000000000000000000000000000000000000000 --- a/spaces/nota-ai/compressed-stable-diffusion/demo.py +++ /dev/null @@ -1,111 +0,0 @@ -from diffusers import StableDiffusionPipeline, UNet2DConditionModel -import torch -import copy - -import time - -ORIGINAL_CHECKPOINT_ID = "CompVis/stable-diffusion-v1-4" -COMPRESSED_UNET_ID = "nota-ai/bk-sdm-small" - -DEVICE='cuda' -# DEVICE='cpu' - -class SdmCompressionDemo: - def __init__(self, device) -> None: - self.device = device - self.torch_dtype = torch.float16 if 'cuda' in self.device else torch.float32 - - self.pipe_original = StableDiffusionPipeline.from_pretrained(ORIGINAL_CHECKPOINT_ID, - torch_dtype=self.torch_dtype) - self.pipe_compressed = copy.deepcopy(self.pipe_original) - self.pipe_compressed.unet = UNet2DConditionModel.from_pretrained(COMPRESSED_UNET_ID, - subfolder="unet", - torch_dtype=self.torch_dtype) - if 'cuda' in self.device: - self.pipe_original = self.pipe_original.to(self.device) - self.pipe_compressed = self.pipe_compressed.to(self.device) - self.device_msg = 'Tested on GPU.' if 'cuda' in self.device else 'Tested on CPU.' - - def _count_params(self, model): - return sum(p.numel() for p in model.parameters()) - - def get_sdm_params(self, pipe): - params_unet = self._count_params(pipe.unet) - params_text_enc = self._count_params(pipe.text_encoder) - params_image_dec = self._count_params(pipe.vae.decoder) - params_total = params_unet + params_text_enc + params_image_dec - return f"Total {(params_total/1e6):.1f}M (U-Net {(params_unet/1e6):.1f}M)" - - - def generate_image(self, pipe, text, negative, guidance_scale, steps, seed): - generator = torch.Generator(self.device).manual_seed(seed) - start = time.time() - result = pipe(text, negative_prompt = negative, generator = generator, - guidance_scale = guidance_scale, num_inference_steps = steps) - test_time = time.time() - start - - image = result.images[0] - nsfw_detected = result.nsfw_content_detected[0] - print(f"text {text} | Processed time: {test_time} sec | nsfw_flag {nsfw_detected}") - print(f"negative {negative} | guidance_scale {guidance_scale} | steps {steps} ") - print("===========") - - return image, nsfw_detected, format(test_time, ".2f") - - def error_msg(self, nsfw_detected): - if nsfw_detected: - return self.device_msg+" Black images are returned when potential harmful content is detected. Try different prompts or seeds." - else: - return self.device_msg - - def check_invalid_input(self, text): - if text == '': - return True - - def infer_original_model(self, text, negative, guidance_scale, steps, seed): - print(f"=== ORIG model --- seed {seed}") - if self.check_invalid_input(text): - return None, "Please enter the input prompt.", None - output_image, nsfw_detected, test_time = self.generate_image(self.pipe_original, - text, negative, guidance_scale, steps, seed) - - return output_image, self.error_msg(nsfw_detected), test_time - - def infer_compressed_model(self, text, negative, guidance_scale, steps, seed): - print(f"=== COMPRESSED model --- seed {seed}") - if self.check_invalid_input(text): - return None, "Please enter the input prompt.", None - output_image, nsfw_detected, test_time = self.generate_image(self.pipe_compressed, - text, negative, guidance_scale, steps, seed) - - return output_image, self.error_msg(nsfw_detected), test_time - - - def get_example_list(self): - return [ - 'a tropical bird sitting on a branch of a tree', - 'many decorative umbrellas hanging up', - 'an orange cat staring off with pretty eyes', - 'beautiful woman face with fancy makeup', - 'a decorated living room with a stylish feel', - 'a black vase holding a bouquet of roses', - 'very elegant bedroom featuring natural wood', - 'buffet-style food including cake and cheese', - 'a tall castle sitting under a cloudy sky', - 'closeup of a brown bear sitting in a grassy area', - 'a large basket with many fresh vegetables', - 'house being built with lots of wood', - 'a close up of a pizza with several toppings', - 'a golden vase with many different flows', - 'a statue of a lion face attached to brick wall', - 'something that looks particularly interesting', - 'table filled with a variety of different dishes', - 'a cinematic view of a large snowy peak', - 'a grand city in the year 2100, hyper realistic', - 'a blue eyed baby girl looking at the camera', - ] - - - - - \ No newline at end of file diff --git a/spaces/nyx-ai/stylegan2-flax-tpu/stylegan2/ops.py b/spaces/nyx-ai/stylegan2-flax-tpu/stylegan2/ops.py deleted file mode 100644 index fe128bb4c8f426c4a8aef987fee31c6e5145e63f..0000000000000000000000000000000000000000 --- a/spaces/nyx-ai/stylegan2-flax-tpu/stylegan2/ops.py +++ /dev/null @@ -1,674 +0,0 @@ -import jax -import jax.numpy as jnp -from jax import random -import flax.linen as nn -from jax import jit -import numpy as np -from functools import partial -from typing import Any -import h5py - - -#------------------------------------------------------ -# Other -#------------------------------------------------------ -def minibatch_stddev_layer(x, group_size=None, num_new_features=1): - if group_size is None: - group_size = x.shape[0] - else: - # Minibatch must be divisible by (or smaller than) group_size. - group_size = min(group_size, x.shape[0]) - - G = group_size - F = num_new_features - _, H, W, C = x.shape - c = C // F - - # [NHWC] Cast to FP32. - y = x.astype(jnp.float32) - # [GnHWFc] Split minibatch N into n groups of size G, and channels C into F groups of size c. - y = jnp.reshape(y, newshape=(G, -1, H, W, F, c)) - # [GnHWFc] Subtract mean over group. - y -= jnp.mean(y, axis=0) - # [nHWFc] Calc variance over group. - y = jnp.mean(jnp.square(y), axis=0) - # [nHWFc] Calc stddev over group. - y = jnp.sqrt(y + 1e-8) - # [nF] Take average over channels and pixels. - y = jnp.mean(y, axis=(1, 2, 4)) - # [nF] Cast back to original data type. - y = y.astype(x.dtype) - # [n11F] Add missing dimensions. - y = jnp.reshape(y, newshape=(-1, 1, 1, F)) - # [NHWC] Replicate over group and pixels. - y = jnp.tile(y, (G, H, W, 1)) - return jnp.concatenate((x, y), axis=3) - - -#------------------------------------------------------ -# Activation -#------------------------------------------------------ -def apply_activation(x, activation='linear', alpha=0.2, gain=np.sqrt(2)): - gain = jnp.array(gain, dtype=x.dtype) - if activation == 'relu': - return jax.nn.relu(x) * gain - if activation == 'leaky_relu': - return jax.nn.leaky_relu(x, negative_slope=alpha) * gain - return x - - -#------------------------------------------------------ -# Weights -#------------------------------------------------------ -def get_weight(shape, lr_multiplier=1, bias=True, param_dict=None, layer_name='', key=None): - if param_dict is None: - w = random.normal(key, shape=shape, dtype=jnp.float32) / lr_multiplier - if bias: b = jnp.zeros(shape=(shape[-1],), dtype=jnp.float32) - else: - w = jnp.array(param_dict[layer_name]['weight']).astype(jnp.float32) - if bias: b = jnp.array(param_dict[layer_name]['bias']).astype(jnp.float32) - - if bias: return w, b - return w - - -def equalize_lr_weight(w, lr_multiplier=1): - """ - Equalized learning rate, see: https://arxiv.org/pdf/1710.10196.pdf. - - Args: - w (tensor): Weight parameter. Shape [kernel, kernel, fmaps_in, fmaps_out] - for convolutions and shape [in, out] for MLPs. - lr_multiplier (float): Learning rate multiplier. - - Returns: - (tensor): Scaled weight parameter. - """ - in_features = np.prod(w.shape[:-1]) - gain = lr_multiplier / np.sqrt(in_features) - w *= gain - return w - - -def equalize_lr_bias(b, lr_multiplier=1): - """ - Equalized learning rate, see: https://arxiv.org/pdf/1710.10196.pdf. - - Args: - b (tensor): Bias parameter. - lr_multiplier (float): Learning rate multiplier. - - Returns: - (tensor): Scaled bias parameter. - """ - gain = lr_multiplier - b *= gain - return b - - -#------------------------------------------------------ -# Normalization -#------------------------------------------------------ -def normalize_2nd_moment(x, eps=1e-8): - return x * jax.lax.rsqrt(jnp.mean(jnp.square(x), axis=1, keepdims=True) + eps) - - -#------------------------------------------------------ -# Upsampling -#------------------------------------------------------ -def setup_filter(f, normalize=True, flip_filter=False, gain=1, separable=None): - """ - Convenience function to setup 2D FIR filter for `upfirdn2d()`. - - Args: - f (tensor): Tensor or python list of the shape. - normalize (bool): Normalize the filter so that it retains the magnitude. - for constant input signal (DC)? (default: True). - flip_filter (bool): Flip the filter? (default: False). - gain (int): Overall scaling factor for signal magnitude (default: 1). - separable: Return a separable filter? (default: select automatically). - - Returns: - (tensor): Output filter of shape [filter_height, filter_width] or [filter_taps] - """ - # Validate. - if f is None: - f = 1 - f = jnp.array(f, dtype=jnp.float32) - assert f.ndim in [0, 1, 2] - assert f.size > 0 - if f.ndim == 0: - f = f[jnp.newaxis] - - # Separable? - if separable is None: - separable = (f.ndim == 1 and f.size >= 8) - if f.ndim == 1 and not separable: - f = jnp.outer(f, f) - assert f.ndim == (1 if separable else 2) - - # Apply normalize, flip, gain, and device. - if normalize: - f /= jnp.sum(f) - if flip_filter: - for i in range(f.ndim): - f = jnp.flip(f, axis=i) - f = f * (gain ** (f.ndim / 2)) - return f - - -def upfirdn2d(x, f, padding=(2, 1, 2, 1), up=1, down=1, strides=(1, 1), flip_filter=False, gain=1): - - if f is None: - f = jnp.ones((1, 1), dtype=jnp.float32) - - B, H, W, C = x.shape - padx0, padx1, pady0, pady1 = padding - - # upsample by inserting zeros - x = jnp.reshape(x, newshape=(B, H, 1, W, 1, C)) - x = jnp.pad(x, pad_width=((0, 0), (0, 0), (0, up - 1), (0, 0), (0, up - 1), (0, 0))) - x = jnp.reshape(x, newshape=(B, H * up, W * up, C)) - - # padding - x = jnp.pad(x, pad_width=((0, 0), (max(pady0, 0), max(pady1, 0)), (max(padx0, 0), max(padx1, 0)), (0, 0))) - x = x[:, max(-pady0, 0) : x.shape[1] - max(-pady1, 0), max(-padx0, 0) : x.shape[2] - max(-padx1, 0)] - - # setup filter - f = f * (gain ** (f.ndim / 2)) - if not flip_filter: - for i in range(f.ndim): - f = jnp.flip(f, axis=i) - - # convole filter - f = jnp.repeat(jnp.expand_dims(f, axis=(-2, -1)), repeats=C, axis=-1) - if f.ndim == 4: - x = jax.lax.conv_general_dilated(x, - f.astype(x.dtype), - window_strides=strides or (1,) * (x.ndim - 2), - padding='valid', - dimension_numbers=nn.linear._conv_dimension_numbers(x.shape), - feature_group_count=C) - else: - x = jax.lax.conv_general_dilated(x, - jnp.expand_dims(f, axis=0).astype(x.dtype), - window_strides=strides or (1,) * (x.ndim - 2), - padding='valid', - dimension_numbers=nn.linear._conv_dimension_numbers(x.shape), - feature_group_count=C) - x = jax.lax.conv_general_dilated(x, - jnp.expand_dims(f, axis=1).astype(x.dtype), - window_strides=strides or (1,) * (x.ndim - 2), - padding='valid', - dimension_numbers=nn.linear._conv_dimension_numbers(x.shape), - feature_group_count=C) - x = x[:, ::down, ::down] - return x - - -def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1): - if f.ndim == 1: - fh, fw = f.shape[0], f.shape[0] - elif f.ndim == 2: - fh, fw = f.shape[0], f.shape[1] - else: - raise ValueError('Invalid filter shape:', f.shape) - padx0 = padding + (fw + up - 1) // 2 - padx1 = padding + (fw - up) // 2 - pady0 = padding + (fh + up - 1) // 2 - pady1 = padding + (fh - up) // 2 - return upfirdn2d(x, f=f, up=up, padding=(padx0, padx1, pady0, pady1), flip_filter=flip_filter, gain=gain * up * up) - - -#------------------------------------------------------ -# Linear -#------------------------------------------------------ -class LinearLayer(nn.Module): - """ - Linear Layer. - - Attributes: - in_features (int): Input dimension. - out_features (int): Output dimension. - use_bias (bool): If True, use bias. - bias_init (int): Bias init. - lr_multiplier (float): Learning rate multiplier. - activation (str): Activation function: 'relu', 'lrelu', etc. - param_dict (h5py.Group): Parameter dict with pretrained parameters. - layer_name (str): Layer name. - dtype (str): Data type. - rng (jax.random.PRNGKey): Random seed for initialization. - """ - in_features: int - out_features: int - use_bias: bool=True - bias_init: int=0 - lr_multiplier: float=1 - activation: str='linear' - param_dict: h5py.Group=None - layer_name: str=None - dtype: str='float32' - rng: Any=random.PRNGKey(0) - - @nn.compact - def __call__(self, x): - """ - Run Linear Layer. - - Args: - x (tensor): Input tensor of shape [N, in_features]. - - Returns: - (tensor): Output tensor of shape [N, out_features]. - """ - w_shape = [self.in_features, self.out_features] - params = get_weight(w_shape, self.lr_multiplier, self.use_bias, self.param_dict, self.layer_name, self.rng) - - if self.use_bias: - w, b = params - else: - w = params - - w = self.param(name='weight', init_fn=lambda *_ : w) - w = equalize_lr_weight(w, self.lr_multiplier) - x = jnp.matmul(x, w.astype(x.dtype)) - - if self.use_bias: - b = self.param(name='bias', init_fn=lambda *_ : b) - b = equalize_lr_bias(b, self.lr_multiplier) - x += b.astype(x.dtype) - x += self.bias_init - - x = apply_activation(x, activation=self.activation) - return x - - -#------------------------------------------------------ -# Convolution -#------------------------------------------------------ -def conv_downsample_2d(x, w, k=None, factor=2, gain=1, padding=0): - """ - Fused downsample convolution. - - Padding is performed only once at the beginning, not between the operations. - The fused op is considerably more efficient than performing the same calculation - using standard TensorFlow ops. It supports gradients of arbitrary order. - - Args: - x (tensor): Input tensor of the shape [N, H, W, C]. - w (tensor): Weight tensor of the shape [filterH, filterW, inChannels, outChannels]. - Grouped convolution can be performed by inChannels = x.shape[0] // numGroups. - k (tensor): FIR filter of the shape [firH, firW] or [firN]. - The default is `[1] * factor`, which corresponds to average pooling. - factor (int): Downsampling factor (default: 2). - gain (float): Scaling factor for signal magnitude (default: 1.0). - padding (int): Number of pixels to pad or crop the output on each side (default: 0). - - Returns: - (tensor): Output of the shape [N, H // factor, W // factor, C]. - """ - assert isinstance(factor, int) and factor >= 1 - assert isinstance(padding, int) - - # Check weight shape. - ch, cw, _inC, _outC = w.shape - assert cw == ch - - # Setup filter kernel. - k = setup_filter(k, gain=gain) - assert k.shape[0] == k.shape[1] - - # Execute. - pad0 = (k.shape[0] - factor + cw) // 2 + padding * factor - pad1 = (k.shape[0] - factor + cw - 1) // 2 + padding * factor - x = upfirdn2d(x=x, f=k, padding=(pad0, pad0, pad1, pad1)) - - x = jax.lax.conv_general_dilated(x, - w, - window_strides=(factor, factor), - padding='VALID', - dimension_numbers=nn.linear._conv_dimension_numbers(x.shape)) - return x - - -def upsample_conv_2d(x, w, k=None, factor=2, gain=1, padding=0): - """ - Fused upsample convolution. - - Padding is performed only once at the beginning, not between the operations. - The fused op is considerably more efficient than performing the same calculation - using standard TensorFlow ops. It supports gradients of arbitrary order. - - Args: - x (tensor): Input tensor of the shape [N, H, W, C]. - w (tensor): Weight tensor of the shape [filterH, filterW, inChannels, outChannels]. - Grouped convolution can be performed by inChannels = x.shape[0] // numGroups. - k (tensor): FIR filter of the shape [firH, firW] or [firN]. - The default is [1] * factor, which corresponds to nearest-neighbor upsampling. - factor (int): Integer upsampling factor (default: 2). - gain (float): Scaling factor for signal magnitude (default: 1.0). - padding (int): Number of pixels to pad or crop the output on each side (default: 0). - - Returns: - (tensor): Output of the shape [N, H * factor, W * factor, C]. - """ - assert isinstance(factor, int) and factor >= 1 - assert isinstance(padding, int) - - # Check weight shape. - ch, cw, _inC, _outC = w.shape - inC = w.shape[2] - outC = w.shape[3] - assert cw == ch - - # Fast path for 1x1 convolution. - if cw == 1 and ch == 1: - x = jax.lax.conv_general_dilated(x, - w, - window_strides=(1, 1), - padding='VALID', - dimension_numbers=nn.linear._conv_dimension_numbers(x.shape)) - k = setup_filter(k, gain=gain * (factor ** 2)) - pad0 = (k.shape[0] + factor - cw) // 2 + padding - pad1 = (k.shape[0] - factor) // 2 + padding - x = upfirdn2d(x, f=k, up=factor, padding=(pad0, pad1, pad0, pad1)) - return x - - # Setup filter kernel. - k = setup_filter(k, gain=gain * (factor ** 2)) - assert k.shape[0] == k.shape[1] - - # Determine data dimensions. - stride = (factor, factor) - output_shape = ((x.shape[1] - 1) * factor + ch, (x.shape[2] - 1) * factor + cw) - num_groups = x.shape[3] // inC - - # Transpose weights. - w = jnp.reshape(w, (ch, cw, inC, num_groups, -1)) - w = jnp.transpose(w[::-1, ::-1], (0, 1, 4, 3, 2)) - w = jnp.reshape(w, (ch, cw, -1, num_groups * inC)) - - # Execute. - x = gradient_based_conv_transpose(lhs=x, - rhs=w, - strides=stride, - padding='VALID', - output_padding=(0, 0, 0, 0), - output_shape=output_shape, - ) - - pad0 = (k.shape[0] + factor - cw) // 2 + padding - pad1 = (k.shape[0] - factor - cw + 3) // 2 + padding - x = upfirdn2d(x=x, f=k, padding=(pad0, pad1, pad0, pad1)) - return x - - -def conv2d(x, w, up=False, down=False, resample_kernel=None, padding=0): - assert not (up and down) - kernel = w.shape[0] - assert w.shape[1] == kernel - assert kernel >= 1 and kernel % 2 == 1 - - num_groups = x.shape[3] // w.shape[2] - - w = w.astype(x.dtype) - if up: - x = upsample_conv_2d(x, w, k=resample_kernel, padding=padding) - elif down: - x = conv_downsample_2d(x, w, k=resample_kernel, padding=padding) - else: - padding_mode = {0: 'SAME', -(kernel // 2): 'VALID'}[padding] - x = jax.lax.conv_general_dilated(x, - w, - window_strides=(1, 1), - padding=padding_mode, - dimension_numbers=nn.linear._conv_dimension_numbers(x.shape), - feature_group_count=num_groups) - return x - - -def modulated_conv2d_layer(x, w, s, fmaps, kernel, up=False, down=False, demodulate=True, resample_kernel=None, fused_modconv=False): - assert not (up and down) - assert kernel >= 1 and kernel % 2 == 1 - - # Get weight. - wshape = (kernel, kernel, x.shape[3], fmaps) - if x.dtype.name == 'float16' and not fused_modconv and demodulate: - w *= jnp.sqrt(1 / np.prod(wshape[:-1])) / jnp.max(jnp.abs(w), axis=(0, 1, 2)) # Pre-normalize to avoid float16 overflow. - ww = w[jnp.newaxis] # [BkkIO] Introduce minibatch dimension. - - # Modulate. - if x.dtype.name == 'float16' and not fused_modconv and demodulate: - s *= 1 / jnp.max(jnp.abs(s)) # Pre-normalize to avoid float16 overflow. - ww *= s[:, jnp.newaxis, jnp.newaxis, :, jnp.newaxis].astype(w.dtype) # [BkkIO] Scale input feature maps. - - # Demodulate. - if demodulate: - d = jax.lax.rsqrt(jnp.sum(jnp.square(ww), axis=(1, 2, 3)) + 1e-8) # [BO] Scaling factor. - ww *= d[:, jnp.newaxis, jnp.newaxis, jnp.newaxis, :] # [BkkIO] Scale output feature maps. - - # Reshape/scale input. - if fused_modconv: - x = jnp.transpose(x, axes=(0, 3, 1, 2)) - x = jnp.reshape(x, (1, -1, x.shape[2], x.shape[3])) # Fused => reshape minibatch to convolution groups. - x = jnp.transpose(x, axes=(0, 2, 3, 1)) - w = jnp.reshape(jnp.transpose(ww, (1, 2, 3, 0, 4)), (ww.shape[1], ww.shape[2], ww.shape[3], -1)) - else: - x *= s[:, jnp.newaxis, jnp.newaxis].astype(x.dtype) # [BIhw] Not fused => scale input activations. - - # 2D convolution. - x = conv2d(x, w.astype(x.dtype), up=up, down=down, resample_kernel=resample_kernel) - - # Reshape/scale output. - if fused_modconv: - x = jnp.transpose(x, axes=(0, 3, 1, 2)) - x = jnp.reshape(x, (-1, fmaps, x.shape[2], x.shape[3])) # Fused => reshape convolution groups back to minibatch. - x = jnp.transpose(x, axes=(0, 2, 3, 1)) - elif demodulate: - x *= d[:, jnp.newaxis, jnp.newaxis].astype(x.dtype) # [BOhw] Not fused => scale output activations. - - return x - - -def _deconv_output_length(input_length, filter_size, padding, output_padding=None, stride=0, dilation=1): - """ - Taken from: https://github.com/google/jax/pull/5772/commits - - Determines the output length of a transposed convolution given the input length. - Function modified from Keras. - Arguments: - input_length: Integer. - filter_size: Integer. - padding: one of `"SAME"`, `"VALID"`, or a 2-integer tuple. - output_padding: Integer, amount of padding along the output dimension. Can - be set to `None` in which case the output length is inferred. - stride: Integer. - dilation: Integer. - Returns: - The output length (integer). - """ - if input_length is None: - return None - - # Get the dilated kernel size - filter_size = filter_size + (filter_size - 1) * (dilation - 1) - - # Infer length if output padding is None, else compute the exact length - if output_padding is None: - if padding == 'VALID': - length = input_length * stride + max(filter_size - stride, 0) - elif padding == 'SAME': - length = input_length * stride - else: - length = ((input_length - 1) * stride + filter_size - padding[0] - padding[1]) - - else: - if padding == 'SAME': - pad = filter_size // 2 - total_pad = pad * 2 - elif padding == 'VALID': - total_pad = 0 - else: - total_pad = padding[0] + padding[1] - - length = ((input_length - 1) * stride + filter_size - total_pad + output_padding) - return length - - -def _compute_adjusted_padding(input_size, output_size, kernel_size, stride, padding, dilation=1): - """ - Taken from: https://github.com/google/jax/pull/5772/commits - - Computes adjusted padding for desired ConvTranspose `output_size`. - Ported from DeepMind Haiku. - """ - kernel_size = (kernel_size - 1) * dilation + 1 - if padding == 'VALID': - expected_input_size = (output_size - kernel_size + stride) // stride - if input_size != expected_input_size: - raise ValueError(f'The expected input size with the current set of input ' - f'parameters is {expected_input_size} which doesn\'t ' - f'match the actual input size {input_size}.') - padding_before = 0 - elif padding == 'SAME': - expected_input_size = (output_size + stride - 1) // stride - if input_size != expected_input_size: - raise ValueError(f'The expected input size with the current set of input ' - f'parameters is {expected_input_size} which doesn\'t ' - f'match the actual input size {input_size}.') - padding_needed = max(0, (input_size - 1) * stride + kernel_size - output_size) - padding_before = padding_needed // 2 - else: - padding_before = padding[0] # type: ignore[assignment] - - expanded_input_size = (input_size - 1) * stride + 1 - padded_out_size = output_size + kernel_size - 1 - pad_before = kernel_size - 1 - padding_before - pad_after = padded_out_size - expanded_input_size - pad_before - return (pad_before, pad_after) - - -def _flip_axes(x, axes): - """ - Taken from: https://github.com/google/jax/blob/master/jax/_src/lax/lax.py - - Flip ndarray 'x' along each axis specified in axes tuple. - """ - for axis in axes: - x = jnp.flip(x, axis) - return x - - -def gradient_based_conv_transpose(lhs, - rhs, - strides, - padding, - output_padding, - output_shape=None, - dilation=None, - dimension_numbers=None, - transpose_kernel=True, - feature_group_count=1, - precision=None): - """ - Taken from: https://github.com/google/jax/pull/5772/commits - - Convenience wrapper for calculating the N-d transposed convolution. - Much like `conv_transpose`, this function calculates transposed convolutions - via fractionally strided convolution rather than calculating the gradient - (transpose) of a forward convolution. However, the latter is more common - among deep learning frameworks, such as TensorFlow, PyTorch, and Keras. - This function provides the same set of APIs to help reproduce results in these frameworks. - Args: - lhs: a rank `n+2` dimensional input array. - rhs: a rank `n+2` dimensional array of kernel weights. - strides: sequence of `n` integers, amounts to strides of the corresponding forward convolution. - padding: `"SAME"`, `"VALID"`, or a sequence of `n` integer 2-tuples that controls - the before-and-after padding for each `n` spatial dimension of - the corresponding forward convolution. - output_padding: A sequence of integers specifying the amount of padding along - each spacial dimension of the output tensor, used to disambiguate the output shape of - transposed convolutions when the stride is larger than 1. - (see a detailed description at https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html) - The amount of output padding along a given dimension must - be lower than the stride along that same dimension. - If set to `None` (default), the output shape is inferred. - If both `output_padding` and `output_shape` are specified, they have to be mutually compatible. - output_shape: Output shape of the spatial dimensions of a transpose - convolution. Can be `None` or an iterable of `n` integers. If a `None` value is given (default), - the shape is automatically calculated. - Similar to `output_padding`, `output_shape` is also for disambiguating the output shape - when stride > 1 (see also - https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose) - If both `output_padding` and `output_shape` are specified, they have to be mutually compatible. - dilation: `None`, or a sequence of `n` integers, giving the - dilation factor to apply in each spatial dimension of `rhs`. Dilated convolution - is also known as atrous convolution. - dimension_numbers: tuple of dimension descriptors as in lax.conv_general_dilated. Defaults to tensorflow convention. - transpose_kernel: if `True` flips spatial axes and swaps the input/output - channel axes of the kernel. This makes the output of this function identical - to the gradient-derived functions like keras.layers.Conv2DTranspose and - torch.nn.ConvTranspose2d applied to the same kernel. - Although for typical use in neural nets this is unnecessary - and makes input/output channel specification confusing, you need to set this to `True` - in order to match the behavior in many deep learning frameworks, such as TensorFlow, Keras, and PyTorch. - precision: Optional. Either ``None``, which means the default precision for - the backend, a ``lax.Precision`` enum value (``Precision.DEFAULT``, - ``Precision.HIGH`` or ``Precision.HIGHEST``) or a tuple of two - ``lax.Precision`` enums indicating precision of ``lhs``` and ``rhs``. - Returns: - Transposed N-d convolution. - """ - assert len(lhs.shape) == len(rhs.shape) and len(lhs.shape) >= 2 - ndims = len(lhs.shape) - one = (1,) * (ndims - 2) - # Set dimensional layout defaults if not specified. - if dimension_numbers is None: - if ndims == 2: - dimension_numbers = ('NC', 'IO', 'NC') - elif ndims == 3: - dimension_numbers = ('NHC', 'HIO', 'NHC') - elif ndims == 4: - dimension_numbers = ('NHWC', 'HWIO', 'NHWC') - elif ndims == 5: - dimension_numbers = ('NHWDC', 'HWDIO', 'NHWDC') - else: - raise ValueError('No 4+ dimensional dimension_number defaults.') - dn = jax.lax.conv_dimension_numbers(lhs.shape, rhs.shape, dimension_numbers) - k_shape = np.take(rhs.shape, dn.rhs_spec) - k_sdims = k_shape[2:] # type: ignore[index] - i_shape = np.take(lhs.shape, dn.lhs_spec) - i_sdims = i_shape[2:] # type: ignore[index] - - # Calculate correct output shape given padding and strides. - if dilation is None: - dilation = (1,) * (rhs.ndim - 2) - - if output_padding is None: - output_padding = [None] * (rhs.ndim - 2) # type: ignore[list-item] - - if isinstance(padding, str): - if padding in {'SAME', 'VALID'}: - padding = [padding] * (rhs.ndim - 2) # type: ignore[list-item] - else: - raise ValueError(f"`padding` must be 'VALID' or 'SAME'. Passed: {padding}.") - - inferred_output_shape = tuple(map(_deconv_output_length, i_sdims, k_sdims, padding, output_padding, strides, dilation)) - - if output_shape is None: - output_shape = inferred_output_shape # type: ignore[assignment] - else: - if not output_shape == inferred_output_shape: - raise ValueError(f'`output_padding` and `output_shape` are not compatible.' - f'Inferred output shape from `output_padding`: {inferred_output_shape}, ' - f'but got `output_shape` {output_shape}') - - pads = tuple(map(_compute_adjusted_padding, i_sdims, output_shape, k_sdims, strides, padding, dilation)) - - if transpose_kernel: - # flip spatial dims and swap input / output channel axes - rhs = _flip_axes(rhs, np.array(dn.rhs_spec)[2:]) - rhs = np.swapaxes(rhs, dn.rhs_spec[0], dn.rhs_spec[1]) - return jax.lax.conv_general_dilated(lhs, rhs, one, pads, strides, dilation, dn, feature_group_count, precision=precision) - - diff --git a/spaces/openbio/calculator/utils/gradio.py b/spaces/openbio/calculator/utils/gradio.py deleted file mode 100644 index 283064f9232f318e2b454525c25dabd672028f2d..0000000000000000000000000000000000000000 --- a/spaces/openbio/calculator/utils/gradio.py +++ /dev/null @@ -1,8 +0,0 @@ -get_window_url_params = """ - function() { - const params = new URLSearchParams(window.location.search); - const url_params = Object.fromEntries(params); - console.log('url_params', url_params) - return url_params; - } - """ diff --git a/spaces/osanseviero/Apocalyptify_webcam/README.md b/spaces/osanseviero/Apocalyptify_webcam/README.md deleted file mode 100644 index 842888476ba12b889c1ad6585e43ca9cad031cf5..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/Apocalyptify_webcam/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Apocalyptify_webcam -emoji: 🧟‍♀️ -colorFrom: pink -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/osanseviero/gradio_auth/app.py b/spaces/osanseviero/gradio_auth/app.py deleted file mode 100644 index 3aed1eabf162497d82dcbd3cd78b981ba02f7c79..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/gradio_auth/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def reverse(text): - return text[::-1] - -demo = gr.Interface(reverse, "text", "text") -demo.launch(debug=True, enable_queue=False, auth=("username", "password"), auth_message="Try this") \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/stable_diffusion/overview.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/stable_diffusion/overview.md deleted file mode 100644 index 82b2597a7043294cff1e235614be612ed4d35d0b..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/stable_diffusion/overview.md +++ /dev/null @@ -1,180 +0,0 @@ - - -# Stable Diffusion pipelines - -Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. - -Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. - -For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI [announcement](https://stability.ai/blog/stable-diffusion-announcement) and our own [blog post](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) for more technical details. - -You can find the original codebase for Stable Diffusion v1.0 at [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion) and Stable Diffusion v2.0 at [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion) as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations. Explore these organizations to find the best checkpoint for your use-case! - -The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: - -
    -
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    - Pipeline - - Supported tasks - - Space -
    - StableDiffusion - text-to-image -
    - StableDiffusionImg2Img - image-to-image -
    - StableDiffusionInpaint - inpainting -
    - StableDiffusionDepth2Img - depth-to-image -
    - StableDiffusionImageVariation - image variation -
    - StableDiffusionPipelineSafe - filtered text-to-image -
    - StableDiffusion2 - text-to-image, inpainting, depth-to-image, super-resolution -
    - StableDiffusionXL - text-to-image, image-to-image -
    - StableDiffusionLatentUpscale - super-resolution -
    - StableDiffusionUpscale - super-resolution
    - StableDiffusionLDM3D - text-to-rgb, text-to-depth -
    -
    -
    - -## Tips - -To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. - -### Explore tradeoff between speed and quality - -[`StableDiffusionPipeline`] uses the [`PNDMScheduler`] by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. For example, if you want to use the [`EulerDiscreteScheduler`] instead of the default: - -```py -from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler - -pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") -pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) - -# or -euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") -pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) -``` - -### Reuse pipeline components to save memory - -To save memory and use the same components across multiple pipelines, use the `.components` method to avoid loading weights into RAM more than once. - -```py -from diffusers import ( - StableDiffusionPipeline, - StableDiffusionImg2ImgPipeline, - StableDiffusionInpaintPipeline, -) - -text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") -img2img = StableDiffusionImg2ImgPipeline(**text2img.components) -inpaint = StableDiffusionInpaintPipeline(**text2img.components) - -# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline -``` \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/resnet_flax.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/resnet_flax.py deleted file mode 100644 index 9a391f4b947e74beda03f26e376141b2b3c21502..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/resnet_flax.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import flax.linen as nn -import jax -import jax.numpy as jnp - - -class FlaxUpsample2D(nn.Module): - out_channels: int - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.conv = nn.Conv( - self.out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - def __call__(self, hidden_states): - batch, height, width, channels = hidden_states.shape - hidden_states = jax.image.resize( - hidden_states, - shape=(batch, height * 2, width * 2, channels), - method="nearest", - ) - hidden_states = self.conv(hidden_states) - return hidden_states - - -class FlaxDownsample2D(nn.Module): - out_channels: int - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.conv = nn.Conv( - self.out_channels, - kernel_size=(3, 3), - strides=(2, 2), - padding=((1, 1), (1, 1)), # padding="VALID", - dtype=self.dtype, - ) - - def __call__(self, hidden_states): - # pad = ((0, 0), (0, 1), (0, 1), (0, 0)) # pad height and width dim - # hidden_states = jnp.pad(hidden_states, pad_width=pad) - hidden_states = self.conv(hidden_states) - return hidden_states - - -class FlaxResnetBlock2D(nn.Module): - in_channels: int - out_channels: int = None - dropout_prob: float = 0.0 - use_nin_shortcut: bool = None - dtype: jnp.dtype = jnp.float32 - - def setup(self): - out_channels = self.in_channels if self.out_channels is None else self.out_channels - - self.norm1 = nn.GroupNorm(num_groups=32, epsilon=1e-5) - self.conv1 = nn.Conv( - out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - self.time_emb_proj = nn.Dense(out_channels, dtype=self.dtype) - - self.norm2 = nn.GroupNorm(num_groups=32, epsilon=1e-5) - self.dropout = nn.Dropout(self.dropout_prob) - self.conv2 = nn.Conv( - out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - use_nin_shortcut = self.in_channels != out_channels if self.use_nin_shortcut is None else self.use_nin_shortcut - - self.conv_shortcut = None - if use_nin_shortcut: - self.conv_shortcut = nn.Conv( - out_channels, - kernel_size=(1, 1), - strides=(1, 1), - padding="VALID", - dtype=self.dtype, - ) - - def __call__(self, hidden_states, temb, deterministic=True): - residual = hidden_states - hidden_states = self.norm1(hidden_states) - hidden_states = nn.swish(hidden_states) - hidden_states = self.conv1(hidden_states) - - temb = self.time_emb_proj(nn.swish(temb)) - temb = jnp.expand_dims(jnp.expand_dims(temb, 1), 1) - hidden_states = hidden_states + temb - - hidden_states = self.norm2(hidden_states) - hidden_states = nn.swish(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic) - hidden_states = self.conv2(hidden_states) - - if self.conv_shortcut is not None: - residual = self.conv_shortcut(residual) - - return hidden_states + residual diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/transformer_2d.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/transformer_2d.py deleted file mode 100644 index c96aef65f33953b3c27b906dc3add6fe683806e3..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/transformer_2d.py +++ /dev/null @@ -1,365 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Any, Dict, Optional - -import torch -import torch.nn.functional as F -from torch import nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..models.embeddings import ImagePositionalEmbeddings -from ..utils import BaseOutput, deprecate -from .attention import BasicTransformerBlock -from .embeddings import PatchEmbed -from .lora import LoRACompatibleConv, LoRACompatibleLinear -from .modeling_utils import ModelMixin - - -@dataclass -class Transformer2DModelOutput(BaseOutput): - """ - The output of [`Transformer2DModel`]. - - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [`Transformer2DModel`] is discrete): - The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability - distributions for the unnoised latent pixels. - """ - - sample: torch.FloatTensor - - -class Transformer2DModel(ModelMixin, ConfigMixin): - """ - A 2D Transformer model for image-like data. - - Parameters: - num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head. - in_channels (`int`, *optional*): - The number of channels in the input and output (specify if the input is **continuous**). - num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - cross_attention_dim (`int`, *optional*): The number of `encoder_hidden_states` dimensions to use. - sample_size (`int`, *optional*): The width of the latent images (specify if the input is **discrete**). - This is fixed during training since it is used to learn a number of position embeddings. - num_vector_embeds (`int`, *optional*): - The number of classes of the vector embeddings of the latent pixels (specify if the input is **discrete**). - Includes the class for the masked latent pixel. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to use in feed-forward. - num_embeds_ada_norm ( `int`, *optional*): - The number of diffusion steps used during training. Pass if at least one of the norm_layers is - `AdaLayerNorm`. This is fixed during training since it is used to learn a number of embeddings that are - added to the hidden states. - - During inference, you can denoise for up to but not more steps than `num_embeds_ada_norm`. - attention_bias (`bool`, *optional*): - Configure if the `TransformerBlocks` attention should contain a bias parameter. - """ - - @register_to_config - def __init__( - self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - out_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - sample_size: Optional[int] = None, - num_vector_embeds: Optional[int] = None, - patch_size: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - use_linear_projection: bool = False, - only_cross_attention: bool = False, - double_self_attention: bool = False, - upcast_attention: bool = False, - norm_type: str = "layer_norm", - norm_elementwise_affine: bool = True, - attention_type: str = "default", - ): - super().__init__() - self.use_linear_projection = use_linear_projection - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - inner_dim = num_attention_heads * attention_head_dim - - # 1. Transformer2DModel can process both standard continuous images of shape `(batch_size, num_channels, width, height)` as well as quantized image embeddings of shape `(batch_size, num_image_vectors)` - # Define whether input is continuous or discrete depending on configuration - self.is_input_continuous = (in_channels is not None) and (patch_size is None) - self.is_input_vectorized = num_vector_embeds is not None - self.is_input_patches = in_channels is not None and patch_size is not None - - if norm_type == "layer_norm" and num_embeds_ada_norm is not None: - deprecation_message = ( - f"The configuration file of this model: {self.__class__} is outdated. `norm_type` is either not set or" - " incorrectly set to `'layer_norm'`.Make sure to set `norm_type` to `'ada_norm'` in the config." - " Please make sure to update the config accordingly as leaving `norm_type` might led to incorrect" - " results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it" - " would be very nice if you could open a Pull request for the `transformer/config.json` file" - ) - deprecate("norm_type!=num_embeds_ada_norm", "1.0.0", deprecation_message, standard_warn=False) - norm_type = "ada_norm" - - if self.is_input_continuous and self.is_input_vectorized: - raise ValueError( - f"Cannot define both `in_channels`: {in_channels} and `num_vector_embeds`: {num_vector_embeds}. Make" - " sure that either `in_channels` or `num_vector_embeds` is None." - ) - elif self.is_input_vectorized and self.is_input_patches: - raise ValueError( - f"Cannot define both `num_vector_embeds`: {num_vector_embeds} and `patch_size`: {patch_size}. Make" - " sure that either `num_vector_embeds` or `num_patches` is None." - ) - elif not self.is_input_continuous and not self.is_input_vectorized and not self.is_input_patches: - raise ValueError( - f"Has to define `in_channels`: {in_channels}, `num_vector_embeds`: {num_vector_embeds}, or patch_size:" - f" {patch_size}. Make sure that `in_channels`, `num_vector_embeds` or `num_patches` is not None." - ) - - # 2. Define input layers - if self.is_input_continuous: - self.in_channels = in_channels - - self.norm = torch.nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True) - if use_linear_projection: - self.proj_in = LoRACompatibleLinear(in_channels, inner_dim) - else: - self.proj_in = LoRACompatibleConv(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - elif self.is_input_vectorized: - assert sample_size is not None, "Transformer2DModel over discrete input must provide sample_size" - assert num_vector_embeds is not None, "Transformer2DModel over discrete input must provide num_embed" - - self.height = sample_size - self.width = sample_size - self.num_vector_embeds = num_vector_embeds - self.num_latent_pixels = self.height * self.width - - self.latent_image_embedding = ImagePositionalEmbeddings( - num_embed=num_vector_embeds, embed_dim=inner_dim, height=self.height, width=self.width - ) - elif self.is_input_patches: - assert sample_size is not None, "Transformer2DModel over patched input must provide sample_size" - - self.height = sample_size - self.width = sample_size - - self.patch_size = patch_size - self.pos_embed = PatchEmbed( - height=sample_size, - width=sample_size, - patch_size=patch_size, - in_channels=in_channels, - embed_dim=inner_dim, - ) - - # 3. Define transformers blocks - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - cross_attention_dim=cross_attention_dim, - activation_fn=activation_fn, - num_embeds_ada_norm=num_embeds_ada_norm, - attention_bias=attention_bias, - only_cross_attention=only_cross_attention, - double_self_attention=double_self_attention, - upcast_attention=upcast_attention, - norm_type=norm_type, - norm_elementwise_affine=norm_elementwise_affine, - attention_type=attention_type, - ) - for d in range(num_layers) - ] - ) - - # 4. Define output layers - self.out_channels = in_channels if out_channels is None else out_channels - if self.is_input_continuous: - # TODO: should use out_channels for continuous projections - if use_linear_projection: - self.proj_out = LoRACompatibleLinear(inner_dim, in_channels) - else: - self.proj_out = LoRACompatibleConv(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - elif self.is_input_vectorized: - self.norm_out = nn.LayerNorm(inner_dim) - self.out = nn.Linear(inner_dim, self.num_vector_embeds - 1) - elif self.is_input_patches: - self.norm_out = nn.LayerNorm(inner_dim, elementwise_affine=False, eps=1e-6) - self.proj_out_1 = nn.Linear(inner_dim, 2 * inner_dim) - self.proj_out_2 = nn.Linear(inner_dim, patch_size * patch_size * self.out_channels) - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - encoder_hidden_states: Optional[torch.Tensor] = None, - timestep: Optional[torch.LongTensor] = None, - class_labels: Optional[torch.LongTensor] = None, - cross_attention_kwargs: Dict[str, Any] = None, - attention_mask: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - return_dict: bool = True, - ): - """ - The [`Transformer2DModel`] forward method. - - Args: - hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous): - Input `hidden_states`. - encoder_hidden_states ( `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*): - Conditional embeddings for cross attention layer. If not given, cross-attention defaults to - self-attention. - timestep ( `torch.LongTensor`, *optional*): - Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`. - class_labels ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*): - Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in - `AdaLayerZeroNorm`. - encoder_attention_mask ( `torch.Tensor`, *optional*): - Cross-attention mask applied to `encoder_hidden_states`. Two formats supported: - - * Mask `(batch, sequence_length)` True = keep, False = discard. - * Bias `(batch, 1, sequence_length)` 0 = keep, -10000 = discard. - - If `ndim == 2`: will be interpreted as a mask, then converted into a bias consistent with the format - above. This bias will be added to the cross-attention scores. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain - tuple. - - Returns: - If `return_dict` is True, an [`~models.transformer_2d.Transformer2DModelOutput`] is returned, otherwise a - `tuple` where the first element is the sample tensor. - """ - # ensure attention_mask is a bias, and give it a singleton query_tokens dimension. - # we may have done this conversion already, e.g. if we came here via UNet2DConditionModel#forward. - # we can tell by counting dims; if ndim == 2: it's a mask rather than a bias. - # expects mask of shape: - # [batch, key_tokens] - # adds singleton query_tokens dimension: - # [batch, 1, key_tokens] - # this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes: - # [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn) - # [batch * heads, query_tokens, key_tokens] (e.g. xformers or classic attn) - if attention_mask is not None and attention_mask.ndim == 2: - # assume that mask is expressed as: - # (1 = keep, 0 = discard) - # convert mask into a bias that can be added to attention scores: - # (keep = +0, discard = -10000.0) - attention_mask = (1 - attention_mask.to(hidden_states.dtype)) * -10000.0 - attention_mask = attention_mask.unsqueeze(1) - - # convert encoder_attention_mask to a bias the same way we do for attention_mask - if encoder_attention_mask is not None and encoder_attention_mask.ndim == 2: - encoder_attention_mask = (1 - encoder_attention_mask.to(hidden_states.dtype)) * -10000.0 - encoder_attention_mask = encoder_attention_mask.unsqueeze(1) - - # Retrieve lora scale. - lora_scale = cross_attention_kwargs.get("scale", 1.0) if cross_attention_kwargs is not None else 1.0 - - # 1. Input - if self.is_input_continuous: - batch, _, height, width = hidden_states.shape - residual = hidden_states - - hidden_states = self.norm(hidden_states) - if not self.use_linear_projection: - hidden_states = self.proj_in(hidden_states, scale=lora_scale) - inner_dim = hidden_states.shape[1] - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim) - else: - inner_dim = hidden_states.shape[1] - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim) - hidden_states = self.proj_in(hidden_states, scale=lora_scale) - - elif self.is_input_vectorized: - hidden_states = self.latent_image_embedding(hidden_states) - elif self.is_input_patches: - hidden_states = self.pos_embed(hidden_states) - - # 2. Blocks - for block in self.transformer_blocks: - if self.training and self.gradient_checkpointing: - hidden_states = torch.utils.checkpoint.checkpoint( - block, - hidden_states, - attention_mask, - encoder_hidden_states, - encoder_attention_mask, - timestep, - cross_attention_kwargs, - class_labels, - use_reentrant=False, - ) - else: - hidden_states = block( - hidden_states, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - timestep=timestep, - cross_attention_kwargs=cross_attention_kwargs, - class_labels=class_labels, - ) - - # 3. Output - if self.is_input_continuous: - if not self.use_linear_projection: - hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2).contiguous() - hidden_states = self.proj_out(hidden_states, scale=lora_scale) - else: - hidden_states = self.proj_out(hidden_states, scale=lora_scale) - hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2).contiguous() - - output = hidden_states + residual - elif self.is_input_vectorized: - hidden_states = self.norm_out(hidden_states) - logits = self.out(hidden_states) - # (batch, self.num_vector_embeds - 1, self.num_latent_pixels) - logits = logits.permute(0, 2, 1) - - # log(p(x_0)) - output = F.log_softmax(logits.double(), dim=1).float() - elif self.is_input_patches: - # TODO: cleanup! - conditioning = self.transformer_blocks[0].norm1.emb( - timestep, class_labels, hidden_dtype=hidden_states.dtype - ) - shift, scale = self.proj_out_1(F.silu(conditioning)).chunk(2, dim=1) - hidden_states = self.norm_out(hidden_states) * (1 + scale[:, None]) + shift[:, None] - hidden_states = self.proj_out_2(hidden_states) - - # unpatchify - height = width = int(hidden_states.shape[1] ** 0.5) - hidden_states = hidden_states.reshape( - shape=(-1, height, width, self.patch_size, self.patch_size, self.out_channels) - ) - hidden_states = torch.einsum("nhwpqc->nchpwq", hidden_states) - output = hidden_states.reshape( - shape=(-1, self.out_channels, height * self.patch_size, width * self.patch_size) - ) - - if not return_dict: - return (output,) - - return Transformer2DModelOutput(sample=output) diff --git a/spaces/pakooo/Text2Image/app.py b/spaces/pakooo/Text2Image/app.py deleted file mode 100644 index eea2d10c803e7132510131db54e70164cef0e5df..0000000000000000000000000000000000000000 --- a/spaces/pakooo/Text2Image/app.py +++ /dev/null @@ -1,367 +0,0 @@ -from transformers import pipeline -import gradio as gr -import random -import string -import paddlehub as hub -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer -from loguru import logger - -from utils import get_tmt_client, getTextTrans_tmt -tmt_client = get_tmt_client() - -# language_translation_model = hub.Module(directory=f'./baidu_translate') -def getTextTrans(text, source='zh', target='en'): - return getTextTrans_tmt(tmt_client, text, source, target) - # def is_chinese(string): - # for ch in string: - # if u'\u4e00' <= ch <= u'\u9fff': - # return True - # return False - - # if not is_chinese(text) and target == 'en': - # return text - - # try: - # text_translation = language_translation_model.translate(text, source, target) - # return text_translation - # except Exception as e: - # return text - -space_ids = { - "spaces/stabilityai/stable-diffusion": "SD 2.1", - "spaces/runwayml/stable-diffusion-v1-5": "SD 1.5", - "spaces/stabilityai/stable-diffusion-1": "SD 1.0", - "dalle_mini_tab": "Dalle mini", - "spaces/IDEA-CCNL/Taiyi-Stable-Diffusion-Chinese": "Taiyi(太乙)", - } - -tab_actions = [] -tab_titles = [] - -extend_prompt_1 = True -extend_prompt_2 = True -extend_prompt_3 = True - -do_dreamlike_photoreal = False -if do_dreamlike_photoreal: - def add_random_noise(prompt, noise_level=0.1): - # Get the percentage of characters to add as noise - percentage_noise = noise_level * 5 - # Get the number of characters to add as noise - num_noise_chars = int(len(prompt) * (percentage_noise/100)) - # Get the indices of the characters to add noise to - noise_indices = random.sample(range(len(prompt)), num_noise_chars) - # Add noise to the selected characters - prompt_list = list(prompt) - for index in noise_indices: - prompt_list[index] = random.choice(string.ascii_letters + string.punctuation) - new_prompt = "".join(prompt_list) - return new_prompt - - dreamlike_photoreal_2_0 = gr.Interface.load("models/dreamlike-art/dreamlike-photoreal-2.0") - dreamlike_image = gr.Image(label="Dreamlike Photoreal 2.0") - - tab_actions.append(dreamlike_image) - tab_titles.append("Dreamlike_2.0") - -for space_id in space_ids.keys(): - print(space_id, space_ids[space_id]) - try: - tab_title = space_ids[space_id] - tab_titles.append(tab_title) - if (tab_title == 'Dalle mini'): - tab_content = gr.Blocks(elem_id='dalle_mini') - tab_actions.append(tab_content) - else: - tab_content = gr.Interface.load(space_id) - tab_actions.append(tab_content) - except Exception as e: - logger.info(f"load_fail__{space_id}_{e}") - -start_work = """async() => { - function isMobile() { - try { - document.createEvent("TouchEvent"); return true; - } catch(e) { - return false; - } - } - function getClientHeight() - { - var clientHeight=0; - if(document.body.clientHeight&&document.documentElement.clientHeight) { - var clientHeight = (document.body.clientHeightdocument.documentElement.clientHeight)?document.body.clientHeight:document.documentElement.clientHeight; - } - return clientHeight; - } - - function setNativeValue(element, value) { - const valueSetter = Object.getOwnPropertyDescriptor(element.__proto__, 'value').set; - const prototype = Object.getPrototypeOf(element); - const prototypeValueSetter = Object.getOwnPropertyDescriptor(prototype, 'value').set; - - if (valueSetter && valueSetter !== prototypeValueSetter) { - prototypeValueSetter.call(element, value); - } else { - valueSetter.call(element, value); - } - } - window['tab_advanced'] = 0; - - var gradioEl = document.querySelector('body > gradio-app').shadowRoot; - if (!gradioEl) { - gradioEl = document.querySelector('body > gradio-app'); - } - - if (typeof window['gradioEl'] === 'undefined') { - window['gradioEl'] = gradioEl; - tabitems = window['gradioEl'].querySelectorAll('.tabitem'); - tabitems_title = window['gradioEl'].querySelectorAll('#tab_demo')[0].children[0].children[0].children; - window['dalle_mini_block'] = null; - window['dalle_mini_iframe'] = null; - for (var i = 0; i < tabitems.length; i++) { - if (tabitems_title[i].innerText.indexOf('SD') >= 0) { - tabitems[i].childNodes[0].children[0].style.display='none'; - for (var j = 0; j < tabitems[i].childNodes[0].children[1].children.length; j++) { - if (j != 1) { - tabitems[i].childNodes[0].children[1].children[j].style.display='none'; - } - } - if (tabitems_title[i].innerText.indexOf('SD 1') >= 0) { - for (var j = 0; j < 4; j++) { - tabitems[i].childNodes[0].children[1].children[3].children[1].children[j].children[2].removeAttribute("disabled"); - } - } else if (tabitems_title[i].innerText.indexOf('SD 2') >= 0) { - tabitems[i].children[0].children[1].children[3].children[0].click(); - } - } else if (tabitems_title[i].innerText.indexOf('Taiyi') >= 0) { - tabitems[i].children[0].children[0].children[1].style.display='none'; - tabitems[i].children[0].children[0].children[0].children[0].children[1].style.display='none'; - } else if (tabitems_title[i].innerText.indexOf('Dreamlike') >= 0) { - tabitems[i].childNodes[0].children[0].children[1].style.display='none'; - } else if (tabitems_title[i].innerText.indexOf('Dalle mini') >= 0) { - window['dalle_mini_block']= tabitems[i]; - } - } - - tab_demo = window['gradioEl'].querySelectorAll('#tab_demo')[0]; - tab_demo.style.display = "block"; - tab_demo.setAttribute('style', 'height: 100%;'); - const page1 = window['gradioEl'].querySelectorAll('#page_1')[0]; - const page2 = window['gradioEl'].querySelectorAll('#page_2')[0]; - - btns_1 = window['gradioEl'].querySelector('#input_col1_row3').children; - btns_1_split = 100 / btns_1.length; - for (var i = 0; i < btns_1.length; i++) { - btns_1[i].setAttribute('style', 'min-width:0px;width:' + btns_1_split + '%;'); - } - page1.style.display = "none"; - page2.style.display = "block"; - prompt_work = window['gradioEl'].querySelectorAll('#prompt_work'); - for (var i = 0; i < prompt_work.length; i++) { - prompt_work[i].style.display='none'; - } - - window['prevPrompt'] = ''; - window['doCheckPrompt'] = 0; - window['checkPrompt'] = function checkPrompt() { - try { - prompt_work = window['gradioEl'].querySelectorAll('#prompt_work'); - if (prompt_work.length > 0 && prompt_work[0].children.length > 1) { - prompt_work[0].children[1].style.display='none'; - prompt_work[0].style.display='block'; - } - text_value = window['gradioEl'].querySelectorAll('#prompt_work')[0].querySelectorAll('textarea')[0].value; - progress_bar = window['gradioEl'].querySelectorAll('.progress-bar'); - if (window['doCheckPrompt'] === 0 && window['prevPrompt'] !== text_value && progress_bar.length == 0) { - console.log('_____new prompt___[' + text_value + ']_'); - window['doCheckPrompt'] = 1; - window['prevPrompt'] = text_value; - tabitems = window['gradioEl'].querySelectorAll('.tabitem'); - for (var i = 0; i < tabitems.length; i++) { - if (tabitems_title[i].innerText.indexOf('Dalle mini') >= 0) { - if (window['dalle_mini_block']) { - if (window['dalle_mini_iframe'] === null) { - window['dalle_mini_iframe'] = document.createElement('iframe'); - window['dalle_mini_iframe'].height = 1000; - window['dalle_mini_iframe'].width = '100%'; - window['dalle_mini_iframe'].id = 'dalle_iframe'; - window['dalle_mini_block'].appendChild(window['dalle_mini_iframe']); - } - window['dalle_mini_iframe'].src = 'https://yizhangliu-dalleclone.hf.space/index.html?prompt=' + encodeURI(text_value); - console.log('dalle_mini'); - } - continue; - } - inputText = null; - if (tabitems_title[i].innerText.indexOf('SD') >= 0) { - text_value = window['gradioEl'].querySelectorAll('#prompt_work')[0].querySelectorAll('textarea')[0].value; - inputText = tabitems[i].children[0].children[1].children[0].querySelectorAll('.gr-text-input')[0]; - } else if (tabitems_title[i].innerText.indexOf('Taiyi') >= 0) { - text_value = window['gradioEl'].querySelectorAll('#prompt_work_zh')[0].querySelectorAll('textarea')[0].value; - inputText = tabitems[i].children[0].children[0].children[1].querySelectorAll('.gr-text-input')[0]; - } - if (inputText) { - setNativeValue(inputText, text_value); - inputText.dispatchEvent(new Event('input', { bubbles: true })); - } - } - - setTimeout(function() { - btns = window['gradioEl'].querySelectorAll('button'); - for (var i = 0; i < btns.length; i++) { - if (['Generate image','Run', '生成图像(Generate)'].includes(btns[i].innerText)) { - btns[i].click(); - } - } - window['doCheckPrompt'] = 0; - }, 10); - } - } catch(e) { - } - } - window['checkPrompt_interval'] = window.setInterval("window.checkPrompt()", 100); - } - - return false; -}""" - -switch_tab_advanced = """async() => { - window['tab_advanced'] = 1 - window['tab_advanced']; - if (window['tab_advanced']==0) { - action = 'none'; - } else { - action = 'block'; - } - tabitems = window['gradioEl'].querySelectorAll('.tabitem'); - tabitems_title = window['gradioEl'].querySelectorAll('#tab_demo')[0].children[0].children[0].children; - for (var i = 0; i < tabitems.length; i++) { - if (tabitems_title[i].innerText.indexOf('SD') >= 0) { - //tabitems[i].childNodes[0].children[1].children[0].style.display=action; - //tabitems[i].childNodes[0].children[1].children[4].style.display=action; - for (var j = 0; j < tabitems[i].childNodes[0].children[1].children.length; j++) { - if (j != 1) { - tabitems[i].childNodes[0].children[1].children[j].style.display=action; - } - } - } else if (tabitems_title[i].innerText.indexOf('Taiyi') >= 0) { - tabitems[i].children[0].children[0].children[1].style.display=action; - } - } - return false; -}""" - -def prompt_extend(prompt, PM): - prompt_en = getTextTrans(prompt, source='zh', target='en') - if PM == 1: - extend_prompt_en = extend_prompt_pipe(prompt_en+',', num_return_sequences=1)[0]["generated_text"] - elif PM == 2: - extend_prompt_en = extend_prompt_microsoft(prompt_en) - elif PM == 3: - extend_prompt_en = MagicPrompt(prompt_en) - - if (prompt != prompt_en): - logger.info(f"extend_prompt__1_PM=[{PM}]_") - extend_prompt_out = getTextTrans(extend_prompt_en, source='en', target='zh') - else: - logger.info(f"extend_prompt__2_PM=[{PM}]_") - extend_prompt_out = extend_prompt_en - - return extend_prompt_out - -def prompt_extend_1(prompt): - extend_prompt_out = prompt_extend(prompt, 1) - return extend_prompt_out - -def prompt_extend_2(prompt): - extend_prompt_out = prompt_extend(prompt, 2) - return extend_prompt_out - -def prompt_extend_3(prompt): - extend_prompt_out = prompt_extend(prompt, 3) - return extend_prompt_out - -def prompt_draw_1(prompt, noise_level): - prompt_en = getTextTrans(prompt, source='zh', target='en') - if (prompt != prompt_en): - logger.info(f"draw_prompt______1__") - prompt_zh = prompt - else: - logger.info(f"draw_prompt______2__") - prompt_zh = getTextTrans(prompt, source='en', target='zh') - - prompt_with_noise = add_random_noise(prompt_en, noise_level) - dreamlike_output = dreamlike_photoreal_2_0(prompt_with_noise) - return prompt_en, prompt_zh, dreamlike_output - -def prompt_draw_2(prompt): - prompt_en = getTextTrans(prompt, source='zh', target='en') - if (prompt != prompt_en): - logger.info(f"draw_prompt______1__") - prompt_zh = prompt - else: - logger.info(f"draw_prompt______2__") - prompt_zh = getTextTrans(prompt, source='en', target='zh') - return prompt_en, prompt_zh - -with gr.Blocks(title='VM8 Text2Image') as demo: - with gr.Group(elem_id="page_1", visible=True) as page_1: - with gr.Box(): - with gr.Row(): - start_button = gr.Button("Let's GO!", elem_id="start-btn", visible=True) - start_button.click(fn=None, inputs=[], outputs=[], _js=start_work) - - with gr.Group(elem_id="page_2", visible=False) as page_2: - with gr.Row(elem_id="prompt_row0"): - with gr.Column(id="input_col1"): - with gr.Row(elem_id="input_col1_row1"): - prompt_input0 = gr.Textbox(lines=2, label="Original prompt", visible=True) - with gr.Row(elem_id="input_col1_row2"): - prompt_work = gr.Textbox(lines=1, label="prompt_work", elem_id="prompt_work", visible=True) - with gr.Row(elem_id="input_col1_row3"): - with gr.Column(elem_id="input_col1_row2_col0"): - draw_btn_0 = gr.Button(value = "Generate(original)", elem_id="draw-btn-0") - if extend_prompt_1: - with gr.Column(elem_id="input_col1_row2_col1"): - extend_btn_1 = gr.Button(value = "Extend_1",elem_id="extend-btn-1") - if extend_prompt_2: - with gr.Column(elem_id="input_col1_row2_col2"): - extend_btn_2 = gr.Button(value = "Extend_2",elem_id="extend-btn-2") - if extend_prompt_3: - with gr.Column(elem_id="input_col1_row2_col3"): - extend_btn_3 = gr.Button(value = "Extend_3",elem_id="extend-btn-3") - with gr.Column(id="input_col2"): - prompt_input1 = gr.Textbox(lines=2, label="Extend prompt", visible=True) - draw_btn_1 = gr.Button(value = "Generate(extend)", elem_id="draw-btn-1") - with gr.Row(elem_id="prompt_row1"): - with gr.Column(id="input_col3"): - with gr.Row(elem_id="input_col3_row2"): - prompt_work_zh = gr.Textbox(lines=1, label="prompt_work_zh", elem_id="prompt_work_zh", visible=False) - with gr.Row(elem_id='tab_demo', visible=True).style(height=200): - tab_demo = gr.TabbedInterface(tab_actions, tab_titles) - if do_dreamlike_photoreal: - with gr.Row(): - noise_level=gr.Slider(minimum=0.1, maximum=3, step=0.1, label="Dreamlike noise Level: [Higher noise level produces more diverse outputs, while lower noise level produces similar outputs.]") - with gr.Row(): - switch_tab_advanced_btn = gr.Button(value = "Switch_tab_advanced", elem_id="switch_tab_advanced_btn") - switch_tab_advanced_btn.click(fn=None, inputs=[], outputs=[], _js=switch_tab_advanced) - - if extend_prompt_1: - extend_btn_1.click(fn=prompt_extend_1, inputs=[prompt_input0], outputs=[prompt_input1]) - if extend_prompt_2: - extend_btn_2.click(fn=prompt_extend_2, inputs=[prompt_input0], outputs=[prompt_input1]) - if extend_prompt_3: - extend_btn_3.click(fn=prompt_extend_3, inputs=[prompt_input0], outputs=[prompt_input1]) - - if do_dreamlike_photoreal: - draw_btn_0.click(fn=prompt_draw_1, inputs=[prompt_input0, noise_level], outputs=[prompt_work, prompt_work_zh, dreamlike_image]) - draw_btn_1.click(fn=prompt_draw_1, inputs=[prompt_input1, noise_level], outputs=[prompt_work, prompt_work_zh, dreamlike_image]) - else: - draw_btn_0.click(fn=prompt_draw_2, inputs=[prompt_input0], outputs=[prompt_work, prompt_work_zh]) - draw_btn_1.click(fn=prompt_draw_2, inputs=[prompt_input1], outputs=[prompt_work, prompt_work_zh]) - -demo.queue() -demo.launch() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/upload.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/upload.py deleted file mode 100644 index caf15f04a603a4d95a52fe0e004a57958054b332..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/upload.py +++ /dev/null @@ -1,206 +0,0 @@ -""" -distutils.command.upload - -Implements the Distutils 'upload' subcommand (upload package to a package -index). -""" - -import os -import io -import hashlib -import logging -from base64 import standard_b64encode -from urllib.request import urlopen, Request, HTTPError -from urllib.parse import urlparse -from ..errors import DistutilsError, DistutilsOptionError -from ..core import PyPIRCCommand -from ..spawn import spawn - - -# PyPI Warehouse supports MD5, SHA256, and Blake2 (blake2-256) -# https://bugs.python.org/issue40698 -_FILE_CONTENT_DIGESTS = { - "md5_digest": getattr(hashlib, "md5", None), - "sha256_digest": getattr(hashlib, "sha256", None), - "blake2_256_digest": getattr(hashlib, "blake2b", None), -} - - -class upload(PyPIRCCommand): - description = "upload binary package to PyPI" - - user_options = PyPIRCCommand.user_options + [ - ('sign', 's', 'sign files to upload using gpg'), - ('identity=', 'i', 'GPG identity used to sign files'), - ] - - boolean_options = PyPIRCCommand.boolean_options + ['sign'] - - def initialize_options(self): - PyPIRCCommand.initialize_options(self) - self.username = '' - self.password = '' - self.show_response = 0 - self.sign = False - self.identity = None - - def finalize_options(self): - PyPIRCCommand.finalize_options(self) - if self.identity and not self.sign: - raise DistutilsOptionError("Must use --sign for --identity to have meaning") - config = self._read_pypirc() - if config != {}: - self.username = config['username'] - self.password = config['password'] - self.repository = config['repository'] - self.realm = config['realm'] - - # getting the password from the distribution - # if previously set by the register command - if not self.password and self.distribution.password: - self.password = self.distribution.password - - def run(self): - if not self.distribution.dist_files: - msg = ( - "Must create and upload files in one command " - "(e.g. setup.py sdist upload)" - ) - raise DistutilsOptionError(msg) - for command, pyversion, filename in self.distribution.dist_files: - self.upload_file(command, pyversion, filename) - - def upload_file(self, command, pyversion, filename): # noqa: C901 - # Makes sure the repository URL is compliant - schema, netloc, url, params, query, fragments = urlparse(self.repository) - if params or query or fragments: - raise AssertionError("Incompatible url %s" % self.repository) - - if schema not in ('http', 'https'): - raise AssertionError("unsupported schema " + schema) - - # Sign if requested - if self.sign: - gpg_args = ["gpg", "--detach-sign", "-a", filename] - if self.identity: - gpg_args[2:2] = ["--local-user", self.identity] - spawn(gpg_args, dry_run=self.dry_run) - - # Fill in the data - send all the meta-data in case we need to - # register a new release - f = open(filename, 'rb') - try: - content = f.read() - finally: - f.close() - - meta = self.distribution.metadata - data = { - # action - ':action': 'file_upload', - 'protocol_version': '1', - # identify release - 'name': meta.get_name(), - 'version': meta.get_version(), - # file content - 'content': (os.path.basename(filename), content), - 'filetype': command, - 'pyversion': pyversion, - # additional meta-data - 'metadata_version': '1.0', - 'summary': meta.get_description(), - 'home_page': meta.get_url(), - 'author': meta.get_contact(), - 'author_email': meta.get_contact_email(), - 'license': meta.get_licence(), - 'description': meta.get_long_description(), - 'keywords': meta.get_keywords(), - 'platform': meta.get_platforms(), - 'classifiers': meta.get_classifiers(), - 'download_url': meta.get_download_url(), - # PEP 314 - 'provides': meta.get_provides(), - 'requires': meta.get_requires(), - 'obsoletes': meta.get_obsoletes(), - } - - data['comment'] = '' - - # file content digests - for digest_name, digest_cons in _FILE_CONTENT_DIGESTS.items(): - if digest_cons is None: - continue - try: - data[digest_name] = digest_cons(content).hexdigest() - except ValueError: - # hash digest not available or blocked by security policy - pass - - if self.sign: - with open(filename + ".asc", "rb") as f: - data['gpg_signature'] = (os.path.basename(filename) + ".asc", f.read()) - - # set up the authentication - user_pass = (self.username + ":" + self.password).encode('ascii') - # The exact encoding of the authentication string is debated. - # Anyway PyPI only accepts ascii for both username or password. - auth = "Basic " + standard_b64encode(user_pass).decode('ascii') - - # Build up the MIME payload for the POST data - boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254' - sep_boundary = b'\r\n--' + boundary.encode('ascii') - end_boundary = sep_boundary + b'--\r\n' - body = io.BytesIO() - for key, value in data.items(): - title = '\r\nContent-Disposition: form-data; name="%s"' % key - # handle multiple entries for the same name - if not isinstance(value, list): - value = [value] - for value in value: - if type(value) is tuple: - title += '; filename="%s"' % value[0] - value = value[1] - else: - value = str(value).encode('utf-8') - body.write(sep_boundary) - body.write(title.encode('utf-8')) - body.write(b"\r\n\r\n") - body.write(value) - body.write(end_boundary) - body = body.getvalue() - - msg = "Submitting {} to {}".format(filename, self.repository) - self.announce(msg, logging.INFO) - - # build the Request - headers = { - 'Content-type': 'multipart/form-data; boundary=%s' % boundary, - 'Content-length': str(len(body)), - 'Authorization': auth, - } - - request = Request(self.repository, data=body, headers=headers) - # send the data - try: - result = urlopen(request) - status = result.getcode() - reason = result.msg - except HTTPError as e: - status = e.code - reason = e.msg - except OSError as e: - self.announce(str(e), logging.ERROR) - raise - - if status == 200: - self.announce( - 'Server response ({}): {}'.format(status, reason), logging.INFO - ) - if self.show_response: - text = self._read_pypi_response(result) - msg = '\n'.join(('-' * 75, text, '-' * 75)) - self.announce(msg, logging.INFO) - else: - msg = 'Upload failed ({}): {}'.format(status, reason) - self.announce(msg, logging.ERROR) - raise DistutilsError(msg) diff --git a/spaces/playgrdstar/ancient-chinese-calligraphy/README.md b/spaces/playgrdstar/ancient-chinese-calligraphy/README.md deleted file mode 100644 index 3a9e837c04be55ff1ee9adad34c256e8a1981e4e..0000000000000000000000000000000000000000 --- a/spaces/playgrdstar/ancient-chinese-calligraphy/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ancient Chinese Calligraphy -emoji: 📉 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/File-d0b52941.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/File-d0b52941.js deleted file mode 100644 index 779da21023d513ac2eb809272c6503d88de900f8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/File-d0b52941.js +++ /dev/null @@ -1,2 +0,0 @@ -const{SvelteComponent:c,append:s,attr:t,detach:h,init:d,insert:_,noop:r,safe_not_equal:u,svg_element:i}=window.__gradio__svelte__internal;function w(a){let e,n,o;return{c(){e=i("svg"),n=i("path"),o=i("polyline"),t(n,"d","M13 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V9z"),t(o,"points","13 2 13 9 20 9"),t(e,"xmlns","http://www.w3.org/2000/svg"),t(e,"width","100%"),t(e,"height","100%"),t(e,"viewBox","0 0 24 24"),t(e,"fill","none"),t(e,"stroke","currentColor"),t(e,"stroke-width","1.5"),t(e,"stroke-linecap","round"),t(e,"stroke-linejoin","round"),t(e,"class","feather feather-file")},m(l,p){_(l,e,p),s(e,n),s(e,o)},p:r,i:r,o:r,d(l){l&&h(e)}}}class f extends c{constructor(e){super(),d(this,e,null,w,u,{})}}export{f as F}; -//# sourceMappingURL=File-d0b52941.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/frequencies/test_freq_code.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/frequencies/test_freq_code.py deleted file mode 100644 index e961fdc295c960b1695a7a5ef06f1915d227b6a1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/frequencies/test_freq_code.py +++ /dev/null @@ -1,97 +0,0 @@ -import numpy as np -import pytest - -from pandas._libs.tslibs import ( - Period, - Resolution, - to_offset, -) -from pandas._libs.tslibs.dtypes import _attrname_to_abbrevs - - -@pytest.mark.parametrize( - "freqstr,exp_freqstr", - [("D", "D"), ("W", "D"), ("M", "D"), ("S", "S"), ("T", "S"), ("H", "S")], -) -def test_get_to_timestamp_base(freqstr, exp_freqstr): - off = to_offset(freqstr) - per = Period._from_ordinal(1, off) - exp_code = to_offset(exp_freqstr)._period_dtype_code - - result_code = per._dtype._get_to_timestamp_base() - assert result_code == exp_code - - -@pytest.mark.parametrize( - "freqstr,expected", - [ - ("A", "year"), - ("Q", "quarter"), - ("M", "month"), - ("D", "day"), - ("H", "hour"), - ("T", "minute"), - ("S", "second"), - ("L", "millisecond"), - ("U", "microsecond"), - ("N", "nanosecond"), - ], -) -def test_get_attrname_from_abbrev(freqstr, expected): - assert Resolution.get_reso_from_freqstr(freqstr).attrname == expected - - -@pytest.mark.parametrize("freq", ["D", "H", "T", "S", "L", "U", "N"]) -def test_get_freq_roundtrip2(freq): - obj = Resolution.get_reso_from_freqstr(freq) - result = _attrname_to_abbrevs[obj.attrname] - assert freq == result - - -@pytest.mark.parametrize( - "args,expected", - [ - ((1.5, "T"), (90, "S")), - ((62.4, "T"), (3744, "S")), - ((1.04, "H"), (3744, "S")), - ((1, "D"), (1, "D")), - ((0.342931, "H"), (1234551600, "U")), - ((1.2345, "D"), (106660800, "L")), - ], -) -def test_resolution_bumping(args, expected): - # see gh-14378 - off = to_offset(str(args[0]) + args[1]) - assert off.n == expected[0] - assert off._prefix == expected[1] - - -@pytest.mark.parametrize( - "args", - [ - (0.5, "N"), - # Too much precision in the input can prevent. - (0.3429324798798269273987982, "H"), - ], -) -def test_cat(args): - msg = "Invalid frequency" - - with pytest.raises(ValueError, match=msg): - to_offset(str(args[0]) + args[1]) - - -@pytest.mark.parametrize( - "freqstr,expected", - [ - ("1H", "2021-01-01T09:00:00"), - ("1D", "2021-01-02T08:00:00"), - ("1W", "2021-01-03T08:00:00"), - ("1M", "2021-01-31T08:00:00"), - ("1Y", "2021-12-31T08:00:00"), - ], -) -def test_compatibility(freqstr, expected): - ts_np = np.datetime64("2021-01-01T08:00:00.00") - do = to_offset(freqstr) - assert ts_np + do == np.datetime64(expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/setuptools_build.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/setuptools_build.py deleted file mode 100644 index f460c4003f32fea2008eaf7ce590e1dd6a4e36e9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/setuptools_build.py +++ /dev/null @@ -1,195 +0,0 @@ -import sys -import textwrap -from typing import List, Optional, Sequence - -# Shim to wrap setup.py invocation with setuptools -# Note that __file__ is handled via two {!r} *and* %r, to ensure that paths on -# Windows are correctly handled (it should be "C:\\Users" not "C:\Users"). -_SETUPTOOLS_SHIM = textwrap.dedent( - """ - exec(compile(''' - # This is -- a caller that pip uses to run setup.py - # - # - It imports setuptools before invoking setup.py, to enable projects that directly - # import from `distutils.core` to work with newer packaging standards. - # - It provides a clear error message when setuptools is not installed. - # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so - # setuptools doesn't think the script is `-c`. This avoids the following warning: - # manifest_maker: standard file '-c' not found". - # - It generates a shim setup.py, for handling setup.cfg-only projects. - import os, sys, tokenize - - try: - import setuptools - except ImportError as error: - print( - "ERROR: Can not execute `setup.py` since setuptools is not available in " - "the build environment.", - file=sys.stderr, - ) - sys.exit(1) - - __file__ = %r - sys.argv[0] = __file__ - - if os.path.exists(__file__): - filename = __file__ - with tokenize.open(__file__) as f: - setup_py_code = f.read() - else: - filename = "" - setup_py_code = "from setuptools import setup; setup()" - - exec(compile(setup_py_code, filename, "exec")) - ''' % ({!r},), "", "exec")) - """ -).rstrip() - - -def make_setuptools_shim_args( - setup_py_path: str, - global_options: Sequence[str] = None, - no_user_config: bool = False, - unbuffered_output: bool = False, -) -> List[str]: - """ - Get setuptools command arguments with shim wrapped setup file invocation. - - :param setup_py_path: The path to setup.py to be wrapped. - :param global_options: Additional global options. - :param no_user_config: If True, disables personal user configuration. - :param unbuffered_output: If True, adds the unbuffered switch to the - argument list. - """ - args = [sys.executable] - if unbuffered_output: - args += ["-u"] - args += ["-c", _SETUPTOOLS_SHIM.format(setup_py_path)] - if global_options: - args += global_options - if no_user_config: - args += ["--no-user-cfg"] - return args - - -def make_setuptools_bdist_wheel_args( - setup_py_path: str, - global_options: Sequence[str], - build_options: Sequence[str], - destination_dir: str, -) -> List[str]: - # NOTE: Eventually, we'd want to also -S to the flags here, when we're - # isolating. Currently, it breaks Python in virtualenvs, because it - # relies on site.py to find parts of the standard library outside the - # virtualenv. - args = make_setuptools_shim_args( - setup_py_path, global_options=global_options, unbuffered_output=True - ) - args += ["bdist_wheel", "-d", destination_dir] - args += build_options - return args - - -def make_setuptools_clean_args( - setup_py_path: str, - global_options: Sequence[str], -) -> List[str]: - args = make_setuptools_shim_args( - setup_py_path, global_options=global_options, unbuffered_output=True - ) - args += ["clean", "--all"] - return args - - -def make_setuptools_develop_args( - setup_py_path: str, - global_options: Sequence[str], - install_options: Sequence[str], - no_user_config: bool, - prefix: Optional[str], - home: Optional[str], - use_user_site: bool, -) -> List[str]: - assert not (use_user_site and prefix) - - args = make_setuptools_shim_args( - setup_py_path, - global_options=global_options, - no_user_config=no_user_config, - ) - - args += ["develop", "--no-deps"] - - args += install_options - - if prefix: - args += ["--prefix", prefix] - if home is not None: - args += ["--install-dir", home] - - if use_user_site: - args += ["--user", "--prefix="] - - return args - - -def make_setuptools_egg_info_args( - setup_py_path: str, - egg_info_dir: Optional[str], - no_user_config: bool, -) -> List[str]: - args = make_setuptools_shim_args(setup_py_path, no_user_config=no_user_config) - - args += ["egg_info"] - - if egg_info_dir: - args += ["--egg-base", egg_info_dir] - - return args - - -def make_setuptools_install_args( - setup_py_path: str, - global_options: Sequence[str], - install_options: Sequence[str], - record_filename: str, - root: Optional[str], - prefix: Optional[str], - header_dir: Optional[str], - home: Optional[str], - use_user_site: bool, - no_user_config: bool, - pycompile: bool, -) -> List[str]: - assert not (use_user_site and prefix) - assert not (use_user_site and root) - - args = make_setuptools_shim_args( - setup_py_path, - global_options=global_options, - no_user_config=no_user_config, - unbuffered_output=True, - ) - args += ["install", "--record", record_filename] - args += ["--single-version-externally-managed"] - - if root is not None: - args += ["--root", root] - if prefix is not None: - args += ["--prefix", prefix] - if home is not None: - args += ["--home", home] - if use_user_site: - args += ["--user", "--prefix="] - - if pycompile: - args += ["--compile"] - else: - args += ["--no-compile"] - - if header_dir: - args += ["--install-headers", header_dir] - - args += install_options - - return args diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/formatters/terminal256.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/formatters/terminal256.py deleted file mode 100644 index b5eab1400563dadbd4f5f7deb7c12c1c8c23e066..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/formatters/terminal256.py +++ /dev/null @@ -1,338 +0,0 @@ -""" - pygments.formatters.terminal256 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for 256-color terminal output with ANSI sequences. - - RGB-to-XTERM color conversion routines adapted from xterm256-conv - tool (http://frexx.de/xterm-256-notes/data/xterm256-conv2.tar.bz2) - by Wolfgang Frisch. - - Formatter version 1. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -# TODO: -# - Options to map style's bold/underline/italic/border attributes -# to some ANSI attrbutes (something like 'italic=underline') -# - An option to output "style RGB to xterm RGB/index" conversion table -# - An option to indicate that we are running in "reverse background" -# xterm. This means that default colors are white-on-black, not -# black-on-while, so colors like "white background" need to be converted -# to "white background, black foreground", etc... - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.console import codes -from pip._vendor.pygments.style import ansicolors - - -__all__ = ['Terminal256Formatter', 'TerminalTrueColorFormatter'] - - -class EscapeSequence: - def __init__(self, fg=None, bg=None, bold=False, underline=False, italic=False): - self.fg = fg - self.bg = bg - self.bold = bold - self.underline = underline - self.italic = italic - - def escape(self, attrs): - if len(attrs): - return "\x1b[" + ";".join(attrs) + "m" - return "" - - def color_string(self): - attrs = [] - if self.fg is not None: - if self.fg in ansicolors: - esc = codes[self.fg.replace('ansi','')] - if ';01m' in esc: - self.bold = True - # extract fg color code. - attrs.append(esc[2:4]) - else: - attrs.extend(("38", "5", "%i" % self.fg)) - if self.bg is not None: - if self.bg in ansicolors: - esc = codes[self.bg.replace('ansi','')] - # extract fg color code, add 10 for bg. - attrs.append(str(int(esc[2:4])+10)) - else: - attrs.extend(("48", "5", "%i" % self.bg)) - if self.bold: - attrs.append("01") - if self.underline: - attrs.append("04") - if self.italic: - attrs.append("03") - return self.escape(attrs) - - def true_color_string(self): - attrs = [] - if self.fg: - attrs.extend(("38", "2", str(self.fg[0]), str(self.fg[1]), str(self.fg[2]))) - if self.bg: - attrs.extend(("48", "2", str(self.bg[0]), str(self.bg[1]), str(self.bg[2]))) - if self.bold: - attrs.append("01") - if self.underline: - attrs.append("04") - if self.italic: - attrs.append("03") - return self.escape(attrs) - - def reset_string(self): - attrs = [] - if self.fg is not None: - attrs.append("39") - if self.bg is not None: - attrs.append("49") - if self.bold or self.underline or self.italic: - attrs.append("00") - return self.escape(attrs) - - -class Terminal256Formatter(Formatter): - """ - Format tokens with ANSI color sequences, for output in a 256-color - terminal or console. Like in `TerminalFormatter` color sequences - are terminated at newlines, so that paging the output works correctly. - - The formatter takes colors from a style defined by the `style` option - and converts them to nearest ANSI 256-color escape sequences. Bold and - underline attributes from the style are preserved (and displayed). - - .. versionadded:: 0.9 - - .. versionchanged:: 2.2 - If the used style defines foreground colors in the form ``#ansi*``, then - `Terminal256Formatter` will map these to non extended foreground color. - See :ref:`AnsiTerminalStyle` for more information. - - .. versionchanged:: 2.4 - The ANSI color names have been updated with names that are easier to - understand and align with colornames of other projects and terminals. - See :ref:`this table ` for more information. - - - Options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). - - `linenos` - Set to ``True`` to have line numbers on the terminal output as well - (default: ``False`` = no line numbers). - """ - name = 'Terminal256' - aliases = ['terminal256', 'console256', '256'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - - self.xterm_colors = [] - self.best_match = {} - self.style_string = {} - - self.usebold = 'nobold' not in options - self.useunderline = 'nounderline' not in options - self.useitalic = 'noitalic' not in options - - self._build_color_table() # build an RGB-to-256 color conversion table - self._setup_styles() # convert selected style's colors to term. colors - - self.linenos = options.get('linenos', False) - self._lineno = 0 - - def _build_color_table(self): - # colors 0..15: 16 basic colors - - self.xterm_colors.append((0x00, 0x00, 0x00)) # 0 - self.xterm_colors.append((0xcd, 0x00, 0x00)) # 1 - self.xterm_colors.append((0x00, 0xcd, 0x00)) # 2 - self.xterm_colors.append((0xcd, 0xcd, 0x00)) # 3 - self.xterm_colors.append((0x00, 0x00, 0xee)) # 4 - self.xterm_colors.append((0xcd, 0x00, 0xcd)) # 5 - self.xterm_colors.append((0x00, 0xcd, 0xcd)) # 6 - self.xterm_colors.append((0xe5, 0xe5, 0xe5)) # 7 - self.xterm_colors.append((0x7f, 0x7f, 0x7f)) # 8 - self.xterm_colors.append((0xff, 0x00, 0x00)) # 9 - self.xterm_colors.append((0x00, 0xff, 0x00)) # 10 - self.xterm_colors.append((0xff, 0xff, 0x00)) # 11 - self.xterm_colors.append((0x5c, 0x5c, 0xff)) # 12 - self.xterm_colors.append((0xff, 0x00, 0xff)) # 13 - self.xterm_colors.append((0x00, 0xff, 0xff)) # 14 - self.xterm_colors.append((0xff, 0xff, 0xff)) # 15 - - # colors 16..232: the 6x6x6 color cube - - valuerange = (0x00, 0x5f, 0x87, 0xaf, 0xd7, 0xff) - - for i in range(217): - r = valuerange[(i // 36) % 6] - g = valuerange[(i // 6) % 6] - b = valuerange[i % 6] - self.xterm_colors.append((r, g, b)) - - # colors 233..253: grayscale - - for i in range(1, 22): - v = 8 + i * 10 - self.xterm_colors.append((v, v, v)) - - def _closest_color(self, r, g, b): - distance = 257*257*3 # "infinity" (>distance from #000000 to #ffffff) - match = 0 - - for i in range(0, 254): - values = self.xterm_colors[i] - - rd = r - values[0] - gd = g - values[1] - bd = b - values[2] - d = rd*rd + gd*gd + bd*bd - - if d < distance: - match = i - distance = d - return match - - def _color_index(self, color): - index = self.best_match.get(color, None) - if color in ansicolors: - # strip the `ansi/#ansi` part and look up code - index = color - self.best_match[color] = index - if index is None: - try: - rgb = int(str(color), 16) - except ValueError: - rgb = 0 - - r = (rgb >> 16) & 0xff - g = (rgb >> 8) & 0xff - b = rgb & 0xff - index = self._closest_color(r, g, b) - self.best_match[color] = index - return index - - def _setup_styles(self): - for ttype, ndef in self.style: - escape = EscapeSequence() - # get foreground from ansicolor if set - if ndef['ansicolor']: - escape.fg = self._color_index(ndef['ansicolor']) - elif ndef['color']: - escape.fg = self._color_index(ndef['color']) - if ndef['bgansicolor']: - escape.bg = self._color_index(ndef['bgansicolor']) - elif ndef['bgcolor']: - escape.bg = self._color_index(ndef['bgcolor']) - if self.usebold and ndef['bold']: - escape.bold = True - if self.useunderline and ndef['underline']: - escape.underline = True - if self.useitalic and ndef['italic']: - escape.italic = True - self.style_string[str(ttype)] = (escape.color_string(), - escape.reset_string()) - - def _write_lineno(self, outfile): - self._lineno += 1 - outfile.write("%s%04d: " % (self._lineno != 1 and '\n' or '', self._lineno)) - - def format(self, tokensource, outfile): - return Formatter.format(self, tokensource, outfile) - - def format_unencoded(self, tokensource, outfile): - if self.linenos: - self._write_lineno(outfile) - - for ttype, value in tokensource: - not_found = True - while ttype and not_found: - try: - # outfile.write( "<" + str(ttype) + ">" ) - on, off = self.style_string[str(ttype)] - - # Like TerminalFormatter, add "reset colors" escape sequence - # on newline. - spl = value.split('\n') - for line in spl[:-1]: - if line: - outfile.write(on + line + off) - if self.linenos: - self._write_lineno(outfile) - else: - outfile.write('\n') - - if spl[-1]: - outfile.write(on + spl[-1] + off) - - not_found = False - # outfile.write( '#' + str(ttype) + '#' ) - - except KeyError: - # ottype = ttype - ttype = ttype.parent - # outfile.write( '!' + str(ottype) + '->' + str(ttype) + '!' ) - - if not_found: - outfile.write(value) - - if self.linenos: - outfile.write("\n") - - - -class TerminalTrueColorFormatter(Terminal256Formatter): - r""" - Format tokens with ANSI color sequences, for output in a true-color - terminal or console. Like in `TerminalFormatter` color sequences - are terminated at newlines, so that paging the output works correctly. - - .. versionadded:: 2.1 - - Options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). - """ - name = 'TerminalTrueColor' - aliases = ['terminal16m', 'console16m', '16m'] - filenames = [] - - def _build_color_table(self): - pass - - def _color_tuple(self, color): - try: - rgb = int(str(color), 16) - except ValueError: - return None - r = (rgb >> 16) & 0xff - g = (rgb >> 8) & 0xff - b = rgb & 0xff - return (r, g, b) - - def _setup_styles(self): - for ttype, ndef in self.style: - escape = EscapeSequence() - if ndef['color']: - escape.fg = self._color_tuple(ndef['color']) - if ndef['bgcolor']: - escape.bg = self._color_tuple(ndef['bgcolor']) - if self.usebold and ndef['bold']: - escape.bold = True - if self.useunderline and ndef['underline']: - escape.underline = True - if self.useitalic and ndef['italic']: - escape.italic = True - self.style_string[str(ttype)] = (escape.true_color_string(), - escape.reset_string()) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/containers.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/containers.py deleted file mode 100644 index e29cf368991ccb083b67cda8133e4635defbfe53..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/containers.py +++ /dev/null @@ -1,167 +0,0 @@ -from itertools import zip_longest -from typing import ( - Iterator, - Iterable, - List, - Optional, - Union, - overload, - TypeVar, - TYPE_CHECKING, -) - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - JustifyMethod, - OverflowMethod, - RenderResult, - RenderableType, - ) - from .text import Text - -from .cells import cell_len -from .measure import Measurement - -T = TypeVar("T") - - -class Renderables: - """A list subclass which renders its contents to the console.""" - - def __init__( - self, renderables: Optional[Iterable["RenderableType"]] = None - ) -> None: - self._renderables: List["RenderableType"] = ( - list(renderables) if renderables is not None else [] - ) - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - """Console render method to insert line-breaks.""" - yield from self._renderables - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - dimensions = [ - Measurement.get(console, options, renderable) - for renderable in self._renderables - ] - if not dimensions: - return Measurement(1, 1) - _min = max(dimension.minimum for dimension in dimensions) - _max = max(dimension.maximum for dimension in dimensions) - return Measurement(_min, _max) - - def append(self, renderable: "RenderableType") -> None: - self._renderables.append(renderable) - - def __iter__(self) -> Iterable["RenderableType"]: - return iter(self._renderables) - - -class Lines: - """A list subclass which can render to the console.""" - - def __init__(self, lines: Iterable["Text"] = ()) -> None: - self._lines: List["Text"] = list(lines) - - def __repr__(self) -> str: - return f"Lines({self._lines!r})" - - def __iter__(self) -> Iterator["Text"]: - return iter(self._lines) - - @overload - def __getitem__(self, index: int) -> "Text": - ... - - @overload - def __getitem__(self, index: slice) -> List["Text"]: - ... - - def __getitem__(self, index: Union[slice, int]) -> Union["Text", List["Text"]]: - return self._lines[index] - - def __setitem__(self, index: int, value: "Text") -> "Lines": - self._lines[index] = value - return self - - def __len__(self) -> int: - return self._lines.__len__() - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - """Console render method to insert line-breaks.""" - yield from self._lines - - def append(self, line: "Text") -> None: - self._lines.append(line) - - def extend(self, lines: Iterable["Text"]) -> None: - self._lines.extend(lines) - - def pop(self, index: int = -1) -> "Text": - return self._lines.pop(index) - - def justify( - self, - console: "Console", - width: int, - justify: "JustifyMethod" = "left", - overflow: "OverflowMethod" = "fold", - ) -> None: - """Justify and overflow text to a given width. - - Args: - console (Console): Console instance. - width (int): Number of characters per line. - justify (str, optional): Default justify method for text: "left", "center", "full" or "right". Defaults to "left". - overflow (str, optional): Default overflow for text: "crop", "fold", or "ellipsis". Defaults to "fold". - - """ - from .text import Text - - if justify == "left": - for line in self._lines: - line.truncate(width, overflow=overflow, pad=True) - elif justify == "center": - for line in self._lines: - line.rstrip() - line.truncate(width, overflow=overflow) - line.pad_left((width - cell_len(line.plain)) // 2) - line.pad_right(width - cell_len(line.plain)) - elif justify == "right": - for line in self._lines: - line.rstrip() - line.truncate(width, overflow=overflow) - line.pad_left(width - cell_len(line.plain)) - elif justify == "full": - for line_index, line in enumerate(self._lines): - if line_index == len(self._lines) - 1: - break - words = line.split(" ") - words_size = sum(cell_len(word.plain) for word in words) - num_spaces = len(words) - 1 - spaces = [1 for _ in range(num_spaces)] - index = 0 - if spaces: - while words_size + num_spaces < width: - spaces[len(spaces) - index - 1] += 1 - num_spaces += 1 - index = (index + 1) % len(spaces) - tokens: List[Text] = [] - for index, (word, next_word) in enumerate( - zip_longest(words, words[1:]) - ): - tokens.append(word) - if index < len(spaces): - style = word.get_style_at_offset(console, -1) - next_style = next_word.get_style_at_offset(console, 0) - space_style = style if style == next_style else line.style - tokens.append(Text(" " * spaces[index], style=space_style)) - self[line_index] = Text("").join(tokens) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/dumper.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/dumper.py deleted file mode 100644 index 6aadba551f3836b02f4752277f4b3027073defad..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/dumper.py +++ /dev/null @@ -1,62 +0,0 @@ - -__all__ = ['BaseDumper', 'SafeDumper', 'Dumper'] - -from .emitter import * -from .serializer import * -from .representer import * -from .resolver import * - -class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver): - - def __init__(self, stream, - default_style=None, default_flow_style=False, - canonical=None, indent=None, width=None, - allow_unicode=None, line_break=None, - encoding=None, explicit_start=None, explicit_end=None, - version=None, tags=None, sort_keys=True): - Emitter.__init__(self, stream, canonical=canonical, - indent=indent, width=width, - allow_unicode=allow_unicode, line_break=line_break) - Serializer.__init__(self, encoding=encoding, - explicit_start=explicit_start, explicit_end=explicit_end, - version=version, tags=tags) - Representer.__init__(self, default_style=default_style, - default_flow_style=default_flow_style, sort_keys=sort_keys) - Resolver.__init__(self) - -class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver): - - def __init__(self, stream, - default_style=None, default_flow_style=False, - canonical=None, indent=None, width=None, - allow_unicode=None, line_break=None, - encoding=None, explicit_start=None, explicit_end=None, - version=None, tags=None, sort_keys=True): - Emitter.__init__(self, stream, canonical=canonical, - indent=indent, width=width, - allow_unicode=allow_unicode, line_break=line_break) - Serializer.__init__(self, encoding=encoding, - explicit_start=explicit_start, explicit_end=explicit_end, - version=version, tags=tags) - SafeRepresenter.__init__(self, default_style=default_style, - default_flow_style=default_flow_style, sort_keys=sort_keys) - Resolver.__init__(self) - -class Dumper(Emitter, Serializer, Representer, Resolver): - - def __init__(self, stream, - default_style=None, default_flow_style=False, - canonical=None, indent=None, width=None, - allow_unicode=None, line_break=None, - encoding=None, explicit_start=None, explicit_end=None, - version=None, tags=None, sort_keys=True): - Emitter.__init__(self, stream, canonical=canonical, - indent=indent, width=width, - allow_unicode=allow_unicode, line_break=line_break) - Serializer.__init__(self, encoding=encoding, - explicit_start=explicit_start, explicit_end=explicit_end, - version=version, tags=tags) - Representer.__init__(self, default_style=default_style, - default_flow_style=default_flow_style, sort_keys=sort_keys) - Resolver.__init__(self) - diff --git a/spaces/qiemanqieman/Salesforce-blip-image-captioning-base/app.py b/spaces/qiemanqieman/Salesforce-blip-image-captioning-base/app.py deleted file mode 100644 index 73f3a256fe9e3bc1899b0d5b5e38da4d58b6648b..0000000000000000000000000000000000000000 --- a/spaces/qiemanqieman/Salesforce-blip-image-captioning-base/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Salesforce/blip-image-captioning-base").launch() \ No newline at end of file diff --git a/spaces/qingxu98/gpt-academic/themes/common.js b/spaces/qingxu98/gpt-academic/themes/common.js deleted file mode 100644 index 4e7a75e28196521e8cca8b0c5abd28028bdc5eae..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/themes/common.js +++ /dev/null @@ -1,139 +0,0 @@ -function gradioApp() { - // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript - const elems = document.getElementsByTagName('gradio-app'); - const elem = elems.length == 0 ? document : elems[0]; - if (elem !== document) { - elem.getElementById = function(id) { - return document.getElementById(id); - }; - } - return elem.shadowRoot ? elem.shadowRoot : elem; -} - - - - -function addCopyButton(botElement) { - // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript - // Copy bot button - const copiedIcon = ''; - const copyIcon = ''; - - const messageBtnColumnElement = botElement.querySelector('.message-btn-row'); - if (messageBtnColumnElement) { - // Do something if .message-btn-column exists, for example, remove it - // messageBtnColumnElement.remove(); - return; - } - - var copyButton = document.createElement('button'); - copyButton.classList.add('copy-bot-btn'); - copyButton.setAttribute('aria-label', 'Copy'); - copyButton.innerHTML = copyIcon; - copyButton.addEventListener('click', async () => { - const textToCopy = botElement.innerText; - try { - if ("clipboard" in navigator) { - await navigator.clipboard.writeText(textToCopy); - copyButton.innerHTML = copiedIcon; - setTimeout(() => { - copyButton.innerHTML = copyIcon; - }, 1500); - } else { - const textArea = document.createElement("textarea"); - textArea.value = textToCopy; - document.body.appendChild(textArea); - textArea.select(); - try { - document.execCommand('copy'); - copyButton.innerHTML = copiedIcon; - setTimeout(() => { - copyButton.innerHTML = copyIcon; - }, 1500); - } catch (error) { - console.error("Copy failed: ", error); - } - document.body.removeChild(textArea); - } - } catch (error) { - console.error("Copy failed: ", error); - } - }); - var messageBtnColumn = document.createElement('div'); - messageBtnColumn.classList.add('message-btn-row'); - messageBtnColumn.appendChild(copyButton); - botElement.appendChild(messageBtnColumn); -} - -function chatbotContentChanged(attempt = 1, force = false) { - // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript - for (var i = 0; i < attempt; i++) { - setTimeout(() => { - gradioApp().querySelectorAll('#gpt-chatbot .message-wrap .message.bot').forEach(addCopyButton); - }, i === 0 ? 0 : 200); - } -} - -function chatbotAutoHeight(){ - // 自动调整高度 - function update_height(){ - var { panel_height_target, chatbot_height, chatbot } = get_elements(true); - if (panel_height_target!=chatbot_height) - { - var pixelString = panel_height_target.toString() + 'px'; - chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString; - } - } - - function update_height_slow(){ - var { panel_height_target, chatbot_height, chatbot } = get_elements(); - if (panel_height_target!=chatbot_height) - { - new_panel_height = (panel_height_target - chatbot_height)*0.5 + chatbot_height; - if (Math.abs(new_panel_height - panel_height_target) < 10){ - new_panel_height = panel_height_target; - } - // console.log(chatbot_height, panel_height_target, new_panel_height); - var pixelString = new_panel_height.toString() + 'px'; - chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString; - } - } - - update_height(); - setInterval(function() { - update_height_slow() - }, 50); // 每100毫秒执行一次 -} - -function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") { - chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap'); - var chatbotObserver = new MutationObserver(() => { - chatbotContentChanged(1); - }); - chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true }); - if (LAYOUT === "LEFT-RIGHT") {chatbotAutoHeight();} -} - -function get_elements(consider_state_panel=false) { - var chatbot = document.querySelector('#gpt-chatbot > div.wrap.svelte-18telvq'); - if (!chatbot) { - chatbot = document.querySelector('#gpt-chatbot'); - } - const panel1 = document.querySelector('#input-panel').getBoundingClientRect(); - const panel2 = document.querySelector('#basic-panel').getBoundingClientRect() - const panel3 = document.querySelector('#plugin-panel').getBoundingClientRect(); - // const panel4 = document.querySelector('#interact-panel').getBoundingClientRect(); - const panel5 = document.querySelector('#input-panel2').getBoundingClientRect(); - const panel_active = document.querySelector('#state-panel').getBoundingClientRect(); - if (consider_state_panel || panel_active.height < 25){ - document.state_panel_height = panel_active.height; - } - // 25 是chatbot的label高度, 16 是右侧的gap - var panel_height_target = panel1.height + panel2.height + panel3.height + 0 + 0 - 25 + 16*2; - // 禁止动态的state-panel高度影响 - panel_height_target = panel_height_target + (document.state_panel_height-panel_active.height) - var panel_height_target = parseInt(panel_height_target); - var chatbot_height = chatbot.style.height; - var chatbot_height = parseInt(chatbot_height); - return { panel_height_target, chatbot_height, chatbot }; -} \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Android Data Recovery Torrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Android Data Recovery Torrent.md deleted file mode 100644 index ab936d04ba221a8d8e92ad9fcc38d95e63063fcc..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Android Data Recovery Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Android Data Recovery Torrent


    Download Zip ►►►►► https://geags.com/2uCr9O



    -
    - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Electronica Basica Bernard Grob En Espavol.md b/spaces/quidiaMuxgu/Expedit-SAM/Electronica Basica Bernard Grob En Espavol.md deleted file mode 100644 index 93ec822d6f4e55be2fc5573f3278f4cafeca6907..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Electronica Basica Bernard Grob En Espavol.md +++ /dev/null @@ -1,6 +0,0 @@ -

    electronica basica bernard grob en espavol


    Download Filehttps://geags.com/2uCqpm



    -
    -México, 2002. Fuentes de información GROB, Bernard. Electrónica básica. McGraw-Hill. México, 1990. MALVINO, Albert y David Bates. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Hide My Ip 5.4 Full !!BETTER!! Version.md b/spaces/quidiaMuxgu/Expedit-SAM/Hide My Ip 5.4 Full !!BETTER!! Version.md deleted file mode 100644 index 9c2380d77437446a984efd4fba3c6e86e1969129..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Hide My Ip 5.4 Full !!BETTER!! Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

    hide my ip 5.4 full version


    Download Filehttps://geags.com/2uCqkb



    -
    -Hide my com . 7. I. 10. ... 7. An old com . 4. Torecommend , Rom . 16.1 . COMMENDED . 19. 23 . P.5.4 . Mourn for ... Job 5.8 . ro G. would I com . my IP 1. 2. 4. 1fdad05405
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mikroc Pro For 8051 V2.2 Crack NEW!.zip.md b/spaces/quidiaMuxgu/Expedit-SAM/Mikroc Pro For 8051 V2.2 Crack NEW!.zip.md deleted file mode 100644 index 1579541053ca164ceeade7daadd87daa48394146..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Mikroc Pro For 8051 V2.2 Crack NEW!.zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

    mikroc pro for 8051 v2.2 crack.zip


    Download Ziphttps://geags.com/2uCrAE



    -
    -Their 2 6 823951. Archive for First ... Pic Torrent Pro download 3. ... It 20 a V2. ... Serial PRO Rar ANSI mikroc New PRO 2 for Download package needed website. ... 8051 5 and 23 Download: full-featured Serial Download the. 1fdad05405
    -
    -
    -

    diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_537227KB.py deleted file mode 100644 index aedb64dfca1d0ab15581d74f633f117ecbc53543..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_537227KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/r3gm/RVC_HF/infer/modules/ipex/hijacks.py b/spaces/r3gm/RVC_HF/infer/modules/ipex/hijacks.py deleted file mode 100644 index b06f3a9c1a70ef515c30d0e7d749923ecb8d0bfe..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/infer/modules/ipex/hijacks.py +++ /dev/null @@ -1,196 +0,0 @@ -import contextlib -import importlib -import torch -import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - -# pylint: disable=protected-access, missing-function-docstring, line-too-long, unnecessary-lambda, no-else-return - -class CondFunc: # pylint: disable=missing-class-docstring - def __new__(cls, orig_func, sub_func, cond_func): - self = super(CondFunc, cls).__new__(cls) - if isinstance(orig_func, str): - func_path = orig_func.split('.') - for i in range(len(func_path)-1, -1, -1): - try: - resolved_obj = importlib.import_module('.'.join(func_path[:i])) - break - except ImportError: - pass - for attr_name in func_path[i:-1]: - resolved_obj = getattr(resolved_obj, attr_name) - orig_func = getattr(resolved_obj, func_path[-1]) - setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) - self.__init__(orig_func, sub_func, cond_func) - return lambda *args, **kwargs: self(*args, **kwargs) - def __init__(self, orig_func, sub_func, cond_func): - self.__orig_func = orig_func - self.__sub_func = sub_func - self.__cond_func = cond_func - def __call__(self, *args, **kwargs): - if not self.__cond_func or self.__cond_func(self.__orig_func, *args, **kwargs): - return self.__sub_func(self.__orig_func, *args, **kwargs) - else: - return self.__orig_func(*args, **kwargs) - -_utils = torch.utils.data._utils -def _shutdown_workers(self): - if torch.utils.data._utils is None or torch.utils.data._utils.python_exit_status is True or torch.utils.data._utils.python_exit_status is None: - return - if hasattr(self, "_shutdown") and not self._shutdown: - self._shutdown = True - try: - if hasattr(self, '_pin_memory_thread'): - self._pin_memory_thread_done_event.set() - self._worker_result_queue.put((None, None)) - self._pin_memory_thread.join() - self._worker_result_queue.cancel_join_thread() - self._worker_result_queue.close() - self._workers_done_event.set() - for worker_id in range(len(self._workers)): - if self._persistent_workers or self._workers_status[worker_id]: - self._mark_worker_as_unavailable(worker_id, shutdown=True) - for w in self._workers: # pylint: disable=invalid-name - w.join(timeout=torch.utils.data._utils.MP_STATUS_CHECK_INTERVAL) - for q in self._index_queues: # pylint: disable=invalid-name - q.cancel_join_thread() - q.close() - finally: - if self._worker_pids_set: - torch.utils.data._utils.signal_handling._remove_worker_pids(id(self)) - self._worker_pids_set = False - for w in self._workers: # pylint: disable=invalid-name - if w.is_alive(): - w.terminate() - -class DummyDataParallel(torch.nn.Module): # pylint: disable=missing-class-docstring, unused-argument, too-few-public-methods - def __new__(cls, module, device_ids=None, output_device=None, dim=0): # pylint: disable=unused-argument - if isinstance(device_ids, list) and len(device_ids) > 1: - print("IPEX backend doesn't support DataParallel on multiple XPU devices") - return module.to("xpu") - -def return_null_context(*args, **kwargs): # pylint: disable=unused-argument - return contextlib.nullcontext() - -def check_device(device): - return bool((isinstance(device, torch.device) and device.type == "cuda") or (isinstance(device, str) and "cuda" in device) or isinstance(device, int)) - -def return_xpu(device): - return f"xpu:{device[-1]}" if isinstance(device, str) and ":" in device else f"xpu:{device}" if isinstance(device, int) else torch.device("xpu") if isinstance(device, torch.device) else "xpu" - -def ipex_no_cuda(orig_func, *args, **kwargs): - torch.cuda.is_available = lambda: False - orig_func(*args, **kwargs) - torch.cuda.is_available = torch.xpu.is_available - -original_autocast = torch.autocast -def ipex_autocast(*args, **kwargs): - if len(args) > 0 and args[0] == "cuda": - return original_autocast("xpu", *args[1:], **kwargs) - else: - return original_autocast(*args, **kwargs) - -original_torch_cat = torch.cat -def torch_cat(tensor, *args, **kwargs): - if len(tensor) == 3 and (tensor[0].dtype != tensor[1].dtype or tensor[2].dtype != tensor[1].dtype): - return original_torch_cat([tensor[0].to(tensor[1].dtype), tensor[1], tensor[2].to(tensor[1].dtype)], *args, **kwargs) - else: - return original_torch_cat(tensor, *args, **kwargs) - -original_interpolate = torch.nn.functional.interpolate -def interpolate(tensor, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None, antialias=False): # pylint: disable=too-many-arguments - if antialias or align_corners is not None: - return_device = tensor.device - return_dtype = tensor.dtype - return original_interpolate(tensor.to("cpu", dtype=torch.float32), size=size, scale_factor=scale_factor, mode=mode, - align_corners=align_corners, recompute_scale_factor=recompute_scale_factor, antialias=antialias).to(return_device, dtype=return_dtype) - else: - return original_interpolate(tensor, size=size, scale_factor=scale_factor, mode=mode, - align_corners=align_corners, recompute_scale_factor=recompute_scale_factor, antialias=antialias) - -original_linalg_solve = torch.linalg.solve -def linalg_solve(A, B, *args, **kwargs): # pylint: disable=invalid-name - if A.device != torch.device("cpu") or B.device != torch.device("cpu"): - return_device = A.device - return original_linalg_solve(A.to("cpu"), B.to("cpu"), *args, **kwargs).to(return_device) - else: - return original_linalg_solve(A, B, *args, **kwargs) - -def ipex_hijacks(): - CondFunc('torch.Tensor.to', - lambda orig_func, self, device=None, *args, **kwargs: orig_func(self, return_xpu(device), *args, **kwargs), - lambda orig_func, self, device=None, *args, **kwargs: check_device(device)) - CondFunc('torch.Tensor.cuda', - lambda orig_func, self, device=None, *args, **kwargs: orig_func(self, return_xpu(device), *args, **kwargs), - lambda orig_func, self, device=None, *args, **kwargs: check_device(device)) - CondFunc('torch.empty', - lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs), - lambda orig_func, *args, device=None, **kwargs: check_device(device)) - CondFunc('torch.load', - lambda orig_func, *args, map_location=None, **kwargs: orig_func(*args, return_xpu(map_location), **kwargs), - lambda orig_func, *args, map_location=None, **kwargs: map_location is None or check_device(map_location)) - CondFunc('torch.randn', - lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs), - lambda orig_func, *args, device=None, **kwargs: check_device(device)) - CondFunc('torch.ones', - lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs), - lambda orig_func, *args, device=None, **kwargs: check_device(device)) - CondFunc('torch.zeros', - lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs), - lambda orig_func, *args, device=None, **kwargs: check_device(device)) - CondFunc('torch.tensor', - lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs), - lambda orig_func, *args, device=None, **kwargs: check_device(device)) - CondFunc('torch.linspace', - lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs), - lambda orig_func, *args, device=None, **kwargs: check_device(device)) - - CondFunc('torch.Generator', - lambda orig_func, device=None: torch.xpu.Generator(device), - lambda orig_func, device=None: device is not None and device != torch.device("cpu") and device != "cpu") - - CondFunc('torch.batch_norm', - lambda orig_func, input, weight, bias, *args, **kwargs: orig_func(input, - weight if weight is not None else torch.ones(input.size()[1], device=input.device), - bias if bias is not None else torch.zeros(input.size()[1], device=input.device), *args, **kwargs), - lambda orig_func, input, *args, **kwargs: input.device != torch.device("cpu")) - CondFunc('torch.instance_norm', - lambda orig_func, input, weight, bias, *args, **kwargs: orig_func(input, - weight if weight is not None else torch.ones(input.size()[1], device=input.device), - bias if bias is not None else torch.zeros(input.size()[1], device=input.device), *args, **kwargs), - lambda orig_func, input, *args, **kwargs: input.device != torch.device("cpu")) - - #Functions with dtype errors: - CondFunc('torch.nn.modules.GroupNorm.forward', - lambda orig_func, self, input: orig_func(self, input.to(self.weight.data.dtype)), - lambda orig_func, self, input: input.dtype != self.weight.data.dtype) - CondFunc('torch.nn.modules.linear.Linear.forward', - lambda orig_func, self, input: orig_func(self, input.to(self.weight.data.dtype)), - lambda orig_func, self, input: input.dtype != self.weight.data.dtype) - CondFunc('torch.nn.modules.conv.Conv2d.forward', - lambda orig_func, self, input: orig_func(self, input.to(self.weight.data.dtype)), - lambda orig_func, self, input: input.dtype != self.weight.data.dtype) - CondFunc('torch.nn.functional.layer_norm', - lambda orig_func, input, normalized_shape=None, weight=None, *args, **kwargs: - orig_func(input.to(weight.data.dtype), normalized_shape, weight, *args, **kwargs), - lambda orig_func, input, normalized_shape=None, weight=None, *args, **kwargs: - weight is not None and input.dtype != weight.data.dtype) - - #Diffusers Float64 (ARC GPUs doesn't support double or Float64): - if not torch.xpu.has_fp64_dtype(): - CondFunc('torch.from_numpy', - lambda orig_func, ndarray: orig_func(ndarray.astype('float32')), - lambda orig_func, ndarray: ndarray.dtype == float) - - #Broken functions when torch.cuda.is_available is True: - CondFunc('torch.utils.data.dataloader._BaseDataLoaderIter.__init__', - lambda orig_func, *args, **kwargs: ipex_no_cuda(orig_func, *args, **kwargs), - lambda orig_func, *args, **kwargs: True) - - #Functions that make compile mad with CondFunc: - torch.utils.data.dataloader._MultiProcessingDataLoaderIter._shutdown_workers = _shutdown_workers - torch.nn.DataParallel = DummyDataParallel - torch.autocast = ipex_autocast - torch.cat = torch_cat - torch.linalg.solve = linalg_solve - torch.nn.functional.interpolate = interpolate - torch.backends.cuda.sdp_kernel = return_null_context \ No newline at end of file diff --git a/spaces/r3gm/RVC_HF/lib/uvr5_pack/lib_v5/spec_utils.py b/spaces/r3gm/RVC_HF/lib/uvr5_pack/lib_v5/spec_utils.py deleted file mode 100644 index a3fd46d333da7becc7f09f42c084ac7cde661035..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/lib/uvr5_pack/lib_v5/spec_utils.py +++ /dev/null @@ -1,667 +0,0 @@ -import os, librosa -import numpy as np -import soundfile as sf -from tqdm import tqdm -import json, math, hashlib - - -def crop_center(h1, h2): - h1_shape = h1.size() - h2_shape = h2.size() - - if h1_shape[3] == h2_shape[3]: - return h1 - elif h1_shape[3] < h2_shape[3]: - raise ValueError("h1_shape[3] must be greater than h2_shape[3]") - - # s_freq = (h2_shape[2] - h1_shape[2]) // 2 - # e_freq = s_freq + h1_shape[2] - s_time = (h1_shape[3] - h2_shape[3]) // 2 - e_time = s_time + h2_shape[3] - h1 = h1[:, :, :, s_time:e_time] - - return h1 - - -def wave_to_spectrogram( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - spec_left = librosa.stft(wave_left, n_fft, hop_length=hop_length) - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def wave_to_spectrogram_mt( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - import threading - - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - def run_thread(**kwargs): - global spec_left - spec_left = librosa.stft(**kwargs) - - thread = threading.Thread( - target=run_thread, - kwargs={"y": wave_left, "n_fft": n_fft, "hop_length": hop_length}, - ) - thread.start() - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - thread.join() - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def combine_spectrograms(specs, mp): - l = min([specs[i].shape[2] for i in specs]) - spec_c = np.zeros(shape=(2, mp.param["bins"] + 1, l), dtype=np.complex64) - offset = 0 - bands_n = len(mp.param["band"]) - - for d in range(1, bands_n + 1): - h = mp.param["band"][d]["crop_stop"] - mp.param["band"][d]["crop_start"] - spec_c[:, offset : offset + h, :l] = specs[d][ - :, mp.param["band"][d]["crop_start"] : mp.param["band"][d]["crop_stop"], :l - ] - offset += h - - if offset > mp.param["bins"]: - raise ValueError("Too much bins") - - # lowpass fiter - if ( - mp.param["pre_filter_start"] > 0 - ): # and mp.param['band'][bands_n]['res_type'] in ['scipy', 'polyphase']: - if bands_n == 1: - spec_c = fft_lp_filter( - spec_c, mp.param["pre_filter_start"], mp.param["pre_filter_stop"] - ) - else: - gp = 1 - for b in range( - mp.param["pre_filter_start"] + 1, mp.param["pre_filter_stop"] - ): - g = math.pow( - 10, -(b - mp.param["pre_filter_start"]) * (3.5 - gp) / 20.0 - ) - gp = g - spec_c[:, b, :] *= g - - return np.asfortranarray(spec_c) - - -def spectrogram_to_image(spec, mode="magnitude"): - if mode == "magnitude": - if np.iscomplexobj(spec): - y = np.abs(spec) - else: - y = spec - y = np.log10(y**2 + 1e-8) - elif mode == "phase": - if np.iscomplexobj(spec): - y = np.angle(spec) - else: - y = spec - - y -= y.min() - y *= 255 / y.max() - img = np.uint8(y) - - if y.ndim == 3: - img = img.transpose(1, 2, 0) - img = np.concatenate([np.max(img, axis=2, keepdims=True), img], axis=2) - - return img - - -def reduce_vocal_aggressively(X, y, softmask): - v = X - y - y_mag_tmp = np.abs(y) - v_mag_tmp = np.abs(v) - - v_mask = v_mag_tmp > y_mag_tmp - y_mag = np.clip(y_mag_tmp - v_mag_tmp * v_mask * softmask, 0, np.inf) - - return y_mag * np.exp(1.0j * np.angle(y)) - - -def mask_silence(mag, ref, thres=0.2, min_range=64, fade_size=32): - if min_range < fade_size * 2: - raise ValueError("min_range must be >= fade_area * 2") - - mag = mag.copy() - - idx = np.where(ref.mean(axis=(0, 1)) < thres)[0] - starts = np.insert(idx[np.where(np.diff(idx) != 1)[0] + 1], 0, idx[0]) - ends = np.append(idx[np.where(np.diff(idx) != 1)[0]], idx[-1]) - uninformative = np.where(ends - starts > min_range)[0] - if len(uninformative) > 0: - starts = starts[uninformative] - ends = ends[uninformative] - old_e = None - for s, e in zip(starts, ends): - if old_e is not None and s - old_e < fade_size: - s = old_e - fade_size * 2 - - if s != 0: - weight = np.linspace(0, 1, fade_size) - mag[:, :, s : s + fade_size] += weight * ref[:, :, s : s + fade_size] - else: - s -= fade_size - - if e != mag.shape[2]: - weight = np.linspace(1, 0, fade_size) - mag[:, :, e - fade_size : e] += weight * ref[:, :, e - fade_size : e] - else: - e += fade_size - - mag[:, :, s + fade_size : e - fade_size] += ref[ - :, :, s + fade_size : e - fade_size - ] - old_e = e - - return mag - - -def align_wave_head_and_tail(a, b): - l = min([a[0].size, b[0].size]) - - return a[:l, :l], b[:l, :l] - - -def cache_or_load(mix_path, inst_path, mp): - mix_basename = os.path.splitext(os.path.basename(mix_path))[0] - inst_basename = os.path.splitext(os.path.basename(inst_path))[0] - - cache_dir = "mph{}".format( - hashlib.sha1(json.dumps(mp.param, sort_keys=True).encode("utf-8")).hexdigest() - ) - mix_cache_dir = os.path.join("cache", cache_dir) - inst_cache_dir = os.path.join("cache", cache_dir) - - os.makedirs(mix_cache_dir, exist_ok=True) - os.makedirs(inst_cache_dir, exist_ok=True) - - mix_cache_path = os.path.join(mix_cache_dir, mix_basename + ".npy") - inst_cache_path = os.path.join(inst_cache_dir, inst_basename + ".npy") - - if os.path.exists(mix_cache_path) and os.path.exists(inst_cache_path): - X_spec_m = np.load(mix_cache_path) - y_spec_m = np.load(inst_cache_path) - else: - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - X_wave[d], _ = librosa.load( - mix_path, bp["sr"], False, dtype=np.float32, res_type=bp["res_type"] - ) - y_wave[d], _ = librosa.load( - inst_path, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - else: # lower bands - X_wave[d] = librosa.resample( - X_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - y_wave[d] = librosa.resample( - y_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - X_wave[d], y_wave[d] = align_wave_head_and_tail(X_wave[d], y_wave[d]) - - X_spec_s[d] = wave_to_spectrogram( - X_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - y_spec_s[d] = wave_to_spectrogram( - y_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - del X_wave, y_wave - - X_spec_m = combine_spectrograms(X_spec_s, mp) - y_spec_m = combine_spectrograms(y_spec_s, mp) - - if X_spec_m.shape != y_spec_m.shape: - raise ValueError("The combined spectrograms are different: " + mix_path) - - _, ext = os.path.splitext(mix_path) - - np.save(mix_cache_path, X_spec_m) - np.save(inst_cache_path, y_spec_m) - - return X_spec_m, y_spec_m - - -def spectrogram_to_wave(spec, hop_length, mid_side, mid_side_b2, reverse): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hop_length) - wave_right = librosa.istft(spec_right, hop_length=hop_length) - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def spectrogram_to_wave_mt(spec, hop_length, mid_side, reverse, mid_side_b2): - import threading - - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - def run_thread(**kwargs): - global wave_left - wave_left = librosa.istft(**kwargs) - - thread = threading.Thread( - target=run_thread, kwargs={"stft_matrix": spec_left, "hop_length": hop_length} - ) - thread.start() - wave_right = librosa.istft(spec_right, hop_length=hop_length) - thread.join() - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def cmb_spectrogram_to_wave(spec_m, mp, extra_bins_h=None, extra_bins=None): - wave_band = {} - bands_n = len(mp.param["band"]) - offset = 0 - - for d in range(1, bands_n + 1): - bp = mp.param["band"][d] - spec_s = np.ndarray( - shape=(2, bp["n_fft"] // 2 + 1, spec_m.shape[2]), dtype=complex - ) - h = bp["crop_stop"] - bp["crop_start"] - spec_s[:, bp["crop_start"] : bp["crop_stop"], :] = spec_m[ - :, offset : offset + h, : - ] - - offset += h - if d == bands_n: # higher - if extra_bins_h: # if --high_end_process bypass - max_bin = bp["n_fft"] // 2 - spec_s[:, max_bin - extra_bins_h : max_bin, :] = extra_bins[ - :, :extra_bins_h, : - ] - if bp["hpf_start"] > 0: - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - if bands_n == 1: - wave = spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - else: - wave = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - else: - sr = mp.param["band"][d + 1]["sr"] - if d == 1: # lower - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave = librosa.resample( - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - bp["sr"], - sr, - res_type="sinc_fastest", - ) - else: # mid - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave2 = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - # wave = librosa.core.resample(wave2, bp['sr'], sr, res_type="sinc_fastest") - wave = librosa.core.resample(wave2, bp["sr"], sr, res_type="scipy") - - return wave.T - - -def fft_lp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop): - g -= 1 / (bin_stop - bin_start) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, bin_stop:, :] *= 0 - - return spec - - -def fft_hp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop, -1): - g -= 1 / (bin_start - bin_stop) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, 0 : bin_stop + 1, :] *= 0 - - return spec - - -def mirroring(a, spec_m, input_high_end, mp): - if "mirroring" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mirror = mirror * np.exp(1.0j * np.angle(input_high_end)) - - return np.where( - np.abs(input_high_end) <= np.abs(mirror), input_high_end, mirror - ) - - if "mirroring2" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mi = np.multiply(mirror, input_high_end * 1.7) - - return np.where(np.abs(input_high_end) <= np.abs(mi), input_high_end, mi) - - -def ensembling(a, specs): - for i in range(1, len(specs)): - if i == 1: - spec = specs[0] - - ln = min([spec.shape[2], specs[i].shape[2]]) - spec = spec[:, :, :ln] - specs[i] = specs[i][:, :, :ln] - - if "min_mag" == a: - spec = np.where(np.abs(specs[i]) <= np.abs(spec), specs[i], spec) - if "max_mag" == a: - spec = np.where(np.abs(specs[i]) >= np.abs(spec), specs[i], spec) - - return spec - - -def stft(wave, nfft, hl): - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - spec_left = librosa.stft(wave_left, nfft, hop_length=hl) - spec_right = librosa.stft(wave_right, nfft, hop_length=hl) - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def istft(spec, hl): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hl) - wave_right = librosa.istft(spec_right, hop_length=hl) - wave = np.asfortranarray([wave_left, wave_right]) - - -if __name__ == "__main__": - import cv2 - import sys - import time - import argparse - from model_param_init import ModelParameters - - p = argparse.ArgumentParser() - p.add_argument( - "--algorithm", - "-a", - type=str, - choices=["invert", "invert_p", "min_mag", "max_mag", "deep", "align"], - default="min_mag", - ) - p.add_argument( - "--model_params", - "-m", - type=str, - default=os.path.join("modelparams", "1band_sr44100_hl512.json"), - ) - p.add_argument("--output_name", "-o", type=str, default="output") - p.add_argument("--vocals_only", "-v", action="store_true") - p.add_argument("input", nargs="+") - args = p.parse_args() - - start_time = time.time() - - if args.algorithm.startswith("invert") and len(args.input) != 2: - raise ValueError("There should be two input files.") - - if not args.algorithm.startswith("invert") and len(args.input) < 2: - raise ValueError("There must be at least two input files.") - - wave, specs = {}, {} - mp = ModelParameters(args.model_params) - - for i in range(len(args.input)): - spec = {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - wave[d], _ = librosa.load( - args.input[i], - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - - if len(wave[d].shape) == 1: # mono to stereo - wave[d] = np.array([wave[d], wave[d]]) - else: # lower bands - wave[d] = librosa.resample( - wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - spec[d] = wave_to_spectrogram( - wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - specs[i] = combine_spectrograms(spec, mp) - - del wave - - if args.algorithm == "deep": - d_spec = np.where(np.abs(specs[0]) <= np.abs(spec[1]), specs[0], spec[1]) - v_spec = d_spec - specs[1] - sf.write( - os.path.join("{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - - if args.algorithm.startswith("invert"): - ln = min([specs[0].shape[2], specs[1].shape[2]]) - specs[0] = specs[0][:, :, :ln] - specs[1] = specs[1][:, :, :ln] - - if "invert_p" == args.algorithm: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - max_mag = np.where(X_mag >= y_mag, X_mag, y_mag) - v_spec = specs[1] - max_mag * np.exp(1.0j * np.angle(specs[0])) - else: - specs[1] = reduce_vocal_aggressively(specs[0], specs[1], 0.2) - v_spec = specs[0] - specs[1] - - if not args.vocals_only: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - v_mag = np.abs(v_spec) - - X_image = spectrogram_to_image(X_mag) - y_image = spectrogram_to_image(y_mag) - v_image = spectrogram_to_image(v_mag) - - cv2.imwrite("{}_X.png".format(args.output_name), X_image) - cv2.imwrite("{}_y.png".format(args.output_name), y_image) - cv2.imwrite("{}_v.png".format(args.output_name), v_image) - - sf.write( - "{}_X.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[0], mp), - mp.param["sr"], - ) - sf.write( - "{}_y.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[1], mp), - mp.param["sr"], - ) - - sf.write( - "{}_v.wav".format(args.output_name), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - else: - if not args.algorithm == "deep": - sf.write( - os.path.join("ensembled", "{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(ensembling(args.algorithm, specs), mp), - mp.param["sr"], - ) - - if args.algorithm == "align": - trackalignment = [ - { - "file1": '"{}"'.format(args.input[0]), - "file2": '"{}"'.format(args.input[1]), - } - ] - - for i, e in tqdm(enumerate(trackalignment), desc="Performing Alignment..."): - os.system(f"python lib/align_tracks.py {e['file1']} {e['file2']}") - - # print('Total time: {0:.{1}f}s'.format(time.time() - start_time, 1)) diff --git a/spaces/rachana219/MODT2/models/common.py b/spaces/rachana219/MODT2/models/common.py deleted file mode 100644 index edb5edc9fe1b0ad3b345a2103603393e74e5b65c..0000000000000000000000000000000000000000 --- a/spaces/rachana219/MODT2/models/common.py +++ /dev/null @@ -1,2019 +0,0 @@ -import math -from copy import copy -from pathlib import Path - -import numpy as np -import pandas as pd -import requests -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision.ops import DeformConv2d -from PIL import Image -from torch.cuda import amp - -from utils.datasets import letterbox -from utils.general import non_max_suppression, make_divisible, scale_coords, increment_path, xyxy2xywh -from utils.plots import color_list, plot_one_box -from utils.torch_utils import time_synchronized - - -##### basic #### - -def autopad(k, p=None): # kernel, padding - # Pad to 'same' - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -class MP(nn.Module): - def __init__(self, k=2): - super(MP, self).__init__() - self.m = nn.MaxPool2d(kernel_size=k, stride=k) - - def forward(self, x): - return self.m(x) - - -class SP(nn.Module): - def __init__(self, k=3, s=1): - super(SP, self).__init__() - self.m = nn.MaxPool2d(kernel_size=k, stride=s, padding=k // 2) - - def forward(self, x): - return self.m(x) - - -class ReOrg(nn.Module): - def __init__(self): - super(ReOrg, self).__init__() - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1) - - -class Concat(nn.Module): - def __init__(self, dimension=1): - super(Concat, self).__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class Chuncat(nn.Module): - def __init__(self, dimension=1): - super(Chuncat, self).__init__() - self.d = dimension - - def forward(self, x): - x1 = [] - x2 = [] - for xi in x: - xi1, xi2 = xi.chunk(2, self.d) - x1.append(xi1) - x2.append(xi2) - return torch.cat(x1+x2, self.d) - - -class Shortcut(nn.Module): - def __init__(self, dimension=0): - super(Shortcut, self).__init__() - self.d = dimension - - def forward(self, x): - return x[0]+x[1] - - -class Foldcut(nn.Module): - def __init__(self, dimension=0): - super(Foldcut, self).__init__() - self.d = dimension - - def forward(self, x): - x1, x2 = x.chunk(2, self.d) - return x1+x2 - - -class Conv(nn.Module): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Conv, self).__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def fuseforward(self, x): - return self.act(self.conv(x)) - - -class RobustConv(nn.Module): - # Robust convolution (use high kernel size 7-11 for: downsampling and other layers). Train for 300 - 450 epochs. - def __init__(self, c1, c2, k=7, s=1, p=None, g=1, act=True, layer_scale_init_value=1e-6): # ch_in, ch_out, kernel, stride, padding, groups - super(RobustConv, self).__init__() - self.conv_dw = Conv(c1, c1, k=k, s=s, p=p, g=c1, act=act) - self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0, groups=1, bias=True) - self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(c2)) if layer_scale_init_value > 0 else None - - def forward(self, x): - x = x.to(memory_format=torch.channels_last) - x = self.conv1x1(self.conv_dw(x)) - if self.gamma is not None: - x = x.mul(self.gamma.reshape(1, -1, 1, 1)) - return x - - -class RobustConv2(nn.Module): - # Robust convolution 2 (use [32, 5, 2] or [32, 7, 4] or [32, 11, 8] for one of the paths in CSP). - def __init__(self, c1, c2, k=7, s=4, p=None, g=1, act=True, layer_scale_init_value=1e-6): # ch_in, ch_out, kernel, stride, padding, groups - super(RobustConv2, self).__init__() - self.conv_strided = Conv(c1, c1, k=k, s=s, p=p, g=c1, act=act) - self.conv_deconv = nn.ConvTranspose2d(in_channels=c1, out_channels=c2, kernel_size=s, stride=s, - padding=0, bias=True, dilation=1, groups=1 - ) - self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(c2)) if layer_scale_init_value > 0 else None - - def forward(self, x): - x = self.conv_deconv(self.conv_strided(x)) - if self.gamma is not None: - x = x.mul(self.gamma.reshape(1, -1, 1, 1)) - return x - - -def DWConv(c1, c2, k=1, s=1, act=True): - # Depthwise convolution - return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act) - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - super(GhostConv, self).__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat([y, self.cv2(y)], 1) - - -class Stem(nn.Module): - # Stem - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Stem, self).__init__() - c_ = int(c2/2) # hidden channels - self.cv1 = Conv(c1, c_, 3, 2) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(c_, c_, 3, 2) - self.pool = torch.nn.MaxPool2d(2, stride=2) - self.cv4 = Conv(2 * c_, c2, 1, 1) - - def forward(self, x): - x = self.cv1(x) - return self.cv4(torch.cat((self.cv3(self.cv2(x)), self.pool(x)), dim=1)) - - -class DownC(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, n=1, k=2): - super(DownC, self).__init__() - c_ = int(c1) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2//2, 3, k) - self.cv3 = Conv(c1, c2//2, 1, 1) - self.mp = nn.MaxPool2d(kernel_size=k, stride=k) - - def forward(self, x): - return torch.cat((self.cv2(self.cv1(x)), self.cv3(self.mp(x))), dim=1) - - -class SPP(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, k=(5, 9, 13)): - super(SPP, self).__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class Bottleneck(nn.Module): - # Darknet bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super(Bottleneck, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class Res(nn.Module): - # ResNet bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super(Res, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 3, 1, g=g) - self.cv3 = Conv(c_, c2, 1, 1) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv3(self.cv2(self.cv1(x))) if self.add else self.cv3(self.cv2(self.cv1(x))) - - -class ResX(Res): - # ResNet bottleneck - def __init__(self, c1, c2, shortcut=True, g=32, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - - -class Ghost(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super(Ghost, self).__init__() - c_ = c2 // 2 - self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), - Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - -##### end of basic ##### - - -##### cspnet ##### - -class SPPCSPC(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)): - super(SPPCSPC, self).__init__() - c_ = int(2 * c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 3, 1) - self.cv4 = Conv(c_, c_, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - self.cv5 = Conv(4 * c_, c_, 1, 1) - self.cv6 = Conv(c_, c_, 3, 1) - self.cv7 = Conv(2 * c_, c2, 1, 1) - - def forward(self, x): - x1 = self.cv4(self.cv3(self.cv1(x))) - y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1))) - y2 = self.cv2(x) - return self.cv7(torch.cat((y1, y2), dim=1)) - -class GhostSPPCSPC(SPPCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)): - super().__init__(c1, c2, n, shortcut, g, e, k) - c_ = int(2 * c2 * e) # hidden channels - self.cv1 = GhostConv(c1, c_, 1, 1) - self.cv2 = GhostConv(c1, c_, 1, 1) - self.cv3 = GhostConv(c_, c_, 3, 1) - self.cv4 = GhostConv(c_, c_, 1, 1) - self.cv5 = GhostConv(4 * c_, c_, 1, 1) - self.cv6 = GhostConv(c_, c_, 3, 1) - self.cv7 = GhostConv(2 * c_, c2, 1, 1) - - -class GhostStem(Stem): - # Stem - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__(c1, c2, k, s, p, g, act) - c_ = int(c2/2) # hidden channels - self.cv1 = GhostConv(c1, c_, 3, 2) - self.cv2 = GhostConv(c_, c_, 1, 1) - self.cv3 = GhostConv(c_, c_, 3, 2) - self.cv4 = GhostConv(2 * c_, c2, 1, 1) - - -class BottleneckCSPA(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class BottleneckCSPB(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class BottleneckCSPC(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - - -class ResCSPA(BottleneckCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResCSPB(BottleneckCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResCSPC(BottleneckCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResXCSPA(ResCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class ResXCSPB(ResCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class ResXCSPC(ResCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class GhostCSPA(BottleneckCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - - -class GhostCSPB(BottleneckCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - - -class GhostCSPC(BottleneckCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - -##### end of cspnet ##### - - -##### yolor ##### - -class ImplicitA(nn.Module): - def __init__(self, channel, mean=0., std=.02): - super(ImplicitA, self).__init__() - self.channel = channel - self.mean = mean - self.std = std - self.implicit = nn.Parameter(torch.zeros(1, channel, 1, 1)) - nn.init.normal_(self.implicit, mean=self.mean, std=self.std) - - def forward(self, x): - return self.implicit + x - - -class ImplicitM(nn.Module): - def __init__(self, channel, mean=1., std=.02): - super(ImplicitM, self).__init__() - self.channel = channel - self.mean = mean - self.std = std - self.implicit = nn.Parameter(torch.ones(1, channel, 1, 1)) - nn.init.normal_(self.implicit, mean=self.mean, std=self.std) - - def forward(self, x): - return self.implicit * x - -##### end of yolor ##### - - -##### repvgg ##### - -class RepConv(nn.Module): - # Represented convolution - # https://arxiv.org/abs/2101.03697 - - def __init__(self, c1, c2, k=3, s=1, p=None, g=1, act=True, deploy=False): - super(RepConv, self).__init__() - - self.deploy = deploy - self.groups = g - self.in_channels = c1 - self.out_channels = c2 - - assert k == 3 - assert autopad(k, p) == 1 - - padding_11 = autopad(k, p) - k // 2 - - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - if deploy: - self.rbr_reparam = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=True) - - else: - self.rbr_identity = (nn.BatchNorm2d(num_features=c1) if c2 == c1 and s == 1 else None) - - self.rbr_dense = nn.Sequential( - nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False), - nn.BatchNorm2d(num_features=c2), - ) - - self.rbr_1x1 = nn.Sequential( - nn.Conv2d( c1, c2, 1, s, padding_11, groups=g, bias=False), - nn.BatchNorm2d(num_features=c2), - ) - - def forward(self, inputs): - if hasattr(self, "rbr_reparam"): - return self.act(self.rbr_reparam(inputs)) - - if self.rbr_identity is None: - id_out = 0 - else: - id_out = self.rbr_identity(inputs) - - return self.act(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out) - - def get_equivalent_kernel_bias(self): - kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense) - kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1) - kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity) - return ( - kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, - bias3x3 + bias1x1 + biasid, - ) - - def _pad_1x1_to_3x3_tensor(self, kernel1x1): - if kernel1x1 is None: - return 0 - else: - return nn.functional.pad(kernel1x1, [1, 1, 1, 1]) - - def _fuse_bn_tensor(self, branch): - if branch is None: - return 0, 0 - if isinstance(branch, nn.Sequential): - kernel = branch[0].weight - running_mean = branch[1].running_mean - running_var = branch[1].running_var - gamma = branch[1].weight - beta = branch[1].bias - eps = branch[1].eps - else: - assert isinstance(branch, nn.BatchNorm2d) - if not hasattr(self, "id_tensor"): - input_dim = self.in_channels // self.groups - kernel_value = np.zeros( - (self.in_channels, input_dim, 3, 3), dtype=np.float32 - ) - for i in range(self.in_channels): - kernel_value[i, i % input_dim, 1, 1] = 1 - self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device) - kernel = self.id_tensor - running_mean = branch.running_mean - running_var = branch.running_var - gamma = branch.weight - beta = branch.bias - eps = branch.eps - std = (running_var + eps).sqrt() - t = (gamma / std).reshape(-1, 1, 1, 1) - return kernel * t, beta - running_mean * gamma / std - - def repvgg_convert(self): - kernel, bias = self.get_equivalent_kernel_bias() - return ( - kernel.detach().cpu().numpy(), - bias.detach().cpu().numpy(), - ) - - def fuse_conv_bn(self, conv, bn): - - std = (bn.running_var + bn.eps).sqrt() - bias = bn.bias - bn.running_mean * bn.weight / std - - t = (bn.weight / std).reshape(-1, 1, 1, 1) - weights = conv.weight * t - - bn = nn.Identity() - conv = nn.Conv2d(in_channels = conv.in_channels, - out_channels = conv.out_channels, - kernel_size = conv.kernel_size, - stride=conv.stride, - padding = conv.padding, - dilation = conv.dilation, - groups = conv.groups, - bias = True, - padding_mode = conv.padding_mode) - - conv.weight = torch.nn.Parameter(weights) - conv.bias = torch.nn.Parameter(bias) - return conv - - def fuse_repvgg_block(self): - if self.deploy: - return - print(f"RepConv.fuse_repvgg_block") - - self.rbr_dense = self.fuse_conv_bn(self.rbr_dense[0], self.rbr_dense[1]) - - self.rbr_1x1 = self.fuse_conv_bn(self.rbr_1x1[0], self.rbr_1x1[1]) - rbr_1x1_bias = self.rbr_1x1.bias - weight_1x1_expanded = torch.nn.functional.pad(self.rbr_1x1.weight, [1, 1, 1, 1]) - - # Fuse self.rbr_identity - if (isinstance(self.rbr_identity, nn.BatchNorm2d) or isinstance(self.rbr_identity, nn.modules.batchnorm.SyncBatchNorm)): - # print(f"fuse: rbr_identity == BatchNorm2d or SyncBatchNorm") - identity_conv_1x1 = nn.Conv2d( - in_channels=self.in_channels, - out_channels=self.out_channels, - kernel_size=1, - stride=1, - padding=0, - groups=self.groups, - bias=False) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.to(self.rbr_1x1.weight.data.device) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.squeeze().squeeze() - # print(f" identity_conv_1x1.weight = {identity_conv_1x1.weight.shape}") - identity_conv_1x1.weight.data.fill_(0.0) - identity_conv_1x1.weight.data.fill_diagonal_(1.0) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.unsqueeze(2).unsqueeze(3) - # print(f" identity_conv_1x1.weight = {identity_conv_1x1.weight.shape}") - - identity_conv_1x1 = self.fuse_conv_bn(identity_conv_1x1, self.rbr_identity) - bias_identity_expanded = identity_conv_1x1.bias - weight_identity_expanded = torch.nn.functional.pad(identity_conv_1x1.weight, [1, 1, 1, 1]) - else: - # print(f"fuse: rbr_identity != BatchNorm2d, rbr_identity = {self.rbr_identity}") - bias_identity_expanded = torch.nn.Parameter( torch.zeros_like(rbr_1x1_bias) ) - weight_identity_expanded = torch.nn.Parameter( torch.zeros_like(weight_1x1_expanded) ) - - - #print(f"self.rbr_1x1.weight = {self.rbr_1x1.weight.shape}, ") - #print(f"weight_1x1_expanded = {weight_1x1_expanded.shape}, ") - #print(f"self.rbr_dense.weight = {self.rbr_dense.weight.shape}, ") - - self.rbr_dense.weight = torch.nn.Parameter(self.rbr_dense.weight + weight_1x1_expanded + weight_identity_expanded) - self.rbr_dense.bias = torch.nn.Parameter(self.rbr_dense.bias + rbr_1x1_bias + bias_identity_expanded) - - self.rbr_reparam = self.rbr_dense - self.deploy = True - - if self.rbr_identity is not None: - del self.rbr_identity - self.rbr_identity = None - - if self.rbr_1x1 is not None: - del self.rbr_1x1 - self.rbr_1x1 = None - - if self.rbr_dense is not None: - del self.rbr_dense - self.rbr_dense = None - - -class RepBottleneck(Bottleneck): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut=True, g=1, e=0.5) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c2, 3, 1, g=g) - - -class RepBottleneckCSPA(BottleneckCSPA): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepBottleneckCSPB(BottleneckCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepBottleneckCSPC(BottleneckCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepRes(Res): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c_, 3, 1, g=g) - - -class RepResCSPA(ResCSPA): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResCSPB(ResCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResCSPC(ResCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResX(ResX): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=32, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c_, 3, 1, g=g) - - -class RepResXCSPA(ResXCSPA): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResXCSPB(ResXCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResXCSPC(ResXCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - -##### end of repvgg ##### - - -##### transformer ##### - -class TransformerLayer(nn.Module): - # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance) - def __init__(self, c, num_heads): - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - # Vision Transformer https://arxiv.org/abs/2010.11929 - def __init__(self, c1, c2, num_heads, num_layers): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential(*[TransformerLayer(c2, num_heads) for _ in range(num_layers)]) - self.c2 = c2 - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2) - p = p.unsqueeze(0) - p = p.transpose(0, 3) - p = p.squeeze(3) - e = self.linear(p) - x = p + e - - x = self.tr(x) - x = x.unsqueeze(3) - x = x.transpose(0, 3) - x = x.reshape(b, self.c2, w, h) - return x - -##### end of transformer ##### - - -##### yolov5 ##### - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Focus, self).__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) - # return self.conv(self.contract(x)) - - -class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher - def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * 4, c2, 1, 1) - self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) - - def forward(self, x): - x = self.cv1(x) - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1)) - - -class Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - N, C, H, W = x.size() # assert (H / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(N, C, H // s, s, W // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(N, C * s * s, H // s, W // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - N, C, H, W = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(N, s, s, C // s ** 2, H, W) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(N, C // s ** 2, H * s, W * s) # x(1,16,160,160) - - -class NMS(nn.Module): - # Non-Maximum Suppression (NMS) module - conf = 0.25 # confidence threshold - iou = 0.45 # IoU threshold - classes = None # (optional list) filter by class - - def __init__(self): - super(NMS, self).__init__() - - def forward(self, x): - return non_max_suppression(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) - - -class autoShape(nn.Module): - # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - classes = None # (optional list) filter by class - - def __init__(self, model): - super(autoShape, self).__init__() - self.model = model.eval() - - def autoshape(self): - print('autoShape already enabled, skipping... ') # model already converted to model.autoshape() - return self - - @torch.no_grad() - def forward(self, imgs, size=640, augment=False, profile=False): - # Inference from various sources. For height=640, width=1280, RGB images example inputs are: - # filename: imgs = 'data/samples/zidane.jpg' - # URI: = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) - # PIL: = Image.open('image.jpg') # HWC x(640,1280,3) - # numpy: = np.zeros((640,1280,3)) # HWC - # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images - - t = [time_synchronized()] - p = next(self.model.parameters()) # for device and type - if isinstance(imgs, torch.Tensor): # torch - with amp.autocast(enabled=p.device.type != 'cpu'): - return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference - - # Pre-process - n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images - shape0, shape1, files = [], [], [] # image and inference shapes, filenames - for i, im in enumerate(imgs): - f = f'image{i}' # filename - if isinstance(im, str): # filename or uri - im, f = np.asarray(Image.open(requests.get(im, stream=True).raw if im.startswith('http') else im)), im - elif isinstance(im, Image.Image): # PIL Image - im, f = np.asarray(im), getattr(im, 'filename', f) or f - files.append(Path(f).with_suffix('.jpg').name) - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[:, :, :3] if im.ndim == 3 else np.tile(im[:, :, None], 3) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = (size / max(s)) # gain - shape1.append([y * g for y in s]) - imgs[i] = im # update - shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape - x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad - x = np.stack(x, 0) if n > 1 else x[0][None] # stack - x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255. # uint8 to fp16/32 - t.append(time_synchronized()) - - with amp.autocast(enabled=p.device.type != 'cpu'): - # Inference - y = self.model(x, augment, profile)[0] # forward - t.append(time_synchronized()) - - # Post-process - y = non_max_suppression(y, conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) # NMS - for i in range(n): - scale_coords(shape1, y[i][:, :4], shape0[i]) - - t.append(time_synchronized()) - return Detections(imgs, y, files, t, self.names, x.shape) - - -class Detections: - # detections class for YOLOv5 inference results - def __init__(self, imgs, pred, files, times=None, names=None, shape=None): - super(Detections, self).__init__() - d = pred[0].device # device - gn = [torch.tensor([*[im.shape[i] for i in [1, 0, 1, 0]], 1., 1.], device=d) for im in imgs] # normalizations - self.imgs = imgs # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.files = files # image filenames - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) # number of images (batch size) - self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) # timestamps (ms) - self.s = shape # inference BCHW shape - - def display(self, pprint=False, show=False, save=False, render=False, save_dir=''): - colors = color_list() - for i, (img, pred) in enumerate(zip(self.imgs, self.pred)): - str = f'image {i + 1}/{len(self.pred)}: {img.shape[0]}x{img.shape[1]} ' - if pred is not None: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - str += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - if show or save or render: - for *box, conf, cls in pred: # xyxy, confidence, class - label = f'{self.names[int(cls)]} {conf:.2f}' - plot_one_box(box, img, label=label, color=colors[int(cls) % 10]) - img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img # from np - if pprint: - print(str.rstrip(', ')) - if show: - img.show(self.files[i]) # show - if save: - f = self.files[i] - img.save(Path(save_dir) / f) # save - print(f"{'Saved' * (i == 0)} {f}", end=',' if i < self.n - 1 else f' to {save_dir}\n') - if render: - self.imgs[i] = np.asarray(img) - - def print(self): - self.display(pprint=True) # print results - print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t) - - def show(self): - self.display(show=True) # show results - - def save(self, save_dir='runs/hub/exp'): - save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/hub/exp') # increment save_dir - Path(save_dir).mkdir(parents=True, exist_ok=True) - self.display(save=True, save_dir=save_dir) # save results - - def render(self): - self.display(render=True) # render results - return self.imgs - - def pandas(self): - # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0]) - new = copy(self) # return copy - ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns - cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns - for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]): - a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update - setattr(new, k, [pd.DataFrame(x, columns=c) for x in a]) - return new - - def tolist(self): - # return a list of Detections objects, i.e. 'for result in results.tolist():' - x = [Detections([self.imgs[i]], [self.pred[i]], self.names, self.s) for i in range(self.n)] - for d in x: - for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - def __len__(self): - return self.n - - -class Classify(nn.Module): - # Classification head, i.e. x(b,c1,20,20) to x(b,c2) - def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups - super(Classify, self).__init__() - self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1) - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1) - self.flat = nn.Flatten() - - def forward(self, x): - z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list - return self.flat(self.conv(z)) # flatten to x(b,c2) - -##### end of yolov5 ###### - - -##### orepa ##### - -def transI_fusebn(kernel, bn): - gamma = bn.weight - std = (bn.running_var + bn.eps).sqrt() - return kernel * ((gamma / std).reshape(-1, 1, 1, 1)), bn.bias - bn.running_mean * gamma / std - - -class ConvBN(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, - stride=1, padding=0, dilation=1, groups=1, deploy=False, nonlinear=None): - super().__init__() - if nonlinear is None: - self.nonlinear = nn.Identity() - else: - self.nonlinear = nonlinear - if deploy: - self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, - stride=stride, padding=padding, dilation=dilation, groups=groups, bias=True) - else: - self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, - stride=stride, padding=padding, dilation=dilation, groups=groups, bias=False) - self.bn = nn.BatchNorm2d(num_features=out_channels) - - def forward(self, x): - if hasattr(self, 'bn'): - return self.nonlinear(self.bn(self.conv(x))) - else: - return self.nonlinear(self.conv(x)) - - def switch_to_deploy(self): - kernel, bias = transI_fusebn(self.conv.weight, self.bn) - conv = nn.Conv2d(in_channels=self.conv.in_channels, out_channels=self.conv.out_channels, kernel_size=self.conv.kernel_size, - stride=self.conv.stride, padding=self.conv.padding, dilation=self.conv.dilation, groups=self.conv.groups, bias=True) - conv.weight.data = kernel - conv.bias.data = bias - for para in self.parameters(): - para.detach_() - self.__delattr__('conv') - self.__delattr__('bn') - self.conv = conv - -class OREPA_3x3_RepConv(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, - stride=1, padding=0, dilation=1, groups=1, - internal_channels_1x1_3x3=None, - deploy=False, nonlinear=None, single_init=False): - super(OREPA_3x3_RepConv, self).__init__() - self.deploy = deploy - - if nonlinear is None: - self.nonlinear = nn.Identity() - else: - self.nonlinear = nonlinear - - self.kernel_size = kernel_size - self.in_channels = in_channels - self.out_channels = out_channels - self.groups = groups - assert padding == kernel_size // 2 - - self.stride = stride - self.padding = padding - self.dilation = dilation - - self.branch_counter = 0 - - self.weight_rbr_origin = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), kernel_size, kernel_size)) - nn.init.kaiming_uniform_(self.weight_rbr_origin, a=math.sqrt(1.0)) - self.branch_counter += 1 - - - if groups < out_channels: - self.weight_rbr_avg_conv = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), 1, 1)) - self.weight_rbr_pfir_conv = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_avg_conv, a=1.0) - nn.init.kaiming_uniform_(self.weight_rbr_pfir_conv, a=1.0) - self.weight_rbr_avg_conv.data - self.weight_rbr_pfir_conv.data - self.register_buffer('weight_rbr_avg_avg', torch.ones(kernel_size, kernel_size).mul(1.0/kernel_size/kernel_size)) - self.branch_counter += 1 - - else: - raise NotImplementedError - self.branch_counter += 1 - - if internal_channels_1x1_3x3 is None: - internal_channels_1x1_3x3 = in_channels if groups < out_channels else 2 * in_channels # For mobilenet, it is better to have 2X internal channels - - if internal_channels_1x1_3x3 == in_channels: - self.weight_rbr_1x1_kxk_idconv1 = nn.Parameter(torch.zeros(in_channels, int(in_channels/self.groups), 1, 1)) - id_value = np.zeros((in_channels, int(in_channels/self.groups), 1, 1)) - for i in range(in_channels): - id_value[i, i % int(in_channels/self.groups), 0, 0] = 1 - id_tensor = torch.from_numpy(id_value).type_as(self.weight_rbr_1x1_kxk_idconv1) - self.register_buffer('id_tensor', id_tensor) - - else: - self.weight_rbr_1x1_kxk_conv1 = nn.Parameter(torch.Tensor(internal_channels_1x1_3x3, int(in_channels/self.groups), 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_1x1_kxk_conv1, a=math.sqrt(1.0)) - self.weight_rbr_1x1_kxk_conv2 = nn.Parameter(torch.Tensor(out_channels, int(internal_channels_1x1_3x3/self.groups), kernel_size, kernel_size)) - nn.init.kaiming_uniform_(self.weight_rbr_1x1_kxk_conv2, a=math.sqrt(1.0)) - self.branch_counter += 1 - - expand_ratio = 8 - self.weight_rbr_gconv_dw = nn.Parameter(torch.Tensor(in_channels*expand_ratio, 1, kernel_size, kernel_size)) - self.weight_rbr_gconv_pw = nn.Parameter(torch.Tensor(out_channels, in_channels*expand_ratio, 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_gconv_dw, a=math.sqrt(1.0)) - nn.init.kaiming_uniform_(self.weight_rbr_gconv_pw, a=math.sqrt(1.0)) - self.branch_counter += 1 - - if out_channels == in_channels and stride == 1: - self.branch_counter += 1 - - self.vector = nn.Parameter(torch.Tensor(self.branch_counter, self.out_channels)) - self.bn = nn.BatchNorm2d(out_channels) - - self.fre_init() - - nn.init.constant_(self.vector[0, :], 0.25) #origin - nn.init.constant_(self.vector[1, :], 0.25) #avg - nn.init.constant_(self.vector[2, :], 0.0) #prior - nn.init.constant_(self.vector[3, :], 0.5) #1x1_kxk - nn.init.constant_(self.vector[4, :], 0.5) #dws_conv - - - def fre_init(self): - prior_tensor = torch.Tensor(self.out_channels, self.kernel_size, self.kernel_size) - half_fg = self.out_channels/2 - for i in range(self.out_channels): - for h in range(3): - for w in range(3): - if i < half_fg: - prior_tensor[i, h, w] = math.cos(math.pi*(h+0.5)*(i+1)/3) - else: - prior_tensor[i, h, w] = math.cos(math.pi*(w+0.5)*(i+1-half_fg)/3) - - self.register_buffer('weight_rbr_prior', prior_tensor) - - def weight_gen(self): - - weight_rbr_origin = torch.einsum('oihw,o->oihw', self.weight_rbr_origin, self.vector[0, :]) - - weight_rbr_avg = torch.einsum('oihw,o->oihw', torch.einsum('oihw,hw->oihw', self.weight_rbr_avg_conv, self.weight_rbr_avg_avg), self.vector[1, :]) - - weight_rbr_pfir = torch.einsum('oihw,o->oihw', torch.einsum('oihw,ohw->oihw', self.weight_rbr_pfir_conv, self.weight_rbr_prior), self.vector[2, :]) - - weight_rbr_1x1_kxk_conv1 = None - if hasattr(self, 'weight_rbr_1x1_kxk_idconv1'): - weight_rbr_1x1_kxk_conv1 = (self.weight_rbr_1x1_kxk_idconv1 + self.id_tensor).squeeze() - elif hasattr(self, 'weight_rbr_1x1_kxk_conv1'): - weight_rbr_1x1_kxk_conv1 = self.weight_rbr_1x1_kxk_conv1.squeeze() - else: - raise NotImplementedError - weight_rbr_1x1_kxk_conv2 = self.weight_rbr_1x1_kxk_conv2 - - if self.groups > 1: - g = self.groups - t, ig = weight_rbr_1x1_kxk_conv1.size() - o, tg, h, w = weight_rbr_1x1_kxk_conv2.size() - weight_rbr_1x1_kxk_conv1 = weight_rbr_1x1_kxk_conv1.view(g, int(t/g), ig) - weight_rbr_1x1_kxk_conv2 = weight_rbr_1x1_kxk_conv2.view(g, int(o/g), tg, h, w) - weight_rbr_1x1_kxk = torch.einsum('gti,gothw->goihw', weight_rbr_1x1_kxk_conv1, weight_rbr_1x1_kxk_conv2).view(o, ig, h, w) - else: - weight_rbr_1x1_kxk = torch.einsum('ti,othw->oihw', weight_rbr_1x1_kxk_conv1, weight_rbr_1x1_kxk_conv2) - - weight_rbr_1x1_kxk = torch.einsum('oihw,o->oihw', weight_rbr_1x1_kxk, self.vector[3, :]) - - weight_rbr_gconv = self.dwsc2full(self.weight_rbr_gconv_dw, self.weight_rbr_gconv_pw, self.in_channels) - weight_rbr_gconv = torch.einsum('oihw,o->oihw', weight_rbr_gconv, self.vector[4, :]) - - weight = weight_rbr_origin + weight_rbr_avg + weight_rbr_1x1_kxk + weight_rbr_pfir + weight_rbr_gconv - - return weight - - def dwsc2full(self, weight_dw, weight_pw, groups): - - t, ig, h, w = weight_dw.size() - o, _, _, _ = weight_pw.size() - tg = int(t/groups) - i = int(ig*groups) - weight_dw = weight_dw.view(groups, tg, ig, h, w) - weight_pw = weight_pw.squeeze().view(o, groups, tg) - - weight_dsc = torch.einsum('gtihw,ogt->ogihw', weight_dw, weight_pw) - return weight_dsc.view(o, i, h, w) - - def forward(self, inputs): - weight = self.weight_gen() - out = F.conv2d(inputs, weight, bias=None, stride=self.stride, padding=self.padding, dilation=self.dilation, groups=self.groups) - - return self.nonlinear(self.bn(out)) - -class RepConv_OREPA(nn.Module): - - def __init__(self, c1, c2, k=3, s=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False, nonlinear=nn.SiLU()): - super(RepConv_OREPA, self).__init__() - self.deploy = deploy - self.groups = groups - self.in_channels = c1 - self.out_channels = c2 - - self.padding = padding - self.dilation = dilation - self.groups = groups - - assert k == 3 - assert padding == 1 - - padding_11 = padding - k // 2 - - if nonlinear is None: - self.nonlinearity = nn.Identity() - else: - self.nonlinearity = nonlinear - - if use_se: - self.se = SEBlock(self.out_channels, internal_neurons=self.out_channels // 16) - else: - self.se = nn.Identity() - - if deploy: - self.rbr_reparam = nn.Conv2d(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=s, - padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode) - - else: - self.rbr_identity = nn.BatchNorm2d(num_features=self.in_channels) if self.out_channels == self.in_channels and s == 1 else None - self.rbr_dense = OREPA_3x3_RepConv(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=s, padding=padding, groups=groups, dilation=1) - self.rbr_1x1 = ConvBN(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=1, stride=s, padding=padding_11, groups=groups, dilation=1) - print('RepVGG Block, identity = ', self.rbr_identity) - - - def forward(self, inputs): - if hasattr(self, 'rbr_reparam'): - return self.nonlinearity(self.se(self.rbr_reparam(inputs))) - - if self.rbr_identity is None: - id_out = 0 - else: - id_out = self.rbr_identity(inputs) - - out1 = self.rbr_dense(inputs) - out2 = self.rbr_1x1(inputs) - out3 = id_out - out = out1 + out2 + out3 - - return self.nonlinearity(self.se(out)) - - - # Optional. This improves the accuracy and facilitates quantization. - # 1. Cancel the original weight decay on rbr_dense.conv.weight and rbr_1x1.conv.weight. - # 2. Use like this. - # loss = criterion(....) - # for every RepVGGBlock blk: - # loss += weight_decay_coefficient * 0.5 * blk.get_cust_L2() - # optimizer.zero_grad() - # loss.backward() - - # Not used for OREPA - def get_custom_L2(self): - K3 = self.rbr_dense.weight_gen() - K1 = self.rbr_1x1.conv.weight - t3 = (self.rbr_dense.bn.weight / ((self.rbr_dense.bn.running_var + self.rbr_dense.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach() - t1 = (self.rbr_1x1.bn.weight / ((self.rbr_1x1.bn.running_var + self.rbr_1x1.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach() - - l2_loss_circle = (K3 ** 2).sum() - (K3[:, :, 1:2, 1:2] ** 2).sum() # The L2 loss of the "circle" of weights in 3x3 kernel. Use regular L2 on them. - eq_kernel = K3[:, :, 1:2, 1:2] * t3 + K1 * t1 # The equivalent resultant central point of 3x3 kernel. - l2_loss_eq_kernel = (eq_kernel ** 2 / (t3 ** 2 + t1 ** 2)).sum() # Normalize for an L2 coefficient comparable to regular L2. - return l2_loss_eq_kernel + l2_loss_circle - - def get_equivalent_kernel_bias(self): - kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense) - kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1) - kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity) - return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid - - def _pad_1x1_to_3x3_tensor(self, kernel1x1): - if kernel1x1 is None: - return 0 - else: - return torch.nn.functional.pad(kernel1x1, [1,1,1,1]) - - def _fuse_bn_tensor(self, branch): - if branch is None: - return 0, 0 - if not isinstance(branch, nn.BatchNorm2d): - if isinstance(branch, OREPA_3x3_RepConv): - kernel = branch.weight_gen() - elif isinstance(branch, ConvBN): - kernel = branch.conv.weight - else: - raise NotImplementedError - running_mean = branch.bn.running_mean - running_var = branch.bn.running_var - gamma = branch.bn.weight - beta = branch.bn.bias - eps = branch.bn.eps - else: - if not hasattr(self, 'id_tensor'): - input_dim = self.in_channels // self.groups - kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32) - for i in range(self.in_channels): - kernel_value[i, i % input_dim, 1, 1] = 1 - self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device) - kernel = self.id_tensor - running_mean = branch.running_mean - running_var = branch.running_var - gamma = branch.weight - beta = branch.bias - eps = branch.eps - std = (running_var + eps).sqrt() - t = (gamma / std).reshape(-1, 1, 1, 1) - return kernel * t, beta - running_mean * gamma / std - - def switch_to_deploy(self): - if hasattr(self, 'rbr_reparam'): - return - print(f"RepConv_OREPA.switch_to_deploy") - kernel, bias = self.get_equivalent_kernel_bias() - self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.in_channels, out_channels=self.rbr_dense.out_channels, - kernel_size=self.rbr_dense.kernel_size, stride=self.rbr_dense.stride, - padding=self.rbr_dense.padding, dilation=self.rbr_dense.dilation, groups=self.rbr_dense.groups, bias=True) - self.rbr_reparam.weight.data = kernel - self.rbr_reparam.bias.data = bias - for para in self.parameters(): - para.detach_() - self.__delattr__('rbr_dense') - self.__delattr__('rbr_1x1') - if hasattr(self, 'rbr_identity'): - self.__delattr__('rbr_identity') - -##### end of orepa ##### - - -##### swin transformer ##### - -class WindowAttention(nn.Module): - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - nn.init.normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - # print(attn.dtype, v.dtype) - try: - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - except: - #print(attn.dtype, v.dtype) - x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - -class Mlp(nn.Module): - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - -def window_partition(x, window_size): - - B, H, W, C = x.shape - assert H % window_size == 0, 'feature map h and w can not divide by window size' - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - -def window_reverse(windows, window_size, H, W): - - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SwinTransformerLayer(nn.Module): - - def __init__(self, dim, num_heads, window_size=8, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.SiLU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - # if min(self.input_resolution) <= self.window_size: - # # if window size is larger than input resolution, we don't partition windows - # self.shift_size = 0 - # self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=(self.window_size, self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def create_mask(self, H, W): - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x): - # reshape x[b c h w] to x[b l c] - _, _, H_, W_ = x.shape - - Padding = False - if min(H_, W_) < self.window_size or H_ % self.window_size!=0 or W_ % self.window_size!=0: - Padding = True - # print(f'img_size {min(H_, W_)} is less than (or not divided by) window_size {self.window_size}, Padding.') - pad_r = (self.window_size - W_ % self.window_size) % self.window_size - pad_b = (self.window_size - H_ % self.window_size) % self.window_size - x = F.pad(x, (0, pad_r, 0, pad_b)) - - # print('2', x.shape) - B, C, H, W = x.shape - L = H * W - x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C) # b, L, c - - # create mask from init to forward - if self.shift_size > 0: - attn_mask = self.create_mask(H, W).to(x.device) - else: - attn_mask = None - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W) # b c h w - - if Padding: - x = x[:, :, :H_, :W_] # reverse padding - - return x - - -class SwinTransformerBlock(nn.Module): - def __init__(self, c1, c2, num_heads, num_layers, window_size=8): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - - # remove input_resolution - self.blocks = nn.Sequential(*[SwinTransformerLayer(dim=c2, num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2) for i in range(num_layers)]) - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - x = self.blocks(x) - return x - - -class STCSPA(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class STCSPB(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class STCSPC(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - -##### end of swin transformer ##### - - -##### swin transformer v2 ##### - -class WindowAttention_v2(nn.Module): - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0., - pretrained_window_size=[0, 0]): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.pretrained_window_size = pretrained_window_size - self.num_heads = num_heads - - self.logit_scale = nn.Parameter(torch.log(10 * torch.ones((num_heads, 1, 1))), requires_grad=True) - - # mlp to generate continuous relative position bias - self.cpb_mlp = nn.Sequential(nn.Linear(2, 512, bias=True), - nn.ReLU(inplace=True), - nn.Linear(512, num_heads, bias=False)) - - # get relative_coords_table - relative_coords_h = torch.arange(-(self.window_size[0] - 1), self.window_size[0], dtype=torch.float32) - relative_coords_w = torch.arange(-(self.window_size[1] - 1), self.window_size[1], dtype=torch.float32) - relative_coords_table = torch.stack( - torch.meshgrid([relative_coords_h, - relative_coords_w])).permute(1, 2, 0).contiguous().unsqueeze(0) # 1, 2*Wh-1, 2*Ww-1, 2 - if pretrained_window_size[0] > 0: - relative_coords_table[:, :, :, 0] /= (pretrained_window_size[0] - 1) - relative_coords_table[:, :, :, 1] /= (pretrained_window_size[1] - 1) - else: - relative_coords_table[:, :, :, 0] /= (self.window_size[0] - 1) - relative_coords_table[:, :, :, 1] /= (self.window_size[1] - 1) - relative_coords_table *= 8 # normalize to -8, 8 - relative_coords_table = torch.sign(relative_coords_table) * torch.log2( - torch.abs(relative_coords_table) + 1.0) / np.log2(8) - - self.register_buffer("relative_coords_table", relative_coords_table) - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=False) - if qkv_bias: - self.q_bias = nn.Parameter(torch.zeros(dim)) - self.v_bias = nn.Parameter(torch.zeros(dim)) - else: - self.q_bias = None - self.v_bias = None - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - - B_, N, C = x.shape - qkv_bias = None - if self.q_bias is not None: - qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias)) - qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) - qkv = qkv.reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - # cosine attention - attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)) - logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. / 0.01))).exp() - attn = attn * logit_scale - - relative_position_bias_table = self.cpb_mlp(self.relative_coords_table).view(-1, self.num_heads) - relative_position_bias = relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - relative_position_bias = 16 * torch.sigmoid(relative_position_bias) - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - try: - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - except: - x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C) - - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, ' \ - f'pretrained_window_size={self.pretrained_window_size}, num_heads={self.num_heads}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - # qkv = self.qkv(x) - flops += N * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += self.num_heads * N * (self.dim // self.num_heads) * N - # x = (attn @ v) - flops += self.num_heads * N * N * (self.dim // self.num_heads) - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - -class Mlp_v2(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition_v2(x, window_size): - - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse_v2(windows, window_size, H, W): - - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SwinTransformerLayer_v2(nn.Module): - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.SiLU, norm_layer=nn.LayerNorm, pretrained_window_size=0): - super().__init__() - self.dim = dim - #self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - #if min(self.input_resolution) <= self.window_size: - # # if window size is larger than input resolution, we don't partition windows - # self.shift_size = 0 - # self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention_v2( - dim, window_size=(self.window_size, self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop, - pretrained_window_size=(pretrained_window_size, pretrained_window_size)) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp_v2(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def create_mask(self, H, W): - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x): - # reshape x[b c h w] to x[b l c] - _, _, H_, W_ = x.shape - - Padding = False - if min(H_, W_) < self.window_size or H_ % self.window_size!=0 or W_ % self.window_size!=0: - Padding = True - # print(f'img_size {min(H_, W_)} is less than (or not divided by) window_size {self.window_size}, Padding.') - pad_r = (self.window_size - W_ % self.window_size) % self.window_size - pad_b = (self.window_size - H_ % self.window_size) % self.window_size - x = F.pad(x, (0, pad_r, 0, pad_b)) - - # print('2', x.shape) - B, C, H, W = x.shape - L = H * W - x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C) # b, L, c - - # create mask from init to forward - if self.shift_size > 0: - attn_mask = self.create_mask(H, W).to(x.device) - else: - attn_mask = None - - shortcut = x - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition_v2(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse_v2(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - x = shortcut + self.drop_path(self.norm1(x)) - - # FFN - x = x + self.drop_path(self.norm2(self.mlp(x))) - x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W) # b c h w - - if Padding: - x = x[:, :, :H_, :W_] # reverse padding - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - # W-MSA/SW-MSA - nW = H * W / self.window_size / self.window_size - flops += nW * self.attn.flops(self.window_size * self.window_size) - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - - -class SwinTransformer2Block(nn.Module): - def __init__(self, c1, c2, num_heads, num_layers, window_size=7): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - - # remove input_resolution - self.blocks = nn.Sequential(*[SwinTransformerLayer_v2(dim=c2, num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2) for i in range(num_layers)]) - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - x = self.blocks(x) - return x - - -class ST2CSPA(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class ST2CSPB(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class ST2CSPC(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - -##### end of swin transformer v2 ##### diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Fifa 15 Config Exe BEST Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Fifa 15 Config Exe BEST Download.md deleted file mode 100644 index 457d21bea9aedc91fc2e1008e9d20b6d82e1303b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Fifa 15 Config Exe BEST Download.md +++ /dev/null @@ -1,150 +0,0 @@ - -

    Fifa 15 Config Exe Download: How to Install and Use It

    -

    If you are a fan of FIFA 15, you might have encountered some issues with the game settings, such as graphics, resolution, sound, controller, etc. Sometimes, the default settings are not optimal for your PC or your personal taste. That's why you need a tool that can help you adjust and customize these settings easily and quickly. That tool is called Fifa 15 Config Exe.

    -

    Fifa 15 Config Exe Download


    Download Zip ——— https://tinourl.com/2uL32j



    -

    What is Fifa 15 Config Exe?

    -

    Fifa 15 Config Exe is a file that is part of FIFALauncher, a program developed by Electronic Arts that allows you to launch and configure FIFA 15 on your PC. According to the file information, Fifa 15 Config Exe's description is "FIFALauncher" and its version is 1.0.0.0.

    -

    Fifa 15 Config Exe is an executable file that opens a graphical user interface (GUI) where you can access and modify various game settings, such as:

    -
      -
    • Graphics: You can change the resolution, window mode, rendering quality, anti-aliasing, etc.
    • -
    • Sound: You can adjust the volume, sound effects, commentary language, etc.
    • -
    • Controller: You can choose the input device, button configuration, vibration, etc.
    • -
    • Online: You can enable or disable online features, such as Origin In-Game, cloud storage, etc.
    • -
    -

    Fifa 15 Config Exe also lets you test your system performance and compatibility with FIFA 15 by running a benchmark test that measures your CPU speed, RAM usage, GPU performance, etc.

    -

    Why do you need Fifa 15 Config Exe?

    -

    Fifa 15 Config Exe is a useful tool that can enhance your gaming experience with FIFA 15 by allowing you to customize it according to your preferences and needs. Some of the benefits of using Fifa 15 Config Exe are:

    -
      -
    • You can optimize the game performance by adjusting the graphics settings to match your PC specifications and avoid lagging or crashing.
    • -
    • You can improve the game quality by choosing the best resolution, anti-aliasing, rendering quality, etc. for your monitor and eyesight.
    • -
    • You can personalize the game sound by selecting the commentary language, volume level, sound effects, etc. that suit your mood and taste.
    • -
    • You can control the game input by picking the device, button layout, vibration, etc. that fit your playing style and comfort.
    • -
    • You can enable or disable online features by deciding whether you want to use Origin In-Game, cloud storage, etc. or not.
    • -
    -

    By using Fifa 15 Config Exe, you can make FIFA 15 more enjoyable and fun for yourself and your friends.

    -

    Where can you download Fifa 15 Config Exe?

    -

    If you have installed FIFA 15 on your PC through Origin or a physical disc, you should already have Fifa 15 Config Exe in your FIFA Setup folder. The default location of this folder is:

    -

    Fifa 15 Config Exe File Download
    -How to Download Fifa 15 Config Exe
    -Fifa 15 Config Exe Download for PC
    -Fifa 15 Config Exe Download Free
    -Fifa 15 Config Exe Download Link
    -Fifa 15 Config Exe Download Full Version
    -Fifa 15 Config Exe Download Crack
    -Fifa 15 Config Exe Download Windows 10
    -Fifa 15 Config Exe Download Error Fix
    -Fifa 15 Config Exe Download No Virus
    -Fifa 15 Config Exe Download Torrent
    -Fifa 15 Config Exe Download Mega
    -Fifa 15 Config Exe Download Mediafire
    -Fifa 15 Config Exe Download Google Drive
    -Fifa 15 Config Exe Download Zip
    -Fifa 15 Config Exe Download Rar
    -Fifa 15 Config Exe Download Setup
    -Fifa 15 Config Exe Download Install
    -Fifa 15 Config Exe Download Guide
    -Fifa 15 Config Exe Download Tutorial
    -Fifa 15 Config Exe Download Tips
    -Fifa 15 Config Exe Download Tricks
    -Fifa 15 Config Exe Download Hacks
    -Fifa 15 Config Exe Download Mods
    -Fifa 15 Config Exe Download Cheats
    -Fifa 15 Config Exe Download Patch
    -Fifa 15 Config Exe Download Update
    -Fifa 15 Config Exe Download Latest Version
    -Fifa 15 Config Exe Download Requirements
    -Fifa 15 Config Exe Download Compatibility
    -Fifa 15 Config Exe Download Features
    -Fifa 15 Config Exe Download Benefits
    -Fifa 15 Config Exe Download Reviews
    -Fifa 15 Config Exe Download Ratings
    -Fifa 15 Config Exe Download Feedbacks
    -Fifa 15 Config Exe Download Testimonials
    -Fifa 15 Config Exe Download Alternatives
    -Fifa 15 Config Exe Download Comparisons
    -Fifa 15 Config Exe Download Similar Products
    -Fifa 15 Config Exe Download Recommendations
    -Fifa 15 Config Exe Download Suggestions
    -Fifa 15 Config Exe Download Advice
    -Fifa 15 Config Exe Download Help
    -Fifa 15 Config Exe Download Support
    -Fifa 15 Config Exe Download Forum
    -Fifa 15 Config Exe Download Community
    -Fifa 15 Config Exe Download Blog
    -Fifa 15 Config Exe Download News
    -Fifa 15 Config Exe Download Videos
    -Fifa 15 Config Exe Download Images

    - %PROGRAMFILES (X86)%\Origin Games\FIFA 21\FIFASetup\fifaconfig.exe -

    If you don't have Fifa 15 Config Exe or you have deleted it by mistake, you can download it from various sources online. However, you should be careful when downloading files from unknown or untrusted websites as they might contain viruses or malware that can harm your PC or steal your data. Some of the best sources to download Fifa 15 Config Exe safely and securely are:

    -
      -
    • Software Tested: This website provides a free download link for Fifaconfig.exe by Electronic Arts along with detailed information about the file and its actions on your PC. You can also scan the file for viruses before downloading it.
    • -
    • ModsFire: This website offers a free download link for FIFA 18 fifaconfig for FIFA 15.rar which is a modified version of Fifaconfig.exe that has updated graphics settings for FIFA 18. You can also generate a download link for other mods for FIFA games.
    • -
    • Soccer Gaming: This website is a forum where soccer fans and gamers discuss various topics related to soccer games such as FIFA. You can request someone to send you the fifaconfig.exe file from FIFA Setup folder or share your own file with others.
    • -
    • DLLme: This website provides a free download link for fifaconfig.resources.dll which is a DLL file that is required for fifaconfig.exe to run properly. You can also download other DLL files that are missing or corrupted on your PC.
    • -
    -

    How to install Fifa 15 Config Exe?

    -

    Once you have downloaded Fifaconfig.exe from one of the sources mentioned above, you need to install it on your PC so that you can use it to configure FIFA 15. Here are the steps to install Fifaconfig.exe on your PC:

    -
      -
    1. Locate the downloaded file on your PC and extract it if it is in a compressed format such as .rar or .zip.
    2. -
    3. Copy or move the extracted file (fifaconfig.exe) to your FIFA Setup folder which is usually located at:
      %PROGRAMFILES (X86)%\Origin Games\FIFA 21\FIFASetup\fifaconfig.exe
    4. -
    5. If you already have an existing fifaconfig.exe file in your FIFA Setup folder, rename it or delete it before pasting the new one.
    6. -
    7. Double-click on fifaconfig.exe to launch it and start configuring FIFA 15.
    8. -
    -

    How to use Fifaconfig.exe?

    -

    After installing fifaconfig.exe on your PC, you can use it to launch and customize FIFA 15 according to your preferences and needs. Here is a tutorial on how to use fifaconfig.exe:

    -
      -
    1. Double-click on fifaconfig.exe in your FIFA Setup folder to open it.
    2. -
    3. You will see four tabs at the top of the window: Graphics Settings, Sound Settings, Controller Settings and Online Settings. Click on each tab to access and modify different game settings.
    4. -
    5. In each tab, you will see various options that you can change by clicking on them or using sliders or drop-down menus. For example:
      - In Graphics Settings tab, you can change resolution mode (windowed or full screen), resolution size (width x height), rendering quality (low-medium-high), anti-aliasing (on-off), frame rate limit (30-60-unlimited), etc.
      - In Sound Settings tab, you can change master volume (0-100), music volume (0-100), sound effects volume (0-100), commentary language (English-Spanish-French-German-Italian-etc.), commentary volume (0-100), crowd volume (0-100), etc.
      - In Controller Settings tab - In Controller Settings tab, you can choose the input device (keyboard, mouse or gamepad), button configuration (classic or alternate), vibration (on-off), etc.
      - In Online Settings tab, you can enable or disable online features (Origin In-Game, cloud storage, etc.) and check your network status (ping, NAT type, etc.).
    6. -
    7. After changing the settings that you want, click on Apply and Exit to save your changes and close fifaconfig.exe.
    8. -
    9. To launch FIFA 15 with your new settings, you can either double-click on fifaconfig.exe again and click on Play or go to your FIFA 15 folder and double-click on FIFA15.exe.
    10. -
    -

    Conclusion

    -

    Fifa 15 Config Exe is a handy tool that can help you optimize and personalize FIFA 15 on your PC. It allows you to access and modify various game settings, such as graphics, sound, controller and online. You can download Fifa 15 Config Exe from reliable sources online and install it on your PC easily. You can also use it to launch and play FIFA 15 with your preferred settings. By using Fifa 15 Config Exe, you can enjoy FIFA 15 more and have a better gaming experience.

    -

    If you found this article helpful, please share it with your friends and leave a comment below. If you have any questions or suggestions about Fifa 15 Config Exe, feel free to contact me. Thank you for reading and happy gaming!

    -

    FAQs

    -

    Q: Is Fifa 15 Config Exe safe to download and use?

    -

    A: Yes, Fifa 15 Config Exe is safe to download and use as long as you get it from trusted sources online. However, you should always scan the file for viruses before installing it on your PC and avoid downloading files from unknown or suspicious websites.

    -

    Q: What if Fifa 15 Config Exe doesn't work or causes problems?

    -

    A: If Fifa 15 Config Exe doesn't work or causes problems, such as crashing, freezing or error messages, you can try the following solutions:

    -
      -
    • Make sure that your PC meets the minimum system requirements for FIFA 15.
    • -
    • Update your graphics card drivers and DirectX to the latest versions.
    • -
    • Run fifaconfig.exe as an administrator and in compatibility mode for Windows 7 or 8.
    • -
    • Delete or rename the fifasetup.ini file in your Documents\FIFA 15 folder and restart fifaconfig.exe.
    • -
    • Reinstall FIFA 15 or repair it through Origin.
    • -
    -

    Q: How can I change the keyboard controls in FIFA 15?

    -

    A: To change the keyboard controls in FIFA 15, you need to do the following steps:

    -
      -
    1. Open fifaconfig.exe and go to Controller Settings tab.
    2. -
    3. Select Keyboard as the input device.
    4. -
    5. Click on Customize Controls button at the bottom of the window.
    6. -
    7. You will see a list of actions and their corresponding keys. You can click on any action and press a new key to change it.
    8. -
    9. After changing the keys that you want, click on Save Changes and Exit to apply your changes and close fifaconfig.exe.
    10. -
    -

    Q: How can I use a gamepad in FIFA 15?

    -

    A: To use a gamepad in FIFA 15, you need to do the following steps:

    -
      -
    1. Connect your gamepad to your PC via USB or Bluetooth.
    2. -
    3. Open fifaconfig.exe and go to Controller Settings tab.
    4. -
    5. Select Gamepad as the input device.
    6. -
    7. You will see a list of actions and their corresponding buttons. You can click on any action and press a new button to change it.
    8. -
    9. You can also choose between Classic or Alternate button configuration by clicking on the drop-down menu at the top of the window.
    10. -
    11. If you want to enable vibration on your gamepad, check the box next to Vibration at the bottom of the window.
    12. -
    13. After changing the settings that you want, click on Apply and Exit to save your changes and close fifaconfig.exe.
    14. -
    -

    Q: How can I run a benchmark test in FIFA 15?

    -

    A: To run a benchmark test in FIFA 15, you need to do the following steps:

    -
      -
    1. Open fifaconfig.exe and go to Graphics Settings tab.
    2. -
    3. Click on Test System Performance button at the bottom of the window.
    4. -
    5. You will see a loading screen followed by a gameplay video that shows various scenes from FIFA 15.
    6. -
    7. The video will last for about two minutes and display your average FPS (frames per second) at the top right corner of the screen.
    8. -
    9. The higher your FPS is, the better your system performance is. Ideally, you should aim for at least 30 FPS for a smooth gameplay experience.
    10. -
    11. After the video ends, you will see a summary of your system performance with a score from 0 to 10. The higher your score is, the better your system compatibility is.
    12. -
    13. You can also see more details about your CPU speed, RAM usage, GPU performance, etc. by clicking on Show Details button at the bottom of the window.
    14. -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/File Scavenger 41 License Key 41 Get the Best Deal Online Today.md b/spaces/raedeXanto/academic-chatgpt-beta/File Scavenger 41 License Key 41 Get the Best Deal Online Today.md deleted file mode 100644 index 3c3dfc297e56436187b518d397d5d2d43bcacc9c..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/File Scavenger 41 License Key 41 Get the Best Deal Online Today.md +++ /dev/null @@ -1,222 +0,0 @@ - -

    File Scavenger 41 License Key 41: A Complete Guide

    -

    Have you ever lost your important files due to accidental deletion, formatting, virus attack, or any other reason? If yes, then you know how frustrating and stressful it can be to recover them. Fortunately, there is a powerful and reliable data recovery software that can help you restore your lost files in no time. It is called File Scavenger 41.

    -

    File Scavenger 41 License Key 41


    DOWNLOAD ✏ ✏ ✏ https://tinourl.com/2uL2H5



    -

    In this article, we will tell you everything you need to know about File Scavenger 41 and its license key. We will also show you how to download, install, and activate it for free. And if you are looking for some alternatives to File Scavenger 41, we will also give you some suggestions. So, let's get started!

    -

    What is File Scavenger 41?

    -

    File Scavenger 41 is a professional data recovery software that can recover deleted or corrupted files from various types of storage devices, such as hard disks, flash drives, memory cards, optical disks, etc. It can also recover files from damaged or reformatted partitions, RAID arrays, virtual disks, etc.

    -

    File Scavenger 41 supports various file systems, such as NTFS, FAT, exFAT, EXT, HFS+, UFS, XFS, ReFS, etc. It can also recover files with long file names, Unicode file names, compressed files, encrypted files, sparse files, etc. It can even recover files that have been deleted from the Recycle Bin or bypassed it.

    -

    Features of File Scavenger 41

    -

    Some of the main features of File Scavenger 41 are:

    -
      -
    • It can recover files from any type of storage device that can be recognized by Windows.
    • -
    • It can recover files from damaged or reformatted partitions.
    • -
    • It can recover files from RAID arrays and virtual disks.
    • -
    • It can recover files from various file systems.
    • -
    • It can recover files with long file names, Unicode file names, compressed files, encrypted files, sparse files, etc.
    • -
    • It can recover files that have been deleted from the Recycle Bin or bypassed it.
    • -
    • It can recover multiple files at once with a single click.
    • -
    • It can preview the recovered files before saving them.
    • -
    • It can filter the recovered files by name, size, date, type, etc.
    • -
    • It can save the recovered files to any location or device.
    • -
    -

    Benefits of File Scavenger 41

    -

    Some of the benefits of using File Scavenger 41 are:

    -
      -
    • It is fast and efficient. It can scan and recover your lost files in minutes.
    • -
    • It is easy and user-friendly. It has a simple and intuitive interface that guides you through the recovery process.
    • -
    • It is safe and reliable. It does not overwrite or damage your original data. It also protects your privacy by deleting the temporary files after recovery.
    • -
    • It is versatile and flexible. It can recover any type of file from any type of device and file system.
    • -
    • It is affordable and cost-effective. It offers a free trial version that allows you to recover up to 64 KB of data per file. And if you want to unlock the full potential of the software, you can buy the license key for a reasonable price.
    • -
    -

    How to download and install File Scavenger 41

    -

    To download and install File Scavenger 41 on your computer, follow these steps:

    -

    File Scavenger 41 activation code
    -File Scavenger 41 crack download
    -File Scavenger 41 serial number
    -File Scavenger 41 keygen free
    -File Scavenger 41 registration key
    -File Scavenger 41 full version
    -File Scavenger 41 patch
    -File Scavenger 41 license code generator
    -File Scavenger 41 product key
    -File Scavenger 41 license key free download
    -File Scavenger 41 license key crack
    -File Scavenger 41 license key online
    -File Scavenger 41 license key recovery
    -File Scavenger 41 license key finder
    -File Scavenger 41 license key purchase
    -File Scavenger 41 license key email
    -File Scavenger 41 license key expired
    -File Scavenger 41 license key invalid
    -File Scavenger 41 license key lost
    -File Scavenger 41 license key renewal
    -File Scavenger 41 license key update
    -File Scavenger 41 license key upgrade
    -File Scavenger 41 license key transfer
    -File Scavenger 41 license key backup
    -File Scavenger 41 license key restore
    -File Scavenger 41 license key change
    -File Scavenger 41 license key reset
    -File Scavenger 41 license key remove
    -File Scavenger 41 license key deactivate
    -File Scavenger 41 license key reactivate
    -File Scavenger 41 license key reuse
    -File Scavenger 41 license key multiple devices
    -File Scavenger 41 license key lifetime
    -File Scavenger 41 license key discount
    -File Scavenger 41 license key coupon code
    -File Scavenger 41 license key giveaway
    -File Scavenger 41 license key reddit
    -File Scavenger 41 license key quora
    -File Scavenger 41 license key youtube
    -File Scavenger 41 license key review
    -File Scavenger 41 license key support
    -File Scavenger 41 license key contact
    -File Scavenger 41 license key faq
    -File Scavenger 41 license key tutorial
    -File Scavenger 41 license key guide
    -File Scavenger 41 license key manual
    -File Scavenger 41 license key tips and tricks
    -File Scavenger 41 license key best practices
    -File Scavenger 41 license key alternatives
    -File Scavenger 41 license key comparison

    -
      -
    1. Go to the official website of File Scavenger at https://www.quetek.com/download.htm.
    2. -
    3. Select the version of File Scavenger that matches your operating system (32-bit or 64-bit).
    4. -
    5. Click on the "Download" button and save the setup file on your computer.
    6. -
    7. Run the setup file and follow the instructions on the screen to complete the installation.
    8. -
    9. Launch File Scavenger on your computer and enjoy recovering your lost files!
    10. -
    -

    What is File Scavenger 41 License Key 41?

    -

    File Scavenger 41 License Key 41 is a unique code that unlocks the full features and functions of File Scavenger 41. With this license key, you can recover unlimited amount of data from any device and file system. You can also enjoy free updates and technical support from the developers.

    -

    Why do you need File Scavenger 41 License Key 41?

    -

    You need File Scavenger 41 License Key 41 if you want to:

    -
      -
    • Recover more than 64 KB of data per file.
    • -
    • Recover data from RAID arrays and virtual disks.
    • -
    • Recover data from ReFS file system.
    • -
    • Recover data from network drives and remote computers.
    • -
    • Recover data from encrypted volumes (BitLocker).
    • -
    -

    How to get File Scavenger 41 License Key 41 for free

    -

    If you want to get File Scavenger 41 License Key 41 for free, you have two options:

    - - - - -
    OptionDescription
    CrackA crack is a modified version of File Scavenger that bypasses the license verification process. You can download a crack from various websites that offer pirated software. However, this option is risky and illegal. You may end up downloading malware or viruses that can harm your computer or steal your personal information. You may also face legal consequences for violating the intellectual property rights of the developers.
    KeygenA keygen is a program that generates random license keys for File Scavenger. You can download a keygen from various websites that offer hacking tools. However, this option is also risky and illegal. You may end up downloading malware or viruses that can harm your computer or steal your personal information. You may also face legal consequences for violating the intellectual property rights of the developers.
    -

    How to activate File Scavenger 41 with License Key 41

    -

    If you want to activate File Scavenger 41 with License Key 41 legally and safely, you have to buy it from the official website of File Scavenger at https://www.quetek.com/buy_now.htm. You can choose between two types of licenses: Standard ($49) or Professional ($79). The difference between them is that the Professional license allows you to recover data from RAID arrays and virtual disks. To activate File Scavenger 41 with License Key you bought from the official website, follow these steps:

    -
      -
    1. Lunch The Software On Your Computer And Click On The "Help" Menu And Select "Enter License Key".
    2. -
    3. A Dialog Box Will Appear Asking You To Enter Your Name And Email Address That You Used To Purchase The License Key And The License Key Itself.
    4. -
    5. Type In The Required Information And Click On The "OK" Button To Confirm Your Activation.
    6. -
    7. A Message Will Appear Saying That Your Activation Was Successful And That You Can Now Use The Full Version Of The Software.
    8. -
    -

    What are the alternatives to File Scavenger There are many other data recovery software available in the market that claim to offer similar or better features than File Scavenger Some of them are:

    Recuva

    - Recuva is a free data recovery software that can recover deleted or lost files from various types of storage devices,

    Recuva

    -

    Recuva is a free data recovery software that can recover deleted or lost files from various types of storage devices, such as hard disks, flash drives, memory cards, optical disks, etc. It can also recover files from damaged or reformatted partitions. It supports various file systems, such as NTFS, FAT, exFAT, etc. It can also recover files with long file names, compressed files, etc. It can even recover files that have been deleted from the Recycle Bin or bypassed it.

    -

    Some of the main features of Recuva are:

    -
      -
    • It can recover files from any type of storage device that can be recognized by Windows.
    • -
    • It can recover files from damaged or reformatted partitions.
    • -
    • It can recover files with long file names, compressed files, etc.
    • -
    • It can recover files that have been deleted from the Recycle Bin or bypassed it.
    • -
    • It can recover multiple files at once with a single click.
    • -
    • It can preview the recovered files before saving them.
    • -
    • It can filter the recovered files by name, size, date, type, etc.
    • -
    • It can save the recovered files to any location or device.
    • -
    -

    Some of the drawbacks of Recuva are:

    -
      -
    • It does not support RAID arrays and virtual disks.
    • -
    • It does not support ReFS file system.
    • -
    • It does not support network drives and remote computers.
    • -
    • It does not support encrypted volumes (BitLocker).
    • -
    • It does not offer free updates and technical support.
    • -
    -

    EaseUS Data Recovery Wizard

    -

    EaseUS Data Recovery Wizard is a professional data recovery software that can recover deleted or lost files from various types of storage devices, such as hard disks, flash drives, memory cards, optical disks, etc. It can also recover files from damaged or reformatted partitions, RAID arrays, virtual disks, etc. It supports various file systems, such as NTFS, FAT, exFAT, EXT, HFS+, etc. It can also recover files with long file names, Unicode file names, compressed files, encrypted files, sparse files, etc. It can even recover files that have been deleted from the Recycle Bin or bypassed it.

    -

    Some of the main features of EaseUS Data Recovery Wizard are:

    -
      -
    • It can recover files from any type of storage device that can be recognized by Windows.
    • -
    • It can recover files from damaged or reformatted partitions.
    • -
    • It can recover files from RAID arrays and virtual disks.
    • -
    • It can recover files from various file systems.
    • -
    • It can recover files with long file names, Unicode file names, compressed files, encrypted files, sparse files, etc.
    • -
    • It can recover files that have been deleted from the Recycle Bin or bypassed it.
    • -
    • It can recover multiple files at once with a single click.
    • -
    • It can preview the recovered files before saving them.
    • -
    • It can filter the recovered files by name, size, date, type, etc.
    • -
    • It can save the recovered files to any location or device.
    • -
    -

    Some of the drawbacks of EaseUS Data Recovery Wizard are:

    -
      -
    • It is not free. It offers a free trial version that allows you to recover up to 2 GB of data. And if you want to unlock the full potential of the software, you have to buy the license key for a high price ($69.95 for one month, $99.95 for one year, or $149.95 for lifetime).
    • -
    • It does not support ReFS file system.
    • -
    • It does not support network drives and remote computers.
    • -
    • It does not support encrypted volumes (BitLocker).
    • -
    -

    Stellar Data Recovery

    -

    Stellar Data Recovery is a professional data recovery software that can recover deleted or lost files from various types of storage devices, such as hard disks, flash drives, memory cards, optical disks,

    Stellar Data Recovery

    -

    Stellar Data Recovery is a professional data recovery software that can recover deleted or lost files from various types of storage devices, such as hard disks, flash drives, memory cards, optical disks, etc. It can also recover files from damaged or reformatted partitions, RAID arrays, virtual disks, etc. It supports various file systems, such as NTFS, FAT, exFAT, EXT, HFS+, etc. It can also recover files with long file names, Unicode file names, compressed files, encrypted files, sparse files, etc. It can even recover files that have been deleted from the Recycle Bin or bypassed it.

    -

    Some of the main features of Stellar Data Recovery are:

    -
      -
    • It can recover files from any type of storage device that can be recognized by Windows.
    • -
    • It can recover files from damaged or reformatted partitions.
    • -
    • It can recover files from RAID arrays and virtual disks.
    • -
    • It can recover files from various file systems.
    • -
    • It can recover files with long file names, Unicode file names, compressed files, encrypted files, sparse files, etc.
    • -
    • It can recover files that have been deleted from the Recycle Bin or bypassed it.
    • -
    • It can recover multiple files at once with a single click.
    • -
    • It can preview the recovered files before saving them.
    • -
    • It can filter the recovered files by name, size, date, type, etc.
    • -
    • It can save the recovered files to any location or device.
    • -
    -

    Some of the drawbacks of Stellar Data Recovery are:

    -
      -
    • It is not free. It offers a free trial version that allows you to recover up to 1 GB of data. And if you want to unlock the full potential of the software, you have to buy the license key for a high price ($79.99 for one year, or $99.99 for lifetime).
    • -
    • It does not support ReFS file system.
    • -
    • It does not support network drives and remote computers.
    • -
    -

    Conclusion

    -

    In conclusion, File Scavenger 41 is a powerful and reliable data recovery software that can recover deleted or lost files from various types of storage devices and file systems. It has many features and benefits that make it stand out from other data recovery software. However, it also has some limitations and drawbacks that you should be aware of before using it. If you want to use File Scavenger 41, you need to buy the license key from the official website to activate it and enjoy its full potential.

    -

    If you are looking for some alternatives to File Scavenger 41, you can try Recuva, EaseUS Data Recovery Wizard, or Stellar Data Recovery. They are also professional data recovery software that can recover deleted or lost files from various types of storage devices and file systems. They have some features and benefits that are similar or better than File Scavenger 41. However, they also have some limitations and drawbacks that you should be aware of before using them. If you want to use any of these alternatives, you need to download them from their official websites and buy their license keys if you want to unlock their full potential.

    -

    We hope this article has helped you understand more about File Scavenger 41 and its license key. We also hope it has helped you compare it with some of its alternatives and make an informed decision on which data recovery software to use for your needs. Remember, always backup your important data regularly and avoid deleting or losing them in the first place. But if you do, don't panic and use a reliable data recovery software to restore them as soon as possible. Good luck!

    -

    FAQs

    -

    Here are some frequently asked questions about File Scavenger 41 and its license key:

    -
      -
    1. Is File Scavenger 41 safe to use?
    2. -

      Yes, File Scavenger 41 is safe to use if you download it from the official website and activate it with a valid license key. It does not contain any malware or viruses that can harm your computer or steal your personal information. It also does not overwrite or damage your original data during the recovery process. It also deletes the temporary files after recovery to protect your privacy.

      -
    3. How long does File Scavenger 41 take to scan and recover my lost files?
    4. -

      The scanning and recovery time of File Scavenger 41 depends on several factors, such as the size and condition of your storage device, the type and number of your lost files, the complexity of your file system, etc. Generally speaking, File Scavenger 41 is fast and efficient. It can scan and recover your lost files in minutes. However, if your storage device is very large or damaged, or if your lost files are very numerous or complex, it may take longer to scan and recover them. You can monitor the progress and status of the scanning and recovery process on the interface of File Scavenger 41.

      -
    5. Can File Scavenger 41 recover all my lost files?
    6. -

      No, File Scavenger 41 cannot guarantee to recover all your lost files. There are some situations where File Scavenger 41 may fail to recover your lost files, such as:

        -
      • Your storage device is physically damaged or corrupted beyond repair.
      • -
      • Your lost files have been overwritten by new data on your storage device.
      • -
      • Your lost files have been encrypted by ransomware or other malicious software.
      • -
      • Your lost files are too old or too fragmented to be recovered.
      • -
      - In these cases, File Scavenger 41 may not be able to scan or recognize your lost files, or it may only recover partial or corrupted versions of them. Therefore, you should always backup your important data regularly and avoid deleting or losing them in the first place. But if you do, you should use File Scavenger 41 as soon as possible before your lost files are overwritten or damaged beyond recovery.

      -
    7. Can I use File Scavenger 41 on multiple computers?
    8. -

      No, you cannot use File Scavenger 41 on multiple computers with one license key. Each license key is valid for one computer only. If you want to use File Scavenger 41 on multiple computers, you need to buy multiple license keys from the official website. Alternatively, you can use a portable version of File Scavenger 41 that does not require installation or activation. You can download it from https://www.quetek.com/download.htm. However, the portable version has some limitations compared to the installed version, such as:

        -
      • It cannot recover data from RAID arrays and virtual disks.
      • -
      • It cannot recover data from ReFS file system.
      • -
      • It cannot recover data from network drives and remote computers.
      • -
      • It cannot recover data from encrypted volumes (BitLocker).
      • -
      -

      -
    9. How can I contact the developers of File Scavenger 41 if I have any questions or problems?
    10. -

      If you have any questions or problems regarding File Scavenger 41, you can contact the developers by:

        -
      • Email: support@quetek.com
      • -
      • Phone: +1 (713) 667-0190
      • -
      • Fax: +1 (713) 667-0194
      • -
      • Mail: QueTek Consulting Corporation 5959 West Loop South Suite 253 Bellaire TX 77401 USA
      • -
      - You can also visit their website at https://www.quetek.com -for more information and resources about File Scavenger 41.

      -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Free Salt Movie Download In Hindi Tips and Tricks for a Smooth Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Free Salt Movie Download In Hindi Tips and Tricks for a Smooth Download.md deleted file mode 100644 index 6430e00d6ceb488467c110258e9aadc787474d73..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Free Salt Movie Download In Hindi Tips and Tricks for a Smooth Download.md +++ /dev/null @@ -1,107 +0,0 @@ - -

    Free Salt Movie Download In Hindi

    -

    Introduction

    -

    Are you a fan of action thriller movies? Do you love watching Angelina Jolie in action? If yes, then you might be interested in watching Salt, a 2010 movie directed by Phillip Noyce and starring Jolie as Evelyn Salt, a CIA agent who is accused of being a Russian spy. But what if you don't have enough money to buy or rent the movie? Or what if you prefer watching movies in Hindi rather than English? In that case, you might be looking for a way to download Salt movie for free in Hindi. But is it possible? And is it safe? In this article, we will answer these questions and more. We will tell you what Salt movie is about, why it is popular, how to download it for free in Hindi, what are the benefits and risks of doing so, and what are some alternatives to downloading it. So, let's get started!

    -

    Free Salt Movie Download In Hindi


    Download » https://tinourl.com/2uL47h



    -

    What is Salt movie about?

    -

    Salt is a movie that follows the story of Evelyn Salt, a CIA agent who is interrogated by a Russian defector named Orlov, who claims that she is a sleeper agent trained by the KGB since childhood. He also says that she is assigned to assassinate the Russian president during his visit to New York for the funeral of the US vice president. Salt denies the accusations and escapes from the CIA custody, trying to contact her husband Mike, an arachnologist. However, she soon realizes that he has been kidnapped by Orlov's men. She also learns that there are other sleeper agents like her who are activated by a code word "Day X". She decides to go after Orlov and his men, while also trying to clear her name and prove her loyalty to the US. Along the way, she faces many obstacles and challenges from both the CIA and the Russians, who are hunting her down.

    -

    Why is Salt movie popular?

    -

    Salt is a movie that has received generally positive reviews from critics and audiences alike. It has been praised for its action scenes, its plot twists, its cinematography, its music, and its performance by Jolie. The movie has also been nominated for several awards, including an Oscar for Best Sound Mixing. The movie has also been a commercial success, grossing over $293 million worldwide against a budget of $110-130 million. The movie has also spawned two alternate versions with different endings, which are available on DVD and Blu-ray discs. The movie has also been rumored to have a sequel in development, although nothing has been confirmed yet.

    -

    How to download Salt movie for free in Hindi?

    -

    If you want to watch Salt movie for free in Hindi, you might be tempted to look for some websites or apps that offer free downloads of movies. However, you should be careful before doing so, as there are many risks involved in downloading movies illegally. We will discuss these risks later in this article. But first, let us tell you how to download Salt movie for free in Hindi if you still want to do so.

    -

    Free Salt Movie Download In Hindi 480p
    -Free Salt Movie Download In Hindi HD
    -Free Salt Movie Download In Hindi Filmyzilla
    -Free Salt Movie Download In Hindi 720p
    -Free Salt Movie Download In Hindi Mp4
    -Free Salt Movie Download In Hindi Filmywap
    -Free Salt Movie Download In Hindi Worldfree4u
    -Free Salt Movie Download In Hindi Khatrimaza
    -Free Salt Movie Download In Hindi Bolly4u
    -Free Salt Movie Download In Hindi Torrent
    -Free Salt Movie Download In Hindi 300mb
    -Free Salt Movie Download In Hindi Mkv
    -Free Salt Movie Download In Hindi Pagalworld
    -Free Salt Movie Download In Hindi Moviesflix
    -Free Salt Movie Download In Hindi Movierulz
    -Free Salt Movie Download In Hindi 1080p
    -Free Salt Movie Download In Hindi Bluray
    -Free Salt Movie Download In Hindi Dailymotion
    -Free Salt Movie Download In Hindi Youtube
    -Free Salt Movie Download In Hindi Netflix
    -Free Salt Movie Download In Hindi Amazon Prime
    -Free Salt Movie Download In Hindi Disney Plus Hotstar
    -Free Salt Movie Download In Hindi Zee5
    -Free Salt Movie Download In Hindi Sony Liv
    -Free Salt Movie Download In Hindi Voot
    -Free Salt Movie Download In Hindi MX Player
    -Free Salt Movie Download In Hindi Alt Balaji
    -Free Salt Movie Download In Hindi Eros Now
    -Free Salt Movie Download In Hindi Jio Cinema
    -Free Salt Movie Download In Hindi Airtel Xstream
    -Free Salt Movie Download In Hindi Vi Movies and TV
    -Free Salt Movie Download In Hindi Hungama Play
    -Free Salt Movie Download In Hindi Shemaroo Me
    -Free Salt Movie Download In Hindi Lionsgate Play
    -Free Salt Movie Download In Hindi Hoichoi
    -Free Salt Movie Download In Hindi Ullu
    -Free Salt Movie Download In Hindi Big Flix
    -Free Salt Movie Download In Hindi Yupp TV
    -Free Salt Movie Download In Hindi Tubi TV
    -Free Salt Movie Download In Hindi Popcornflix
    -Free Salt Movie Download In Hindi Crackle
    -Free Salt Movie Download In Hindi Pluto TV
    -Free Salt Movie Download In Hindi Peacock TV
    -Free Salt Movie Download In Hindi HBO Max
    -Free Salt Movie Download In Hindi Hulu
    -Free Salt Movie Download In Hindi Paramount Plus
    -Free Salt Movie Download In Hindi Apple TV Plus
    -Free Salt Movie Download In Hindi Showtime Anytime
    -Free Salt Movie Download In Hindi Starz Play

    -

    One of the ways to download Salt movie for free in Hindi is to use torrent sites or peer-to-peer networks. These are platforms where users can share files with each other without any central authority or regulation. You can find many torrent sites or apps that have links to download Salt movie for free in Hindi. However, you will need a torrent client software or app to download the files from these sites or networks. You will also need a VPN service or proxy server to hide your IP address and location from your internet service provider or law enforcement agencies.

    -

    Another way to download Salt movie for free in Hindi is to use streaming sites or apps that host pirated copies of movies. These are websites or apps that allow users to watch movies online without downloading them. You can find many streaming sites or apps that have links to watch Salt movie for free in Hindi. However, you will need a good internet connection and a compatible device or browser to watch the movies on these sites or apps. You will also need an ad blocker software or app to block the annoying ads and pop-ups that these sites or apps often have.

    -

    Benefits of downloading Salt movie for free in Hindi

    -

    Downloading Salt movie for free in Hindi might seem like a good idea if you want to enjoy the movie without spending any money. Here are some of the benefits of doing so:

    -

    Enjoy the movie without spending money

    -

    One of the obvious benefits of downloading Salt movie for free in Hindi is that you can watch the movie without paying anything. You don't have to buy or rent the movie from any legal platform or service. You don't have to subscribe to any streaming platform or service that has the movie. You don't have to go to any cinema hall or theater that shows the movie. You can simply download the movie for free from any illegal source and watch it on your device at your convenience.

    -

    Watch the movie with subtitles or dubbing

    -

    Another benefit of downloading Salt movie for free in Hindi is that you can watch the movie with subtitles or dubbing in your preferred language. You don't have to watch the movie in English if you don't understand it well. You don't have to rely on any official translation or localization of the movie that might not be accurate or faithful to the original version. You can simply download the movie for free from any illegal source that has subtitles or dubbing in Hindi or any other language you want.

    -

    Share the movie with friends and family

    -

    A third benefit of downloading Salt movie for free in Hindi is that you can share the movie with your friends and family who might also want to watch it. You don't have to limit yourself to watching the movie alone or with someone who has access to the same legal platform or service as you do. You don't have to worry about any restrictions or limitations on how many times you can watch the movie or how many devices you can use to watch it. You can simply download the movie for free from any illegal source and share it with anyone you want.

    -

    Risks of downloading Salt movie for free in Hindi

    -

    Downloading Salt movie for free in Hindi might seem like a good idea if you want to enjoy the movie without spending any money. However, there are also many risks involved in doing so. Here are some of them:

    -

    Legal issues and penalties

    -

    One of the major risks of downloading Salt movie for free in Hindi is that you might face legal issues and penalties for violating intellectual property rights and piracy laws. Downloading movies illegally is considered a crime in many countries, including India, where piracy laws are very strict and harsh. You might be sued by the producers or distributors of the movie for damages and compensation. You might be fined by your internet service provider or law enforcement agencies for accessing illegal content online. You might even be arrested and jailed for committing piracy offences.

    -

    Malware and viruses

    -

    Another risk of downloading Salt movie for free in Hindi is that you might expose your device and data to malware and viruses that might harm them. Downloading movies illegally often involves visiting shady websites or apps that might contain malicious software or code that might infect your device or data with malware or viruses. These malware or viruses might steal your personal information, damage your files, slow down your device, corrupt your system, or even lock your device until you pay a ransom.

    -

    Poor quality and fake files

    -

    A third risk of downloading Salt movie for free in Hindi is that you might end up with poor quality and fake files that might ruin your viewing experience. Downloading movies illegally often involves getting files from unreliable sources that might not have good quality or authenticity of the movies they offer. You might get files that have low resolution, poor audio, missing scenes, wrong subtitles, wrong dubbing, or even completely different content than what you expected.

    -

    Alternatives to downloading Salt movie for free in Hindi

    -

    If you want to watch Salt Here is the continuation of the article I wrote based on your topic and instructions.

    Alternatives to downloading Salt movie for free in Hindi

    -

    If you want to watch Salt movie in Hindi without risking any legal or technical problems, you might want to consider some alternatives to downloading it for free. Here are some of them:

    -

    Streaming platforms and websites

    -

    One of the best alternatives to downloading Salt movie for free in Hindi is to watch it on legal streaming platforms and websites that have the movie in their catalog. You might have to pay a subscription fee or a rental fee to access these platforms and websites, but you will get high-quality and authentic files of the movie that you can watch online or offline. You will also avoid any malware or viruses that might harm your device or data. Some of the streaming platforms and websites that have Salt movie in Hindi are Prime Video, VUDU, History Vault, and Apple TV . You can check their availability and pricing on their respective websites or apps.

    -

    DVD and Blu-ray discs

    -

    Another alternative to downloading Salt movie for free in Hindi is to buy or rent the DVD or Blu-ray discs of the movie that have subtitles or dubbing in Hindi. You might have to spend some money to get these discs, but you will get good quality and original files of the movie that you can watch on your DVD or Blu-ray player or computer. You will also avoid any legal issues or penalties that might arise from downloading movies illegally. You can find the DVD or Blu-ray discs of Salt movie in Hindi on online stores like Amazon or eBay, or offline stores like Walmart or Best Buy.

    -

    Cinema halls and theaters

    -

    A third alternative to downloading Salt movie for free in Hindi is to watch it on cinema halls and theaters that show the movie in Hindi. You might have to pay a ticket fee to watch the movie on the big screen, but you will get an immersive and thrilling experience of watching the movie with surround sound and special effects. You will also avoid any malware or viruses that might harm your device or data. You can find the cinema halls and theaters that show Salt movie in Hindi on online platforms like BookMyShow or Fandango, or offline platforms like newspapers or magazines.

    -

    Conclusion

    -

    Salt is a movie that tells the story of Evelyn Salt, a CIA agent who is accused of being a Russian spy and goes on the run to clear her name. The movie is popular for its action scenes, its plot twists, its cinematography, its music, and its performance by Jolie. If you want to watch Salt movie in Hindi, you might be tempted to download it for free from illegal sources, but you should be aware of the risks involved in doing so. You might face legal issues and penalties, malware and viruses, poor quality and fake files, and other problems. Instead of downloading Salt movie for free in Hindi, you should consider some alternatives like streaming platforms and websites, DVD and Blu-ray discs, cinema halls and theaters, that offer legal and safe ways to watch the movie in Hindi.

    -

    We hope this article has helped you understand more about Salt movie and how to watch it in Hindi. If you have any questions or comments, please feel free to share them with us below. And if you liked this article, please share it with your friends and family who might also be interested in watching Salt movie in Hindi. Thank you for reading!

    -

    FAQs

    -

    Here are some frequently asked questions about Salt movie and how to watch it in Hindi:

    -
      -
    1. Who is Evelyn Salt?
    2. -

      Evelyn Salt is the main character of Salt movie, played by Angelina Jolie. She is a CIA agent who is accused of being a Russian spy by a defector named Orlov. She escapes from the CIA custody and goes on the run to prove her innocence and find her husband Mike.

      -
    3. What is Day X?
    4. -

      Day X is a code word used by Orlov to refer to a plan by the KGB to activate sleeper agents who were trained since childhood to infiltrate the American system and destroy it from within. The plan involves assassinating the Russian president during his visit to New York for the funeral of the US vice president.

      -
    5. What are the alternate versions of Salt movie?
    6. -

      Salt movie has two alternate versions with different endings that are available on DVD and Blu-ray discs. The first one is called \"Theatrical Version\", which is the original version released in cinemas. The second one is called \"Director's Cut\", which has an extended ending that reveals more about Salt's past and future.

      -
    7. Is there a sequel to Salt movie?
    8. -

      There is no official confirmation about a sequel to Salt movie yet, but there have been rumors and speculations about it since 2010. The director Phillip Noyce has expressed interest in making a sequel, but he also said that it depends on Jolie's availability and willingness to reprise her role as Salt.

      -
    9. How can I watch Salt movie legally?
    10. -

      You can watch Salt movie legally by using streaming platforms and websites that have the movie in their catalog, by buying or renting DVD or Blu-ray discs of the movie that have subtitles or dubbing in your preferred language, or by watching it on cinema halls and theaters that show the movie in your preferred language.

      -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/GEOlayers 3 1.0.0.219.md b/spaces/raedeXanto/academic-chatgpt-beta/GEOlayers 3 1.0.0.219.md deleted file mode 100644 index 07cba63ec0c92952dd253134356d121e69c4cabf..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/GEOlayers 3 1.0.0.219.md +++ /dev/null @@ -1,110 +0,0 @@ - -

    GEOlayers 3 1.0.0.219: A Powerful Tool for Creating and Animating Maps in After Effects

    -

    If you are looking for a way to create stunning maps and animations in After Effects, you might want to check out GEOlayers 3, a plugin that lets you design and animate maps directly in After Effects. It renders custom maps for you from different online data sources, and provides direct access to extensive databases of geospatial features of the world. You can easily draw buildings, highlight country borders, streets, lakes, rivers, places, regions, animate driving routes, and extrude buildings. Anything in the world that has geodata can be integrated as an editable asset in After Effects.

    -

    In this article, we will show you what GEOlayers 3 can do, how to use it, and how to combine it with other plugins and tools for more creative possibilities. Whether you are making a documentary, a travel video, a news report, or any other project that involves maps, GEOlayers 3 can help you create amazing visuals that will impress your audience.

    -

    GEOlayers 3 1.0.0.219


    Download Zip ››››› https://tinourl.com/2uL5gI



    -

    What is GEOlayers 3 and what can it do?

    -

    An overview of GEOlayers 3 features and benefits

    -

    GEOlayers 3 is a plugin that lets you design and animate maps directly in After Effects. It is the third version of the plugin, rebuilt from scratch with tons of new features and improvements. Here are some of the main features and benefits of GEOlayers 3:

    -
      -
    • It lets you create maps in any projection, resolution, and style.
    • -
    • It lets you animate maps in 3D space with intuitive controls.
    • -
    • It lets you search online for geospatial features such as countries, cities, buildings, points of interest, etc., and add them to your map.
    • -
    • It lets you style your map directly in After Effects with colors, fonts, line widths, hillshading, etc.
    • -
    • It lets you use any image-based tileserver as a map source, such as MapTiler Cloud.
    • -
    • It lets you use other plugins such as Mettle FreeForm Pro or Rowbyte Plexus for more 3D effects.
    • -
    • It has a new user interface that is easy to use and customize.
    • -
    • It has a faster finalization process that reduces render time.
    • -
    • It has a scripting API that allows you to automate tasks and integrate with other scripts.
    • -
    -

    GEOlayers 3 is compatible with After Effects CC 2015 (13.6) or higher. It supports Windows and Mac operating systems.

    -

    How to install and activate GEOlayers 3

    -

    To install GEOlayers 3, you need to download the plugin from the official website and unzip the file. Then, copy the GEOlayers 3 folder to the ScriptUI Panels folder of your After Effects installation. You can find the location of this folder by going to Edit > Preferences > Scripting & Expressions in After Effects and clicking on Reveal Scripting Folder. After copying the folder, restart After Effects and you should see GEOlayers 3 in the Window menu.

    -

    To activate GEOlayers 3, you need to purchase a license from the official website and enter your email and license key in the plugin interface. You can also request a free trial license for 7 days. Once you activate GEOlayers 3, you can start using it to create and animate maps in After Effects.

    -

    How to use GEOlayers 3 to design and animate maps in After Effects

    -

    How to create a map layer and choose a map style

    -

    To create a map layer, go to Window > GEOlayers 3 and click on the Create Map Layer button. This will create a new composition with a map layer that covers the whole world. You can adjust the size and duration of the composition as you like.

    -

    To choose a map style, click on the Map Style button in the plugin interface and select one of the predefined styles from the dropdown menu. You can also create your own custom style by clicking on the Edit button and modifying the settings. You can choose from different map sources, such as OpenStreetMap, Mapbox, Google Maps, Bing Maps, etc., and different map types, such as satellite, terrain, road, hybrid, etc. You can also adjust the brightness, contrast, saturation, hue, and gamma of the map.

    -

    How to animate the map in 3D space

    -

    To animate the map in 3D space, you need to enable the 3D switch for the map layer and add a camera layer to your composition. You can then use the camera tools to move, rotate, zoom, and tilt the camera around the map. You can also use keyframes or expressions to animate the camera properties over time.

    -

    -

    To make the map look more realistic in 3D space, you can enable some of the advanced features of GEOlayers 3, such as:

    -
      -
    • Atmosphere: This adds a realistic atmosphere effect to the map that simulates light scattering and haze.
    • -
    • Curvature: This bends the map along a sphere or an ellipsoid to mimic the curvature of the Earth.
    • -
    • Shadows: This casts realistic shadows on the map based on the sun position and angle.
    • -
    • Reflections: This adds reflections to water surfaces on the map based on the sky color and brightness.
    • -
    -

    You can access these features by clicking on the Advanced button in the plugin interface and adjusting the settings.

    -

    How to search for geospatial features online and add them to the map

    -

    GEOlayers 3 allows you to search for geospatial features online and add them to your map as vector layers. These features can be anything that has geodata associated with it, such as countries, cities, buildings, points of interest, etc. You can use these features to highlight specific locations, draw routes, create labels, etc.

    -

    To search for geospatial features online, click on the Search button in the plugin interface and enter a keyword in the search box. You can also filter the results by category, such as administrative, natural, or man-made features. You can also use the map view to zoom in and out and select a specific area to search. Once you find the feature you want, you can click on the Add button to add it to your map as a vector layer.

    -

    To edit the vector layer, you can use the Layer Settings button in the plugin interface and adjust the properties, such as color, stroke, fill, opacity, etc. You can also use the After Effects tools to transform, mask, or animate the vector layer.

    -

    How to style the map and customize its appearance

    -

    GEOlayers 3 gives you full control over the style and appearance of your map. You can customize every aspect of your map, such as colors, fonts, line widths, hillshading, etc. You can also use expressions or keyframes to animate the style properties over time.

    -

    To style the map, you can use the Style Editor button in the plugin interface and access the different tabs, such as:

    -
      -
    • Map: This lets you change the map source, type, projection, resolution, and brightness.
    • -
    • Labels: This lets you change the font, size, color, and placement of the labels on the map.
    • -
    • Lines: This lets you change the color, width, and dash of the lines on the map.
    • -
    • Polygons: This lets you change the color, opacity, and outline of the polygons on the map.
    • -
    • Hillshading: This lets you enable or disable hillshading on the map and adjust its intensity and direction.
    • -
    • Water: This lets you change the color, opacity, and reflection of the water on the map.
    • -
    -

    You can also use presets to quickly apply different styles to your map. You can save your own presets or use the ones provided by GEOlayers 3.

    -

    How to use GEOlayers 3 with other plugins and tools

    -

    How to use GEOlayers 3 with MapTiler Cloud for more map customization options

    -

    MapTiler Cloud is a service that lets you create and host your own custom maps online. You can use MapTiler Cloud with GEOlayers 3 to access more map customization options, such as:

    -
      -
    • You can upload your own geodata or images and turn them into maps.
    • -
    • You can choose from different map styles or create your own with a graphical editor.
    • -
    • You can add custom overlays and markers to your maps.
    • -
    • You can manage your maps online and share them with others.
    • -
    -

    To use MapTiler Cloud with GEOlayers 3, you need to create an account on MapTiler Cloud and get an API key. Then, you need to enter your API key in GEOlayers 3 settings and select MapTiler Cloud as your map source. You can then browse and select your custom maps from MapTiler Cloud in GEOlayers 3.

    -

    How to use GEOlayers 3 with Mettle FreeForm Pro or Rowbyte Plexus for more 3D effects

    -

    Mettle FreeForm Pro and Rowbyte Plexus are two plugins that let you create and animate 3D shapes and effects in After Effects. You can use them with GEOlayers 3 to add more depth and realism to your maps. For example:

    -
      -
    • You can use Mettle FreeForm Pro to deform and distort your map layer in 3D space.
    • -
    • You can use Rowbyte Plexus to generate 3D lines and particles from your map layer.
    • -
    -

    To use Mettle FreeForm Pro or Rowbyte Plexus with GEOlayers 3, you need to apply them to your map layer after finalizing it. You can then adjust their settings and parameters to create different 3D effects. You can also use keyframes or expressions to animate them over time.

    -

    Conclusion and FAQs

    -

    A summary of the main points and a call to action

    -

    GEOlayers 3 is a powerful tool for creating and animating maps in After Effects. It lets you design and animate maps directly in After Effects using different online data sources. It also lets you search for geospatial features online and add them to your map as vector layers. You can style your map directly in After Effects with colors, fonts, line widths, hillshading, etc. You can also animate your map in 3D space with intuitive controls. You can also use GEOlayers 3 with other plugins and tools for more creative possibilities.

    -

    If you are interested in GEOlayers 3, you can visit the official website and download a free trial version for 7 days. You can also purchase a license and activate the plugin with your email and license key. You can also watch some tutorials and demos on the website to learn more about how to use GEOlayers 3 effectively.

    -

    GEOlayers 3 is a must-have plugin for anyone who wants to create stunning maps and animations in After Effects. It is easy to use, fast, and versatile. It can help you create amazing visuals that will impress your audience and enhance your projects. So, what are you waiting for? Try GEOlayers 3 today and unleash your creativity with maps!

    -

    FAQ #1: What are the system requirements for GEOlayers 3?

    -

    GEOlayers 3 is compatible with After Effects CC 2015 (13.6) or higher. It supports Windows and Mac operating systems. It requires an internet connection to access online data sources and geospatial features. It also requires a minimum of 8 GB of RAM and a graphics card with at least 2 GB of VRAM.

    -

    FAQ #2: How much does GEOlayers 3 cost and where can I buy it?

    -

    GEOlayers 3 costs $249 for a single user license, $449 for a team license (up to 5 users), and $849 for an enterprise license (up to 10 users). You can buy it from the official website using PayPal or credit card. You can also get a discount if you own a previous version of GEOlayers or if you buy it in a bundle with other plugins.

    -

    FAQ #3: What are the differences between GEOlayers 3 and previous versions?

    -

    GEOlayers 3 is the third version of the plugin, rebuilt from scratch with tons of new features and improvements. Some of the main differences are:

    -
      -
    • GEOlayers 3 has a new user interface that is easy to use and customize.
    • -
    • GEOlayers 3 has a faster finalization process that reduces render time.
    • -
    • GEOlayers 3 has a scripting API that allows you to automate tasks and integrate with other scripts.
    • -
    • GEOlayers 3 has more map customization options, such as atmosphere, curvature, shadows, reflections, etc.
    • -
    • GEOlayers 3 has more map sources, such as MapTiler Cloud, Google Maps, Bing Maps, etc.
    • -
    • GEOlayers 3 has more geospatial features, such as buildings, points of interest, etc.
    • -
    • GEOlayers 3 has more presets and styles for your maps.
    • -
    -

    FAQ #4: How can I get support and updates for GEOlayers 3?

    -

    You can get support and updates for GEOlayers 3 by visiting the official website and accessing the following resources:

    -
      -
    • Documentation: This provides detailed information on how to use GEOlayers 3 and its features.
    • -
    • Tutorials: This provides step-by-step instructions on how to create different projects with GEOlayers 3.
    • -
    • Demos: This provides examples of projects made with GEOlayers 3 by other users.
    • -
    • Forum: This provides a place where you can ask questions, share feedback, and interact with other users and developers.
    • -
    • Email: This provides a way to contact the developers directly for any issues or inquiries.
    • -
    -

    FAQ #5: What are some examples of projects made with GEOlayers 3?

    -

    Here are some examples of projects made with GEOlayers 3 by other users:

    - - - - - -
    TitleDescriptionLink
    The World in MotionA short film that shows the movement of people, goods, and information around the world using maps and animations.[1](https://vimeo.com/392019121)
    Covid-19 DashboardA dashboard that shows the latest statistics and trends of the Covid-19 pandemic using maps and charts.[2](https://vimeo.com/401538518)
    Earth at NightA video that shows the beauty of the Earth at night using satellite imagery and map animations.[3](https://vimeo.com/413834237)

    b2dd77e56b
    -
    -
    \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/stream.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/stream.d.ts deleted file mode 100644 index 711fd9ca593045e2e285113fc7a212ced84da948..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/stream.d.ts +++ /dev/null @@ -1,1340 +0,0 @@ -/** - * A stream is an abstract interface for working with streaming data in Node.js. - * The `stream` module provides an API for implementing the stream interface. - * - * There are many stream objects provided by Node.js. For instance, a `request to an HTTP server` and `process.stdout` are both stream instances. - * - * Streams can be readable, writable, or both. All streams are instances of `EventEmitter`. - * - * To access the `stream` module: - * - * ```js - * const stream = require('stream'); - * ``` - * - * The `stream` module is useful for creating new types of stream instances. It is - * usually not necessary to use the `stream` module to consume streams. - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/stream.js) - */ -declare module 'stream' { - import { EventEmitter, Abortable } from 'node:events'; - import { Blob as NodeBlob } from "node:buffer"; - import * as streamPromises from 'node:stream/promises'; - import * as streamConsumers from 'node:stream/consumers'; - import * as streamWeb from 'node:stream/web'; - class internal extends EventEmitter { - pipe( - destination: T, - options?: { - end?: boolean | undefined; - } - ): T; - } - namespace internal { - class Stream extends internal { - constructor(opts?: ReadableOptions); - } - interface StreamOptions extends Abortable { - emitClose?: boolean | undefined; - highWaterMark?: number | undefined; - objectMode?: boolean | undefined; - construct?(this: T, callback: (error?: Error | null) => void): void; - destroy?(this: T, error: Error | null, callback: (error: Error | null) => void): void; - autoDestroy?: boolean | undefined; - } - interface ReadableOptions extends StreamOptions { - encoding?: BufferEncoding | undefined; - read?(this: Readable, size: number): void; - } - /** - * @since v0.9.4 - */ - class Readable extends Stream implements NodeJS.ReadableStream { - /** - * A utility method for creating Readable Streams out of iterators. - */ - static from(iterable: Iterable | AsyncIterable, options?: ReadableOptions): Readable; - /** - * A utility method for creating a `Readable` from a web `ReadableStream`. - * @since v17.0.0 - * @experimental - */ - static fromWeb(readableStream: streamWeb.ReadableStream, options?: Pick): Readable; - /** - * Returns whether the stream has been read from or cancelled. - * @since v16.8.0 - */ - static isDisturbed(stream: Readable | NodeJS.ReadableStream): boolean; - /** - * A utility method for creating a web `ReadableStream` from a `Readable`. - * @since v17.0.0 - * @experimental - */ - static toWeb(streamReadable: Readable): streamWeb.ReadableStream; - /** - * Returns whether the stream was destroyed or errored before emitting `'end'`. - * @since v16.8.0 - * @experimental - */ - readonly readableAborted: boolean; - /** - * Is `true` if it is safe to call `readable.read()`, which means - * the stream has not been destroyed or emitted `'error'` or `'end'`. - * @since v11.4.0 - */ - readable: boolean; - /** - * Returns whether `'data'` has been emitted. - * @since v16.7.0, v14.18.0 - * @experimental - */ - readonly readableDidRead: boolean; - /** - * Getter for the property `encoding` of a given `Readable` stream. The `encoding`property can be set using the `readable.setEncoding()` method. - * @since v12.7.0 - */ - readonly readableEncoding: BufferEncoding | null; - /** - * Becomes `true` when `'end'` event is emitted. - * @since v12.9.0 - */ - readonly readableEnded: boolean; - /** - * This property reflects the current state of a `Readable` stream as described - * in the `Three states` section. - * @since v9.4.0 - */ - readonly readableFlowing: boolean | null; - /** - * Returns the value of `highWaterMark` passed when creating this `Readable`. - * @since v9.3.0 - */ - readonly readableHighWaterMark: number; - /** - * This property contains the number of bytes (or objects) in the queue - * ready to be read. The value provides introspection data regarding - * the status of the `highWaterMark`. - * @since v9.4.0 - */ - readonly readableLength: number; - /** - * Getter for the property `objectMode` of a given `Readable` stream. - * @since v12.3.0 - */ - readonly readableObjectMode: boolean; - /** - * Is `true` after `readable.destroy()` has been called. - * @since v8.0.0 - */ - destroyed: boolean; - /** - * Is true after 'close' has been emitted. - * @since v18.0.0 - */ - readonly closed: boolean; - /** - * Returns error if the stream has been destroyed with an error. - * @since v18.0.0 - */ - readonly errored: Error | null; - constructor(opts?: ReadableOptions); - _construct?(callback: (error?: Error | null) => void): void; - _read(size: number): void; - /** - * The `readable.read()` method reads data out of the internal buffer and - * returns it. If no data is available to be read, `null` is returned. By default, - * the data is returned as a `Buffer` object unless an encoding has been - * specified using the `readable.setEncoding()` method or the stream is operating - * in object mode. - * - * The optional `size` argument specifies a specific number of bytes to read. If`size` bytes are not available to be read, `null` will be returned _unless_the stream has ended, in which - * case all of the data remaining in the internal - * buffer will be returned. - * - * If the `size` argument is not specified, all of the data contained in the - * internal buffer will be returned. - * - * The `size` argument must be less than or equal to 1 GiB. - * - * The `readable.read()` method should only be called on `Readable` streams - * operating in paused mode. In flowing mode, `readable.read()` is called - * automatically until the internal buffer is fully drained. - * - * ```js - * const readable = getReadableStreamSomehow(); - * - * // 'readable' may be triggered multiple times as data is buffered in - * readable.on('readable', () => { - * let chunk; - * console.log('Stream is readable (new data received in buffer)'); - * // Use a loop to make sure we read all currently available data - * while (null !== (chunk = readable.read())) { - * console.log(`Read ${chunk.length} bytes of data...`); - * } - * }); - * - * // 'end' will be triggered once when there is no more data available - * readable.on('end', () => { - * console.log('Reached end of stream.'); - * }); - * ``` - * - * Each call to `readable.read()` returns a chunk of data, or `null`. The chunks - * are not concatenated. A `while` loop is necessary to consume all data - * currently in the buffer. When reading a large file `.read()` may return `null`, - * having consumed all buffered content so far, but there is still more data to - * come not yet buffered. In this case a new `'readable'` event will be emitted - * when there is more data in the buffer. Finally the `'end'` event will be - * emitted when there is no more data to come. - * - * Therefore to read a file's whole contents from a `readable`, it is necessary - * to collect chunks across multiple `'readable'` events: - * - * ```js - * const chunks = []; - * - * readable.on('readable', () => { - * let chunk; - * while (null !== (chunk = readable.read())) { - * chunks.push(chunk); - * } - * }); - * - * readable.on('end', () => { - * const content = chunks.join(''); - * }); - * ``` - * - * A `Readable` stream in object mode will always return a single item from - * a call to `readable.read(size)`, regardless of the value of the`size` argument. - * - * If the `readable.read()` method returns a chunk of data, a `'data'` event will - * also be emitted. - * - * Calling {@link read} after the `'end'` event has - * been emitted will return `null`. No runtime error will be raised. - * @since v0.9.4 - * @param size Optional argument to specify how much data to read. - */ - read(size?: number): any; - /** - * The `readable.setEncoding()` method sets the character encoding for - * data read from the `Readable` stream. - * - * By default, no encoding is assigned and stream data will be returned as`Buffer` objects. Setting an encoding causes the stream data - * to be returned as strings of the specified encoding rather than as `Buffer`objects. For instance, calling `readable.setEncoding('utf8')` will cause the - * output data to be interpreted as UTF-8 data, and passed as strings. Calling`readable.setEncoding('hex')` will cause the data to be encoded in hexadecimal - * string format. - * - * The `Readable` stream will properly handle multi-byte characters delivered - * through the stream that would otherwise become improperly decoded if simply - * pulled from the stream as `Buffer` objects. - * - * ```js - * const readable = getReadableStreamSomehow(); - * readable.setEncoding('utf8'); - * readable.on('data', (chunk) => { - * assert.equal(typeof chunk, 'string'); - * console.log('Got %d characters of string data:', chunk.length); - * }); - * ``` - * @since v0.9.4 - * @param encoding The encoding to use. - */ - setEncoding(encoding: BufferEncoding): this; - /** - * The `readable.pause()` method will cause a stream in flowing mode to stop - * emitting `'data'` events, switching out of flowing mode. Any data that - * becomes available will remain in the internal buffer. - * - * ```js - * const readable = getReadableStreamSomehow(); - * readable.on('data', (chunk) => { - * console.log(`Received ${chunk.length} bytes of data.`); - * readable.pause(); - * console.log('There will be no additional data for 1 second.'); - * setTimeout(() => { - * console.log('Now data will start flowing again.'); - * readable.resume(); - * }, 1000); - * }); - * ``` - * - * The `readable.pause()` method has no effect if there is a `'readable'`event listener. - * @since v0.9.4 - */ - pause(): this; - /** - * The `readable.resume()` method causes an explicitly paused `Readable` stream to - * resume emitting `'data'` events, switching the stream into flowing mode. - * - * The `readable.resume()` method can be used to fully consume the data from a - * stream without actually processing any of that data: - * - * ```js - * getReadableStreamSomehow() - * .resume() - * .on('end', () => { - * console.log('Reached the end, but did not read anything.'); - * }); - * ``` - * - * The `readable.resume()` method has no effect if there is a `'readable'`event listener. - * @since v0.9.4 - */ - resume(): this; - /** - * The `readable.isPaused()` method returns the current operating state of the`Readable`. This is used primarily by the mechanism that underlies the`readable.pipe()` method. In most - * typical cases, there will be no reason to - * use this method directly. - * - * ```js - * const readable = new stream.Readable(); - * - * readable.isPaused(); // === false - * readable.pause(); - * readable.isPaused(); // === true - * readable.resume(); - * readable.isPaused(); // === false - * ``` - * @since v0.11.14 - */ - isPaused(): boolean; - /** - * The `readable.unpipe()` method detaches a `Writable` stream previously attached - * using the {@link pipe} method. - * - * If the `destination` is not specified, then _all_ pipes are detached. - * - * If the `destination` is specified, but no pipe is set up for it, then - * the method does nothing. - * - * ```js - * const fs = require('fs'); - * const readable = getReadableStreamSomehow(); - * const writable = fs.createWriteStream('file.txt'); - * // All the data from readable goes into 'file.txt', - * // but only for the first second. - * readable.pipe(writable); - * setTimeout(() => { - * console.log('Stop writing to file.txt.'); - * readable.unpipe(writable); - * console.log('Manually close the file stream.'); - * writable.end(); - * }, 1000); - * ``` - * @since v0.9.4 - * @param destination Optional specific stream to unpipe - */ - unpipe(destination?: NodeJS.WritableStream): this; - /** - * Passing `chunk` as `null` signals the end of the stream (EOF) and behaves the - * same as `readable.push(null)`, after which no more data can be written. The EOF - * signal is put at the end of the buffer and any buffered data will still be - * flushed. - * - * The `readable.unshift()` method pushes a chunk of data back into the internal - * buffer. This is useful in certain situations where a stream is being consumed by - * code that needs to "un-consume" some amount of data that it has optimistically - * pulled out of the source, so that the data can be passed on to some other party. - * - * The `stream.unshift(chunk)` method cannot be called after the `'end'` event - * has been emitted or a runtime error will be thrown. - * - * Developers using `stream.unshift()` often should consider switching to - * use of a `Transform` stream instead. See the `API for stream implementers` section for more information. - * - * ```js - * // Pull off a header delimited by \n\n. - * // Use unshift() if we get too much. - * // Call the callback with (error, header, stream). - * const { StringDecoder } = require('string_decoder'); - * function parseHeader(stream, callback) { - * stream.on('error', callback); - * stream.on('readable', onReadable); - * const decoder = new StringDecoder('utf8'); - * let header = ''; - * function onReadable() { - * let chunk; - * while (null !== (chunk = stream.read())) { - * const str = decoder.write(chunk); - * if (str.includes('\n\n')) { - * // Found the header boundary. - * const split = str.split(/\n\n/); - * header += split.shift(); - * const remaining = split.join('\n\n'); - * const buf = Buffer.from(remaining, 'utf8'); - * stream.removeListener('error', callback); - * // Remove the 'readable' listener before unshifting. - * stream.removeListener('readable', onReadable); - * if (buf.length) - * stream.unshift(buf); - * // Now the body of the message can be read from the stream. - * callback(null, header, stream); - * return; - * } - * // Still reading the header. - * header += str; - * } - * } - * } - * ``` - * - * Unlike {@link push}, `stream.unshift(chunk)` will not - * end the reading process by resetting the internal reading state of the stream. - * This can cause unexpected results if `readable.unshift()` is called during a - * read (i.e. from within a {@link _read} implementation on a - * custom stream). Following the call to `readable.unshift()` with an immediate {@link push} will reset the reading state appropriately, - * however it is best to simply avoid calling `readable.unshift()` while in the - * process of performing a read. - * @since v0.9.11 - * @param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, `chunk` must be a string, `Buffer`, `Uint8Array` or `null`. For object mode - * streams, `chunk` may be any JavaScript value. - * @param encoding Encoding of string chunks. Must be a valid `Buffer` encoding, such as `'utf8'` or `'ascii'`. - */ - unshift(chunk: any, encoding?: BufferEncoding): void; - /** - * Prior to Node.js 0.10, streams did not implement the entire `stream` module API - * as it is currently defined. (See `Compatibility` for more information.) - * - * When using an older Node.js library that emits `'data'` events and has a {@link pause} method that is advisory only, the`readable.wrap()` method can be used to create a `Readable` - * stream that uses - * the old stream as its data source. - * - * It will rarely be necessary to use `readable.wrap()` but the method has been - * provided as a convenience for interacting with older Node.js applications and - * libraries. - * - * ```js - * const { OldReader } = require('./old-api-module.js'); - * const { Readable } = require('stream'); - * const oreader = new OldReader(); - * const myReader = new Readable().wrap(oreader); - * - * myReader.on('readable', () => { - * myReader.read(); // etc. - * }); - * ``` - * @since v0.9.4 - * @param stream An "old style" readable stream - */ - wrap(stream: NodeJS.ReadableStream): this; - push(chunk: any, encoding?: BufferEncoding): boolean; - _destroy(error: Error | null, callback: (error?: Error | null) => void): void; - /** - * Destroy the stream. Optionally emit an `'error'` event, and emit a `'close'`event (unless `emitClose` is set to `false`). After this call, the readable - * stream will release any internal resources and subsequent calls to `push()`will be ignored. - * - * Once `destroy()` has been called any further calls will be a no-op and no - * further errors except from `_destroy()` may be emitted as `'error'`. - * - * Implementors should not override this method, but instead implement `readable._destroy()`. - * @since v8.0.0 - * @param error Error which will be passed as payload in `'error'` event - */ - destroy(error?: Error): this; - /** - * Event emitter - * The defined events on documents including: - * 1. close - * 2. data - * 3. end - * 4. error - * 5. pause - * 6. readable - * 7. resume - */ - addListener(event: 'close', listener: () => void): this; - addListener(event: 'data', listener: (chunk: any) => void): this; - addListener(event: 'end', listener: () => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: 'pause', listener: () => void): this; - addListener(event: 'readable', listener: () => void): this; - addListener(event: 'resume', listener: () => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'close'): boolean; - emit(event: 'data', chunk: any): boolean; - emit(event: 'end'): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: 'pause'): boolean; - emit(event: 'readable'): boolean; - emit(event: 'resume'): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'close', listener: () => void): this; - on(event: 'data', listener: (chunk: any) => void): this; - on(event: 'end', listener: () => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: 'pause', listener: () => void): this; - on(event: 'readable', listener: () => void): this; - on(event: 'resume', listener: () => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'data', listener: (chunk: any) => void): this; - once(event: 'end', listener: () => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: 'pause', listener: () => void): this; - once(event: 'readable', listener: () => void): this; - once(event: 'resume', listener: () => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'data', listener: (chunk: any) => void): this; - prependListener(event: 'end', listener: () => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: 'pause', listener: () => void): this; - prependListener(event: 'readable', listener: () => void): this; - prependListener(event: 'resume', listener: () => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'data', listener: (chunk: any) => void): this; - prependOnceListener(event: 'end', listener: () => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: 'pause', listener: () => void): this; - prependOnceListener(event: 'readable', listener: () => void): this; - prependOnceListener(event: 'resume', listener: () => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - removeListener(event: 'close', listener: () => void): this; - removeListener(event: 'data', listener: (chunk: any) => void): this; - removeListener(event: 'end', listener: () => void): this; - removeListener(event: 'error', listener: (err: Error) => void): this; - removeListener(event: 'pause', listener: () => void): this; - removeListener(event: 'readable', listener: () => void): this; - removeListener(event: 'resume', listener: () => void): this; - removeListener(event: string | symbol, listener: (...args: any[]) => void): this; - [Symbol.asyncIterator](): AsyncIterableIterator; - } - interface WritableOptions extends StreamOptions { - decodeStrings?: boolean | undefined; - defaultEncoding?: BufferEncoding | undefined; - write?(this: Writable, chunk: any, encoding: BufferEncoding, callback: (error?: Error | null) => void): void; - writev?( - this: Writable, - chunks: Array<{ - chunk: any; - encoding: BufferEncoding; - }>, - callback: (error?: Error | null) => void - ): void; - final?(this: Writable, callback: (error?: Error | null) => void): void; - } - /** - * @since v0.9.4 - */ - class Writable extends Stream implements NodeJS.WritableStream { - /** - * A utility method for creating a `Writable` from a web `WritableStream`. - * @since v17.0.0 - * @experimental - */ - static fromWeb(writableStream: streamWeb.WritableStream, options?: Pick): Writable; - /** - * A utility method for creating a web `WritableStream` from a `Writable`. - * @since v17.0.0 - * @experimental - */ - static toWeb(streamWritable: Writable): streamWeb.WritableStream; - /** - * Is `true` if it is safe to call `writable.write()`, which means - * the stream has not been destroyed, errored or ended. - * @since v11.4.0 - */ - readonly writable: boolean; - /** - * Is `true` after `writable.end()` has been called. This property - * does not indicate whether the data has been flushed, for this use `writable.writableFinished` instead. - * @since v12.9.0 - */ - readonly writableEnded: boolean; - /** - * Is set to `true` immediately before the `'finish'` event is emitted. - * @since v12.6.0 - */ - readonly writableFinished: boolean; - /** - * Return the value of `highWaterMark` passed when creating this `Writable`. - * @since v9.3.0 - */ - readonly writableHighWaterMark: number; - /** - * This property contains the number of bytes (or objects) in the queue - * ready to be written. The value provides introspection data regarding - * the status of the `highWaterMark`. - * @since v9.4.0 - */ - readonly writableLength: number; - /** - * Getter for the property `objectMode` of a given `Writable` stream. - * @since v12.3.0 - */ - readonly writableObjectMode: boolean; - /** - * Number of times `writable.uncork()` needs to be - * called in order to fully uncork the stream. - * @since v13.2.0, v12.16.0 - */ - readonly writableCorked: number; - /** - * Is `true` after `writable.destroy()` has been called. - * @since v8.0.0 - */ - destroyed: boolean; - /** - * Is true after 'close' has been emitted. - * @since v18.0.0 - */ - readonly closed: boolean; - /** - * Returns error if the stream has been destroyed with an error. - * @since v18.0.0 - */ - readonly errored: Error | null; - /** - * Is `true` if the stream's buffer has been full and stream will emit 'drain'. - * @since v15.2.0, v14.17.0 - */ - readonly writableNeedDrain: boolean; - constructor(opts?: WritableOptions); - _write(chunk: any, encoding: BufferEncoding, callback: (error?: Error | null) => void): void; - _writev?( - chunks: Array<{ - chunk: any; - encoding: BufferEncoding; - }>, - callback: (error?: Error | null) => void - ): void; - _construct?(callback: (error?: Error | null) => void): void; - _destroy(error: Error | null, callback: (error?: Error | null) => void): void; - _final(callback: (error?: Error | null) => void): void; - /** - * The `writable.write()` method writes some data to the stream, and calls the - * supplied `callback` once the data has been fully handled. If an error - * occurs, the `callback` will be called with the error as its - * first argument. The `callback` is called asynchronously and before `'error'` is - * emitted. - * - * The return value is `true` if the internal buffer is less than the`highWaterMark` configured when the stream was created after admitting `chunk`. - * If `false` is returned, further attempts to write data to the stream should - * stop until the `'drain'` event is emitted. - * - * While a stream is not draining, calls to `write()` will buffer `chunk`, and - * return false. Once all currently buffered chunks are drained (accepted for - * delivery by the operating system), the `'drain'` event will be emitted. - * Once `write()` returns false, do not write more chunks - * until the `'drain'` event is emitted. While calling `write()` on a stream that - * is not draining is allowed, Node.js will buffer all written chunks until - * maximum memory usage occurs, at which point it will abort unconditionally. - * Even before it aborts, high memory usage will cause poor garbage collector - * performance and high RSS (which is not typically released back to the system, - * even after the memory is no longer required). Since TCP sockets may never - * drain if the remote peer does not read the data, writing a socket that is - * not draining may lead to a remotely exploitable vulnerability. - * - * Writing data while the stream is not draining is particularly - * problematic for a `Transform`, because the `Transform` streams are paused - * by default until they are piped or a `'data'` or `'readable'` event handler - * is added. - * - * If the data to be written can be generated or fetched on demand, it is - * recommended to encapsulate the logic into a `Readable` and use {@link pipe}. However, if calling `write()` is preferred, it is - * possible to respect backpressure and avoid memory issues using the `'drain'` event: - * - * ```js - * function write(data, cb) { - * if (!stream.write(data)) { - * stream.once('drain', cb); - * } else { - * process.nextTick(cb); - * } - * } - * - * // Wait for cb to be called before doing any other write. - * write('hello', () => { - * console.log('Write completed, do more writes now.'); - * }); - * ``` - * - * A `Writable` stream in object mode will always ignore the `encoding` argument. - * @since v0.9.4 - * @param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a string, `Buffer` or `Uint8Array`. For object mode streams, `chunk` may be any - * JavaScript value other than `null`. - * @param [encoding='utf8'] The encoding, if `chunk` is a string. - * @param callback Callback for when this chunk of data is flushed. - * @return `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`. - */ - write(chunk: any, callback?: (error: Error | null | undefined) => void): boolean; - write(chunk: any, encoding: BufferEncoding, callback?: (error: Error | null | undefined) => void): boolean; - /** - * The `writable.setDefaultEncoding()` method sets the default `encoding` for a `Writable` stream. - * @since v0.11.15 - * @param encoding The new default encoding - */ - setDefaultEncoding(encoding: BufferEncoding): this; - /** - * Calling the `writable.end()` method signals that no more data will be written - * to the `Writable`. The optional `chunk` and `encoding` arguments allow one - * final additional chunk of data to be written immediately before closing the - * stream. - * - * Calling the {@link write} method after calling {@link end} will raise an error. - * - * ```js - * // Write 'hello, ' and then end with 'world!'. - * const fs = require('fs'); - * const file = fs.createWriteStream('example.txt'); - * file.write('hello, '); - * file.end('world!'); - * // Writing more now is not allowed! - * ``` - * @since v0.9.4 - * @param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a string, `Buffer` or `Uint8Array`. For object mode streams, `chunk` may be any - * JavaScript value other than `null`. - * @param encoding The encoding if `chunk` is a string - * @param callback Callback for when the stream is finished. - */ - end(cb?: () => void): this; - end(chunk: any, cb?: () => void): this; - end(chunk: any, encoding: BufferEncoding, cb?: () => void): this; - /** - * The `writable.cork()` method forces all written data to be buffered in memory. - * The buffered data will be flushed when either the {@link uncork} or {@link end} methods are called. - * - * The primary intent of `writable.cork()` is to accommodate a situation in which - * several small chunks are written to the stream in rapid succession. Instead of - * immediately forwarding them to the underlying destination, `writable.cork()`buffers all the chunks until `writable.uncork()` is called, which will pass them - * all to `writable._writev()`, if present. This prevents a head-of-line blocking - * situation where data is being buffered while waiting for the first small chunk - * to be processed. However, use of `writable.cork()` without implementing`writable._writev()` may have an adverse effect on throughput. - * - * See also: `writable.uncork()`, `writable._writev()`. - * @since v0.11.2 - */ - cork(): void; - /** - * The `writable.uncork()` method flushes all data buffered since {@link cork} was called. - * - * When using `writable.cork()` and `writable.uncork()` to manage the buffering - * of writes to a stream, defer calls to `writable.uncork()` using`process.nextTick()`. Doing so allows batching of all`writable.write()` calls that occur within a given Node.js event - * loop phase. - * - * ```js - * stream.cork(); - * stream.write('some '); - * stream.write('data '); - * process.nextTick(() => stream.uncork()); - * ``` - * - * If the `writable.cork()` method is called multiple times on a stream, the - * same number of calls to `writable.uncork()` must be called to flush the buffered - * data. - * - * ```js - * stream.cork(); - * stream.write('some '); - * stream.cork(); - * stream.write('data '); - * process.nextTick(() => { - * stream.uncork(); - * // The data will not be flushed until uncork() is called a second time. - * stream.uncork(); - * }); - * ``` - * - * See also: `writable.cork()`. - * @since v0.11.2 - */ - uncork(): void; - /** - * Destroy the stream. Optionally emit an `'error'` event, and emit a `'close'`event (unless `emitClose` is set to `false`). After this call, the writable - * stream has ended and subsequent calls to `write()` or `end()` will result in - * an `ERR_STREAM_DESTROYED` error. - * This is a destructive and immediate way to destroy a stream. Previous calls to`write()` may not have drained, and may trigger an `ERR_STREAM_DESTROYED` error. - * Use `end()` instead of destroy if data should flush before close, or wait for - * the `'drain'` event before destroying the stream. - * - * Once `destroy()` has been called any further calls will be a no-op and no - * further errors except from `_destroy()` may be emitted as `'error'`. - * - * Implementors should not override this method, - * but instead implement `writable._destroy()`. - * @since v8.0.0 - * @param error Optional, an error to emit with `'error'` event. - */ - destroy(error?: Error): this; - /** - * Event emitter - * The defined events on documents including: - * 1. close - * 2. drain - * 3. error - * 4. finish - * 5. pipe - * 6. unpipe - */ - addListener(event: 'close', listener: () => void): this; - addListener(event: 'drain', listener: () => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: 'finish', listener: () => void): this; - addListener(event: 'pipe', listener: (src: Readable) => void): this; - addListener(event: 'unpipe', listener: (src: Readable) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'close'): boolean; - emit(event: 'drain'): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: 'finish'): boolean; - emit(event: 'pipe', src: Readable): boolean; - emit(event: 'unpipe', src: Readable): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'close', listener: () => void): this; - on(event: 'drain', listener: () => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: 'finish', listener: () => void): this; - on(event: 'pipe', listener: (src: Readable) => void): this; - on(event: 'unpipe', listener: (src: Readable) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'drain', listener: () => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: 'finish', listener: () => void): this; - once(event: 'pipe', listener: (src: Readable) => void): this; - once(event: 'unpipe', listener: (src: Readable) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'drain', listener: () => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: 'finish', listener: () => void): this; - prependListener(event: 'pipe', listener: (src: Readable) => void): this; - prependListener(event: 'unpipe', listener: (src: Readable) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'drain', listener: () => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: 'finish', listener: () => void): this; - prependOnceListener(event: 'pipe', listener: (src: Readable) => void): this; - prependOnceListener(event: 'unpipe', listener: (src: Readable) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - removeListener(event: 'close', listener: () => void): this; - removeListener(event: 'drain', listener: () => void): this; - removeListener(event: 'error', listener: (err: Error) => void): this; - removeListener(event: 'finish', listener: () => void): this; - removeListener(event: 'pipe', listener: (src: Readable) => void): this; - removeListener(event: 'unpipe', listener: (src: Readable) => void): this; - removeListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - interface DuplexOptions extends ReadableOptions, WritableOptions { - allowHalfOpen?: boolean | undefined; - readableObjectMode?: boolean | undefined; - writableObjectMode?: boolean | undefined; - readableHighWaterMark?: number | undefined; - writableHighWaterMark?: number | undefined; - writableCorked?: number | undefined; - construct?(this: Duplex, callback: (error?: Error | null) => void): void; - read?(this: Duplex, size: number): void; - write?(this: Duplex, chunk: any, encoding: BufferEncoding, callback: (error?: Error | null) => void): void; - writev?( - this: Duplex, - chunks: Array<{ - chunk: any; - encoding: BufferEncoding; - }>, - callback: (error?: Error | null) => void - ): void; - final?(this: Duplex, callback: (error?: Error | null) => void): void; - destroy?(this: Duplex, error: Error | null, callback: (error: Error | null) => void): void; - } - /** - * Duplex streams are streams that implement both the `Readable` and `Writable` interfaces. - * - * Examples of `Duplex` streams include: - * - * * `TCP sockets` - * * `zlib streams` - * * `crypto streams` - * @since v0.9.4 - */ - class Duplex extends Readable implements Writable { - readonly writable: boolean; - readonly writableEnded: boolean; - readonly writableFinished: boolean; - readonly writableHighWaterMark: number; - readonly writableLength: number; - readonly writableObjectMode: boolean; - readonly writableCorked: number; - readonly writableNeedDrain: boolean; - readonly closed: boolean; - readonly errored: Error | null; - /** - * If `false` then the stream will automatically end the writable side when the - * readable side ends. Set initially by the `allowHalfOpen` constructor option, - * which defaults to `false`. - * - * This can be changed manually to change the half-open behavior of an existing`Duplex` stream instance, but must be changed before the `'end'` event is - * emitted. - * @since v0.9.4 - */ - allowHalfOpen: boolean; - constructor(opts?: DuplexOptions); - /** - * A utility method for creating duplex streams. - * - * - `Stream` converts writable stream into writable `Duplex` and readable stream - * to `Duplex`. - * - `Blob` converts into readable `Duplex`. - * - `string` converts into readable `Duplex`. - * - `ArrayBuffer` converts into readable `Duplex`. - * - `AsyncIterable` converts into a readable `Duplex`. Cannot yield `null`. - * - `AsyncGeneratorFunction` converts into a readable/writable transform - * `Duplex`. Must take a source `AsyncIterable` as first parameter. Cannot yield - * `null`. - * - `AsyncFunction` converts into a writable `Duplex`. Must return - * either `null` or `undefined` - * - `Object ({ writable, readable })` converts `readable` and - * `writable` into `Stream` and then combines them into `Duplex` where the - * `Duplex` will write to the `writable` and read from the `readable`. - * - `Promise` converts into readable `Duplex`. Value `null` is ignored. - * - * @since v16.8.0 - */ - static from(src: Stream | NodeBlob | ArrayBuffer | string | Iterable | AsyncIterable | AsyncGeneratorFunction | Promise | Object): Duplex; - _write(chunk: any, encoding: BufferEncoding, callback: (error?: Error | null) => void): void; - _writev?( - chunks: Array<{ - chunk: any; - encoding: BufferEncoding; - }>, - callback: (error?: Error | null) => void - ): void; - _destroy(error: Error | null, callback: (error: Error | null) => void): void; - _final(callback: (error?: Error | null) => void): void; - write(chunk: any, encoding?: BufferEncoding, cb?: (error: Error | null | undefined) => void): boolean; - write(chunk: any, cb?: (error: Error | null | undefined) => void): boolean; - setDefaultEncoding(encoding: BufferEncoding): this; - end(cb?: () => void): this; - end(chunk: any, cb?: () => void): this; - end(chunk: any, encoding?: BufferEncoding, cb?: () => void): this; - cork(): void; - uncork(): void; - } - type TransformCallback = (error?: Error | null, data?: any) => void; - interface TransformOptions extends DuplexOptions { - construct?(this: Transform, callback: (error?: Error | null) => void): void; - read?(this: Transform, size: number): void; - write?(this: Transform, chunk: any, encoding: BufferEncoding, callback: (error?: Error | null) => void): void; - writev?( - this: Transform, - chunks: Array<{ - chunk: any; - encoding: BufferEncoding; - }>, - callback: (error?: Error | null) => void - ): void; - final?(this: Transform, callback: (error?: Error | null) => void): void; - destroy?(this: Transform, error: Error | null, callback: (error: Error | null) => void): void; - transform?(this: Transform, chunk: any, encoding: BufferEncoding, callback: TransformCallback): void; - flush?(this: Transform, callback: TransformCallback): void; - } - /** - * Transform streams are `Duplex` streams where the output is in some way - * related to the input. Like all `Duplex` streams, `Transform` streams - * implement both the `Readable` and `Writable` interfaces. - * - * Examples of `Transform` streams include: - * - * * `zlib streams` - * * `crypto streams` - * @since v0.9.4 - */ - class Transform extends Duplex { - constructor(opts?: TransformOptions); - _transform(chunk: any, encoding: BufferEncoding, callback: TransformCallback): void; - _flush(callback: TransformCallback): void; - } - /** - * The `stream.PassThrough` class is a trivial implementation of a `Transform` stream that simply passes the input bytes across to the output. Its purpose is - * primarily for examples and testing, but there are some use cases where`stream.PassThrough` is useful as a building block for novel sorts of streams. - */ - class PassThrough extends Transform {} - /** - * Attaches an AbortSignal to a readable or writeable stream. This lets code - * control stream destruction using an `AbortController`. - * - * Calling `abort` on the `AbortController` corresponding to the passed`AbortSignal` will behave the same way as calling `.destroy(new AbortError())`on the stream. - * - * ```js - * const fs = require('fs'); - * - * const controller = new AbortController(); - * const read = addAbortSignal( - * controller.signal, - * fs.createReadStream(('object.json')) - * ); - * // Later, abort the operation closing the stream - * controller.abort(); - * ``` - * - * Or using an `AbortSignal` with a readable stream as an async iterable: - * - * ```js - * const controller = new AbortController(); - * setTimeout(() => controller.abort(), 10_000); // set a timeout - * const stream = addAbortSignal( - * controller.signal, - * fs.createReadStream(('object.json')) - * ); - * (async () => { - * try { - * for await (const chunk of stream) { - * await process(chunk); - * } - * } catch (e) { - * if (e.name === 'AbortError') { - * // The operation was cancelled - * } else { - * throw e; - * } - * } - * })(); - * ``` - * @since v15.4.0 - * @param signal A signal representing possible cancellation - * @param stream a stream to attach a signal to - */ - function addAbortSignal(signal: AbortSignal, stream: T): T; - interface FinishedOptions extends Abortable { - error?: boolean | undefined; - readable?: boolean | undefined; - writable?: boolean | undefined; - } - /** - * A function to get notified when a stream is no longer readable, writable - * or has experienced an error or a premature close event. - * - * ```js - * const { finished } = require('stream'); - * - * const rs = fs.createReadStream('archive.tar'); - * - * finished(rs, (err) => { - * if (err) { - * console.error('Stream failed.', err); - * } else { - * console.log('Stream is done reading.'); - * } - * }); - * - * rs.resume(); // Drain the stream. - * ``` - * - * Especially useful in error handling scenarios where a stream is destroyed - * prematurely (like an aborted HTTP request), and will not emit `'end'`or `'finish'`. - * - * The `finished` API provides promise version: - * - * ```js - * const { finished } = require('stream/promises'); - * - * const rs = fs.createReadStream('archive.tar'); - * - * async function run() { - * await finished(rs); - * console.log('Stream is done reading.'); - * } - * - * run().catch(console.error); - * rs.resume(); // Drain the stream. - * ``` - * - * `stream.finished()` leaves dangling event listeners (in particular`'error'`, `'end'`, `'finish'` and `'close'`) after `callback` has been - * invoked. The reason for this is so that unexpected `'error'` events (due to - * incorrect stream implementations) do not cause unexpected crashes. - * If this is unwanted behavior then the returned cleanup function needs to be - * invoked in the callback: - * - * ```js - * const cleanup = finished(rs, (err) => { - * cleanup(); - * // ... - * }); - * ``` - * @since v10.0.0 - * @param stream A readable and/or writable stream. - * @param callback A callback function that takes an optional error argument. - * @return A cleanup function which removes all registered listeners. - */ - function finished(stream: NodeJS.ReadableStream | NodeJS.WritableStream | NodeJS.ReadWriteStream, options: FinishedOptions, callback: (err?: NodeJS.ErrnoException | null) => void): () => void; - function finished(stream: NodeJS.ReadableStream | NodeJS.WritableStream | NodeJS.ReadWriteStream, callback: (err?: NodeJS.ErrnoException | null) => void): () => void; - namespace finished { - function __promisify__(stream: NodeJS.ReadableStream | NodeJS.WritableStream | NodeJS.ReadWriteStream, options?: FinishedOptions): Promise; - } - type PipelineSourceFunction = () => Iterable | AsyncIterable; - type PipelineSource = Iterable | AsyncIterable | NodeJS.ReadableStream | PipelineSourceFunction; - type PipelineTransform, U> = - | NodeJS.ReadWriteStream - | ((source: S extends (...args: any[]) => Iterable | AsyncIterable ? AsyncIterable : S) => AsyncIterable); - type PipelineTransformSource = PipelineSource | PipelineTransform; - type PipelineDestinationIterableFunction = (source: AsyncIterable) => AsyncIterable; - type PipelineDestinationPromiseFunction = (source: AsyncIterable) => Promise

    ; - type PipelineDestination, P> = S extends PipelineTransformSource - ? NodeJS.WritableStream | PipelineDestinationIterableFunction | PipelineDestinationPromiseFunction - : never; - type PipelineCallback> = S extends PipelineDestinationPromiseFunction - ? (err: NodeJS.ErrnoException | null, value: P) => void - : (err: NodeJS.ErrnoException | null) => void; - type PipelinePromise> = S extends PipelineDestinationPromiseFunction ? Promise

    : Promise; - interface PipelineOptions { - signal: AbortSignal; - } - /** - * A module method to pipe between streams and generators forwarding errors and - * properly cleaning up and provide a callback when the pipeline is complete. - * - * ```js - * const { pipeline } = require('stream'); - * const fs = require('fs'); - * const zlib = require('zlib'); - * - * // Use the pipeline API to easily pipe a series of streams - * // together and get notified when the pipeline is fully done. - * - * // A pipeline to gzip a potentially huge tar file efficiently: - * - * pipeline( - * fs.createReadStream('archive.tar'), - * zlib.createGzip(), - * fs.createWriteStream('archive.tar.gz'), - * (err) => { - * if (err) { - * console.error('Pipeline failed.', err); - * } else { - * console.log('Pipeline succeeded.'); - * } - * } - * ); - * ``` - * - * The `pipeline` API provides a promise version, which can also - * receive an options argument as the last parameter with a`signal` `AbortSignal` property. When the signal is aborted,`destroy` will be called on the underlying pipeline, with - * an`AbortError`. - * - * ```js - * const { pipeline } = require('stream/promises'); - * - * async function run() { - * await pipeline( - * fs.createReadStream('archive.tar'), - * zlib.createGzip(), - * fs.createWriteStream('archive.tar.gz') - * ); - * console.log('Pipeline succeeded.'); - * } - * - * run().catch(console.error); - * ``` - * - * To use an `AbortSignal`, pass it inside an options object, - * as the last argument: - * - * ```js - * const { pipeline } = require('stream/promises'); - * - * async function run() { - * const ac = new AbortController(); - * const signal = ac.signal; - * - * setTimeout(() => ac.abort(), 1); - * await pipeline( - * fs.createReadStream('archive.tar'), - * zlib.createGzip(), - * fs.createWriteStream('archive.tar.gz'), - * { signal }, - * ); - * } - * - * run().catch(console.error); // AbortError - * ``` - * - * The `pipeline` API also supports async generators: - * - * ```js - * const { pipeline } = require('stream/promises'); - * const fs = require('fs'); - * - * async function run() { - * await pipeline( - * fs.createReadStream('lowercase.txt'), - * async function* (source, { signal }) { - * source.setEncoding('utf8'); // Work with strings rather than `Buffer`s. - * for await (const chunk of source) { - * yield await processChunk(chunk, { signal }); - * } - * }, - * fs.createWriteStream('uppercase.txt') - * ); - * console.log('Pipeline succeeded.'); - * } - * - * run().catch(console.error); - * ``` - * - * Remember to handle the `signal` argument passed into the async generator. - * Especially in the case where the async generator is the source for the - * pipeline (i.e. first argument) or the pipeline will never complete. - * - * ```js - * const { pipeline } = require('stream/promises'); - * const fs = require('fs'); - * - * async function run() { - * await pipeline( - * async function* ({ signal }) { - * await someLongRunningfn({ signal }); - * yield 'asd'; - * }, - * fs.createWriteStream('uppercase.txt') - * ); - * console.log('Pipeline succeeded.'); - * } - * - * run().catch(console.error); - * ``` - * - * `stream.pipeline()` will call `stream.destroy(err)` on all streams except: - * - * * `Readable` streams which have emitted `'end'` or `'close'`. - * * `Writable` streams which have emitted `'finish'` or `'close'`. - * - * `stream.pipeline()` leaves dangling event listeners on the streams - * after the `callback` has been invoked. In the case of reuse of streams after - * failure, this can cause event listener leaks and swallowed errors. If the last - * stream is readable, dangling event listeners will be removed so that the last - * stream can be consumed later. - * - * `stream.pipeline()` closes all the streams when an error is raised. - * The `IncomingRequest` usage with `pipeline` could lead to an unexpected behavior - * once it would destroy the socket without sending the expected response. - * See the example below: - * - * ```js - * const fs = require('fs'); - * const http = require('http'); - * const { pipeline } = require('stream'); - * - * const server = http.createServer((req, res) => { - * const fileStream = fs.createReadStream('./fileNotExist.txt'); - * pipeline(fileStream, res, (err) => { - * if (err) { - * console.log(err); // No such file - * // this message can't be sent once `pipeline` already destroyed the socket - * return res.end('error!!!'); - * } - * }); - * }); - * ``` - * @since v10.0.0 - * @param callback Called when the pipeline is fully done. - */ - function pipeline, B extends PipelineDestination>( - source: A, - destination: B, - callback?: PipelineCallback - ): B extends NodeJS.WritableStream ? B : NodeJS.WritableStream; - function pipeline, T1 extends PipelineTransform, B extends PipelineDestination>( - source: A, - transform1: T1, - destination: B, - callback?: PipelineCallback - ): B extends NodeJS.WritableStream ? B : NodeJS.WritableStream; - function pipeline, T1 extends PipelineTransform, T2 extends PipelineTransform, B extends PipelineDestination>( - source: A, - transform1: T1, - transform2: T2, - destination: B, - callback?: PipelineCallback - ): B extends NodeJS.WritableStream ? B : NodeJS.WritableStream; - function pipeline< - A extends PipelineSource, - T1 extends PipelineTransform, - T2 extends PipelineTransform, - T3 extends PipelineTransform, - B extends PipelineDestination - >(source: A, transform1: T1, transform2: T2, transform3: T3, destination: B, callback?: PipelineCallback): B extends NodeJS.WritableStream ? B : NodeJS.WritableStream; - function pipeline< - A extends PipelineSource, - T1 extends PipelineTransform, - T2 extends PipelineTransform, - T3 extends PipelineTransform, - T4 extends PipelineTransform, - B extends PipelineDestination - >(source: A, transform1: T1, transform2: T2, transform3: T3, transform4: T4, destination: B, callback?: PipelineCallback): B extends NodeJS.WritableStream ? B : NodeJS.WritableStream; - function pipeline( - streams: ReadonlyArray, - callback?: (err: NodeJS.ErrnoException | null) => void - ): NodeJS.WritableStream; - function pipeline( - stream1: NodeJS.ReadableStream, - stream2: NodeJS.ReadWriteStream | NodeJS.WritableStream, - ...streams: Array void)> - ): NodeJS.WritableStream; - namespace pipeline { - function __promisify__, B extends PipelineDestination>(source: A, destination: B, options?: PipelineOptions): PipelinePromise; - function __promisify__, T1 extends PipelineTransform, B extends PipelineDestination>( - source: A, - transform1: T1, - destination: B, - options?: PipelineOptions - ): PipelinePromise; - function __promisify__, T1 extends PipelineTransform, T2 extends PipelineTransform, B extends PipelineDestination>( - source: A, - transform1: T1, - transform2: T2, - destination: B, - options?: PipelineOptions - ): PipelinePromise; - function __promisify__< - A extends PipelineSource, - T1 extends PipelineTransform, - T2 extends PipelineTransform, - T3 extends PipelineTransform, - B extends PipelineDestination - >(source: A, transform1: T1, transform2: T2, transform3: T3, destination: B, options?: PipelineOptions): PipelinePromise; - function __promisify__< - A extends PipelineSource, - T1 extends PipelineTransform, - T2 extends PipelineTransform, - T3 extends PipelineTransform, - T4 extends PipelineTransform, - B extends PipelineDestination - >(source: A, transform1: T1, transform2: T2, transform3: T3, transform4: T4, destination: B, options?: PipelineOptions): PipelinePromise; - function __promisify__(streams: ReadonlyArray, options?: PipelineOptions): Promise; - function __promisify__( - stream1: NodeJS.ReadableStream, - stream2: NodeJS.ReadWriteStream | NodeJS.WritableStream, - ...streams: Array - ): Promise; - } - interface Pipe { - close(): void; - hasRef(): boolean; - ref(): void; - unref(): void; - } - - /** - * Returns whether the stream has encountered an error. - * @since v17.3.0 - */ - function isErrored(stream: Readable | Writable | NodeJS.ReadableStream | NodeJS.WritableStream): boolean; - - /** - * Returns whether the stream is readable. - * @since v17.4.0 - */ - function isReadable(stream: Readable | NodeJS.ReadableStream): boolean; - - const promises: typeof streamPromises; - const consumers: typeof streamConsumers; - } - export = internal; -} -declare module 'node:stream' { - import stream = require('stream'); - export = stream; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crash Time 3 Free Download Full 64 [2021].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crash Time 3 Free Download Full 64 [2021].md deleted file mode 100644 index 9952c57c40be14baef3da202653f247b738f9045..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crash Time 3 Free Download Full 64 [2021].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Crash Time 3 Free Download Full 64


    DOWNLOAD ⚹⚹⚹ https://urlgoal.com/2uCKYw



    -
    -Crash Time III - RIP, is Game Low Spec, Genre from this game is Racing, Download Crash Time III - RIP from pcgamelow with Single Link ( Google Drive ). ... Processor: Intel Pentium 4 at 3.2 GHz / Athlon 64 3200+ or better. Memory: 1 GB for ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/rgres/Seg2Sat/frontend/build/_app/immutable/error.svelte-d9523301.js b/spaces/rgres/Seg2Sat/frontend/build/_app/immutable/error.svelte-d9523301.js deleted file mode 100644 index 1c200845989d5bbde3173a928c2ca48d13743a81..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/build/_app/immutable/error.svelte-d9523301.js +++ /dev/null @@ -1 +0,0 @@ -import{S as w,i as y,s as z,e as E,t as v,c as d,a as b,h as P,d as o,g as u,J as R,j as N,k as S,l as C,m as j,E as H}from"./chunks/index-bcf2726a.js";function J(r){let l,t=r[1].frame+"",a;return{c(){l=E("pre"),a=v(t)},l(f){l=d(f,"PRE",{});var s=b(l);a=P(s,t),s.forEach(o)},m(f,s){u(f,l,s),R(l,a)},p(f,s){s&2&&t!==(t=f[1].frame+"")&&N(a,t)},d(f){f&&o(l)}}}function h(r){let l,t=r[1].stack+"",a;return{c(){l=E("pre"),a=v(t)},l(f){l=d(f,"PRE",{});var s=b(l);a=P(s,t),s.forEach(o)},m(f,s){u(f,l,s),R(l,a)},p(f,s){s&2&&t!==(t=f[1].stack+"")&&N(a,t)},d(f){f&&o(l)}}}function A(r){let l,t,a,f,s=r[1].message+"",c,k,n,p,i=r[1].frame&&J(r),_=r[1].stack&&h(r);return{c(){l=E("h1"),t=v(r[0]),a=S(),f=E("pre"),c=v(s),k=S(),i&&i.c(),n=S(),_&&_.c(),p=C()},l(e){l=d(e,"H1",{});var m=b(l);t=P(m,r[0]),m.forEach(o),a=j(e),f=d(e,"PRE",{});var q=b(f);c=P(q,s),q.forEach(o),k=j(e),i&&i.l(e),n=j(e),_&&_.l(e),p=C()},m(e,m){u(e,l,m),R(l,t),u(e,a,m),u(e,f,m),R(f,c),u(e,k,m),i&&i.m(e,m),u(e,n,m),_&&_.m(e,m),u(e,p,m)},p(e,[m]){m&1&&N(t,e[0]),m&2&&s!==(s=e[1].message+"")&&N(c,s),e[1].frame?i?i.p(e,m):(i=J(e),i.c(),i.m(n.parentNode,n)):i&&(i.d(1),i=null),e[1].stack?_?_.p(e,m):(_=h(e),_.c(),_.m(p.parentNode,p)):_&&(_.d(1),_=null)},i:H,o:H,d(e){e&&o(l),e&&o(a),e&&o(f),e&&o(k),i&&i.d(e),e&&o(n),_&&_.d(e),e&&o(p)}}}function F({error:r,status:l}){return{props:{error:r,status:l}}}function B(r,l,t){let{status:a}=l,{error:f}=l;return r.$$set=s=>{"status"in s&&t(0,a=s.status),"error"in s&&t(1,f=s.error)},[a,f]}class G extends w{constructor(l){super(),y(this,l,B,A,z,{status:0,error:1})}}export{G as default,F as load}; diff --git a/spaces/rgres/Seg2Sat/frontend/build/index.html b/spaces/rgres/Seg2Sat/frontend/build/index.html deleted file mode 100644 index 0b288f2691e52253be9dd6fdba698ad0e61fed67..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/build/index.html +++ /dev/null @@ -1,152 +0,0 @@ - - - - - - - - - - - - - - - - - - - - -

    Drawing to Map

    -

    This space is for the ControlNet model Drawing2Map

    - - -

    Brush Type

    -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    -

    Brush Size

    -
    -
    -
    -
    - - deciduous -
    -
    -
    -
    - -
    - - - - - -

    Select a Template

    -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    - -
    -
    - -

    Prompt

    - - -

    Modifier

    - - - -

    Random Seed

    - - -

    Sample Steps

    -
    -
    -
    -
    - - - - - diff --git a/spaces/rinong/StyleGAN-NADA/e4e/models/stylegan2/op/fused_bias_act.cpp b/spaces/rinong/StyleGAN-NADA/e4e/models/stylegan2/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/rinong/StyleGAN-NADA/e4e/models/stylegan2/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/robin0307/MMOCR/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py b/spaces/robin0307/MMOCR/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py deleted file mode 100644 index 983378118b4d589f531a7f401a06d238966a45d4..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', '../../_base_/recog_models/sar.py', - '../../_base_/schedules/schedule_adam_step_5e.py', - '../../_base_/recog_pipelines/sar_pipeline.py', - '../../_base_/recog_datasets/ST_SA_MJ_real_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - samples_per_gpu=64, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/voc.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/voc.py deleted file mode 100644 index 0a3ea7aac75c7ef3ee1576ec05f251fd47412b72..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/voc.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections import OrderedDict - -from mmcv.utils import print_log - -from mmdet.core import eval_map, eval_recalls -from .builder import DATASETS -from .xml_style import XMLDataset - - -@DATASETS.register_module() -class VOCDataset(XMLDataset): - - CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', - 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', - 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', - 'tvmonitor') - - PALETTE = [(106, 0, 228), (119, 11, 32), (165, 42, 42), (0, 0, 192), - (197, 226, 255), (0, 60, 100), (0, 0, 142), (255, 77, 255), - (153, 69, 1), (120, 166, 157), (0, 182, 199), (0, 226, 252), - (182, 182, 255), (0, 0, 230), (220, 20, 60), (163, 255, 0), - (0, 82, 0), (3, 95, 161), (0, 80, 100), (183, 130, 88)] - - def __init__(self, **kwargs): - super(VOCDataset, self).__init__(**kwargs) - if 'VOC2007' in self.img_prefix: - self.year = 2007 - elif 'VOC2012' in self.img_prefix: - self.year = 2012 - else: - raise ValueError('Cannot infer dataset year from img_prefix') - - def evaluate(self, - results, - metric='mAP', - logger=None, - proposal_nums=(100, 300, 1000), - iou_thr=0.5, - scale_ranges=None): - """Evaluate in VOC protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'mAP', 'recall'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - scale_ranges (list[tuple], optional): Scale ranges for evaluating - mAP. If not specified, all bounding boxes would be included in - evaluation. Default: None. - - Returns: - dict[str, float]: AP/recall metrics. - """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP', 'recall'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - if metric == 'mAP': - assert isinstance(iou_thrs, list) - if self.year == 2007: - ds_name = 'voc07' - else: - ds_name = self.CLASSES - mean_aps = [] - for iou_thr in iou_thrs: - print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}') - # Follow the official implementation, - # http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar - # we should use the legacy coordinate system in mmdet 1.x, - # which means w, h should be computed as 'x2 - x1 + 1` and - # `y2 - y1 + 1` - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=None, - iou_thr=iou_thr, - dataset=ds_name, - logger=logger, - use_legacy_coordinate=True) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - eval_results.move_to_end('mAP', last=False) - elif metric == 'recall': - gt_bboxes = [ann['bboxes'] for ann in annotations] - recalls = eval_recalls( - gt_bboxes, - results, - proposal_nums, - iou_thrs, - logger=logger, - use_legacy_coordinate=True) - for i, num in enumerate(proposal_nums): - for j, iou_thr in enumerate(iou_thrs): - eval_results[f'recall@{num}@{iou_thr}'] = recalls[i, j] - if recalls.shape[1] > 1: - ar = recalls.mean(axis=1) - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - return eval_results diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/tood.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/tood.py deleted file mode 100644 index 7dd18c3c96abd0fb4d4eac5a6fb708b242be0571..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/tood.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class TOOD(SingleStageDetector): - r"""Implementation of `TOOD: Task-aligned One-stage Object Detection. - `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(TOOD, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def set_epoch(self, epoch): - self.bbox_head.epoch = epoch diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/ops/src/cuda/ms_deform_attn_cuda.h b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/ops/src/cuda/ms_deform_attn_cuda.h deleted file mode 100644 index c7ae53f99c820ce6193b608ad344550348a0b42c..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/ops/src/cuda/ms_deform_attn_cuda.h +++ /dev/null @@ -1,30 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -at::Tensor ms_deform_attn_cuda_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector ms_deform_attn_cuda_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Converter MHT to DOC Free Download The Ultimate Guide to Converting MHT Files to DOC Format.md b/spaces/rorallitri/biomedical-language-models/logs/Converter MHT to DOC Free Download The Ultimate Guide to Converting MHT Files to DOC Format.md deleted file mode 100644 index abf1127f9867497f3c60aad08f899b585e29d58f..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Converter MHT to DOC Free Download The Ultimate Guide to Converting MHT Files to DOC Format.md +++ /dev/null @@ -1,27 +0,0 @@ - -

    File extension.MHTCategoryDocument FileDescriptionMHT (MHTML) is intended for archiving web pages and combining the HTML code and resources - images, animation, audio files, into a single HTML file. These resources are usually represented as external links. The content of resulting files is encoded in the same way as in HTML email messages. MHT files use MIME type encoding. MHT files are directly supported by Internet Explorer and can be opened in it. Also, they can be opened in Firefox and other browsers, providing users install add-on software.Associated programsAny Web Browser (e.g. Internet Explorer, Safari, Firefox, Google Chrome)Developed byWorld Wide Web Consortium & WHATWGMIME typetext/htmlUseful linksGet more information about MHT at fileinfo.com
    MHT file description in wikipediaDoc (Word) FileFile extension.DOCCategoryDocument FileDescriptionDOC is a native MS Word text format that supports markup and rich text styling. As opposite to TXT, together with texts DOC file can contain various formatting parameters, tables, images, other graphic elements and charts. Documents of such type are readable by MS Word, free Microsoft Word Viewer and many open source packages like LibreOffice. DOC files can be read and edited on Android OS by Kingsoft Office For Android. Since Word 2007 new, improved format version is used - DOCX.Associated programsAbiWord
    Apple Pages
    AppleWorks
    KWord
    Microsoft Word
    StarOfficeDeveloped byMicrosoftMIME typeapplication/mswordUseful linksMore detailed information on DOC filesOnline Converters

    • Audio Converter
    • Image Converter
    • Excel Converter
    • XML Converter
    • Doc Converter
    • PDF Converter
    • Mail Converter
    • PDF Combine
    • TIFF Combine
    Related Articles
    • Total HTML Converter
    • Convert MHT to Doc
    • Convert MHT to PDF
    Related Converters
    • MHT to PDF
    • MHT to JPG
    • JFIF to PNG
    • DWG to PDF
    • PDN to PNG
    • XWM to MP3
    • All online converters
    Top online converters
    • JFIF to JPG
    • PDF to Doc
    • JFIF to PNG
    • OPUS to MP3
    • DWG to PDF
    Rating MHT to DOC 4.9 (645 votes)Rate It POPULAR

    -

    The MHT converter application has been designed in such a way that it does not need Outlook installation on the system to perform the conversion of data. The software only requires MHT file to convert data. However, to view the data after conversion Outlook is required.

    -

    converter mht to doc free download


    DOWNLOADhttps://tinurll.com/2uzloS



    -

    MHT Converter free download version is available that will completely make you understand and tell you the complete process about how it works. It is just to evaluate and analyse the working of MHT document converter. The free download version will allow only 25 MHT files for conversion. To perform unlimited MHT file conversion you need to purchase license key.

    -

    MHT Converter Wizard is a know-how-based solution for importing file format files. It offers a free preview before buying a licensed version. The great thing about the service. Free Demo enabled me to truly understand the application's full capabilities. To use this program, I would recommend to anyone who is required to convert MHT files.

    -

    Convert your MHT files online. You can convert your MHT documents from any platform (Windows, Linux, macOS). No registration needed. Just drag and drop your MHT file on upload form, choose the desired output format and click convert button. Once conversion completed you can download your DOC file.

    -

    If so you can download any of the below versions for testing. The product will function as normal except for an evaluation limitation. At the time of purchase we provide a license file via email that will allow the product to work in its full capacity. If you would also like an evaluation license to test without any restrictions for 30 days, please follow the directions provided here.

    -

    GroupDocs.Conversion for Java is a native Java on-premise high code API that helps build document converter applications in Java programming language with support for file conversion of 70+ file formats including Microsoft Office Word®, Excel®, PowerPoint®, OpenOffice®, 3D, CAD, Photoshop®, Adobe® PDF, eBook, & HTML. No software installation is required.

    -

    If you're looking to harness the power of the PDF format, you need a conversion tool like the one from Foxit. The Foxit online PDF converter transforms other prevalent file types into PDF and vice versa. Eliminate compatibility issues and gain ultimate flexibility in how you share information between teams, clients, vendors, and more.

    -

    Give the Foxit converter online tools a try and see just how efficient and accurate they are. The tools are entirely online, making them easily accessible no matter what device or operating system you're using. As long as you have a reliable Internet connection and a good browser, you're ready to go!

    -

    Using the Foxit PDF converter tools couldn't be easier. It starts with choosing which resource you need. Whether you're looking to convert XLS files or convert from DOCX, find the tool you need and visit its corresponding page on the Foxit platform.

    -

    -

    Once there, you can choose the files you want to modify and start the process. Foxit makes it easy by offering multiple ways to upload your files to the platform. You can use the file selector menu or simply drag and drop it into the converter box. Foxit also allows you to upload files from cloud storage services such as Google Drive.

    -

    Once it's finished, you can download your newly created file directly to your computer for viewing, editing, and sending.Why Choose Foxit?There are many reasons to consider using the Foxit PDF converter tools. The first is sheer convenience. Because Foxit operates entirely online, there's no need to download any special software or modify computer settings. You can use the tool as long as you have an Internet connection.

    -

    Try the Foxit online PDF converter tools today. Create an account with Foxit and explore the many available features. Start your free trial and see what these must-have tools can do for your business.

    -

    With Advik MBOX to DOC converter, user can save MBOX emails into different file formats in a single software interface. Using this conversion software, one can convert MBOX to MSG, DOC, XPS, EML, MHT, CSV, HTML, and another format with complete email information. After converting the MBOX file into Word Document format, the user can transfer this email file without facing any glitches. As well as, the DOC file format is easy to alter so it can be edited as per requirement. As various file saving options are present in a single interface, it is easy for users to export email files into different file formats and save time as well as money.

    -

    This email file converter software is successfully workable on all Windows Operating systems devoid of any technical issues. This utility is compatible with Windows 11, 10, Windows Server 2000, Windows XP, Vista, and other previous OS versions and works smoothly on devices that have these OS versions

    -

    As far as .NET code doesn't depend on the underlying hardware or operating system, but only on a Virtual Machine, you are free to develop any kind of software for Windows, macOS, Android, iOS and Linux. Just make sure you have installed the corresponding version of .NET Framework, .NET Core, Windows Azure, Mono or Xamarin.

    -

    Our free mht converter online tools does not required any registrations and installations on your system, 100% free and online mhtml web archive (.mht) converter tool. Open from any device with a modern browser like Chrome, Opera and Firefox.

    -

    We have 100% free online MHT tools and apps that does not required any registrations and installations on your system, here are few popular free MHT tools to View, Convert, Edit, Merge, Split, Compare and manage file metadata online from any device with a modern browser like Chrome, Opera and Firefox.

    -

    Many documents for download on the Internet are in PDF format. PDF stands for Portable Document Format. This format was designed to duplicate the formatting and layout of the original printed document. It requires a PDF reader program to open and read the document. Adobe Reader and many other similar programs are available for free download. Most people who use the Internet have such a program already. When to use PDF?

    • Exact layout is important.
    • Your intended users don't have a program that can open the original file type of the document.
    • You want to keep viewers from editing the document.
    With the better PDF writing programs, you can control who can open your PDF, who can make changes and what type of changes. You can make a form that the user can fill in while online, but the user cannot edit the form itself. Many cool features are not available in programs that just save in PDF format! Adobe Acrobat is the standard (expensive) PDF writing program but there are also many free programs that offer many of the advanced features.

    -

    Original program that created the document - with its own Save as HTML or Save as Web Page command.

  13. HTML Editing software - Copy and Paste from the original to a blank page in the program's Design view, and then save as a web page.
  14. Conversion program - There are many free programs and online services for converting between many different file types, including to HTML.
  15. Original Program Many programs, including Word and Excel, allow you to save documents in HTML format. The resulting web page may need some tweaking, however. Sometimes the changes that the conversion makes are quite startling.

    -

    Click on the button given below to download Coolutils Converter free setup. It is a complete offline setup of Coolutils Converter for Windows and has excellent compatibility with x86 and x64 architectures.

    -

    BitRecover vCard Converter Wizard 2022 is a fast and powerful application which allows you to easily and quickly convert .vcf Contact files to multiple file formats. It is a multi-purpose utility which allows you to easily transfer, import and export vCard files in a professional manner. It has the ability to import several VCFs into MSG, EML, EMLX, PDF, MBOX, MHT, XPS, RTF and DOC. It supports all Versions of vCard files like vCard 2.1, 3.0, and 4.0 vCard files. You can also download BitRecover Save2Outlook Wizard 2022 Free Download.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Din Vde 0580 Pdf Download.md b/spaces/rorallitri/biomedical-language-models/logs/Din Vde 0580 Pdf Download.md deleted file mode 100644 index d466ffc469d79bec8deb8c210127f99107290344..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Din Vde 0580 Pdf Download.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

    this licence grant is a limited, non-transferable, non-assignable, non-sublicensable license to use snap camera in accordance with the terms and conditions set forth in this document and snaps terms of use.

    -

    snap camera and any lenses are licensed, not sold. snap camera and your use thereof may not be used for anything other than taking and sharing images and video on your device. you acknowledge that snap camera and any lenses are not sold. snap does not issue any guarantee or warranties whatsoever on snap camera and your use thereof.

    -

    din vde 0580 pdf download


    Download Zip ——— https://tinurll.com/2uzmoq



    -

    3. release of rights. snap may, in its sole discretion, release or disable snap camera at any time. in order to transfer the rights in snap camera to you, you must consent to snap recording you audio and/or video while you interact with snap camera. you also consent to snap publishing such audio and/or video.

    -

    4. links to other websites. snap may provide links to other websites on the internet. such websites are not owned or controlled by snap. when you access and use such a website, you acknowledge that snap is not responsible for the contents of any linked websites or any link contained in a linked website. in addition, you acknowledge that snap is not responsible for any link contained in a linked website.

    -

    5. links to snap camera. snap may provide links to snap camera on the web. snap grants you a limited, non-exclusive license to use such links solely to access snap camera and the features provided by snap under this agreement and may remove any link at any time. snap may change the links at any time. snap may revise this link at any time. you may not link to any page not under snaps control. you may not frame or otherwise reproduce snap s logo or any portion thereof. you may not make available any content that is contrary to any snap s trademark, service mark, trade dress, trade name, or other proprietary rights.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/GAMS 23.5.1 (General Algebraic Modeling System 32bit) A Powerful Tool for Optimization Problems.md b/spaces/rorallitri/biomedical-language-models/logs/GAMS 23.5.1 (General Algebraic Modeling System 32bit) A Powerful Tool for Optimization Problems.md deleted file mode 100644 index 9bc88356320ddf724f29c85d84642b1f63e3ffa3..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/GAMS 23.5.1 (General Algebraic Modeling System 32bit) A Powerful Tool for Optimization Problems.md +++ /dev/null @@ -1,6 +0,0 @@ -

    GAMS 23.5.1 (General Algebraic Modeling System, 32bit)


    Download Zip https://tinurll.com/2uzopR



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Endhiran Full Movie Free Download In).md b/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Endhiran Full Movie Free Download In).md deleted file mode 100644 index 3a1507ac361a477ef5796b8fb8bad1aa0f9e43bf..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Endhiran Full Movie Free Download In).md +++ /dev/null @@ -1,14 +0,0 @@ -

    HD Online Player (Endhiran Full Movie Free Download In)


    Download File ⇒⇒⇒ https://tinurll.com/2uzlm8



    - -Aadam Puli (2014) is a Telugu Action film directed by Surender Reddy. Starring Nagarjuna,Tarun,Ponnamala,Rithvik,Ranjitha and Prakash Raj in the lead roles. Watch Now or Download . - -Beware of Dogs (2015) is an English-language action thriller film directed by Daniel Barber and written by Simon Vaughan and Jemma Timmins. Starring Tom Bateman, Abbie Cornish, Alicia Vikander, Luke Evans and David Oyelowo in the lead roles, the film is based on the true story of a police detective, played by Bateman, who protects a London Zoo curator, played by Cornish, who stumbles upon the remains of a young woman that were found in a dumpster only to discover that she had been dismembered. The film is due for release on September 13, 2015 in the United States. - -Sammy J's Journey (2016) is an Indian Tamil film directed by debutant Thiagarajan, starring Mithra Kurian and Arthana Binu in the lead roles, while Nassar and Shyam Gopal play supporting roles, with a script by debutant director Thiagarajan and music by duo Santhosh Narayanan and Prasanna. The film premiered in theaters on 8 June 2016. - -Baadshah (2016) is an Indian Hindi language action thriller film directed by Rohit Shetty. Starring Salman Khan, Sonakshi Sinha, Bobby Deol, Arshad Warsi, Vivek Oberoi, Bobby Bedi and Farhan Akhtar in the lead roles. The film was released worldwide on 11 February 2016. It grossed worldwide. - -Shivaay (2016) is a 2016 Indian Tamil-language action film written and directed by Vikram Kumar and produced by L. V. Prasad. The film features an ensemble cast consisting of Varalaxmi Sarathkumar, Trisha Krishnan, Raghava Lawrence, Jayapradha, Akash, Yogi Babu, Mime Gopi, Surekha Vani, Renuka Menon and Srinivasan in lead roles. The film also features an ensemble soundtrack with music composed by G. V. Prakash Kumar, while cinematography is handled by K. L. Narayanan. The film's story is about a journalist who helps rescue her 4fefd39f24
    -
    -
    -

    diff --git a/spaces/ruboin/faster-whisper-webui/tests/vad_test.py b/spaces/ruboin/faster-whisper-webui/tests/vad_test.py deleted file mode 100644 index b465d8a380f9316a6830d9aac320c85f22aba0a0..0000000000000000000000000000000000000000 --- a/spaces/ruboin/faster-whisper-webui/tests/vad_test.py +++ /dev/null @@ -1,66 +0,0 @@ -import pprint -import unittest -import numpy as np -import sys - -sys.path.append('../whisper-webui') - -from src.vad import AbstractTranscription, TranscriptionConfig, VadSileroTranscription - -class TestVad(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestVad, self).__init__(*args, **kwargs) - self.transcribe_calls = [] - - def test_transcript(self): - mock = MockVadTranscription() - - self.transcribe_calls.clear() - result = mock.transcribe("mock", lambda segment : self.transcribe_segments(segment)) - - self.assertListEqual(self.transcribe_calls, [ - [30, 30], - [100, 100] - ]) - - self.assertListEqual(result['segments'], - [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '}, - {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}] - ) - - def transcribe_segments(self, segment): - self.transcribe_calls.append(segment.tolist()) - - # Dummy text - return { - 'text': "Hello world ", - 'segments': [ - { - "start": 10.0, - "end": 20.0, - "text": "Hello world " - } - ], - 'language': "" - } - -class MockVadTranscription(AbstractTranscription): - def __init__(self): - super().__init__() - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - start_time_seconds = float(start_time.removesuffix("s")) - duration_seconds = float(duration.removesuffix("s")) - - # For mocking, this just returns a simple numppy array - return np.array([start_time_seconds, duration_seconds], dtype=np.float64) - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, duration: float): - result = [] - - result.append( { 'start': 30, 'end': 60 } ) - result.append( { 'start': 100, 'end': 200 } ) - return result - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/modeling/image_encoder.py b/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/modeling/image_encoder.py deleted file mode 100644 index 7048651eb05d44bb427601be26963d6232ce812a..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/modeling/image_encoder.py +++ /dev/null @@ -1,398 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from typing import Optional, Tuple, Type - -from .common import LayerNorm2d, MLPBlock - - -# This class and its supporting functions below lightly adapted from the ViTDet backbone available at: https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/vit.py # noqa -class ImageEncoderViT(nn.Module): - def __init__( - self, - img_size: int = 1024, - patch_size: int = 16, - in_chans: int = 3, - embed_dim: int = 768, - depth: int = 12, - num_heads: int = 12, - mlp_ratio: float = 4.0, - out_chans: int = 256, - qkv_bias: bool = True, - norm_layer: Type[nn.Module] = nn.LayerNorm, - act_layer: Type[nn.Module] = nn.GELU, - use_abs_pos: bool = True, - use_rel_pos: bool = False, - rel_pos_zero_init: bool = True, - window_size: int = 0, - global_attn_indexes: Tuple[int, ...] = (), - ) -> None: - """ - Args: - img_size (int): Input image size. - patch_size (int): Patch size. - in_chans (int): Number of input image channels. - embed_dim (int): Patch embedding dimension. - depth (int): Depth of ViT. - num_heads (int): Number of attention heads in each ViT block. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool): If True, add a learnable bias to query, key, value. - norm_layer (nn.Module): Normalization layer. - act_layer (nn.Module): Activation layer. - use_abs_pos (bool): If True, use absolute positional embeddings. - use_rel_pos (bool): If True, add relative positional embeddings to the attention map. - rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. - window_size (int): Window size for window attention blocks. - global_attn_indexes (list): Indexes for blocks using global attention. - """ - super().__init__() - self.img_size = img_size - - self.patch_embed = PatchEmbed( - kernel_size=(patch_size, patch_size), - stride=(patch_size, patch_size), - in_chans=in_chans, - embed_dim=embed_dim, - ) - - self.pos_embed: Optional[nn.Parameter] = None - if use_abs_pos: - # Initialize absolute positional embedding with pretrain image size. - self.pos_embed = nn.Parameter( - torch.zeros(1, img_size // patch_size, img_size // patch_size, embed_dim) - ) - - self.blocks = nn.ModuleList() - for i in range(depth): - block = Block( - dim=embed_dim, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - norm_layer=norm_layer, - act_layer=act_layer, - use_rel_pos=use_rel_pos, - rel_pos_zero_init=rel_pos_zero_init, - window_size=window_size if i not in global_attn_indexes else 0, - input_size=(img_size // patch_size, img_size // patch_size), - ) - self.blocks.append(block) - - self.neck = nn.Sequential( - nn.Conv2d( - embed_dim, - out_chans, - kernel_size=1, - bias=False, - ), - LayerNorm2d(out_chans), - nn.Conv2d( - out_chans, - out_chans, - kernel_size=3, - padding=1, - bias=False, - ), - LayerNorm2d(out_chans), - ) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.patch_embed(x) - if self.pos_embed is not None: - x = x + self.pos_embed - - interm_embeddings=[] - for blk in self.blocks: - x = blk(x) - if blk.window_size == 0: - interm_embeddings.append(x) - - x = self.neck(x.permute(0, 3, 1, 2)) - - return x, interm_embeddings - - -class Block(nn.Module): - """Transformer blocks with support of window attention and residual propagation blocks""" - - def __init__( - self, - dim: int, - num_heads: int, - mlp_ratio: float = 4.0, - qkv_bias: bool = True, - norm_layer: Type[nn.Module] = nn.LayerNorm, - act_layer: Type[nn.Module] = nn.GELU, - use_rel_pos: bool = False, - rel_pos_zero_init: bool = True, - window_size: int = 0, - input_size: Optional[Tuple[int, int]] = None, - ) -> None: - """ - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads in each ViT block. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool): If True, add a learnable bias to query, key, value. - norm_layer (nn.Module): Normalization layer. - act_layer (nn.Module): Activation layer. - use_rel_pos (bool): If True, add relative positional embeddings to the attention map. - rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. - window_size (int): Window size for window attention blocks. If it equals 0, then - use global attention. - input_size (tuple(int, int) or None): Input resolution for calculating the relative - positional parameter size. - """ - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, - qkv_bias=qkv_bias, - use_rel_pos=use_rel_pos, - rel_pos_zero_init=rel_pos_zero_init, - input_size=input_size if window_size == 0 else (window_size, window_size), - ) - - self.norm2 = norm_layer(dim) - self.mlp = MLPBlock(embedding_dim=dim, mlp_dim=int(dim * mlp_ratio), act=act_layer) - - self.window_size = window_size - - def forward(self, x: torch.Tensor) -> torch.Tensor: - shortcut = x - x = self.norm1(x) - # Window partition - if self.window_size > 0: - H, W = x.shape[1], x.shape[2] - x, pad_hw = window_partition(x, self.window_size) - - x = self.attn(x) - # Reverse window partition - if self.window_size > 0: - x = window_unpartition(x, self.window_size, pad_hw, (H, W)) - - x = shortcut + x - x = x + self.mlp(self.norm2(x)) - - return x - - -class Attention(nn.Module): - """Multi-head Attention block with relative position embeddings.""" - - def __init__( - self, - dim: int, - num_heads: int = 8, - qkv_bias: bool = True, - use_rel_pos: bool = False, - rel_pos_zero_init: bool = True, - input_size: Optional[Tuple[int, int]] = None, - ) -> None: - """ - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - qkv_bias (bool): If True, add a learnable bias to query, key, value. - rel_pos (bool): If True, add relative positional embeddings to the attention map. - rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. - input_size (tuple(int, int) or None): Input resolution for calculating the relative - positional parameter size. - """ - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = head_dim**-0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.proj = nn.Linear(dim, dim) - - self.use_rel_pos = use_rel_pos - if self.use_rel_pos: - assert ( - input_size is not None - ), "Input size must be provided if using relative positional encoding." - # initialize relative positional embeddings - self.rel_pos_h = nn.Parameter(torch.zeros(2 * input_size[0] - 1, head_dim)) - self.rel_pos_w = nn.Parameter(torch.zeros(2 * input_size[1] - 1, head_dim)) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - B, H, W, _ = x.shape - # qkv with shape (3, B, nHead, H * W, C) - qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - # q, k, v with shape (B * nHead, H * W, C) - q, k, v = qkv.reshape(3, B * self.num_heads, H * W, -1).unbind(0) - - attn = (q * self.scale) @ k.transpose(-2, -1) - - if self.use_rel_pos: - attn = add_decomposed_rel_pos(attn, q, self.rel_pos_h, self.rel_pos_w, (H, W), (H, W)) - - attn = attn.softmax(dim=-1) - x = (attn @ v).view(B, self.num_heads, H, W, -1).permute(0, 2, 3, 1, 4).reshape(B, H, W, -1) - x = self.proj(x) - - return x - - -def window_partition(x: torch.Tensor, window_size: int) -> Tuple[torch.Tensor, Tuple[int, int]]: - """ - Partition into non-overlapping windows with padding if needed. - Args: - x (tensor): input tokens with [B, H, W, C]. - window_size (int): window size. - - Returns: - windows: windows after partition with [B * num_windows, window_size, window_size, C]. - (Hp, Wp): padded height and width before partition - """ - B, H, W, C = x.shape - - pad_h = (window_size - H % window_size) % window_size - pad_w = (window_size - W % window_size) % window_size - if pad_h > 0 or pad_w > 0: - x = F.pad(x, (0, 0, 0, pad_w, 0, pad_h)) - Hp, Wp = H + pad_h, W + pad_w - - x = x.view(B, Hp // window_size, window_size, Wp // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows, (Hp, Wp) - - -def window_unpartition( - windows: torch.Tensor, window_size: int, pad_hw: Tuple[int, int], hw: Tuple[int, int] -) -> torch.Tensor: - """ - Window unpartition into original sequences and removing padding. - Args: - windows (tensor): input tokens with [B * num_windows, window_size, window_size, C]. - window_size (int): window size. - pad_hw (Tuple): padded height and width (Hp, Wp). - hw (Tuple): original height and width (H, W) before padding. - - Returns: - x: unpartitioned sequences with [B, H, W, C]. - """ - Hp, Wp = pad_hw - H, W = hw - B = windows.shape[0] // (Hp * Wp // window_size // window_size) - x = windows.view(B, Hp // window_size, Wp // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, Hp, Wp, -1) - - if Hp > H or Wp > W: - x = x[:, :H, :W, :].contiguous() - return x - - -def get_rel_pos(q_size: int, k_size: int, rel_pos: torch.Tensor) -> torch.Tensor: - """ - Get relative positional embeddings according to the relative positions of - query and key sizes. - Args: - q_size (int): size of query q. - k_size (int): size of key k. - rel_pos (Tensor): relative position embeddings (L, C). - - Returns: - Extracted positional embeddings according to relative positions. - """ - max_rel_dist = int(2 * max(q_size, k_size) - 1) - # Interpolate rel pos if needed. - if rel_pos.shape[0] != max_rel_dist: - # Interpolate rel pos. - rel_pos_resized = F.interpolate( - rel_pos.reshape(1, rel_pos.shape[0], -1).permute(0, 2, 1), - size=max_rel_dist, - mode="linear", - ) - rel_pos_resized = rel_pos_resized.reshape(-1, max_rel_dist).permute(1, 0) - else: - rel_pos_resized = rel_pos - - # Scale the coords with short length if shapes for q and k are different. - q_coords = torch.arange(q_size)[:, None] * max(k_size / q_size, 1.0) - k_coords = torch.arange(k_size)[None, :] * max(q_size / k_size, 1.0) - relative_coords = (q_coords - k_coords) + (k_size - 1) * max(q_size / k_size, 1.0) - - return rel_pos_resized[relative_coords.long()] - - -def add_decomposed_rel_pos( - attn: torch.Tensor, - q: torch.Tensor, - rel_pos_h: torch.Tensor, - rel_pos_w: torch.Tensor, - q_size: Tuple[int, int], - k_size: Tuple[int, int], -) -> torch.Tensor: - """ - Calculate decomposed Relative Positional Embeddings from :paper:`mvitv2`. - https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py # noqa B950 - Args: - attn (Tensor): attention map. - q (Tensor): query q in the attention layer with shape (B, q_h * q_w, C). - rel_pos_h (Tensor): relative position embeddings (Lh, C) for height axis. - rel_pos_w (Tensor): relative position embeddings (Lw, C) for width axis. - q_size (Tuple): spatial sequence size of query q with (q_h, q_w). - k_size (Tuple): spatial sequence size of key k with (k_h, k_w). - - Returns: - attn (Tensor): attention map with added relative positional embeddings. - """ - q_h, q_w = q_size - k_h, k_w = k_size - Rh = get_rel_pos(q_h, k_h, rel_pos_h) - Rw = get_rel_pos(q_w, k_w, rel_pos_w) - - B, _, dim = q.shape - r_q = q.reshape(B, q_h, q_w, dim) - rel_h = torch.einsum("bhwc,hkc->bhwk", r_q, Rh) - rel_w = torch.einsum("bhwc,wkc->bhwk", r_q, Rw) - - attn = ( - attn.view(B, q_h, q_w, k_h, k_w) + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :] - ).view(B, q_h * q_w, k_h * k_w) - - return attn - - -class PatchEmbed(nn.Module): - """ - Image to Patch Embedding. - """ - - def __init__( - self, - kernel_size: Tuple[int, int] = (16, 16), - stride: Tuple[int, int] = (16, 16), - padding: Tuple[int, int] = (0, 0), - in_chans: int = 3, - embed_dim: int = 768, - ) -> None: - """ - Args: - kernel_size (Tuple): kernel size of the projection layer. - stride (Tuple): stride of the projection layer. - padding (Tuple): padding size of the projection layer. - in_chans (int): Number of input image channels. - embed_dim (int): Patch embedding dimension. - """ - super().__init__() - - self.proj = nn.Conv2d( - in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding - ) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.proj(x) - # B C H W -> B H W C - x = x.permute(0, 2, 3, 1) - return x diff --git a/spaces/samueldomdey/ClipCosineSimilarityURL/README.md b/spaces/samueldomdey/ClipCosineSimilarityURL/README.md deleted file mode 100644 index 74741345e835a462693681d59ab5d0d19f5f758b..0000000000000000000000000000000000000000 --- a/spaces/samueldomdey/ClipCosineSimilarityURL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ClipCosineSimilarity -emoji: 🌖 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 2.8.10 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/scedlatioru/img-to-music/example/Anno 2070 Offline Ark Upgrades Crackl.md b/spaces/scedlatioru/img-to-music/example/Anno 2070 Offline Ark Upgrades Crackl.md deleted file mode 100644 index 60b8e93dd7bb09345fbd51bb401c4f6131c737ad..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Anno 2070 Offline Ark Upgrades Crackl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Anno 2070 Offline Ark Upgrades Crackl


    DOWNLOADhttps://gohhs.com/2uEyPQ



    -
    - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Corel Draw X6 Setup Icamsi 30 !!LINK!!.md b/spaces/scedlatioru/img-to-music/example/Corel Draw X6 Setup Icamsi 30 !!LINK!!.md deleted file mode 100644 index 445e96cc29a5c38dc60b920256adc2d039821f89..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Corel Draw X6 Setup Icamsi 30 !!LINK!!.md +++ /dev/null @@ -1,12 +0,0 @@ -

    Corel Draw X6 Setup Icamsi 30


    Download Zip ---> https://gohhs.com/2uEAwo



    - -Coub is YouTube for looping videos. You can take any video, trim the best part, merge with other videos, add soundtrack. May 29, 2019 13:20 —0400. -Brian Wesley. -At the end of 2012, Coub was launched. -Coub is one of the most popular video streaming formats in the world. In 2014, Coub made it to the top 5 best apps on the App Store. -As of September 2019, Coub contains over 50,000 videos. -Coub is used in their videos by the largest companies in the world: Google, Facebook, Amazon, Twitter, Netflix and others. -The creator of Coub Anton Agarkov became a millionaire. At its peak in March 2015, Agarkov sold the app for $30M. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Media Nav Carte Europe Dacia [Extra Quality] Full Version With Torrent.md b/spaces/scedlatioru/img-to-music/example/Media Nav Carte Europe Dacia [Extra Quality] Full Version With Torrent.md deleted file mode 100644 index 02fd1f876724086d55f0af67ba74fb7304a69245..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Media Nav Carte Europe Dacia [Extra Quality] Full Version With Torrent.md +++ /dev/null @@ -1,37 +0,0 @@ - -

    How to Update Your Media Nav Carte Europe Dacia with the Latest Maps

    -

    If you own a Dacia car with a Media Nav system, you might want to update your maps to the latest version. This will help you navigate better and avoid traffic jams, road closures, and other hazards. The latest maps for Media Nav Carte Europe Dacia are from 2020 Q2 and cover most of the European countries and major roads.

    -

    Media Nav Carte Europe Dacia Full Version With Torrent


    Download Ziphttps://gohhs.com/2uEzgI



    -

    There are two ways to update your maps: using a torrent or using a mega link. Both methods require a USB flash drive with at least 8 GB of free space and formatted in FAT32. You also need to backup your current maps and settings before proceeding.

    -

    Using a Torrent

    -

    A torrent is a file that contains information about other files that are shared by users over the internet. You need a torrent client software to download the files using a torrent. Some popular torrent clients are uTorrent, BitTorrent, and qBittorrent.

    -

    To update your maps using a torrent, follow these steps:

    -
      -
    1. Download the torrent file for the Media Nav Carte Europe Dacia 2020 Q2 maps from this link: https://www.zezauto.com/Thread-Renault-Dacia-Europe-Maps-2020-Q2-Medianav-Torrent-mega
    2. -
    3. Open the torrent file with your torrent client and select the files you want to download. You can choose the countries you need or download all of them.
    4. -
    5. Wait for the download to finish. It may take some time depending on your internet speed and the number of seeders (users who have the files and share them).
    6. -
    7. Copy the downloaded files to your USB flash drive. Make sure you copy them to the root folder of the drive and not inside any subfolder.
    8. -
    9. Eject the USB flash drive safely from your computer.
    10. -
    11. Start your car and turn on the Media Nav system.
    12. -
    13. Insert the USB flash drive into the USB port of the Media Nav system.
    14. -
    15. Follow the instructions on the screen to update your maps. Do not turn off your car or remove the USB flash drive during the process.
    16. -
    17. When the update is complete, restart your Media Nav system and enjoy your new maps.
    18. -
    -

    Using a Mega Link

    -

    A mega link is a URL that leads to a file hosting service where you can download large files. You need a web browser and an internet connection to access the mega link.

    -

    To update your maps using a mega link, follow these steps:

    -

    -
      -
    1. Download the mega link for the Media Nav Carte Europe Dacia 2020 Q2 maps from this link: https://mhhauto.com/Thread-Renault-Dacia-Europe-Maps-2020-Q2-Medianav-Torrent-mega-without-pass
    2. -
    3. Open the mega link with your web browser and click on the download button. You may need to create an account or log in to access the file.
    4. -
    5. Wait for the download to finish. It may take some time depending on your internet speed and the size of the file.
    6. -
    7. Extract the downloaded file using a software like WinRAR or 7-Zip. You should get a folder with several files inside.
    8. -
    9. Copy the extracted files to your USB flash drive. Make sure you copy them to the root folder of the drive and not inside any subfolder.
    10. -
    11. Eject the USB flash drive safely from your computer.
    12. -
    13. Start your car and turn on the Media Nav system.
    14. -
    15. Insert the USB flash drive into the USB port of the Media Nav system.
    16. -
    17. Follow the instructions on the screen to update your maps. Do not turn off your car or remove the USB flash drive during the process.
    18. -
    19. When the update is complete, restart your Media Nav system and enjoy your new maps.
    20. -

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Mvci Driver For Toyota-cable 2.0.1.epub.md b/spaces/scedlatioru/img-to-music/example/Mvci Driver For Toyota-cable 2.0.1.epub.md deleted file mode 100644 index 1a2f30659e69ee814d813f963fa01f19ede24bc6..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Mvci Driver For Toyota-cable 2.0.1.epub.md +++ /dev/null @@ -1,20 +0,0 @@ -

    Mvci Driver For Toyota-cable 2.0.1.epub


    Download File >>>>> https://gohhs.com/2uEACm



    -
    -I don't know how to solve this problem. - -How can I repair my driver and make it work properly? - -A: - -The problem with connecting your Blueview to your computer is that the boot order of your system. It means that your computer is booting first in your system and your Blueview is not be able to access the DVD or the USB and it boots instead. - -In order to change the boot order you can click in the Start button on your taskbar and choose properties of the corresponding drive that you want to be the first one on the list. - -It will take you to the "Startup and Shut Down" tab and change the boot order there. - -Second echocardiographic assessment of the left ventricular function after first period of myocardial infarction. - -Second echocardiographic assessment of the left ventricular function was performed in 138 patients after the first period of myocardial infarction. Left ventricular function was assessed in terms of end-diastolic volume, end-systolic volume, ejection fraction, stroke volume and cardiac index. Doppler recordings of mitral flow were used to evaluate left ventricular diastolic function. End-diastolic volume increased from 91 +/- 25 ml to 107 +/- 26 ml (p less than 0.0001) and end-systolic volume from 39 +/- 15 ml to 46 +/- 15 ml (p less than 0.0001) after first period of myocardial infarction. Ejection fraction decreased from 61 +/- 15% to 52 +/- 16% (p less than 0.0001). Left ventricular stroke volume decreased from 64 +/- 20 ml to 56 +/- 18 ml (p less than 0.001), and cardiac index decreased from 2.8 +/- 0.9 l/min/m2 to 2.4 +/- 0.7 l/min/m2 (p less than 0.001). The study shows that in early stage of myocardial infarction ejection fraction remains depressed but the decrease is statistically insignificant. The end-systolic volume increases significantly due to the increase in the end-diastolic volume. The stroke volume decreases significantly due to the decrease in the cardiac index. The study also indicates that after a first period of myocardial infarction left ventricular diastolic function may deteriorate slightly and it remains depressed up to 1 month of the infarction.My dear readers, I am so thrilled 4fefd39f24
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/TorrentdownloadRevit2007key ((LINK)).md b/spaces/scedlatioru/img-to-music/example/TorrentdownloadRevit2007key ((LINK)).md deleted file mode 100644 index 400cca95ab4e94ab445b3f58ab20d72d64cf5966..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/TorrentdownloadRevit2007key ((LINK)).md +++ /dev/null @@ -1,6 +0,0 @@ -

    torrentdownloadRevit2007key


    Download Zip ✫✫✫ https://gohhs.com/2uEAKu



    - - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/segestic/ArticlePara/multiapp.py b/spaces/segestic/ArticlePara/multiapp.py deleted file mode 100644 index b778c9c27e635862d356b4b10a611f4ff2d655b9..0000000000000000000000000000000000000000 --- a/spaces/segestic/ArticlePara/multiapp.py +++ /dev/null @@ -1,48 +0,0 @@ -"""Frameworks for running multiple Streamlit applications as a single app. -""" -import streamlit as st - -class MultiApp: - """Framework for combining multiple streamlit applications. - Usage: - def foo(): - st.title("Hello Foo") - def bar(): - st.title("Hello Bar") - app = MultiApp() - app.add_app("Foo", foo) - app.add_app("Bar", bar) - app.run() - It is also possible keep each application in a separate file. - import foo - import bar - app = MultiApp() - app.add_app("Foo", foo.app) - app.add_app("Bar", bar.app) - app.run() - """ - def __init__(self): - self.apps = [] - - def add_app(self, title, func): - """Adds a new application. - Parameters - ---------- - func: - the python function to render this app. - title: - title of the app. Appears in the dropdown in the sidebar. - """ - self.apps.append({ - "title": title, - "function": func - }) - - def run(self): - # app = st.sidebar.radio( - app = st.selectbox( - 'Navigation', - self.apps, - format_func=lambda app: app['title']) - - app['function']() diff --git a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/positionwise_feed_forward.py b/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/positionwise_feed_forward.py deleted file mode 100644 index f6d5a7c1a46e906cc8a3c47a013f2630ca2be2bd..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/positionwise_feed_forward.py +++ /dev/null @@ -1,66 +0,0 @@ -# encoding: utf-8 -"""Class Declaration of Transformer's Positionwise Feedforward.""" - -import chainer - -import chainer.functions as F -import chainer.links as L - -import numpy as np - - -class PositionwiseFeedForward(chainer.Chain): - """Positionwise feed forward. - - Args: - :param int idim: input dimenstion - :param int hidden_units: number of hidden units - :param float dropout_rate: dropout rate - - """ - - def __init__( - self, n_units, d_units=0, dropout=0.1, initialW=None, initial_bias=None - ): - """Initialize PositionwiseFeedForward. - - Args: - n_units (int): Input dimension. - d_units (int, optional): Output dimension of hidden layer. - dropout (float, optional): Dropout ratio. - initialW (int, optional): Initializer to initialize the weight. - initial_bias (bool, optional): Initializer to initialize the bias. - - """ - super(PositionwiseFeedForward, self).__init__() - n_inner_units = d_units if d_units > 0 else n_units * 4 - with self.init_scope(): - stvd = 1.0 / np.sqrt(n_units) - self.w_1 = L.Linear( - n_units, - n_inner_units, - initialW=initialW(scale=stvd), - initial_bias=initial_bias(scale=stvd), - ) - stvd = 1.0 / np.sqrt(n_inner_units) - self.w_2 = L.Linear( - n_inner_units, - n_units, - initialW=initialW(scale=stvd), - initial_bias=initial_bias(scale=stvd), - ) - self.act = F.relu - self.dropout = dropout - - def __call__(self, e): - """Initialize PositionwiseFeedForward. - - Args: - e (chainer.Variable): Input variable. - - Return: - chainer.Variable: Output variable. - - """ - e = F.dropout(self.act(self.w_1(e)), self.dropout) - return self.w_2(e) diff --git a/spaces/segments-tobias/conex/espnet/utils/__init__.py b/spaces/segments-tobias/conex/espnet/utils/__init__.py deleted file mode 100644 index b7f177368e62a5578b8706300e101f831a3972ac..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Initialize sub package.""" diff --git a/spaces/sgxz/bingo/src/components/chat-header.tsx b/spaces/sgxz/bingo/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/sgxz/bingo/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
    - logo -
    欢迎使用新必应
    -
    由 AI 支持的网页版 Copilot
    -
    - ) -} diff --git a/spaces/shabnam91/Sanskrit-TTS/model_modules/commons.py b/spaces/shabnam91/Sanskrit-TTS/model_modules/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/model_modules/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/sheldon/xiaolxl-GuoFeng3/README.md b/spaces/sheldon/xiaolxl-GuoFeng3/README.md deleted file mode 100644 index e1ecdf39d4e835f0d8b1df918d36d4ca6ed45452..0000000000000000000000000000000000000000 --- a/spaces/sheldon/xiaolxl-GuoFeng3/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Xiaolxl GuoFeng3 -emoji: 🔥 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer_pack/commons.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/shidokan/ai.Life/app.py b/spaces/shidokan/ai.Life/app.py deleted file mode 100644 index c2b9d2a12539101c0a0bcc1bdca59511cc92986a..0000000000000000000000000000000000000000 --- a/spaces/shidokan/ai.Life/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch(share=True) \ No newline at end of file diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_utils.py b/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_utils.py deleted file mode 100644 index 8c357757741c6d9bd7ce4d8ce740fefd51850fbf..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_utils.py +++ /dev/null @@ -1,421 +0,0 @@ -import numpy as np -import torch -import torchvision -from itertools import product as product -from math import ceil - - -class PriorBox(object): - - def __init__(self, cfg, image_size=None, phase='train'): - super(PriorBox, self).__init__() - self.min_sizes = cfg['min_sizes'] - self.steps = cfg['steps'] - self.clip = cfg['clip'] - self.image_size = image_size - self.feature_maps = [[ceil(self.image_size[0] / step), ceil(self.image_size[1] / step)] for step in self.steps] - self.name = 's' - - def forward(self): - anchors = [] - for k, f in enumerate(self.feature_maps): - min_sizes = self.min_sizes[k] - for i, j in product(range(f[0]), range(f[1])): - for min_size in min_sizes: - s_kx = min_size / self.image_size[1] - s_ky = min_size / self.image_size[0] - dense_cx = [x * self.steps[k] / self.image_size[1] for x in [j + 0.5]] - dense_cy = [y * self.steps[k] / self.image_size[0] for y in [i + 0.5]] - for cy, cx in product(dense_cy, dense_cx): - anchors += [cx, cy, s_kx, s_ky] - - # back to torch land - output = torch.Tensor(anchors).view(-1, 4) - if self.clip: - output.clamp_(max=1, min=0) - return output - - -def py_cpu_nms(dets, thresh): - """Pure Python NMS baseline.""" - keep = torchvision.ops.nms( - boxes=torch.Tensor(dets[:, :4]), - scores=torch.Tensor(dets[:, 4]), - iou_threshold=thresh, - ) - - return list(keep) - - -def point_form(boxes): - """ Convert prior_boxes to (xmin, ymin, xmax, ymax) - representation for comparison to point form ground truth data. - Args: - boxes: (tensor) center-size default boxes from priorbox layers. - Return: - boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes. - """ - return torch.cat( - ( - boxes[:, :2] - boxes[:, 2:] / 2, # xmin, ymin - boxes[:, :2] + boxes[:, 2:] / 2), - 1) # xmax, ymax - - -def center_size(boxes): - """ Convert prior_boxes to (cx, cy, w, h) - representation for comparison to center-size form ground truth data. - Args: - boxes: (tensor) point_form boxes - Return: - boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes. - """ - return torch.cat( - (boxes[:, 2:] + boxes[:, :2]) / 2, # cx, cy - boxes[:, 2:] - boxes[:, :2], - 1) # w, h - - -def intersect(box_a, box_b): - """ We resize both tensors to [A,B,2] without new malloc: - [A,2] -> [A,1,2] -> [A,B,2] - [B,2] -> [1,B,2] -> [A,B,2] - Then we compute the area of intersect between box_a and box_b. - Args: - box_a: (tensor) bounding boxes, Shape: [A,4]. - box_b: (tensor) bounding boxes, Shape: [B,4]. - Return: - (tensor) intersection area, Shape: [A,B]. - """ - A = box_a.size(0) - B = box_b.size(0) - max_xy = torch.min(box_a[:, 2:].unsqueeze(1).expand(A, B, 2), box_b[:, 2:].unsqueeze(0).expand(A, B, 2)) - min_xy = torch.max(box_a[:, :2].unsqueeze(1).expand(A, B, 2), box_b[:, :2].unsqueeze(0).expand(A, B, 2)) - inter = torch.clamp((max_xy - min_xy), min=0) - return inter[:, :, 0] * inter[:, :, 1] - - -def jaccard(box_a, box_b): - """Compute the jaccard overlap of two sets of boxes. The jaccard overlap - is simply the intersection over union of two boxes. Here we operate on - ground truth boxes and default boxes. - E.g.: - A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B) - Args: - box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4] - box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4] - Return: - jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)] - """ - inter = intersect(box_a, box_b) - area_a = ((box_a[:, 2] - box_a[:, 0]) * (box_a[:, 3] - box_a[:, 1])).unsqueeze(1).expand_as(inter) # [A,B] - area_b = ((box_b[:, 2] - box_b[:, 0]) * (box_b[:, 3] - box_b[:, 1])).unsqueeze(0).expand_as(inter) # [A,B] - union = area_a + area_b - inter - return inter / union # [A,B] - - -def matrix_iou(a, b): - """ - return iou of a and b, numpy version for data augenmentation - """ - lt = np.maximum(a[:, np.newaxis, :2], b[:, :2]) - rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:]) - - area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2) - area_a = np.prod(a[:, 2:] - a[:, :2], axis=1) - area_b = np.prod(b[:, 2:] - b[:, :2], axis=1) - return area_i / (area_a[:, np.newaxis] + area_b - area_i) - - -def matrix_iof(a, b): - """ - return iof of a and b, numpy version for data augenmentation - """ - lt = np.maximum(a[:, np.newaxis, :2], b[:, :2]) - rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:]) - - area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2) - area_a = np.prod(a[:, 2:] - a[:, :2], axis=1) - return area_i / np.maximum(area_a[:, np.newaxis], 1) - - -def match(threshold, truths, priors, variances, labels, landms, loc_t, conf_t, landm_t, idx): - """Match each prior box with the ground truth box of the highest jaccard - overlap, encode the bounding boxes, then return the matched indices - corresponding to both confidence and location preds. - Args: - threshold: (float) The overlap threshold used when matching boxes. - truths: (tensor) Ground truth boxes, Shape: [num_obj, 4]. - priors: (tensor) Prior boxes from priorbox layers, Shape: [n_priors,4]. - variances: (tensor) Variances corresponding to each prior coord, - Shape: [num_priors, 4]. - labels: (tensor) All the class labels for the image, Shape: [num_obj]. - landms: (tensor) Ground truth landms, Shape [num_obj, 10]. - loc_t: (tensor) Tensor to be filled w/ encoded location targets. - conf_t: (tensor) Tensor to be filled w/ matched indices for conf preds. - landm_t: (tensor) Tensor to be filled w/ encoded landm targets. - idx: (int) current batch index - Return: - The matched indices corresponding to 1)location 2)confidence - 3)landm preds. - """ - # jaccard index - overlaps = jaccard(truths, point_form(priors)) - # (Bipartite Matching) - # [1,num_objects] best prior for each ground truth - best_prior_overlap, best_prior_idx = overlaps.max(1, keepdim=True) - - # ignore hard gt - valid_gt_idx = best_prior_overlap[:, 0] >= 0.2 - best_prior_idx_filter = best_prior_idx[valid_gt_idx, :] - if best_prior_idx_filter.shape[0] <= 0: - loc_t[idx] = 0 - conf_t[idx] = 0 - return - - # [1,num_priors] best ground truth for each prior - best_truth_overlap, best_truth_idx = overlaps.max(0, keepdim=True) - best_truth_idx.squeeze_(0) - best_truth_overlap.squeeze_(0) - best_prior_idx.squeeze_(1) - best_prior_idx_filter.squeeze_(1) - best_prior_overlap.squeeze_(1) - best_truth_overlap.index_fill_(0, best_prior_idx_filter, 2) # ensure best prior - # TODO refactor: index best_prior_idx with long tensor - # ensure every gt matches with its prior of max overlap - for j in range(best_prior_idx.size(0)): # 判别此anchor是预测哪一个boxes - best_truth_idx[best_prior_idx[j]] = j - matches = truths[best_truth_idx] # Shape: [num_priors,4] 此处为每一个anchor对应的bbox取出来 - conf = labels[best_truth_idx] # Shape: [num_priors] 此处为每一个anchor对应的label取出来 - conf[best_truth_overlap < threshold] = 0 # label as background overlap<0.35的全部作为负样本 - loc = encode(matches, priors, variances) - - matches_landm = landms[best_truth_idx] - landm = encode_landm(matches_landm, priors, variances) - loc_t[idx] = loc # [num_priors,4] encoded offsets to learn - conf_t[idx] = conf # [num_priors] top class label for each prior - landm_t[idx] = landm - - -def encode(matched, priors, variances): - """Encode the variances from the priorbox layers into the ground truth boxes - we have matched (based on jaccard overlap) with the prior boxes. - Args: - matched: (tensor) Coords of ground truth for each prior in point-form - Shape: [num_priors, 4]. - priors: (tensor) Prior boxes in center-offset form - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - encoded boxes (tensor), Shape: [num_priors, 4] - """ - - # dist b/t match center and prior's center - g_cxcy = (matched[:, :2] + matched[:, 2:]) / 2 - priors[:, :2] - # encode variance - g_cxcy /= (variances[0] * priors[:, 2:]) - # match wh / prior wh - g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:] - g_wh = torch.log(g_wh) / variances[1] - # return target for smooth_l1_loss - return torch.cat([g_cxcy, g_wh], 1) # [num_priors,4] - - -def encode_landm(matched, priors, variances): - """Encode the variances from the priorbox layers into the ground truth boxes - we have matched (based on jaccard overlap) with the prior boxes. - Args: - matched: (tensor) Coords of ground truth for each prior in point-form - Shape: [num_priors, 10]. - priors: (tensor) Prior boxes in center-offset form - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - encoded landm (tensor), Shape: [num_priors, 10] - """ - - # dist b/t match center and prior's center - matched = torch.reshape(matched, (matched.size(0), 5, 2)) - priors_cx = priors[:, 0].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2) - priors_cy = priors[:, 1].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2) - priors_w = priors[:, 2].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2) - priors_h = priors[:, 3].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2) - priors = torch.cat([priors_cx, priors_cy, priors_w, priors_h], dim=2) - g_cxcy = matched[:, :, :2] - priors[:, :, :2] - # encode variance - g_cxcy /= (variances[0] * priors[:, :, 2:]) - # g_cxcy /= priors[:, :, 2:] - g_cxcy = g_cxcy.reshape(g_cxcy.size(0), -1) - # return target for smooth_l1_loss - return g_cxcy - - -# Adapted from https://github.com/Hakuyume/chainer-ssd -def decode(loc, priors, variances): - """Decode locations from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - loc (tensor): location predictions for loc layers, - Shape: [num_priors,4] - priors (tensor): Prior boxes in center-offset form. - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded bounding box predictions - """ - - boxes = torch.cat((priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:], - priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1) - boxes[:, :2] -= boxes[:, 2:] / 2 - boxes[:, 2:] += boxes[:, :2] - return boxes - - -def decode_landm(pre, priors, variances): - """Decode landm from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - pre (tensor): landm predictions for loc layers, - Shape: [num_priors,10] - priors (tensor): Prior boxes in center-offset form. - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded landm predictions - """ - tmp = ( - priors[:, :2] + pre[:, :2] * variances[0] * priors[:, 2:], - priors[:, :2] + pre[:, 2:4] * variances[0] * priors[:, 2:], - priors[:, :2] + pre[:, 4:6] * variances[0] * priors[:, 2:], - priors[:, :2] + pre[:, 6:8] * variances[0] * priors[:, 2:], - priors[:, :2] + pre[:, 8:10] * variances[0] * priors[:, 2:], - ) - landms = torch.cat(tmp, dim=1) - return landms - - -def batched_decode(b_loc, priors, variances): - """Decode locations from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - b_loc (tensor): location predictions for loc layers, - Shape: [num_batches,num_priors,4] - priors (tensor): Prior boxes in center-offset form. - Shape: [1,num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded bounding box predictions - """ - boxes = ( - priors[:, :, :2] + b_loc[:, :, :2] * variances[0] * priors[:, :, 2:], - priors[:, :, 2:] * torch.exp(b_loc[:, :, 2:] * variances[1]), - ) - boxes = torch.cat(boxes, dim=2) - - boxes[:, :, :2] -= boxes[:, :, 2:] / 2 - boxes[:, :, 2:] += boxes[:, :, :2] - return boxes - - -def batched_decode_landm(pre, priors, variances): - """Decode landm from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - pre (tensor): landm predictions for loc layers, - Shape: [num_batches,num_priors,10] - priors (tensor): Prior boxes in center-offset form. - Shape: [1,num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded landm predictions - """ - landms = ( - priors[:, :, :2] + pre[:, :, :2] * variances[0] * priors[:, :, 2:], - priors[:, :, :2] + pre[:, :, 2:4] * variances[0] * priors[:, :, 2:], - priors[:, :, :2] + pre[:, :, 4:6] * variances[0] * priors[:, :, 2:], - priors[:, :, :2] + pre[:, :, 6:8] * variances[0] * priors[:, :, 2:], - priors[:, :, :2] + pre[:, :, 8:10] * variances[0] * priors[:, :, 2:], - ) - landms = torch.cat(landms, dim=2) - return landms - - -def log_sum_exp(x): - """Utility function for computing log_sum_exp while determining - This will be used to determine unaveraged confidence loss across - all examples in a batch. - Args: - x (Variable(tensor)): conf_preds from conf layers - """ - x_max = x.data.max() - return torch.log(torch.sum(torch.exp(x - x_max), 1, keepdim=True)) + x_max - - -# Original author: Francisco Massa: -# https://github.com/fmassa/object-detection.torch -# Ported to PyTorch by Max deGroot (02/01/2017) -def nms(boxes, scores, overlap=0.5, top_k=200): - """Apply non-maximum suppression at test time to avoid detecting too many - overlapping bounding boxes for a given object. - Args: - boxes: (tensor) The location preds for the img, Shape: [num_priors,4]. - scores: (tensor) The class predscores for the img, Shape:[num_priors]. - overlap: (float) The overlap thresh for suppressing unnecessary boxes. - top_k: (int) The Maximum number of box preds to consider. - Return: - The indices of the kept boxes with respect to num_priors. - """ - - keep = torch.Tensor(scores.size(0)).fill_(0).long() - if boxes.numel() == 0: - return keep - x1 = boxes[:, 0] - y1 = boxes[:, 1] - x2 = boxes[:, 2] - y2 = boxes[:, 3] - area = torch.mul(x2 - x1, y2 - y1) - v, idx = scores.sort(0) # sort in ascending order - # I = I[v >= 0.01] - idx = idx[-top_k:] # indices of the top-k largest vals - xx1 = boxes.new() - yy1 = boxes.new() - xx2 = boxes.new() - yy2 = boxes.new() - w = boxes.new() - h = boxes.new() - - # keep = torch.Tensor() - count = 0 - while idx.numel() > 0: - i = idx[-1] # index of current largest val - # keep.append(i) - keep[count] = i - count += 1 - if idx.size(0) == 1: - break - idx = idx[:-1] # remove kept element from view - # load bboxes of next highest vals - torch.index_select(x1, 0, idx, out=xx1) - torch.index_select(y1, 0, idx, out=yy1) - torch.index_select(x2, 0, idx, out=xx2) - torch.index_select(y2, 0, idx, out=yy2) - # store element-wise max with next highest score - xx1 = torch.clamp(xx1, min=x1[i]) - yy1 = torch.clamp(yy1, min=y1[i]) - xx2 = torch.clamp(xx2, max=x2[i]) - yy2 = torch.clamp(yy2, max=y2[i]) - w.resize_as_(xx2) - h.resize_as_(yy2) - w = xx2 - xx1 - h = yy2 - yy1 - # check sizes of xx1 and xx2.. after each iteration - w = torch.clamp(w, min=0.0) - h = torch.clamp(h, min=0.0) - inter = w * h - # IoU = i / (area(a) + area(b) - i) - rem_areas = torch.index_select(area, 0, idx) # load remaining areas) - union = (rem_areas - inter) + area[i] - IoU = inter / union # store result in iou - # keep only elements with an IoU <= overlap - idx = idx[IoU.le(overlap)] - return keep, count diff --git a/spaces/shravanrevanna/hdfc-bank-statement/README.md b/spaces/shravanrevanna/hdfc-bank-statement/README.md deleted file mode 100644 index 1b9d48b1d6782a4a498c525e12dc1cc203ccd3e0..0000000000000000000000000000000000000000 --- a/spaces/shravanrevanna/hdfc-bank-statement/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hdfc Bank Statement -emoji: 🐠 -colorFrom: red -colorTo: red -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/op/__init__.py b/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/op/__init__.py deleted file mode 100644 index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/op/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu -from .upfirdn2d import upfirdn2d diff --git a/spaces/simonduerr/diffdock/visualizations/README.md b/spaces/simonduerr/diffdock/visualizations/README.md deleted file mode 100644 index 0675fb01e8b5d5a8952031bf40de90b89dcfcf40..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/visualizations/README.md +++ /dev/null @@ -1,14 +0,0 @@ -## Visualizations of complexes that were unseen during training. EquiBind (cyan), DockDiff highest confidence sample (red), all other DockDiff samples (orange), and the crystal structure (green). - -Complex 6agt: -![Alt Text](example_6agt_symmetric.gif) - -Complex 6dz3: -![Alt Text](example_6dz3_symmetric.gif) - -Complex 6gdy: -![Alt Text](example_6gdy_symmetric.gif) - -Complex 6ckl: -![Alt Text](example_6ckl_symmetric.gif) - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cursor Roblox and Make Your Game More Fun.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cursor Roblox and Make Your Game More Fun.md deleted file mode 100644 index 8b4e0294b8488701f3cf42c87210b07e63315a2c..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cursor Roblox and Make Your Game More Fun.md +++ /dev/null @@ -1,150 +0,0 @@ - -

    How to Download Cursor Roblox

    -

    If you are a fan of Roblox, you might want to customize your mouse pointer with some cool and unique cursor icons. In this article, we will show you how to download cursor roblox icons, how to install them, and how to choose the best ones for your gaming experience. Let's get started!

    -

    What is Cursor Roblox?

    -

    A brief introduction to Roblox and its cursor options

    -

    Roblox is a popular online platform that allows users to create and play games of various genres and styles. You can explore millions of games created by other users, or make your own using the Roblox Studio tool. You can also chat, socialize, and collaborate with other players from around the world.

    -

    download cursor roblox


    Download File >>>>> https://ssurll.com/2uNRG4



    -

    One of the features that makes Roblox stand out is its customization options. You can customize your avatar, your game settings, your interface, and even your mouse pointer. Yes, you heard that right. You can change the appearance of your cursor on Roblox using different icons and themes.

    -

    The benefits of customizing your cursor on Roblox

    -

    Why would you want to change your cursor on Roblox? Well, there are several reasons why you might want to do that. Here are some of them:

    -
      -
    • You can express your personality and style with a unique cursor icon.
    • -
    • You can improve your gaming performance and accuracy with a cursor icon that suits your preferences.
    • -
    • You can enhance your gaming experience and enjoyment with a cursor icon that matches your game theme.
    • -
    • You can stand out from other players and impress them with a cool and creative cursor icon.
    • -
    -

    As you can see, customizing your cursor on Roblox can have many benefits. But how do you do that? Let's find out in the next section.

    -

    How to Download Cursor Roblox Icons

    -

    The steps to download free Roblox cursor icons from Icons8 website

    -

    One of the easiest ways to download cursor roblox icons is to use the Icons8 website. Icons8 is a website that offers thousands of free icons in various styles and formats. You can find many roblox-related icons on this website, such as roblox logo, roblox studio, roblox character, roblox game, etc.

    -

    How to get a custom cursor on roblox
    -Roblox cursor icons free download
    -Cursors for roblox icons and logos
    -Roblox custom cursor pack download
    -How to change your cursor in roblox
    -Roblox cursor skins and themes
    -Download roblox cursor for windows 10
    -Roblox cursor png and svg files
    -How to make your own roblox cursor
    -Roblox cursor mod and plugin
    -Roblox cursor tutorial and guide
    -Roblox cursor collection and gallery
    -Roblox cursor design and style
    -Roblox cursor animation and effect
    -Roblox cursor color and shape
    -Roblox cursor size and speed
    -Roblox cursor sound and music
    -Roblox cursor code and script
    -Roblox cursor generator and editor
    -Roblox cursor online and offline
    -Roblox cursor extension and add-on
    -Roblox cursor chrome and firefox
    -Roblox cursor mac and linux
    -Roblox cursor android and ios
    -Roblox cursor mouse and touchpad
    -Roblox cursor game and studio
    -Roblox cursor fun and cool
    -Roblox cursor cute and kawaii
    -Roblox cursor 3d and pixel art
    -Roblox cursor neon and glow
    -Roblox cursor rainbow and gradient
    -Roblox cursor fire and ice
    -Roblox cursor gold and silver
    -Roblox cursor diamond and crystal
    -Roblox cursor heart and star
    -Roblox cursor cat and dog
    -Roblox cursor panda and unicorn
    -Roblox cursor emoji and meme
    -Roblox cursor fortnite and minecraft
    -Roblox cursor among us and piggy

    -

    To download free roblox cursor icons from Icons8 website, follow these steps:

    -
      -
    1. Go to [Icons8 website](^1^) and type "roblox cursor" in the search box.
    2. -
    3. Browse through the results and choose the icon that you like. You can filter the results by style, size, color, etc.
    4. -
    5. Click on the icon that you want to download. You will see a preview of the icon and some options below it.
    6. -
    7. Select the format that you want to download. For cursor icons, you should choose the .cur format.
    8. -
    9. Click on the "Download" button and save the file to your computer.
    10. -
    -

    Congratulations! You have downloaded a free roblox cursor icon from Icons8 website. You can repeat the same steps for other icons that you want to download.

    -

    The steps to download custom Roblox cursor icons from osu!skinner website

    -

    Another way to download cursor roblox icons is to use the osu!skinner website. osu!skinner is a website that allows users to create and download custom skins for the rhythm game osu!. You can find many roblox-themed skins on this website, which include cursor icons as well.

    -

    To download custom roblox cursor icons from osu!skinner website, follow these steps:

    -
      -
    1. Go to [osu!skinner website] and type "roblox" in the search box.
    2. -
    3. Browse through the results and choose the skin that you like. You can preview the skin by clicking on it.
    4. -
    5. Click on the "Download" button and save the file to your computer. The file will be in .osk format, which is a compressed folder that contains all the skin elements.
    6. -
    7. Extract the .osk file using a program like WinRAR or 7-Zip. You will see a folder with the same name as the skin.
    8. -
    9. Open the folder and look for the file named "cursor.png". This is the cursor icon that you want to download.
    10. -
    11. Copy or move the file to another location on your computer. You can rename it if you want.
    12. -
    -

    Well done! You have downloaded a custom roblox cursor icon from osu!skinner website. You can repeat the same steps for other skins that you want to download.

    -

    How to Install Cursor Roblox Icons

    -

    The steps to install the old Roblox cursor icons from 2007-mid 2013 and mid-2013-mid 2021

    -

    If you want to use the old roblox cursor icons that were used from 2007-mid 2013 and mid-2013-mid 2021, you can install them using a simple method. Here are the steps:

    -
      -
    1. Go to [this website] and download the zip file that contains the old roblox cursor icons. The file name is "Roblox Cursors.zip".
    2. -
    3. Extract the zip file using a program like WinRAR or 7-Zip. You will see a folder with two subfolders: "2007-mid 2013" and "mid-2013-mid 2021". Each subfolder contains two files: "cursor.cur" and "cursor2.cur". These are the old roblox cursor icons.
    4. -
    5. Choose which subfolder you want to use, depending on which version of the old roblox cursor icon you prefer.
    6. -
    7. Copy or move both files from the subfolder to another location on your computer. You can rename them if you want.
    8. -
    9. Right-click on your desktop and select "Personalize".
    10. -
    11. Click on "Themes" and then on "Mouse Cursor".
    12. -
    13. Click on "Browse" and navigate to the location where you saved the old roblox cursor icons.
    14. -
    15. Select both files and click on "Open".
    16. -
    17. Click on "Apply" and then on "OK".
    18. -
    -

    Awesome! You have installed the old roblox cursor icons on your computer. You can now enjoy using them on Roblox or any other program.

    -

    The steps to install the new Roblox cursor icons from late 2021+

    -

    If you want to use the new roblox cursor icons that were introduced in late 2021, you can install them using a different method. Here are the steps:

    -
      -
    1. Go to [this website] and download the zip file that contains the new roblox cursor icons. The file name is "Roblox Cursors (New).zip".
    2. -
    3. Extract the zip file using a program like WinRAR or 7-Zip. You will see a folder with four files: "cursor.cur", "cursor2.cur", "cursor3.cur", and "cursor4.cur". These are the new roblox cursor icons.
    4. -
    5. Copy or move all four files from the folder to another location on your computer. You can rename them if you want.
    6. -
    7. Right-click on your desktop and select "Personalize".
    8. -
    9. Click on "Themes" and then on "Mouse Cursor".
    10. -
    11. Click on "Browse" and navigate to the location where you saved the new roblox cursor icons.
    12. -
    13. Select all four files and click on "Open".
    14. -
    15. Click on "Apply" and then on "OK".
    16. -
    -

    Great! You have installed the new roblox cursor icons on your computer. You can now enjoy using them on Roblox or any other program.

    -

    How to Choose the Best Cursor Roblox Icons

    -

    The factors to consider when choosing your cursor icons

    -

    Now that you know how to download and install cursor roblox icons, you might wonder how to choose the best ones for your needs. There are several factors that you should consider when choosing your cursor icons, such as:

    -
      -
    • The size and shape of the cursor icon. You want a cursor icon that is not too big or too small, and that has a clear and distinct shape. You also want a cursor icon that matches the aspect ratio of your screen resolution.
    • -
    • The color and contrast of the cursor icon. You want a cursor icon that has a color that stands out from the background, and that has a good contrast with the elements on the screen. You also want a cursor icon that does not blend in with the game graphics or interface.
    • -
    • The style and theme of the cursor icon. You want a cursor icon that reflects your personality and style, and that suits the theme of the game that you are playing. You also want a cursor icon that is consistent with the other icons and elements on your computer.
    • -
    • The functionality and performance of the cursor icon. You want a cursor icon that does not interfere with your gaming experience, and that does not cause any lag or glitches. You also want a cursor icon that is easy to use and control.
    • -
    -

    By considering these factors, you can choose the best cursor roblox icons for your needs.

    -

    The examples of some popular and effective cursor icons

    -

    To give you some inspiration, here are some examples of some popular and effective cursor roblox icons that you can use or modify:

    - - - - - - - -
    Cursor IconDescription
    Roblox logoThis is the default roblox logo cursor icon that is used in late 2021+. It is simple, recognizable, and versatile. It works well for most games and themes.
    Roblox characterThis is a roblox character cursor icon that is based on your avatar. It is personal, creative, and fun. It works well for games that involve socializing and role-playing.
    Roblox gameThis is a roblox game cursor icon that is based on the game that you are playing. It is relevant, immersive, and engaging. It works well for games that have a specific theme or genre.
    Roblox studioThis is a roblox studio cursor icon that is based on the tool that you use to create games. It is professional, sophisticated, and functional. It works well for games that involve building and designing.
    Roblox rainbowThis is a roblox rainbow cursor icon that is based on the colors of the rainbow. It is colorful, vibrant, and cheerful. It works well for games that involve creativity and fun.
    -

    Conclusion

    -

    In conclusion, downloading cursor roblox icons can be a great way to customize your mouse pointer and enhance your gaming experience on Roblox. You can download free or custom roblox cursor icons from various websites, such as Icons8 or osu!skinner. You can install them easily using different methods, depending on which version of the roblox cursor icon you want to use. You can choose the best roblox cursor icons for your needs by considering factors such as size, shape, color, contrast, style, theme, functionality, and performance. You can also get inspired by some examples of popular and effective roblox cursor icons that we have shown you in this article.

    -

    We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. And if you liked this article, please share it with your friends and fellow Roblox players. Thank you for reading!

    -

    FAQs

    -

    Q1: Can I use any cursor icon on Roblox? -

    Q2: How do I change my cursor icon on Roblox?

    -

    A2: To change your cursor icon on Roblox, you need to install a new cursor icon on your computer first. You can download free or custom cursor roblox icons from various websites, such as Icons8 or osu!skinner. You can install them easily using different methods, depending on which version of the roblox cursor icon you want to use. You can refer to the previous sections of this article for more details on how to download and install cursor roblox icons.

    -

    Q3: What are some of the best cursor icons for Roblox?

    -

    A3: The best cursor icons for Roblox are the ones that suit your needs and preferences. There are several factors that you should consider when choosing your cursor icons, such as size, shape, color, contrast, style, theme, functionality, and performance. You can also get inspired by some examples of popular and effective cursor icons that we have shown you in this article.

    -

    Q4: How do I uninstall or remove my cursor icon on Roblox?

    -

    A4: To uninstall or remove your cursor icon on Roblox, you need to restore the default cursor icon on your computer. You can do this by following these steps:

    -
      -
    1. Right-click on your desktop and select "Personalize".
    2. -
    3. Click on "Themes" and then on "Mouse Cursor".
    4. -
    5. Click on "Use Default" and then on "OK".
    6. -
    -

    This will restore the default Windows cursor icon on your computer. You can then delete the cursor roblox icons that you have downloaded and installed from your computer.

    -

    Q5: Where can I find more cursor icons for Roblox?

    -

    A5: You can find more cursor icons for Roblox by searching online or by creating your own. There are many websites that offer free or custom cursor icons for various purposes and themes. You can also use online tools or software to create your own cursor icons from scratch or from existing images. Some examples of such tools are [Cursor Editor], [RealWorld Cursor Editor], and [Cursor Maker].

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pokemon GO Mod APK and Catch Them All with Ease.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pokemon GO Mod APK and Catch Them All with Ease.md deleted file mode 100644 index 3393a4fd08960311e48ecccf96c8a91ada6a46ab..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pokemon GO Mod APK and Catch Them All with Ease.md +++ /dev/null @@ -1,106 +0,0 @@ - -

    Download Pokemon Go Mod APK: A Guide for Beginners

    -

    Pokemon Go is one of the most popular mobile games in the world, with over 576 million downloads since its launch in 2016. It is an augmented reality game that lets you explore the real world and catch virtual creatures called Pokemon. You can also battle other players, join teams, and participate in events. Pokemon Go is a fun and immersive way to experience the Pokemon universe and discover new places.

    -

    However, if you want to enjoy the game to the fullest, you might want to try Pokemon Go Mod APK, a modified version of the game that gives you access to some amazing features that are not available in the official app. In this article, we will show you how to download and install Pokemon Go Mod APK on your Android device, what features it offers, and some tips and tricks to help you become the best trainer around. We will also discuss the risks and benefits of playing Pokemon Go, both for your health and your social life. Let's get started!

    -

    download pokemon go mod apk


    Downloadhttps://ssurll.com/2uNZWS



    -

    How to download and install Pokemon Go Mod APK on your Android device

    -

    Pokemon Go Mod APK is not available on the Google Play Store, so you will need to download it from a third-party website. Before you do that, make sure you have enough storage space on your device and that you have enabled the option to install apps from unknown sources. To do that, go to Settings > Security > Unknown Sources and toggle it on.

    -

    Next, follow these steps to download and install Pokemon Go Mod APK:

    -
      -
    1. Go to a reliable website that offers Pokemon Go Mod APK, such as [apkasal.com](^1^).
    2. -
    3. Tap on the download button and wait for the file to be downloaded.
    4. -
    5. Locate the file in your device's file manager and tap on it to start the installation process.
    6. -
    7. Follow the instructions on the screen and grant the necessary permissions.
    8. -
    9. Wait for the installation to finish and launch the app.
    10. -
    -

    Congratulations! You have successfully installed Pokemon Go Mod APK on your Android device. Now you can enjoy all the features that it offers.

    -

    Features of Pokemon Go Mod APK: unlimited coins, fake GPS, and more

    -

    Pokemon Go Mod APK is a modified version of the game that gives you some advantages over other players. Here are some of the features that you can enjoy with this app:

    -
      -
    • Unlimited coins: Coins are the premium currency in Pokemon Go that you can use to buy items, such as Poke Balls, potions, revives, incubators, lures, lucky eggs, and more. Normally, you can earn coins by defending gyms or by purchasing them with real money. However, with Pokemon Go Mod APK, you can get unlimited coins for free and spend them as you wish.
    • -
    • Fake GPS: One of the main challenges of playing Pokemon Go is that you have to travel around in order to find different types of Pokemon and complete tasks. This can be time-consuming, exhausting, and even dangerous in some cases. With Pokemon Go Mod APK, you can use a fake GPS feature that lets you spoof your location and teleport anywhere in the world. This way, you can catch rare and legendary Pokemon without leaving your home.
    • -
    • No ads: Pokemon Go is a free-to-play game, but it also has ads that can interrupt your gameplay and annoy you. With Pokemon Go Mod APK, you can get rid of all the ads and enjoy a smooth and uninterrupted gaming experience.
    • -
    • No root required: Some modded apps require you to root your device in order to work properly. This can be risky and void your warranty. However, with Pokemon Go Mod APK, you don't need to root your device at all. You can simply install it as any other app and enjoy all the benefits of Pokemon Go Mod APK.
    • -
    -

    Tips and tricks to master Pokemon Go and catch 'em all

    -

    Pokemon Go is a game that requires skill, strategy, and patience. If you want to become the best trainer around, you need to know some tips and tricks that can help you improve your gameplay and catch more Pokemon. Here are some of them:

    -
      -
    • Use the right type of Poke Ball: There are different types of Poke Balls in Pokemon Go, such as the regular Poke Ball, the Great Ball, the Ultra Ball, and the Master Ball. Each one has a different catch rate, which means the probability of catching a Pokemon with it. The higher the catch rate, the better the Poke Ball. You should use the best Poke Ball you have for the Pokemon you want to catch, especially if it is rare or has a high CP (combat power).
    • -
    • Use berries to increase your chances: Berries are items that you can use to make a Pokemon easier to catch. There are different types of berries in Pokemon Go, such as the Razz Berry, the Nanab Berry, the Pinap Berry, and the Golden Razz Berry. Each one has a different effect, such as making the Pokemon less likely to run away, giving you more candy if you catch it, or increasing the catch rate significantly. You should use berries wisely and strategically to catch more Pokemon.
    • -
    • Throw curveballs to get bonus XP: Curveballs are a technique that you can use to throw a Poke Ball with a spin. This makes it harder for the Pokemon to dodge or escape, and also gives you bonus XP (experience points) if you catch it. To throw a curveball, you need to swipe your finger on the screen in a circular motion before releasing the Poke Ball. You will see a spark on the Poke Ball when it is ready to be thrown. Aim for the center of the circle around the Pokemon and release the Poke Ball. You will get a "Nice", "Great", or "Excellent" bonus depending on how accurate your throw is.
    • -
    • Use incense and lures to attract more Pokemon: Incense and lures are items that you can use to attract more Pokemon to your location. Incense is an item that you can activate from your inventory and it will last for 30 minutes. It will make more Pokemon appear around you, regardless of where you are. Lures are items that you can attach to a PokeStop and they will last for 30 minutes as well. They will make more Pokemon appear near the PokeStop, but only for players who are within its range. You should use incense and lures when you want to catch more Pokemon in a short time.
    • -
    • Join a team and take over gyms: When you reach level 5 in Pokemon Go, you can join one of three teams: Team Instinct (yellow), Team Mystic (blue), or Team Valor (red). Each team has its own leader and philosophy, but they all share the same goal: to take over gyms and defend them from other teams. Gyms are locations where you can battle other players' Pokemon and earn rewards, such as coins, items, and badges. You should join a team that suits your style and personality, and work together with your teammates to conquer gyms and earn glory.
    • -
    -

    Risks and benefits of playing Pokemon Go: health, social, and legal aspects

    -

    Pokemon Go is not just a game; it is also a phenomenon that has an impact on various aspects of life. Playing Pokemon Go can have both positive and negative effects on your health, your social life, and your legal status. Here are some of them:

    - - - - - - - - - - - - - - - - - -
    RisksBenefits
    - Playing Pokemon Go can be addictive and interfere with your daily activities, such as work, school, or sleep.+ Playing Pokemon Go can motivate you to exercise more and improve your physical fitness.
    - Playing Pokemon Go can expose you to potential dangers, such as accidents, injuries, or crimes.+ Playing Pokemon Go can help you explore new places and learn about different cultures.
    - Playing Pokemon Go can violate some laws or regulations, such as trespassing, privacy, or intellectual property.+ Playing Pokemon Go can enhance your social skills and make new friends.
    -

    You should be aware of these risks and benefits and play responsibly and safely. Always follow the rules of the game and respect the rights of others. And most importantly, have fun!

    -

    Conclusion: Summary of the main points and call to actionConclusion: Summary of the main points and call to action

    -

    In this article, we have shown you how to download and install Pokemon Go Mod APK on your Android device, what features it offers, and some tips and tricks to help you master the game. We have also discussed the risks and benefits of playing Pokemon Go, both for your health and your social life.

    -

    How to download pokemon go mod apk with joystick and teleport
    -Pokemon go mod apk latest version 2023 free download
    -Download pokemon go mod apk unlimited coins and candy
    -Pokemon go mod apk download for android no root
    -Best pokemon go mod apk with spoofing and auto walk
    -Pokemon go mod apk download ios without jailbreak
    -Download pokemon go mod apk with shiny pokemon and mega evolution
    -Pokemon go mod apk download for pc windows 10
    -Pokemon go mod apk download link 2023 updated
    -Pokemon go mod apk download reddit best reviews
    -Download pokemon go mod apk with all regions unlocked
    -Pokemon go mod apk download for firestick and smart tv
    -Pokemon go mod apk download with adventure sync and buddy system
    -Pokemon go mod apk download with pvp and trading features
    -Download pokemon go mod apk with legendary and mythical pokemon
    -Pokemon go mod apk download with custom skins and outfits
    -Download pokemon go mod apk with weather and time effects
    -Pokemon go mod apk download with ar mode and camera control
    -Download pokemon go mod apk with team rocket and shadow pokemon
    -Pokemon go mod apk download with quests and rewards system
    -Download pokemon go mod apk with gym battles and raids
    -Pokemon go mod apk download with daily bonuses and events
    -Download pokemon go mod apk with friends and chat feature
    -Pokemon go mod apk download with achievements and leaderboards
    -Download pokemon go mod apk with anti-ban and safe mode

    -

    Pokemon Go Mod APK is a great way to enjoy the game to the fullest, with unlimited coins, fake GPS, no ads, and no root required. You can catch rare and legendary Pokemon, battle other players, join teams, and participate in events. You can also explore new places, learn about different cultures, improve your physical fitness, and make new friends.

    -

    However, you should also be aware of the potential dangers, such as addiction, accidents, injuries, or crimes. You should also respect the laws and regulations of your location and the rights of others. You should play responsibly and safely, and have fun!

    -

    If you are ready to download Pokemon Go Mod APK and start your adventure, click on the link below and follow the instructions. And don't forget to share this article with your friends who might be interested in playing Pokemon Go Mod APK too!

    -

    Download Pokemon Go Mod APK here

    -

    FAQs: Five common questions and answers about Pokemon Go Mod APK

    -

    Here are some of the most frequently asked questions and answers about Pokemon Go Mod APK:

    -
      -
    1. Is Pokemon Go Mod APK safe to use?
      -Pokemon Go Mod APK is generally safe to use, as long as you download it from a reliable website and scan it for viruses before installing it. However, you should also be careful not to use it in a way that might get you banned from the game or cause trouble with the authorities. For example, you should not use fake GPS to spoof your location in restricted areas or countries where Pokemon Go is not available.
    2. -
    3. How can I update Pokemon Go Mod APK?
      -Pokemon Go Mod APK is usually updated regularly by its developers to keep up with the official app updates. However, you cannot update it from the Google Play Store or the app itself. You will need to download the latest version of Pokemon Go Mod APK from the same website where you downloaded it before and install it over the existing app. You will not lose your progress or settings by doing this.
    4. -
    5. Can I play Pokemon Go Mod APK with my friends who use the official app?
      -Yes, you can play Pokemon Go Mod APK with your friends who use the official app, as long as you are on the same server and version of the game. You can also join teams, battle gyms, trade Pokemon, and participate in events with them. However, you should not use any features that might give you an unfair advantage over them or make them suspicious of your gameplay. For example, you should not use fake GPS to teleport to places where they cannot go or catch Pokemon that they cannot see.
    6. -
    7. Will I get banned from Pokemon Go if I use Pokemon Go Mod APK?
      -There is always a risk of getting banned from Pokemon Go if you use any modded or hacked app that violates the terms of service of the game. However, if you use Pokemon Go Mod APK wisely and discreetly, you can minimize this risk and enjoy the game without any problems. You should avoid using any features that might trigger the anti-cheat system of the game or make other players report you. For example, you should not use fake GPS to jump from one location to another in a short time or catch too many rare Pokemon in a short time.
    8. -
    9. What are some alternatives to Pokemon Go Mod APK?
      -If you are looking for some alternatives to Pokemon Go Mod APK that offer similar features or gameplay, you can try some of these apps:

      -
        -
      • Pokemon Masters EX: This is a spin-off game that focuses on trainer battles rather than catching Pokemon. You can team up with famous trainers from the Pokemon series and compete in tournaments.
      • -
      • Pokemon Quest: This is a casual game that features cube-shaped Pokemon that you can collect and train. You can also explore an island and find treasures.
      • -
      • Pokemon Rumble Rush: This is an action game that lets you control a toy Pokemon and fight against other toy Pokemon. You can also collect new Pokemon and upgrade them.
      • -
    10. -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/sklearn-docs/sklearn-spectral-clustering/app.py b/spaces/sklearn-docs/sklearn-spectral-clustering/app.py deleted file mode 100644 index 714db82a37010e55fe23d07c7963537dd757e55c..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/sklearn-spectral-clustering/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import numpy as np -from sklearn.feature_extraction import image -from sklearn.cluster import spectral_clustering -import matplotlib -matplotlib.use('Agg') -import matplotlib.pyplot as plt -import gradio as gr -from scipy.cluster.vq import kmeans - - -def get_coordinates_from_mask(mask_in, number_of_centroids): - x_y = np.where(mask_in != [255, 255, 255])[:2] - x_y = np.column_stack((x_y[0], x_y[1])) - x_y = np.float32(x_y) - centroids,_ = kmeans(x_y,number_of_centroids) - centroids = np.int64(centroids) - - return centroids - - -def infer(input_image: np.ndarray, number_of_circles: int, radius: int): - centroids = get_coordinates_from_mask(input_image, number_of_circles) - - img = np.zeros((input_image.shape[1], input_image.shape[0])) - - x, y = np.indices((input_image.shape[1], input_image.shape[0])) - - for centroid in centroids: - circle = (x - centroid[0]) ** 2 + (y - centroid[1]) ** 2 < radius**2 - img += circle - - mask = img.astype(bool) - - img = img.astype(float) - img += 1 + 0.2 * np.random.randn(*img.shape) - - - graph = image.img_to_graph(img, mask=mask) - graph.data = np.exp(-graph.data / graph.data.std()) - - labels = spectral_clustering(graph, n_clusters=len(centroids), eigen_solver="arpack") - label_im = np.full(mask.shape, -1.0) - label_im[mask] = labels - - fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(10, 5)) - axs[0].matshow(img) - axs[1].matshow(label_im) - - return fig - - -article = """
    -Demo by Johannes (johko) Kolbe""" - - -description = """

    This is an interactive demo for the Spectral clustering for image segmentation tutorial from scikit-learn. -

    How To Use -
    The demo lets you mark places in the input where the centers of circles should be. The circles should then be segmented from one another using Spectral Image Clustering. -
    The circles should ideally be close together(connected), to let the algorithm work correctly. -
    As the demo uses k-means to determine the centroids of the circles exactly, you also need to specify the number of circles you want to get. -

    What is Spectral Image clustering? From the tutorial: -
    "The Spectral clustering approach solves the problem know as ‘normalized graph cuts’: the image is seen as a graph of connected voxels, and the spectral clustering algorithm amounts to choosing graph cuts defining regions while minimizing the ratio of the gradient along the cut, and the volume of the region." .

    """ - - -gr.Interface( - title="Spectral Clustering with scikit-learn", - description=description, - article=article, - fn=infer, - inputs=[gr.Image(source="canvas", tool="sketch", label="Mark the Circle Centers", shape=[100, 100]), - gr.Number(label="Number of circles to draw", value=4, precision=0), - gr.Slider(label="Circle Radius", minimum=5, maximum=25, value=15, step=1)], - outputs=[gr.Plot(label="Output Plot")] - ).launch() diff --git a/spaces/snpranav/karenai/README.md b/spaces/snpranav/karenai/README.md deleted file mode 100644 index efb040ee8d7e2e2ddf51bf8b23619f3e989e185f..0000000000000000000000000000000000000000 --- a/spaces/snpranav/karenai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Karen AI -emoji: 🐨 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/springml111/Pegasus_Paraphrase_demo/app.py b/spaces/springml111/Pegasus_Paraphrase_demo/app.py deleted file mode 100644 index 51171ee28a00fcdec2a3eb1689e181bdce1afd5e..0000000000000000000000000000000000000000 --- a/spaces/springml111/Pegasus_Paraphrase_demo/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import torch -from transformers import (PegasusForConditionalGeneration, PegasusTokenizer) -import gradio as gr - -best_model_path = "springml111/Pegasus_Paraphrase_model" -model = PegasusForConditionalGeneration.from_pretrained(best_model_path) -tokenizer = PegasusTokenizer.from_pretrained("springml111/Pegasus_Paraphrase_model") - -def tokenize_data(text): - # Tokenize the review body - input_ = str(text) + '
    ' - max_len = 64 - # tokenize inputs - tokenized_inputs = tokenizer(input_, padding='max_length', truncation=True, max_length=max_len, return_attention_mask=True, return_tensors='pt') - - inputs={"input_ids": tokenized_inputs['input_ids'], - "attention_mask": tokenized_inputs['attention_mask']} - return inputs - -def generate_answers(text): - inputs = tokenize_data(text) - results= model.generate(input_ids= inputs['input_ids'], attention_mask=inputs['attention_mask'], do_sample=True, - max_length=64, - top_k=120, - top_p=0.98, - early_stopping=True, - num_return_sequences=1) - answer = tokenizer.decode(results[0], skip_special_tokens=True) - return answer - -iface = gr.Interface(fn=generate_answers, inputs=[gr.inputs.Textbox(lines=5)],outputs=["text"]) -iface.launch() \ No newline at end of file diff --git a/spaces/sqc1729/bingi/src/lib/bots/bing/types.ts b/spaces/sqc1729/bingi/src/lib/bots/bing/types.ts deleted file mode 100644 index 02cd5e8b01e3529642d28dc1539bf958f4ac420b..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,259 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/adagrad_with_grad_clip.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/adagrad_with_grad_clip.py deleted file mode 100644 index 585ce184ab2d6bbde0d2f7fcafd6536fa8f6d8b6..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/adagrad_with_grad_clip.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from torch.optim import Adagrad - -from fairseq.optim import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adagrad_with_grad_clip") -class FairseqAdagradWithGradClip(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = AdagradWithGradClip(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--adagrad-clip', default=0.0, type=float, metavar='D', - help='internal grad clip') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "weight_decay": self.args.weight_decay, - "grad_clip": self.args.adagrad_clip, - } - - @property - def supports_flat_params(self): - return False - - -def _clip_grad(clr, grad, group_grad_clip): - if group_grad_clip > 0: - norm = grad.norm(2).item() - if norm > group_grad_clip: - clr *= group_grad_clip / (norm + 1e-10) - return clr - - -class AdagradWithGradClip(Adagrad): - """Adagrad algorithm with custom gradient clipping""" - - def __init__( - self, - params, - lr=1e-2, - lr_decay=0, - weight_decay=0, - initial_accumulator_value=0, - grad_clip=0, - ): - Adagrad.__init__( - self, - params, - lr=lr, - lr_decay=lr_decay, - weight_decay=weight_decay, - initial_accumulator_value=initial_accumulator_value, - ) - self.defaults["grad_clip"] = grad_clip - self.param_groups[0].setdefault("grad_clip", grad_clip) - - def step(self, closure=None): - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group["params"]: - if p.grad is None: - continue - - grad = p.grad.data - state = self.state[p] - - state["step"] += 1 - - if group["weight_decay"] != 0: - if p.grad.data.is_sparse: - raise RuntimeError( - "weight_decay option is " - "not compatible with sparse " - "gradients" - ) - grad = grad.add(group["weight_decay"], p.data) - - clr = group["lr"] / (1 + (state["step"] - 1) * group["lr_decay"]) - - # clip - clr = _clip_grad(clr=clr, grad=grad, group_grad_clip=group["grad_clip"]) - - if grad.is_sparse: - # the update is non-linear so indices must be unique - grad = grad.coalesce() - grad_indices = grad._indices() - grad_values = grad._values() - size = grad.size() - - def make_sparse(values): - constructor = grad.new - if grad_indices.dim() == 0 or values.dim() == 0: - return constructor().resize_as_(grad) - return constructor(grad_indices, values, size) - - state["sum"].add_(make_sparse(grad_values.pow(2))) - std = state["sum"]._sparse_mask(grad) - std_values = std._values().sqrt_().add_(1e-10) - p.data.add_(-clr, make_sparse(grad_values / std_values)) - else: - state["sum"].addcmul_(1, grad, grad) - std = state["sum"].sqrt().add_(1e-10) - p.data.addcdiv_(-clr, grad, std) - - return loss diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/hubert/simple_kmeans/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/hubert/simple_kmeans/README.md deleted file mode 100644 index cd17da3b3e6f3e39083f7a76a56ff46c3a63b929..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/hubert/simple_kmeans/README.md +++ /dev/null @@ -1,71 +0,0 @@ -# Sharded Feature Extraction and K-means Application - -This folder contains scripts for preparing HUBERT labels from tsv files, the -steps are: -1. feature extraction -2. k-means clustering -3. k-means application - - -## Data preparation - -`*.tsv` files contains a list of audio, where each line is the root, and -following lines are the subpath for each audio: -``` - - - -... -``` - - -## Feature extraction - -### MFCC feature -Suppose the tsv file is at `${tsv_dir}/${split}.tsv`. To extract 39-D -mfcc+delta+ddelta features for the 1st iteration HUBERT training, run: -```sh -python dump_mfcc_feature.py ${tsv_dir} ${split} ${nshard} ${rank} ${feat_dir} -``` -This would shard the tsv file into `${nshard}` and extract features for the -`${rank}`-th shard, where rank is an integer in `[0, nshard-1]`. Features would -be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`. - - -### HUBERT feature -To extract features from the `${layer}`-th transformer layer of a trained -HUBERT model saved at `${ckpt_path}`, run: -```sh -python dump_hubert_feature.py ${tsv_dir} ${split} ${ckpt_path} ${layer} ${nshard} ${rank} ${feat_dir} -``` -Features would also be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`. - -- if out-of-memory, decrease the chunk size with `--max_chunk` - - -## K-means clustering -To fit a k-means model with `${n_clusters}` clusters on 10% of the `${split}` data, run -```sh -python learn_kmeans.py ${feat_dir} ${split} ${nshard} ${km_path} ${n_cluster} --percent 0.1 -``` -This saves the k-means model to `${km_path}`. - -- set `--precent -1` to use all data -- more kmeans options can be found with `-h` flag - - -## K-means application -To apply a trained k-means model `${km_path}` to obtain labels for `${split}`, run -```sh -python dump_km_label.py ${feat_dir} ${split} ${km_path} ${nshard} ${rank} ${lab_dir} -``` -This would extract labels for the `${rank}`-th shard out of `${nshard}` shards -and dump them to `${lab_dir}/${split}_${rank}_${shard}.km` - - -Finally, merge shards for `${split}` by running -```sh -for rank in $(seq 0 $((nshard - 1))); do - cat $lab_dir/${split}_${rank}_${nshard}.km -done > $lab_dir/${split}.km -``` diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/modules/qact.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/modules/qact.py deleted file mode 100644 index c5dd1d63362423ab0cfc381dddabb547a3b44c72..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/modules/qact.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from ..ops import emulate_int - - -class ActivationQuantizer: - """ - Fake scalar quantization of the activations using a forward hook. - - Args: - - module. a nn.Module for which we quantize the *post-activations* - - p: proportion of activations to quantize, set by default to 1 - - update_step: to recompute quantization parameters - - bits: number of bits for quantization - - method: choose among {"tensor", "histogram", "channel"} - - clamp_threshold: to prevent gradients overflow - - Remarks: - - Parameters scale and zero_point are recomputed every update_step - forward pass to reduce the overhead - - For the list of quantization methods and number of bits, see ops.py - - To remove the hook from the module, simply call self.handle.remove() - - At test time, the activations are fully quantized - - We use the straight-through estimator so that the gradients - back-propagate nicely in the network, this is implemented with - the detach() trick - - The activations are hard-clamped in [-clamp_threshold, clamp_threshold] - to prevent overflow during the backward pass - """ - - def __init__( - self, - module, - p=1, - update_step=1000, - bits=8, - method="histogram", - clamp_threshold=5, - ): - self.module = module - self.p = p - self.update_step = update_step - self.counter = 0 - self.bits = bits - self.method = method - self.clamp_threshold = clamp_threshold - self.handle = None - self.register_hook() - - def register_hook(self): - # forward hook - def quantize_hook(module, x, y): - - # update parameters every 1000 iterations - if self.counter % self.update_step == 0: - self.scale = None - self.zero_point = None - self.counter += 1 - - # train with QuantNoise and evaluate the fully quantized network - p = self.p if self.module.training else 1 - - # quantize activations - y_q, self.scale, self.zero_point = emulate_int( - y.detach(), - bits=self.bits, - method=self.method, - scale=self.scale, - zero_point=self.zero_point, - ) - - # mask to apply noise - mask = torch.zeros_like(y) - mask.bernoulli_(1 - p) - noise = (y_q - y).masked_fill(mask.bool(), 0) - - # using straight-through estimator (STE) - clamp_low = -self.scale * self.zero_point - clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point) - return torch.clamp(y, clamp_low.item(), clamp_high.item()) + noise.detach() - - # register hook - self.handle = self.module.register_forward_hook(quantize_hook) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/tasks/fairseq_task.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/tasks/fairseq_task.py deleted file mode 100644 index 64610e45430b664c461163427fe7444661ec0b7d..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/tasks/fairseq_task.py +++ /dev/null @@ -1,668 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import warnings -from argparse import Namespace -from typing import Any, Callable, Dict, List - -import torch -from fairseq import metrics, search, tokenizer, utils -from fairseq.data import Dictionary, FairseqDataset, data_utils, encoders, iterators -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim.amp_optimizer import AMPOptimizer -from omegaconf import DictConfig - - -logger = logging.getLogger(__name__) - - -class StatefulContainer(object): - - def __init__(self): - self._state = dict() - self._factories = dict() - - def add_factory(self, name, factory: Callable[[], Any]): - self._factories[name] = factory - - def merge_state_dict(self, state_dict: Dict[str, Any]): - self._state.update(state_dict) - - @property - def state_dict(self) -> Dict[str, Any]: - return self._state - - def __getattr__(self, name): - if name not in self._state and name in self._factories: - self._state[name] = self._factories[name]() - - if name in self._state: - return self._state[name] - - raise AttributeError(f"Task state has no factory for attribute {name}") - - -class FairseqTask(object): - """ - Tasks store dictionaries and provide helpers for loading/iterating over - Datasets, initializing the Model/Criterion and calculating the loss. - - Tasks have limited statefulness. In particular, state that needs to be - saved to/loaded from checkpoints needs to be stored in the `self.state` - :class:`StatefulContainer` object. For example:: - - self.state.add_factory("dictionary", self.load_dictionary) - print(self.state.dictionary) # calls self.load_dictionary() - - This is necessary so that when loading checkpoints, we can properly - recreate the task state after initializing the task instance. - """ - - @classmethod - def add_args(cls, parser): - """Add task-specific arguments to the parser.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - gen_parser_from_dataclass(parser, dc()) - - @staticmethod - def logging_outputs_can_be_summed(criterion) -> bool: - """ - Whether the logging outputs returned by `train_step` and `valid_step` can - be summed across workers prior to calling `aggregate_logging_outputs`. - Setting this to True will improves distributed training speed. - """ - return criterion.logging_outputs_can_be_summed() - - def __init__(self, cfg: FairseqDataclass, **kwargs): - self.cfg = cfg - self.datasets = dict() - self.dataset_to_epoch_iter = dict() - self.state = StatefulContainer() - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - return Dictionary.load(filename) - - @classmethod - def build_dictionary( - cls, filenames, workers=1, threshold=-1, nwords=-1, padding_factor=8 - ): - """Build the dictionary - - Args: - filenames (list): list of filenames - workers (int): number of concurrent workers - threshold (int): defines the minimum word count - nwords (int): defines the total number of words in the final dictionary, - including special symbols - padding_factor (int): can be used to pad the dictionary size to be a - multiple of 8, which is important on some hardware (e.g., Nvidia - Tensor Cores). - """ - d = Dictionary() - for filename in filenames: - Dictionary.add_file_to_dictionary( - filename, d, tokenizer.tokenize_line, workers - ) - d.finalize(threshold=threshold, nwords=nwords, padding_factor=padding_factor) - return d - - @classmethod - def setup_task(cls, cfg: DictConfig, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - cfg (omegaconf.DictConfig): parsed command-line arguments - """ - return cls(cfg, **kwargs) - - def has_sharded_data(self, split): - return os.pathsep in getattr(self.cfg, "data", "") - - def load_dataset( - self, - split: str, - combine: bool = False, - task_cfg: FairseqDataclass = None, - **kwargs - ): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - combine (bool): combines a split segmented into pieces into one dataset - task_cfg (FairseqDataclass): optional task configuration stored in the checkpoint that can be used - to load datasets - """ - raise NotImplementedError - - def dataset(self, split): - """ - Return a loaded dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - - Returns: - a :class:`~fairseq.data.FairseqDataset` corresponding to *split* - """ - from fairseq.data import FairseqDataset - - if split not in self.datasets: - raise KeyError("Dataset not loaded: " + split) - if not isinstance(self.datasets[split], FairseqDataset): - raise TypeError("Datasets are expected to be of type FairseqDataset") - return self.datasets[split] - - def filter_indices_by_size( - self, indices, dataset, max_positions=None, ignore_invalid_inputs=False - ): - """ - Filter examples that are too large - - Args: - indices (np.array): original array of sample indices - dataset (~fairseq.data.FairseqDataset): dataset to batch - max_positions (optional): max sentence length supported by the - model (default: None). - ignore_invalid_inputs (bool, optional): don't raise Exception for - sentences that are too long (default: False). - Returns: - np.array: array of filtered sample indices - """ - indices, ignored = dataset.filter_indices_by_size(indices, max_positions) - if len(ignored) > 0: - if not ignore_invalid_inputs: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - logger.warning( - ( - "{:,} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - def can_reuse_epoch_itr(self, dataset): - # We can reuse the epoch iterator across epochs as long as the dataset - # hasn't disabled it. We default to ``False`` here, although in practice - # this will be ``True`` for most datasets that inherit from - # ``FairseqDataset`` due to the base implementation there. - return getattr(dataset, "can_reuse_epoch_itr_across_epochs", False) - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - ): - """ - Get an iterator that yields batches of data from the given dataset. - - Args: - dataset (~fairseq.data.FairseqDataset): dataset to batch - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - max_positions (optional): max sentence length supported by the - model (default: None). - ignore_invalid_inputs (bool, optional): don't raise Exception for - sentences that are too long (default: False). - required_batch_size_multiple (int, optional): require batch size to - be a multiple of N (default: 1). - seed (int, optional): seed for random number generator for - reproducibility (default: 1). - num_shards (int, optional): shard the data iterator into N - shards (default: 1). - shard_id (int, optional): which shard of the data iterator to - return (default: 0). - num_workers (int, optional): how many subprocesses to use for data - loading. 0 means the data will be loaded in the main process - (default: 0). - epoch (int, optional): the epoch to start the iterator from - (default: 1). - data_buffer_size (int, optional): number of batches to - preload (default: 0). - disable_iterator_cache (bool, optional): don't cache the - EpochBatchIterator (ignores `FairseqTask::can_reuse_epoch_itr`) - (default: False). - Returns: - ~fairseq.iterators.EpochBatchIterator: a batched iterator over the - given dataset split - """ - can_reuse_epoch_itr = not disable_iterator_cache and self.can_reuse_epoch_itr( - dataset - ) - if can_reuse_epoch_itr and dataset in self.dataset_to_epoch_iter: - logger.debug("reusing EpochBatchIterator for epoch {}".format(epoch)) - return self.dataset_to_epoch_iter[dataset] - - assert isinstance(dataset, FairseqDataset) - - # initialize the dataset with the correct starting epoch - dataset.set_epoch(epoch) - - # get indices ordered by example size - with data_utils.numpy_seed(seed): - indices = dataset.ordered_indices() - - # filter examples that are too large - if max_positions is not None: - indices = self.filter_indices_by_size( - indices, dataset, max_positions, ignore_invalid_inputs - ) - - # create mini-batches with given size constraints - batch_sampler = dataset.batch_by_size( - indices, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - - # return a reusable, sharded iterator - epoch_iter = iterators.EpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_sampler=batch_sampler, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - buffer_size=data_buffer_size, - ) - - if can_reuse_epoch_itr: - self.dataset_to_epoch_iter[dataset] = epoch_iter - - return epoch_iter - - def build_model(self, cfg: FairseqDataclass): - """ - Build the :class:`~fairseq.models.BaseFairseqModel` instance for this - task. - - Args: - cfg (FairseqDataclass): configuration object - - Returns: - a :class:`~fairseq.models.BaseFairseqModel` instance - """ - from fairseq import models, quantization_utils - - model = models.build_model(cfg, self) - model = quantization_utils.quantize_model_scalar(model, cfg) - return model - - def build_criterion(self, cfg: DictConfig): - """ - Build the :class:`~fairseq.criterions.FairseqCriterion` instance for - this task. - - Args: - cfg (omegaconf.DictConfig): configration object - - Returns: - a :class:`~fairseq.criterions.FairseqCriterion` instance - """ - from fairseq import criterions - - return criterions.build_criterion(cfg, self) - - def build_generator( - self, models, args, seq_gen_cls=None, extra_gen_cls_kwargs=None, prefix_allowed_tokens_fn=None, - ): - """ - Build a :class:`~fairseq.SequenceGenerator` instance for this - task. - - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models - args (fairseq.dataclass.configs.GenerationConfig): - configuration object (dataclass) for generation - extra_gen_cls_kwargs (Dict[str, Any]): extra options to pass - through to SequenceGenerator - prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]]): - If provided, this function constrains the beam search to - allowed tokens only at each step. The provided function - should take 2 arguments: the batch ID (`batch_id: int`) - and a unidimensional tensor of token ids (`inputs_ids: - torch.Tensor`). It has to return a `List[int]` with the - allowed tokens for the next generation step conditioned - on the previously generated tokens (`inputs_ids`) and - the batch ID (`batch_id`). This argument is useful for - constrained generation conditioned on the prefix, as - described in "Autoregressive Entity Retrieval" - (https://arxiv.org/abs/2010.00904) and - https://github.com/facebookresearch/GENRE. - """ - if getattr(args, "score_reference", False): - from fairseq.sequence_scorer import SequenceScorer - - return SequenceScorer( - self.target_dictionary, - compute_alignment=getattr(args, "print_alignment", False), - ) - - from fairseq.sequence_generator import ( - SequenceGenerator, - SequenceGeneratorWithAlignment, - ) - - # Choose search strategy. Defaults to Beam Search. - sampling = getattr(args, "sampling", False) - sampling_topk = getattr(args, "sampling_topk", -1) - sampling_topp = getattr(args, "sampling_topp", -1.0) - diverse_beam_groups = getattr(args, "diverse_beam_groups", -1) - diverse_beam_strength = getattr(args, "diverse_beam_strength", 0.5) - match_source_len = getattr(args, "match_source_len", False) - diversity_rate = getattr(args, "diversity_rate", -1) - constrained = getattr(args, "constraints", False) - if prefix_allowed_tokens_fn is None: - prefix_allowed_tokens_fn = getattr(args, "prefix_allowed_tokens_fn", None) - if ( - sum( - int(cond) - for cond in [ - sampling, - diverse_beam_groups > 0, - match_source_len, - diversity_rate > 0, - ] - ) - > 1 - ): - raise ValueError("Provided Search parameters are mutually exclusive.") - assert sampling_topk < 0 or sampling, "--sampling-topk requires --sampling" - assert sampling_topp < 0 or sampling, "--sampling-topp requires --sampling" - - if sampling: - search_strategy = search.Sampling( - self.target_dictionary, sampling_topk, sampling_topp - ) - elif diverse_beam_groups > 0: - search_strategy = search.DiverseBeamSearch( - self.target_dictionary, diverse_beam_groups, diverse_beam_strength - ) - elif match_source_len: - # this is useful for tagging applications where the output - # length should match the input length, so we hardcode the - # length constraints for simplicity - search_strategy = search.LengthConstrainedBeamSearch( - self.target_dictionary, - min_len_a=1, - min_len_b=0, - max_len_a=1, - max_len_b=0, - ) - elif diversity_rate > -1: - search_strategy = search.DiverseSiblingsSearch( - self.target_dictionary, diversity_rate - ) - elif constrained: - search_strategy = search.LexicallyConstrainedBeamSearch( - self.target_dictionary, args.constraints - ) - elif prefix_allowed_tokens_fn: - search_strategy = search.PrefixConstrainedBeamSearch( - self.target_dictionary, prefix_allowed_tokens_fn - ) - else: - search_strategy = search.BeamSearch(self.target_dictionary) - - extra_gen_cls_kwargs = extra_gen_cls_kwargs or {} - if seq_gen_cls is None: - if getattr(args, "print_alignment", False): - seq_gen_cls = SequenceGeneratorWithAlignment - extra_gen_cls_kwargs["print_alignment"] = args.print_alignment - else: - seq_gen_cls = SequenceGenerator - - return seq_gen_cls( - models, - self.target_dictionary, - beam_size=getattr(args, "beam", 5), - max_len_a=getattr(args, "max_len_a", 0), - max_len_b=getattr(args, "max_len_b", 200), - min_len=getattr(args, "min_len", 1), - normalize_scores=(not getattr(args, "unnormalized", False)), - len_penalty=getattr(args, "lenpen", 1), - unk_penalty=getattr(args, "unkpen", 0), - temperature=getattr(args, "temperature", 1.0), - match_source_len=getattr(args, "match_source_len", False), - no_repeat_ngram_size=getattr(args, "no_repeat_ngram_size", 0), - search_strategy=search_strategy, - **extra_gen_cls_kwargs, - ) - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False, **extra_kwargs - ): - """ - Do forward and backward, and return the loss as computed by *criterion* - for the given *model* and *sample*. - - Args: - sample (dict): the mini-batch. The format is defined by the - :class:`~fairseq.data.FairseqDataset`. - model (~fairseq.models.BaseFairseqModel): the model - criterion (~fairseq.criterions.FairseqCriterion): the criterion - optimizer (~fairseq.optim.FairseqOptimizer): the optimizer - update_num (int): the current update - ignore_grad (bool): multiply loss by 0 if this is set to True - - Returns: - tuple: - - the loss - - the sample size, which is used as the denominator for the - gradient - - logging outputs to display while training - """ - model.train() - model.set_num_updates(update_num) - with torch.autograd.profiler.record_function("forward"): - with torch.cuda.amp.autocast(enabled=(isinstance(optimizer, AMPOptimizer))): - loss, sample_size, logging_output = criterion(model, sample) - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - return loss, sample_size, logging_output - - def valid_step(self, sample, model, criterion, **extra_kwargs): - model.eval() - with torch.no_grad(): - loss, sample_size, logging_output = criterion(model, sample) - return loss, sample_size, logging_output - - def optimizer_step(self, optimizer, model, update_num): - optimizer.step() - - def build_dataset_for_inference( - self, src_tokens: List[torch.Tensor], src_lengths: List[int], **kwargs - ) -> torch.utils.data.Dataset: - raise NotImplementedError - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - return generator.generate( - models, sample, prefix_tokens=prefix_tokens, constraints=constraints - ) - - def begin_epoch(self, epoch, model): - """Hook function called before the start of each epoch.""" - pass - - def begin_valid_epoch(self, epoch, model): - """Hook function called before the start of each validation epoch.""" - pass - - def aggregate_logging_outputs(self, logging_outputs, criterion): - """[deprecated] Aggregate logging outputs from data parallel training.""" - utils.deprecation_warning( - "The aggregate_logging_outputs API is deprecated. " - "Please use the reduce_metrics API instead." - ) - with metrics.aggregate() as agg: - self.reduce_metrics(logging_outputs, criterion) - return agg.get_smoothed_values() - - def reduce_metrics(self, logging_outputs, criterion): - """Aggregate logging outputs from data parallel training.""" - # backward compatibility for tasks that override aggregate_logging_outputs - base_func = FairseqTask.aggregate_logging_outputs - self_func = getattr(self, "aggregate_logging_outputs").__func__ - if self_func is not base_func: - utils.deprecation_warning( - "Tasks should implement the reduce_metrics API. " - "Falling back to deprecated aggregate_logging_outputs API." - ) - agg_logging_outputs = self.aggregate_logging_outputs( - logging_outputs, criterion - ) - for k, v in agg_logging_outputs.items(): - metrics.log_scalar(k, v) - return - - if not any("ntokens" in log for log in logging_outputs): - warnings.warn( - "ntokens not found in Criterion logging outputs, cannot log wpb or wps" - ) - else: - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - metrics.log_scalar("wpb", ntokens, priority=180, round=1) - metrics.log_speed("wps", ntokens, priority=90, round=1) - - if not any("nsentences" in log for log in logging_outputs): - warnings.warn( - "nsentences not found in Criterion logging outputs, cannot log bsz" - ) - else: - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - metrics.log_scalar("bsz", nsentences, priority=190, round=1) - - criterion.__class__.reduce_metrics(logging_outputs) - - def state_dict(self): - if self.state is not None: - return self.state.state_dict - return {} - - def load_state_dict(self, state_dict: Dict[str, Any]): - if self.state is not None: - self.state.merge_state_dict(state_dict) - - def max_positions(self): - """Return the max input length allowed by the task.""" - return None - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - raise NotImplementedError - - @property - def target_dictionary(self): - """Return the target :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - raise NotImplementedError - - def build_tokenizer(self, args): - """Build the pre-tokenizer for this task.""" - return encoders.build_tokenizer(args) - - def build_bpe(self, args): - """Build the tokenizer for this task.""" - return encoders.build_bpe(args) - - def get_interactive_tokens_and_lengths(self, lines, encode_fn): - tokens = [ - self.source_dictionary.encode_line( - encode_fn(src_str), add_if_not_exist=False - ).long() - for src_str in lines - ] - lengths = [t.numel() for t in tokens] - return tokens, lengths - - -class LegacyFairseqTask(FairseqTask): - def __init__(self, args: Namespace): - super().__init__(None) - self.args = args - self.datasets = {} - self.dataset_to_epoch_iter = {} - - @classmethod - def setup_task(cls, args: Namespace, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - return cls(args, **kwargs) - - def has_sharded_data(self, split): - return os.pathsep in getattr(self.args, "data", "") - - def build_model(self, args: Namespace): - """ - Build the :class:`~fairseq.models.BaseFairseqModel` instance for this - task. - - Args: - args (argparse.Namespace): parsed command-line arguments - - Returns: - a :class:`~fairseq.models.BaseFairseqModel` instance - """ - from fairseq import models, quantization_utils - - model = models.build_model(args, self) - model = quantization_utils.quantize_model_scalar(model, args) - return model - - def build_criterion(self, args: Namespace): - """ - Build the :class:`~fairseq.criterions.FairseqCriterion` instance for - this task. - - Args: - args (argparse.Namespace): parsed command-line arguments - - Returns: - a :class:`~fairseq.criterions.FairseqCriterion` instance - """ - from fairseq import criterions - - return criterions.build_criterion(args, self) diff --git a/spaces/sshaileshk/feedsGPT/ingest_data.py b/spaces/sshaileshk/feedsGPT/ingest_data.py deleted file mode 100644 index a0b72c0eca98326e88b42a8d140fcfd6b665a950..0000000000000000000000000000000000000000 --- a/spaces/sshaileshk/feedsGPT/ingest_data.py +++ /dev/null @@ -1,56 +0,0 @@ -import os -import json -from pathlib import Path -from pprint import pprint -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.document_loaders import UnstructuredFileLoader -from langchain.document_loaders import TextLoader -from langchain.document_loaders.csv_loader import CSVLoader -from langchain.document_loaders import DirectoryLoader -from langchain.document_loaders import PyPDFDirectoryLoader -from langchain.document_loaders import UnstructuredWordDocumentLoader -from langchain.document_loaders import JSONLoader -from langchain.vectorstores.faiss import FAISS -from langchain.embeddings import OpenAIEmbeddings -from langchain.document_loaders import Docx2txtLoader -from langchain.document_loaders import JSONLoader -from langchain.document_loaders import PyPDFLoader -import pickle - -# Load Data -folder_path = f'/home/cloudshell-user/feedsData/' -print(os.listdir(folder_path)) -loader = TextLoader("/home/cloudshell-user/feedsData/Datafeeds.txt") -#loader = DirectoryLoader(folder_path, glob="**/*.docx", show_progress=True) -#pdf_folder_path = f'/home/cloudshell-user/feedsData/' -#print(os.listdir(pdf_folder_path)) -#loader = PyPDFDirectoryLoader(pdf_folder_path) -#loader = DirectoryLoader(DRIVE_FOLDER, glob='**/*.json', show_progress=True, loader_cls=JSONLoader, loader_kwargs = {'jq_schema':'.content'}) - -#csvloader = CSVLoader(file_path='./Vendor_feeds.csv', source_column="Feed_Name", csv_args={ -# 'delimiter': ',', -# 'quotechar': '"', -# 'fieldnames': ['Feed_Name', 'Vendor_Name', 'FullDelta', 'Frequency'] -#}) - -#raw_documents = csvloader.load() -raw_documents = loader.load() -#print(raw_documents) - -# Split text -text_splitter = RecursiveCharacterTextSplitter() -documents = text_splitter.split_documents(raw_documents) - - -# Load Data to vectorstore -embeddings = OpenAIEmbeddings() -#Vendor_feeds = FAISS.from_documents(documents, embeddings) -#chatstore = FAISS.from_documents(documents, embeddings) -#vectorstore = FAISS.from_documents(documents, embeddings) -dataFeeds = FAISS.from_documents(documents, embeddings) - -# Save vectorstore -#with open("Vendor_feeds.pkl", "wb") as f: -# pickle.dump(Vendor_feeds, f) -with open("dataFeeds.pkl", "wb") as f: - pickle.dump(dataFeeds, f) diff --git a/spaces/sswam/photo-checker/old/app.py b/spaces/sswam/photo-checker/old/app.py deleted file mode 100644 index ee32ec171cb51429ad46986d6128a50802933edd..0000000000000000000000000000000000000000 --- a/spaces/sswam/photo-checker/old/app.py +++ /dev/null @@ -1,25 +0,0 @@ -#!/usr/bin/env python3 - -from fastai.vision.all import * -import gradio as gr - -learn = load_learner('photos.pkl') -labels = learn.dls.vocab - -def predict(img): - img = PILImage.create(img) - pred, pred_idx, probs = learn.predict(img) - return dict(zip(labels, map(float, probs))) - -iface = gr.Interface( - title = "Photo Checker", - description = """This project checks which of our family photos are "good" or "bad". We have nearly 80,000 photos, so it's not practical to sort them out by hand. I want to exclude screenshots, photos of computer screens, photos of papers, images with lots of text, and very blurry images. I used this to separate the good photos to use for a random slide show on our TV. The trained model achieves around 99% accuracy on the validation set.""", - fn = predict, - inputs = gr.inputs.Image(shape = (512,512)), - outputs = gr.outputs.Label(num_top_classes = 3), - examples = list(map(str, get_image_files('eg'))), - interpretation='default', - enable_queue=True, -) - -iface.launch() diff --git a/spaces/stasimus/p350-fastapi/app/main.py b/spaces/stasimus/p350-fastapi/app/main.py deleted file mode 100644 index 9edb7f6601bd7c0101a86ab447a88a8e1477a232..0000000000000000000000000000000000000000 --- a/spaces/stasimus/p350-fastapi/app/main.py +++ /dev/null @@ -1,101 +0,0 @@ -from typing import Optional -import uvicorn -from fastapi import FastAPI, UploadFile -from fastapi.responses import RedirectResponse -from pydantic import BaseModel - -import tensorflow as tf -from keras import backend as K - -import numpy as np -from matplotlib.cm import ScalarMappable -import cv2 - -app = FastAPI() - - -def f1score(y_true, y_pred): # taken from old keras source code - true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) - possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) - predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) - precision = true_positives / (predicted_positives + K.epsilon()) - recall = true_positives / (possible_positives + K.epsilon()) - f1_val = 2 * (precision * recall) / (precision + recall + K.epsilon()) - return f1_val - - -models = [ - tf.keras.models.load_model( - "./Fourier_Xception_Binary", custom_objects={"f1score": f1score} - ), - tf.keras.models.load_model( - "./Xception - With Augmentations", custom_objects={"f1score": f1score} - ), -] - - -def get_pred(img_string, model_idx=0): - np_img = np.fromstring(img_string, np.uint8) - img = cv2.imdecode(np_img, cv2.IMREAD_GRAYSCALE) - f = np.fft.fft2(img) - fshift = np.fft.fftshift(f) - x = np.abs(fshift) - if not x.all(): - idx = x == 0 - x[idx] = np.inf - x[idx] = x.min() - magnitude_spectrum = 20 * np.log(x) - val = ScalarMappable().to_rgba(magnitude_spectrum, bytes=True)[:, :, :-1] - - val = np.expand_dims(cv2.resize(val, (299, 299)), axis=0) - pred = models[model_idx].predict(val) - return pred - - -@app.get("/") -def index(b: bool = True): - return RedirectResponse("/docs") - - -class Model(BaseModel): - rattr: float = 5.9 - attr: Optional[str] - - -@app.post("/") -def blog(req: Model): - return req - - -@app.post("/model1/") -async def pred_model1(file: UploadFile): - file_str = await file.read() - - return { - "filename": file.filename, - "pred": get_pred(file_str).tolist(), - } - - -@app.post("/model2/") -async def pred_model2(file: UploadFile): - file_str = await file.read() - - return { - "filename": file.filename, - "pred": get_pred(file_str, model_idx=1).tolist(), - } - - -@app.post("/upimg/") -async def create_upload_file(file: UploadFile): - file_str = await file.read() - - return { - "filename": file.filename, - "pred": get_pred(file_str).tolist(), - } - - -if __name__ == "__main__": - uvicorn.run("main:app", host="127.0.0.1", port=6620, reload=True) diff --git a/spaces/stomexserde/gpt4-ui/Examples/Awesome Screenshot Download Mac TOP.md b/spaces/stomexserde/gpt4-ui/Examples/Awesome Screenshot Download Mac TOP.md deleted file mode 100644 index 1daa5c6e0bb3bee918be2e99f6b3a78b1b20e0d8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Awesome Screenshot Download Mac TOP.md +++ /dev/null @@ -1,23 +0,0 @@ - -

    How to Download and Use Awesome Screenshot on Mac

    -

    Awesome Screenshot is a popular browser extension that allows you to capture, annotate and share screenshots of web pages. It is available for Chrome, Firefox, Safari and Edge browsers. In this article, we will show you how to download and use Awesome Screenshot on Mac.

    -

    How to Download Awesome Screenshot on Mac

    -

    To download Awesome Screenshot on Mac, you need to install the extension from the browser's web store. Here are the steps for each browser:

    -

    Awesome Screenshot Download Mac


    Download ✏ ✏ ✏ https://urlgoal.com/2uI9WY



    - -

    How to Use Awesome Screenshot on Mac

    -

    To use Awesome Screenshot on Mac, you need to launch the extension from the browser's toolbar. Here are the steps for each browser:

    -
      -
    • Chrome: Click on the Awesome Screenshot icon in the toolbar. You can choose from different options such as "Capture visible part of page", "Capture selected area", "Capture entire page" or "Capture desktop". After capturing the screenshot, you can edit it with tools such as crop, blur, text, shapes and more. You can also save it to your computer or upload it to Awesome Screenshot's cloud service.
    • -
    • Firefox: Click on the Awesome Screenshot icon in the toolbar. You can choose from different options such as "Capture visible part of page", "Capture selected area", "Capture entire page" or "Capture desktop". After capturing the screenshot, you can edit it with tools such as crop, blur, text, shapes and more. You can also save it to your computer or upload it to Awesome Screenshot's cloud service.
    • -
    • Safari: Open the Awesome Screenshot app from the Launchpad or Applications folder. You can choose from different options such as "Capture visible part of page", "Capture selected area", "Capture entire page" or "Capture desktop". After capturing the screenshot, you can edit it with tools such as crop, blur, text, shapes and more. You can also save it to your computer or upload it to Awesome Screenshot's cloud service.
    • -
    • Edge: Click on the Awesome Screenshot icon in the toolbar. You can choose from different options such as "Capture visible part of page", "Capture selected area", "Capture entire page" or "Capture desktop". After capturing the screenshot, you can edit it with tools such as crop, blur, text, shapes and more. You can also save it to your computer or upload it to Awesome Screenshot's cloud service.
    • -
    -

    That's how you can download and use Awesome Screenshot on Mac. It is a handy tool for taking screenshots of web pages and adding annotations. You can also share your screenshots with others via email or social media.

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Cursed Treasure 2 Torrent Download [Ativador] Free.md b/spaces/stomexserde/gpt4-ui/Examples/Cursed Treasure 2 Torrent Download [Ativador] Free.md deleted file mode 100644 index 08ee5280e96d477c72b07f276e07e25663f01b23..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Cursed Treasure 2 Torrent Download [Ativador] Free.md +++ /dev/null @@ -1,28 +0,0 @@ - -Here is what I created: - -

    Cursed Treasure 2 Torrent Download [Ativador]

    -

    Cursed Treasure 2 is a tower defense game where you have to protect your gems from greedy heroes. You can build towers of orcs, demons and undead to unleash your dark powers on the invaders. The game features 21 levels, 3 skill trees, 63 upgrades and a wide variety of enemies and bosses.

    -

    Cursed Treasure 2 Torrent Download [Ativador]


    Download Zip >>> https://urlgoal.com/2uI6BQ



    -

    If you want to download Cursed Treasure 2 for free, you can use a torrent client to get the game files. However, you should be aware that torrenting is illegal in some countries and may expose you to malware and viruses. You should also use a VPN to hide your IP address and avoid legal troubles.

    -

    To activate the game, you will need a crack file that bypasses the DRM protection. You can find the crack file in the torrent folder or on some online forums. You should copy the crack file to the game directory and run it as administrator. Then you can enjoy the game without any restrictions.

    -

    Cursed Treasure 2 is a fun and challenging game that will test your strategic skills and your evil side. If you like tower defense games with a twist, you should give it a try. But remember, torrenting is risky and may harm your computer and your wallet. You should always support the developers by buying the game from official sources.

    -Here is what I created: - -

    Cursed Treasure 2 Torrent Download [Ativador]

    -

    Cursed Treasure 2 is a tower defense game where you have to protect your gems from greedy heroes. You can build towers of orcs, demons and undead to unleash your dark powers on the invaders. The game features 21 levels, 3 skill trees, 63 upgrades and a wide variety of enemies and bosses.

    -

    If you want to download Cursed Treasure 2 for free, you can use a torrent client to get the game files. However, you should be aware that torrenting is illegal in some countries and may expose you to malware and viruses. You should also use a VPN to hide your IP address and avoid legal troubles.

    -

    -

    To activate the game, you will need a crack file that bypasses the DRM protection. You can find the crack file in the torrent folder or on some online forums. You should copy the crack file to the game directory and run it as administrator. Then you can enjoy the game without any restrictions.

    -

    Cursed Treasure 2 is a fun and challenging game that will test your strategic skills and your evil side. If you like tower defense games with a twist, you should give it a try. But remember, torrenting is risky and may harm your computer and your wallet. You should always support the developers by buying the game from official sources.

    -

    Some of the features that make Cursed Treasure 2 stand out from other tower defense games are:

    -
      -
    • The ability to cast spells and use skills to enhance your towers and damage your enemies.
    • -
    • The option to choose between three different types of towers, each with its own advantages and disadvantages.
    • -
    • The possibility to upgrade your towers and skills with gems that you collect from defeated heroes.
    • -
    • The variety of enemies and bosses that have different abilities and weaknesses.
    • -
    • The replay value of the game, as you can try different strategies and challenges on each level.
    • -
    -

    Cursed Treasure 2 is a game that will keep you entertained for hours with its addictive gameplay and its humorous graphics and sounds. If you are looking for a tower defense game that is different from the rest, you should download Cursed Treasure 2 today.

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Eztrans V9 Torrent FULL Version 11 [WORK].md b/spaces/stomexserde/gpt4-ui/Examples/Eztrans V9 Torrent FULL Version 11 [WORK].md deleted file mode 100644 index 711e7e569b05751272b78b1c4a1bb5ff1d588a25..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Eztrans V9 Torrent FULL Version 11 [WORK].md +++ /dev/null @@ -1,33 +0,0 @@ - -

    Eztrans V9: A Powerful Offline Translator for Japanese and Korean

    -

    Eztrans V9 is a software that can translate Japanese text into Korean and vice versa. It is widely used by visual novel fans in Korea, as it can work with text hooking tools such as VNR, twocontrol, and Aral Trans. Eztrans V9 can also be used with DLL hooking, which allows users to customize the translation settings and improve the accuracy.

    -

    eztrans v9 torrent FULL Version 11


    DOWNLOAD » https://urlgoal.com/2uI63i



    -

    Eztrans V9 is based on a statistical machine translation engine that analyzes large amounts of parallel texts and learns the patterns and rules of translation. It can handle various types of texts, such as novels, manga, games, websites, and documents. Eztrans V9 also supports user dictionaries, which can be used to add or modify words and phrases according to the user's preference.

    -

    Eztrans V9 is not available for online download or purchase. It is only distributed through torrent sites and other unofficial channels. However, this also means that it is prone to viruses, malware, and legal issues. Users who want to use Eztrans V9 should be careful about the sources they download from and scan their files before installing them.

    -

    Eztrans V9 is a powerful offline translator for Japanese and Korean that can enhance the experience of visual novel fans. However, it is also a risky software that may harm the user's computer or violate the intellectual property rights of the original authors. Users who want to use Eztrans V9 should be aware of these risks and take precautions accordingly.

    -

    To use Eztrans V9, you need to follow these steps:

    -
      -
    1. Download Eztrans V9 from a torrent site or other unofficial source. Be careful about the file quality and security.
    2. -
    3. Download Ehnd, a DLL hooking tool that can work with Eztrans V9. Put Ehnd with Eztrans V9 in the same folder. You can find Ehnd here: [^2^]
    4. -
    5. Download Python 3 x86 and unzip it. Put the files in the parent folder of Eztrans V9. You can find Python 3 x86 here: [^2^]
    6. -
    7. Download a zip file that contains the necessary files for Eztrans V9 to run as a server. Unzip it and put it in the parent folder of Eztrans V9. You can find the zip file here: [^2^]
    8. -
    9. Download ImageTrans plugin files for Eztrans V9 and put them into ImageTrans's plugins folder. ImageTrans is a software that can help you translate images and comics. You can find the plugin files here: [^2^]
    10. -
    11. Double click run.bat to start the Eztrans V9 server.
    12. -
    13. Enable Eztrans V9 in ImageTrans or other text hooking tools that support it.
    14. -
    -

    Now you can use Eztrans V9 to translate Japanese text into Korean and vice versa. You can also adjust the translation settings and use user dictionaries to improve the translation quality.

    Eztrans V9 has some advantages and disadvantages that users should be aware of before using it. Here are some of them:

    -

    Advantages of Eztrans V9

    -
      -
    • Eztrans V9 can translate Japanese text into Korean and vice versa offline, which means users do not need an internet connection or a subscription to use it.
    • -
    • Eztrans V9 can work with various text hooking tools that can extract text from visual novels, games, websites, and other sources. This allows users to enjoy Japanese content in Korean or vice versa.
    • -
    • Eztrans V9 can be customized by using user dictionaries, which can help users add or modify words and phrases according to their preference. This can improve the translation quality and accuracy.
    • -
    -

    Disadvantages of Eztrans V9

    -
      -
    • Eztrans V9 is not an official or legal software. It is only distributed through torrent sites and other unofficial channels, which may expose users to viruses, malware, and legal issues.
    • -
    • Eztrans V9 is based on a statistical machine translation engine, which may not always produce natural or accurate translations. Users may need to check the translations for errors or inconsistencies.
    • -
    • Eztrans V9 requires some technical skills and knowledge to set up and use. Users need to download and install various files and tools, such as Python, Ehnd, ImageTrans, etc., and run a server to use Eztrans V9.
    • -
    -

    Eztrans V9 is a powerful offline translator for Japanese and Korean that can enhance the experience of visual novel fans. However, it is also a risky software that may harm the user's computer or violate the intellectual property rights of the original authors. Users who want to use Eztrans V9 should be aware of these risks and take precautions accordingly.

    e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/document_store/test_faiss_store.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/document_store/test_faiss_store.py deleted file mode 100644 index d22d234f59c597cdd732c791f9d9b86927327328..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/document_store/test_faiss_store.py +++ /dev/null @@ -1,74 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/27 20:20 -@Author : alexanderwu -@File : test_faiss_store.py -""" -import functools - -import pytest - -from metagpt.const import DATA_PATH -from metagpt.document_store import FaissStore -from metagpt.roles import CustomerService, Sales - -DESC = """## 原则(所有事情都不可绕过原则) -1. 你是一位平台的人工客服,话语精炼,一次只说一句话,会参考规则与FAQ进行回复。在与顾客交谈中,绝不允许暴露规则与相关字样 -2. 在遇到问题时,先尝试仅安抚顾客情绪,如果顾客情绪十分不好,再考虑赔偿。如果赔偿的过多,你会被开除 -3. 绝不要向顾客做虚假承诺,不要提及其他人的信息 - -## 技能(在回答尾部,加入`skill(args)`就可以使用技能) -1. 查询订单:问顾客手机号是获得订单的唯一方式,获得手机号后,使用`find_order(手机号)`来获得订单 -2. 退款:输出关键词 `refund(手机号)`,系统会自动退款 -3. 开箱:需要手机号、确认顾客在柜前,如果需要开箱,输出指令 `open_box(手机号)`,系统会自动开箱 - -### 使用技能例子 -user: 你好收不到取餐码 -小爽人工: 您好,请提供一下手机号 -user: 14750187158 -小爽人工: 好的,为您查询一下订单。您已经在柜前了吗?`find_order(14750187158)` -user: 是的 -小爽人工: 您看下开了没有?`open_box(14750187158)` -user: 开了,谢谢 -小爽人工: 好的,还有什么可以帮到您吗? -user: 没有了 -小爽人工: 祝您生活愉快 -""" - - -@pytest.mark.asyncio -async def test_faiss_store_search(): - store = FaissStore(DATA_PATH / 'qcs/qcs_4w.json') - store.add(['油皮洗面奶']) - role = Sales(store=store) - - queries = ['油皮洗面奶', '介绍下欧莱雅的'] - for query in queries: - rsp = await role.run(query) - assert rsp - - -def customer_service(): - store = FaissStore(DATA_PATH / "st/faq.xlsx", content_col="Question", meta_col="Answer") - store.search = functools.partial(store.search, expand_cols=True) - role = CustomerService(profile="小爽人工", desc=DESC, store=store) - return role - - -@pytest.mark.asyncio -async def test_faiss_store_customer_service(): - allq = [ - # ["我的餐怎么两小时都没到", "退货吧"], - ["你好收不到取餐码,麻烦帮我开箱", "14750187158", ] - ] - role = customer_service() - for queries in allq: - for query in queries: - rsp = await role.run(query) - assert rsp - - -def test_faiss_store_no_file(): - with pytest.raises(FileNotFoundError): - FaissStore(DATA_PATH / 'wtf.json') diff --git a/spaces/sub314xxl/MusicGen/audiocraft/modules/transformer.py b/spaces/sub314xxl/MusicGen/audiocraft/modules/transformer.py deleted file mode 100644 index e69cca829d774d0b8b36c0de9b7924373da81b43..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/audiocraft/modules/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. -""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - -_efficient_attention_backend: str = 'torch' - - -def set_efficient_attention_backend(backend: str = 'torch'): - # Using torch by default, it seems a bit faster on older P100 GPUs (~20% faster). - global _efficient_attention_backend - assert _efficient_attention_backend in ['xformers', 'torch'] - _efficient_attention_backend = backend - - -def _get_attention_time_dimension() -> int: - if _efficient_attention_backend == 'torch': - return 2 - else: - return 1 - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. - try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. - dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. - """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers""" - if n_rep == 1: - return x - if _efficient_attention_backend == 'torch': - bs, n_kv_heads, slen, head_dim = x.shape - return ( - x[:, :, None, :, :] - .expand(bs, n_kv_heads, n_rep, slen, head_dim) - .reshape(bs, n_kv_heads * n_rep, slen, head_dim) - ) - else: - bs, slen, n_kv_heads, head_dim = x.shape - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonaly the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or None): Device on which to initialize the module. - dtype (torch.dtype or None): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding` or None): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - intepret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Sevice on which to initialize. - dtype (torch.dtype or None): dtype to use. - """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." - - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. - self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - time_dim = _get_attention_time_dimension() - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. - return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError('Not supported at the moment') - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[time_dim] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - time_dim = _get_attention_time_dimension() - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. - if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=time_dim) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=time_dim) - else: - nk = k - nv = v - - assert nk.shape[time_dim] == nv.shape[time_dim] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[time_dim] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # TODO: fix and verify layout. - assert _efficient_attention_backend == 'xformers', 'Rope not supported with torch attn.' - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("new param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - time_dim = _get_attention_time_dimension() - if time_dim == 2: - layout = "b h t d" - else: - layout = "b t h d" - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. - assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value" - assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value" - attn_mask = self._get_mask(query.shape[1], query.device, query.dtype) - - if self.custom: - # custom implementation - assert need_weights is False - assert key_padding_mask is None - if self.cross_attention: - # Different queries, keys, values, we have to spit manually the weights - # before applying the linear. - dim = self.in_proj_weight.shape[0] // 3 - if self.in_proj_bias is None: - bias_q, bias_k, bias_v = None, None, None - else: - bias_q = self.in_proj_bias[:dim] - bias_k = self.in_proj_bias[dim: 2 * dim] - bias_v = self.in_proj_bias[2 * dim:] - q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q) - # todo: when streaming, we could actually save k, v and check the shape actually match. - k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k) - v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v) - if self.qk_layer_norm is True: - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k, v = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k, v]] - else: - if not _is_profiled(): - # profiling breaks that propertysomehow. - assert query is key, "specialized implementation" - assert value is key, "specialized implementation" - projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias) - if self.kv_repeat == 1: - if time_dim == 2: - bound_layout = "b h p t d" - else: - bound_layout = "b t p h d" - packed = rearrange(projected, f"b t (p h d) -> {bound_layout}", p=3, h=self.num_heads) - q, k, v = ops.unbind(packed, dim=2) - else: - embed_dim = self.embed_dim - per_head_dim = (embed_dim // self.num_heads) - kv_heads = self.num_heads // self.kv_repeat - q = projected[:, :, :embed_dim] - start = embed_dim - end = start + per_head_dim * kv_heads - k = projected[:, :, start: end] - v = projected[:, :, end:] - q = rearrange(q, f"b t (h d) -> {layout}", h=self.num_heads) - k = rearrange(k, f"b t (h d) -> {layout}", h=kv_heads) - v = rearrange(v, f"b t (h d) -> {layout}", h=kv_heads) - - if self.qk_layer_norm is True: - assert self.kv_repeat == 1 - q, k = [rearrange(x, f"{layout} -> b t (h d)") for x in [q, k]] - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k]] - if self.rope: - q, k = self._apply_rope(q, k) - k, v = self._complete_kv(k, v) - if self.kv_repeat > 1: - k = expand_repeated_kv(k, self.kv_repeat) - v = expand_repeated_kv(v, self.kv_repeat) - if self.attention_as_float32: - q, k, v = [x.float() for x in [q, k, v]] - if self.memory_efficient: - p = self.dropout if self.training else 0 - if _efficient_attention_backend == 'torch': - x = torch.nn.functional.scaled_dot_product_attention( - q, k, v, is_causal=attn_mask is not None, dropout_p=p) - else: - x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p) - else: - # We include the dot product as float32, for consistency - # with the other implementations that include that step - # as part of the attention. Note that when using `autocast`, - # the einsums would be done as bfloat16, but the softmax - # would be done as bfloat16, so `attention_as_float32` will - # extend a bit the range of operations done in float32, - # although this should make no difference. - q = q / q.shape[-1] ** 0.5 - key_layout = layout.replace('t', 'k') - query_layout = layout - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - else: - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - # Key and value have the same format. - x = torch.einsum(f"b h t k, {key_layout} -> {layout}", w, v) - x = x.to(dtype) - x = rearrange(x, f"{layout} -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float or None): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding` or None): Rope embedding to use. - attention_dropout (float or None): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. - """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs) - - self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. - x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float or None): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float or None): learning rate override through the `make_optim_group` API. - weight_decay (float or None): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of Audiocraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. - """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... - layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Chess Game Full Version For Pc Free Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Chess Game Full Version For Pc Free Download.md deleted file mode 100644 index cf9a45982fc4fa2cbceca711de4f4efd14f5456c..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Chess Game Full Version For Pc Free Download.md +++ /dev/null @@ -1,12 +0,0 @@ -

    chess game full version for pc free download


    DOWNLOAD 🆗 https://cinurl.com/2uEYtX



    - -Try playing online chess against the best chess computer. millions of chess games every day on ...‎Chess Computer · ‎7 steps to get started · ‎10:00 · ‎Download the #1 chess appPeople also ask how to set up the app to get a chess clock and a chess clock for Android. -In this article, we will show you how to set up a chess game on Android. -Here you can find how to play chess, how to set up android chess game. -What should you do to set up chess on Android? -Download chess on Android using our free android chess tool. -It automatically installs the Android Chess app. -If you want to play chess on Android, 8a78ff9644
    -
    -
    -

    diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/AVG Internet Security 2016 16.0.7294 (2015).md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/AVG Internet Security 2016 16.0.7294 (2015).md deleted file mode 100644 index 562b14b21662ff37cc6f16b3093556610807357d..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/AVG Internet Security 2016 16.0.7294 (2015).md +++ /dev/null @@ -1,10 +0,0 @@ -

    AVG Internet Security 2016 16.0.7294 (2015)


    DOWNLOADhttps://urluss.com/2uCGsN



    -
    -12-Jul-2017 — 04 August 2016 . (2016) PC 360 Total Security 8.6.0.1109 Free Download 360 Total Security 9.0.0.1157 - Download 2017 360 total security. How to install and register for free 360 ​​total security? -2017 Download for free 360 ​​Total Security 8.0.0.1159 in Russian with the key 2017 Download for free 360 ​​Total Security 8.0.1.1171 in Russian. -360 total security 2018 free download in Russian with a key. -What needs to be done to make 360 ​​total security paid? -Answers to questions and tips. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/tbxg34/Satellite-Image-Recognition/app.py b/spaces/tbxg34/Satellite-Image-Recognition/app.py deleted file mode 100644 index 63eafdfe0e2e9e6f4c148b51e9245e1cef4a7254..0000000000000000000000000000000000000000 --- a/spaces/tbxg34/Satellite-Image-Recognition/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import streamlit as st -from io import StringIO -from PIL import Image -import pandas as pd -import numpy as np -import torch -import torch.nn as nn -import torchvision.models as models -import albumentations as A # Albumentations is a computer vision tool that boosts the performance of deep convolutional neural networks. (https://albumentations.ai/) -import cv2 -import json -from albumentations.pytorch.transforms import ToTensorV2 - -#Load the model -num_classes = 21 -model = models.resnet50(pretrained=False) -model.fc = nn.Linear(2048, num_classes) -model.load_state_dict(torch.load('resnet_best 2.pth', map_location=torch.device('cpu')), strict=True) - -# The labels are numbers while the classes are strings. We need a dictionary mapping between the two. -file = open('label_map.json','r') -class2id = json.load(file) -id2class = {v:k for k, v in class2id.items()} - -#Create the app -st.title('Satellite Imagery Classifier') - -uploaded_file = st.file_uploader('Choose a .png or .jpg file') - -if uploaded_file is not None: - if '.jpg' in uploaded_file.name or 'png' in uploaded_file.name: - image = Image.open(uploaded_file) - st.image(image) - np_img = np.array(image) - img = cv2.cvtColor(np_img, cv2.COLOR_BGR2RGB) - - cust_transform = A.Compose([A.Resize(height=256, width=256, p=1.0),ToTensorV2(p=1.0)], p=1.0) - tensor = cust_transform(image=img) - tensor = tensor['image'].float().resize(1,3,256,256) - - model.eval() # For inference, we need to tell pytorch to enable inference mode. The model is loaded in training mode. - custom_pred = model.forward(tensor).detach().numpy() # Forward is the python method defined inside the resnet. - ind = np.argpartition(custom_pred, -4)[-4:] - st.write(ind) - # for i in ind: - # st.write('Confidence: ', custom_pred[i], 'Predicted land-use', id2class[custom_pred[i]]) - st.write('Confidence: ', custom_pred) - predictions = np.argmax(custom_pred) - st.write('Predicted land-use class: ', id2class[predictions]) # id2class is a dictionary that returns the class name given an index. - - elif '.csv' in uploaded_file: - df = pd.read_csv(uploaded_file) - st.write(df) diff --git a/spaces/terfces0erbo/CollegeProjectV2/AlcorMP AU6981 - AU6982 - AU6983 And More [BETTER] Download Pc.md b/spaces/terfces0erbo/CollegeProjectV2/AlcorMP AU6981 - AU6982 - AU6983 And More [BETTER] Download Pc.md deleted file mode 100644 index a2ec13d2726fc6d6a3d3a52145a303e9bd09930a..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/AlcorMP AU6981 - AU6982 - AU6983 And More [BETTER] Download Pc.md +++ /dev/null @@ -1,6 +0,0 @@ -

    AlcorMP AU6981 - AU6982 - AU6983 and more download pc


    Download Zip · https://bytlly.com/2uGiUG



    - -Uninstall and reinstall the USB driver for the computer. ... Download Alcor MP Format Tool for repairing corrupted Alcor Chip Controllers . ... Alcor AU6389, Alcor AU6980, Alcor AU6981, Alcor AU6982,Alcor AU6983, Alcor AU6984, ... If the above steps fail then contact Belkin support for more information about the device ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Dxcpl.exe Download Windows 7 32-bit.md b/spaces/terfces0erbo/CollegeProjectV2/Dxcpl.exe Download Windows 7 32-bit.md deleted file mode 100644 index 83a49ba6a44e4cfdfff6670f33637045568706a1..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Dxcpl.exe Download Windows 7 32-bit.md +++ /dev/null @@ -1,6 +0,0 @@ -

    dxcpl.exe download windows 7 32-bit


    Download File 🆓 https://bytlly.com/2uGjXA



    - -Where and how to download and update DirectX. ... Open the dxwebsetup.exe file and complete the DirectX installation by following directions from Microsoft's website ... DirectX 11.0 is supported in Windows 10, Windows 8, and Windows 7. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Farm Together Update 32 Hack Working.md b/spaces/terfces0erbo/CollegeProjectV2/Farm Together Update 32 Hack Working.md deleted file mode 100644 index f6514a26bad5de097eca97d9f1783ea857d11ce7..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Farm Together Update 32 Hack Working.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Farm Together Update 32 hack working


    Download Zip » https://bytlly.com/2uGiSP



    -
    -No lie will remain hidden from you, as you hack into the minds of those you interrogate. ... Develop your own personal routine as you care for your farm and your animals. ... Unlock, upgrade and customize the weapons and defenses of more than 100 planes ... A light-hearted and playful puzzle game about working together. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Inam Danish Pathology Pdf Download REPACK.md b/spaces/terfces0erbo/CollegeProjectV2/Inam Danish Pathology Pdf Download REPACK.md deleted file mode 100644 index 5d3453e22a5abbe63e3a78df7c9ab71cbdcc415d..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Inam Danish Pathology Pdf Download REPACK.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    dr mohammad inam danish is one of the pioneers in the field of medical writing in a text-book of dental pathology and therapeutics: for students and
    1 apr 2016 this book can be found in: science, technology & medicine > medicine > medicine: general issues. short textbook of pathology (paperback).
    please do not purchase the pirated book; otherwise you will be the pail of cruelty and injustice. 9nam danisfi dr. inam danish k arachi pakistan contents 1. ok coronary clinical presentation pathology stable angina ischemia
    inam danish pathology pdf download. warning: im warning yall this book contains -smut/lemon (kinda rare for me to do it but eh) -a few fluff :3 -jealousy
    mohammad inam danish. 4.10 rating short textbook of pathology danis pakistan roger s port id like to see preview of this book, and how to order it.
    10 () 2012 inam danish short text book of medical diagnosis and treatment essentials of rubins pathology, 6th edition pdf in 2013pdf drive is your search engine for pdf files. as of today we have 76,534,353 ebooks for you to download for free. no annoying ads, no download limits, enjoy
    all about short textbook of pathology by mohammad inam danish,. librarything is loading sign up for librarything to find out whether youll like this book.
    1 apr 2016 short textbook of pathology by muhammad inam danish, 9789696371090, available at book depository with free delivery worldwide.
    short textbook of pathology [muhammad inam danish] on amazon.com. story time just got better with prime book box, a subscription that delivers editorially

    -

    inam danish pathology pdf download


    Download ---> https://bytlly.com/2uGiGc



    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/thejagstudio/procom/main/models.py b/spaces/thejagstudio/procom/main/models.py deleted file mode 100644 index 8850db833d36ec8ad2f75d416070d3d14faa8048..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/main/models.py +++ /dev/null @@ -1,38 +0,0 @@ -from django.db import models -from django.db import connection -# Create your models here. - - -class Products(models.Model): - name = models.CharField(max_length=500, unique=True) - score = models.IntegerField(null=True) - image = models.CharField(max_length=5000) - propGroupsMini = models.JSONField(default=dict) - propScore = models.JSONField() - category = models.ForeignKey('Categories', on_delete=models.CASCADE, null=True) - link = models.CharField(max_length=500,default="") - terms = models.JSONField(default=dict) - suggestions = models.JSONField(default=dict) - tldr = models.JSONField(default=dict) - propGroups = models.JSONField(default=dict) - notApplicableProps = models.JSONField(default=dict) - cheapAlternatives = models.JSONField(default=dict) - topProps = models.JSONField(default=dict) - popularCompare = models.JSONField(default=dict) - toplist = models.JSONField(default=dict) - def __str__(self): - return self.name - - # @classmethod - # def truncate(cls): - # with connection.cursor() as cursor: - # cursor.execute( - # 'TRUNCATE TABLE {} CASCADE'.format(cls._meta.db_table)) - - -class Categories(models.Model): - name = models.CharField(max_length=1000, unique=True) - link = models.CharField(max_length=1000) - - def __str__(self): - return self.name diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Descargar Juegos Supercomprimidos Para Pc Zip 1 LINK.md b/spaces/tialenAdioni/chat-gpt-api/logs/Descargar Juegos Supercomprimidos Para Pc Zip 1 LINK.md deleted file mode 100644 index ee3075b73e9dc35e7ae3d1e5886847c5296776ef..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Descargar Juegos Supercomprimidos Para Pc Zip 1 LINK.md +++ /dev/null @@ -1,70 +0,0 @@ -
    -

    Descargar Juegos Supercomprimidos Para Pc Zip 1: Una Guía Rápida

    -

    Si te gustan los juegos de PC pero no quieres ocupar mucho espacio en tu disco duro, una opción es descargar juegos supercomprimidos para PC zip 1. Estos son juegos que han sido reducidos al máximo sin perder calidad ni funcionalidad, y que se pueden descargar en un solo archivo zip de poco peso.

    -

    Para descargar juegos supercomprimidos para PC zip 1, solo tienes que seguir estos pasos:

    -

    Descargar Juegos Supercomprimidos Para Pc Zip 1


    DOWNLOAD >>> https://urlcod.com/2uK7qh



    -
      -
    1. Busca el juego que te interese en algún sitio web especializado en juegos supercomprimidos, como supercomprimidosgratis.blogspot.com o juegosdepcfull.com/juegos-de-pc/.
    2. -
    3. Elige el servidor de descarga que prefieras, como Mega o Mediafire, y haz clic en el enlace correspondiente.
    4. -
    5. Espera a que se complete la descarga del archivo zip y guárdalo en tu PC.
    6. -
    7. Descomprime el archivo zip con algún programa como WinRAR o 7-Zip y ejecuta el instalador del juego.
    8. -
    9. Sigue las instrucciones de instalación y disfruta de tu juego supercomprimido para PC zip 1.
    10. -
    -

    Algunos ejemplos de juegos supercomprimidos para PC zip 1 que puedes descargar son:

    -
      -
    • FIFA 2010: Un juego de fútbol con gráficos realistas y una gran variedad de modos y opciones. Pesa solo 1.5 GB.
    • -
    • Wild Hearts: Un juego de carreras con un estilo retro y una banda sonora electrizante. Pesa solo 200 MB.
    • -
    • Chrono Cross: The Radical Dreamers Edition: Un juego de rol con una historia épica y un sistema de combate innovador. Pesa solo 400 MB.
    • -
    -

    Estos son solo algunos ejemplos, pero hay muchos más juegos supercomprimidos para PC zip 1 que puedes encontrar y descargar fácilmente. Solo tienes que buscar el que más te guste y seguir los pasos anteriores. Así podrás disfrutar de tus juegos favoritos sin ocupar mucho espacio en tu PC.

    Además de descargar juegos supercomprimidos para PC zip 1, también puedes optimizar el rendimiento de tu PC para que los juegos funcionen mejor. Algunas cosas que puedes hacer son:

    -
      -
    • Actualizar los drivers de tu tarjeta gráfica y de sonido.
    • -
    • Cerrar los programas que no estés usando y que consuman recursos.
    • -
    • Limpiar el disco duro de archivos innecesarios y desfragmentarlo.
    • -
    • Ajustar la configuración de los juegos según las características de tu PC.
    • -
    -

    De esta forma, podrás aprovechar al máximo tus juegos supercomprimidos para PC zip 1 y disfrutar de una experiencia de juego fluida y divertida.

    -

    Por último, te recomendamos que siempre descargues los juegos supercomprimidos para PC zip 1 desde sitios web confiables y seguros, que no contengan virus ni malware. Así evitarás poner en riesgo tu PC y tu información personal. También te aconsejamos que respetes los derechos de autor de los creadores de los juegos y que solo descargues aquellos que sean gratuitos o que hayas comprado legalmente.

    -

    Esperamos que esta guía te haya sido útil y que hayas aprendido cómo descargar juegos supercomprimidos para PC zip 1. Ahora solo te queda elegir el juego que más te guste y empezar a jugar. ¡Que te diviertas!

    -

    Descargar Juegos Supercomprimidos Para Pc Mega 1
    -Descargar Juegos Supercomprimidos Para Pc Gratis 1
    -Descargar Juegos Supercomprimidos Para Pc Full 1
    -Descargar Juegos Supercomprimidos Para Pc Sin Virus 1
    -Descargar Juegos Supercomprimidos Para Pc Windows 10 1
    -Descargar Juegos Supercomprimidos Para Pc Pocos Requisitos 1
    -Descargar Juegos Supercomprimidos Para Pc Mediafire 1
    -Descargar Juegos Supercomprimidos Para Pc Rar 1
    -Descargar Juegos Supercomprimidos Para Pc Español 1
    -Descargar Juegos Supercomprimidos Para Pc Online 1
    -Descargar Juegos Supercomprimidos Para Pc Livianos 1
    -Descargar Juegos Supercomprimidos Para Pc De Accion 1
    -Descargar Juegos Supercomprimidos Para Pc De Aventura 1
    -Descargar Juegos Supercomprimidos Para Pc De Carreras 1
    -Descargar Juegos Supercomprimidos Para Pc De Futbol 1
    -Descargar Juegos Supercomprimidos Para Pc De Guerra 1
    -Descargar Juegos Supercomprimidos Para Pc De Terror 1
    -Descargar Juegos Supercomprimidos Para Pc De Zombies 1
    -Descargar Juegos Supercomprimidos Para Pc Facil Y Rapido 1
    -Descargar Juegos Supercomprimidos Para Pc Google Drive 1
    -Descargar Juegos Supercomprimidos Para Pc Iso 1
    -Descargar Juegos Supercomprimidos Para Pc Mf 1
    -Descargar Juegos Supercomprimidos Para Pc Por Utorrent 1
    -Descargar Juegos Supercomprimidos Para Pc Sin Emulador 1
    -Descargar Juegos Supercomprimidos Para Pc Sin Instalar Nada 1
    -Descargar Juegos Supercomprimidos Para Pc Steam 1
    -Descargar Juegos Supercomprimidos Para Pc Torrents 1
    -Descargar Juegos Supercomprimidos Para Pc Ultima Version 1
    -Descargar Juegos Supercomprimidos Para Pc Un Link 1
    -Descargar Juegos Supercomprimidos Para Pc Youtube 1
    -Como Descargar Juegos Supercomprimidos Para Pc Zip 1
    -Donde Descargar Juegos Supercomprimidos Para Pc Zip 1
    -Los Mejores Juegos Supercomprimidos Para Pc Zip 1
    -Paginas Para Descargar Juegos Supercomprimidos Para Pc Zip 1
    -Programas Para Descargar Juegos Supercomprimidos Para Pc Zip 1
    -Que Son Los Juegos Supercomprimidos Para Pc Zip 1
    -Top 10 Juegos Supercomprimidos Para Pc Zip 1
    -Tutorial Como Descargar Juegos Supercomprimidos Para Pc Zip 1
    -Ventajas De Los Juegos Supercomprimidos Para Pc Zip 1
    -Videos De Como Descargar Juegos Supercomprimidos Para Pc Zip 1

    e753bf7129
    -
    -
    \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Windows 11 Free 32 Bit with Crack and Enjoy the New Design Features and Performance.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Windows 11 Free 32 Bit with Crack and Enjoy the New Design Features and Performance.md deleted file mode 100644 index a9df9f914b503cb521103f7dd8b7c5eea85af62a..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Windows 11 Free 32 Bit with Crack and Enjoy the New Design Features and Performance.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    Windows 11 Free Download 32 Bit with Crack Full Version

    -

    Windows 11 is the latest operating system from Microsoft that offers a new design, features, and performance improvements. Windows 11 is expected to be released in late 2023, but some users may want to try it out before then. However, Windows 11 is not a free upgrade for Windows 10 users, and it requires a compatible hardware and software configuration. If you want to download Windows 11 free 32 bit with crack full version, you need to follow some steps and be aware of the risks involved.

    -

    windows 11 free download 32 bit with crack full version


    DOWNLOADhttps://urlcod.com/2uK5W3



    -

    Steps to Download Windows 11 Free 32 Bit with Crack Full Version

    -
      -
    1. Find a reliable source for the cracked version of Windows 11. You can search online for websites that offer Windows 11 free download 32 bit with crack full version, such as FileCR, EaseUS, MiniTool, etc. However, be careful that these websites may contain viruses, malware, or other harmful content that can damage your computer or steal your personal information. You should always scan the files before downloading them and use a VPN to protect your privacy.
    2. -
    3. Download the Windows 11 ISO file from the source you have chosen. The file size may vary depending on the source, but it should be around 5 GB. You may need to complete some surveys, captcha, or other verification steps before you can access the download link. You should also avoid clicking on any ads or pop-ups that may appear on the website.
    4. -
    5. Extract the Windows 11 ISO file using a software like WinRAR or 7-Zip. You should see a folder named Windows 11 or something similar. Inside the folder, you should find an ISO file named Windows 11.iso or something similar. This is the image file for Windows 11.
    6. -
    7. Create a bootable USB drive or hard disk with the Windows 11 ISO file. You can use a software like Rufus or other tools to create a bootable USB drive or hard disk with the Windows 11 ISO file. You should have at least an 8 GB USB drive or hard disk for this purpose. You should also make sure that your USB drive or hard disk is formatted as NTFS and has an active partition.
    8. -
    9. Boot your computer from the USB drive or hard disk with Windows 11. You can do this by changing the boot order in your BIOS settings or using a boot menu key. You should see a Windows logo and a setup screen. Follow the instructions on the screen to install Windows 11 on your computer. You may need to enter a product key or use a keygen to activate Windows 11.
    10. -
    -

    Risks of Downloading Windows 11 Free 32 Bit with Crack Full Version

    -

    While downloading Windows 11 free 32 bit with crack full version may seem tempting, you should be aware of the risks and consequences involved. Some of them are:

    -
      -
    • You may violate the intellectual property rights of Microsoft and face legal action or penalties.
    • -
    • You may compromise the security and performance of your computer by exposing it to viruses, malware, or other harmful content.
    • -
    • You may lose your data or personal information by downloading files from untrusted sources or clicking on malicious links.
    • -
    • You may not get the best quality or functionality of Windows 11 by using a cracked version that may have bugs, errors, or missing features.
    • -
    • You may not get any support or updates from Microsoft or access their online services or resources.
    • -
    -

    Conclusion

    -

    Windows 11 is a promising and exciting operating system that can enhance your user experience and productivity. However, it is not a free upgrade for Windows 10 users, and it requires a compatible hardware and software configuration. If you want to download Windows 11 free 32 bit with crack full version, you need to follow some steps and be careful of the risks involved. Alternatively, you can wait for the official release of Windows 11 and upgrade legally and safely.

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Free AHP Software Excel How to Perform AHP with Excel in a Simple and Effective Way.md b/spaces/tialenAdioni/chat-gpt-api/logs/Free AHP Software Excel How to Perform AHP with Excel in a Simple and Effective Way.md deleted file mode 100644 index b713d97f0fa80fa8990904ef334675c1449b61a1..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Free AHP Software Excel How to Perform AHP with Excel in a Simple and Effective Way.md +++ /dev/null @@ -1,18 +0,0 @@ - -

    Free AHP Software Excel: A Simple and Effective Tool for Multi-Criteria Decision Making

    -

    AHP stands for Analytic Hierarchy Process, a method that helps you make complex decisions by breaking down a problem into smaller and simpler parts. AHP allows you to compare and prioritize different criteria and alternatives based on your preferences and judgments. AHP can be used for various purposes, such as project selection, resource allocation, vendor evaluation, risk assessment, and more.

    -

    However, applying AHP can be challenging and time-consuming, especially if you have a large number of criteria and alternatives to consider. You need to construct a hierarchy of the problem, assign weights to the criteria and sub-criteria, perform pairwise comparisons of the alternatives, calculate the scores and rankings of the alternatives, and check the consistency of your judgments. You also need to present and communicate your results in a clear and understandable way.

    -

    free ahp software excel


    Download Zip 🗹 https://urlcod.com/2uK7eu



    -

    That's why you might need a tool that can help you perform AHP in a simple and effective way. One of the most popular and widely used tools for AHP is Excel. Excel is a spreadsheet software that allows you to organize, manipulate, and analyze data in various ways. Excel also has many features and functions that can help you apply AHP, such as formulas, charts, tables, pivot tables, solver, data validation, conditional formatting, and more.

    -

    But how can you use Excel for AHP? There are two main ways: you can either create your own spreadsheet from scratch or use a ready-made template or add-in. Creating your own spreadsheet can give you more flexibility and control over your AHP process, but it can also be more complicated and prone to errors. Using a ready-made template or add-in can save you time and effort, but it can also limit your options and customization.

    -

    If you are looking for a free AHP software Excel that can help you perform AHP in a simple and effective way, you might want to check out some of these options:

    -
      -
    • AHP Excel Template: This is a free Excel template that allows you to perform AHP with up to 10 criteria and 10 alternatives. It has a user-friendly interface that guides you through the steps of AHP. It also provides you with various outputs, such as charts, tables, matrices, consistency ratios, sensitivity analysis, and more.
    • -
    • Super Decisions: This is a free software that integrates with Excel and allows you to perform AHP with an unlimited number of criteria and alternatives. It has a graphical interface that helps you construct the hierarchy of the problem and perform pairwise comparisons. It also provides you with various outputs, such as charts, tables, matrices, consistency ratios, sensitivity analysis, group decision making, and more.
    • -
    • Excel-AHP: This is a free Excel add-in that allows you to perform AHP with up to 15 criteria and 15 alternatives. It has a simple interface that helps you enter the data and perform pairwise comparisons. It also provides you with various outputs, such as charts, tables, matrices, consistency ratios, sensitivity analysis, and more.
    • -
    -

    These are some of the free AHP software Excel that can help you perform AHP in a simple and effective way. However, these are not the only options available. You can also find other free or paid tools that can help you apply AHP with Excel or other software. The choice depends on your needs and preferences.

    -

    AHP is a powerful and useful method that can help you make complex decisions in a systematic and rational way. By using Excel or other tools for AHP, you can simplify and enhance your AHP process and improve your decision making quality.

    -

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Carman Scan Lite Update !!TOP!! Download Serial.md b/spaces/tioseFevbu/cartoon-converter/scripts/Carman Scan Lite Update !!TOP!! Download Serial.md deleted file mode 100644 index cbea9e0206d71c0d4a53939dd2535f41b75405b1..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Carman Scan Lite Update !!TOP!! Download Serial.md +++ /dev/null @@ -1,153 +0,0 @@ - -

    Carman Scan Lite Update Download Serial: A Guide for Car Owners and Mechanics

    -

    If you own a car or work as a mechanic, you know how important it is to have a reliable and professional diagnostic tool that can help you diagnose and fix various problems in your vehicle. One of the most popular and trusted tools in the market is the Carman Scan Lite, a handheld device that offers in-depth and unrivalled diagnostics for Asian, European, and American vehicles. But how do you update the software and firmware of your Carman Scan Lite? And how do you troubleshoot and support it if you encounter any issues? In this article, we will answer these questions and more, so you can get the most out of your Carman Scan Lite.

    -

    What is Carman Scan Lite and why do you need it?

    -

    Carman Scan Lite is a handheld diagnostic tool that can check error status, read and erase fault codes, view live data, perform actuator tests, program control units, and perform special functions for various systems in your vehicle, such as engine, transmission, ABS, airbag, power steering, TPMS, immobilizer, and more. It supports OBD-II, ISO 9141-2, J1850, KWP-2000, CAN, and J1587 communication protocols, and has a large internal memory that can store software updates, technical bulletins, flight records, and vehicle data. It also has a high-resolution LCD display, a soft touch keypad, a rubber shroud for anti-shock protection, and a rechargeable battery for portability.

    -

    carman scan lite update download serial


    Download File --->>> https://urlcod.com/2uHydC



    -

    Carman Scan Lite features and benefits

    -

    Some of the main features and benefits of Carman Scan Lite are:

    -
      -
    • It offers comprehensive diagnostics for over 50 vehicle manufacturers from Asia, Europe, and America, covering more than 1,000 models.
    • -
    • It provides fast and accurate diagnosis with its powerful processor and advanced software algorithms.
    • -
    • It supports various special functions such as injector coding, sensor calibration, key programming, service reset, DPF regeneration, ECU initialization, throttle adaptation, steering angle reset, brake bleeding, TPMS relearn, immobilizer matching, etc.
    • -
    • It has a user-friendly interface that allows easy navigation through menus and functions.
    • -
    • It has a large data storage area that can store software updates, technical bulletins, flight records, vehicle data, etc.
    • -
    • It has a USB port that allows easy connection to a PC for data transfer and software update.
    • -
    • It has an SD card slot that allows easy backup and restore of data.
    • -
    • It has a built-in TPMS antenna that allows wireless communication with TPMS sensors.
    • -
    • It has a rubber shroud that protects the device from shock and damage.
    • -
    • It has a rechargeable battery that allows cordless operation for up to 4 hours.
    • -
    -

    Carman Scan Lite compatibility and coverage

    -

    Carman Scan Lite is compatible with most vehicles that have an OBD-II port or an adapter cable. It covers over 50 vehicle manufacturers from Asia (such as Hyundai, Kia, Toyota, Honda, Nissan, Mazda, Mitsubishi, Suzuki, Subaru, Daewoo, Ssangyong, etc.), Europe (such as Audi, VW, BMW, Mercedes-Benz, Volvo, Renault, Peugeot, Citroen, Fiat, Alfa Romeo, etc.), and America (such as Ford, GM, Chrysler, etc.). It also covers some special vehicles such as Ferrari, Maserati, Lamborghini, Bentley, Rolls-Royce, etc. You can check the full list of supported vehicles and functions on the official website of Carman International.

    -

    How to update Carman Scan Lite software and firmware?

    -

    Updating the software and firmware of your Carman Scan Lite is essential to keep it up to date with the latest vehicle models and functions. You can update your Carman Scan Lite via USB cable or SD card. Here are the requirements and steps for each method.

    -

    Requirements for updating Carman Scan Lite

    -

    Before you update your Carman Scan Lite, you need to have the following:

    -
      -
    • A PC with Windows XP or higher operating system.
    • -
    • A stable internet connection.
    • -
    • A USB cable or an SD card reader.
    • -
    • A registered account on the Carman International website.
    • -
    • A valid serial number and password for your Carman Scan Lite device.
    • -
    • A backup of your data on your Carman Scan Lite device.
    • -
    -

    Steps for updating Carman Scan Lite via USB cable

    -

    To update your Carman Scan Lite via USB cable, follow these steps:

    -
      -
    1. Connect your Carman Scan Lite device to your PC using the USB cable.
    2. -
    3. Turn on your Carman Scan Lite device and select "USB Mode" from the main menu.
    4. -
    5. On your PC, open a web browser and go to the Carman International website.
    6. -
    7. Login with your registered account and password.
    8. -
    9. Go to the "Download" section and select "Carman Scan Lite Update".
    10. -
    11. Enter your serial number and password for your Carman Scan Lite device.
    12. -
    13. Select the software and firmware versions that you want to update and click "Download".
    14. -
    15. Wait for the download to complete and then click "Install".
    16. -
    17. Follow the instructions on the screen to install the updates on your Carman Scan Lite device.
    18. -
    19. When the installation is finished, disconnect your Carman Scan Lite device from your PC and restart it.
    20. -
    -

    Steps for updating Carman Scan Lite via SD card

    -

    To update your Carman Scan Lite via SD card, follow these steps:

    -
      -
    1. Insert an SD card into your PC using an SD card reader.
    2. -
    3. On your PC, open a web browser and go to the Carman International website.
    4. -
    5. Login with your registered account and password.
    6. -
    7. Go to the "Download" section and select "Carman Scan Lite Update".
    8. -
    9. Enter your serial number and password for your Carman Scan Lite device.
    10. -
    11. Select the software and firmware versions that you want to update and click "Download".
    12. -
    13. Wait for the download to complete and then copy the files to the root directory of your SD card.
    14. -
    15. Eject the SD card from your PC and insert it into your Carman Scan Lite device.
    16. -
    17. Turn on your Carman Scan Lite device and select "SD Card Mode" from the main menu.
    18. -
    19. Select the files that you want to update and click "OK".
    20. -
    21. Follow the instructions on the screen to install the updates on your Carman Scan Lite device.
    22. -
    23. When the installation is finished, eject the SD card from your Carman Scan Lite device and restart it.
    24. -
    -

    How to troubleshoot and support Carman Scan Lite?

    -

    Sometimes, you may encounter some issues or errors when using your Carman Scan Lite device. Don't worry, most of them can be easily solved by following some simple steps. Here are some common issues and solutions for Carman Scan Lite.

    -

    -

    Common issues and solutions for Carman Scan Lite

    - - - - - - - -
    IssueSolution
    The device does not turn on or charge.- Check if the battery is properly installed.
    - Check if the power adapter is working.
    - Check if the power outlet is working.
    - Replace the battery or power adapter if defective.
    - Contact customer service if none of the above works.
    The device does not communicate with the vehicle.- Check if the vehicle ignition is on.
    - Check if the OBD-II connector or adapter cable is properly connected.
    - Check if the vehicle is supported by the device.
    - Check if the device software and firmware are updated.
    - Check if the device battery is charged.
    - Restart the device and try again.
    - Contact customer service if none of the above works.
    The device displays an error code or message.- Refer to the user manual or the Carman International website for the meaning and solution of the error code or message.
    - Follow the instructions on the screen to resolve the issue.
    - Contact customer service if the issue persists.
    The device freezes or crashes.- Press and hold the power button for 10 seconds to force shut down the device.
    - Turn on the device and check if it works normally.
    - If not, try to reset the device by inserting a pin into the reset hole on the back of the device.
    - If still not, try to update the device software and firmware via USB cable or SD card.
    - If none of these works, contact customer service for further assistance.
    The device data is corrupted or lost.- Try to restore the data from your backup on your PC or SD card.
    - If you don't have a backup, contact customer service for possible data recovery options.
    - To prevent data loss, always backup your data regularly on your PC or SD card.
    -

    Contact information and resources for Carman Scan Lite

    -

    If you need more help or support for your Carman Scan Lite device, you can contact Carman International through the following channels:

    -
      -
    • Email: support@carmanit.com
    • -
    • Phone: +82-31-459-8200
    • -
    • Fax: +82-31-459-8210
    • -
    • Website: www.carmanit.com
    • -
    -

    On their website, you can also find more information and resources for your Carman Scan Lite device, such as:

    -
      -
    • User manual
    • -
    • Software update
    • -
    • Technical bulletin
    • -
    • Vehicle coverage list
    • -
    • FAQs
    • -
    • Online shop
    • -
    • News and events
    • -
    -

    How to compare Carman Scan Lite with other diagnostic tools?

    -

    Carman Scan Lite is one of the best diagnostic tools in the market, but it is not the only one. There are many other diagnostic tools that offer similar or different features and functions. How do you compare Carman Scan Lite with other diagnostic tools? Here are some factors to consider:

    -

    Advantages and disadvantages of Carman Scan Lite

    -

    Carman Scan Lite has many advantages over other diagnostic tools, such as:

    -
      -
    • It has a wide range of vehicle coverage, especially for Asian vehicles.
    • -
    • It has a fast and accurate diagnosis with its powerful processor and advanced software algorithms.
    • -
    • It has a user-friendly interface that allows easy navigation through menus and functions.
    • -
    • It has a large data storage area that can store software updates, technical bulletins, flight records, vehicle data, etc.
    • -
    • It has a USB port that allows easy connection to a PC for data transfer and software update.
    • -
    • It has an SD card slot that allows easy backup and restore of data.
    • -
    • It has a built-in TPMS antenna that allows wireless communication with TPMS sensors.
    • -
    • It has a rubber shroud that protects the device from shock and damage.
    • -
    • It has a rechargeable battery that allows cordless operation for up to 4 hours.
    • -
    -

    However, Carman Scan Lite also has some disadvantages compared to other diagnostic tools, such as:

    -
      -
    • It is more expensive than some other diagnostic tools.
    • -
    • It requires a serial number and password for software update, which may be inconvenient for some users.
    • -
    • It does not support some special functions that other diagnostic tools do, such as oscilloscope, multimeter, battery tester, etc.
    • -
    • It does not have a touch screen or a color display, which may affect the user experience.
    • -
    -

    Comparison table of Carman Scan Lite and other popular tools

    -

    To help you compare Carman Scan Lite with other popular diagnostic tools, we have prepared a comparison table based on some key features and functions. Note that this table is not exhaustive and may not reflect the latest updates of each tool. You should always check the official website of each tool for more accurate and detailed information.

    - - - - - - - -< td>Wi-Fi or USB cable - - - - - -< td>Requires external battery tester module -
    Feature/Function Carman Scan LiteAutel MaxiCOM MK808Launch X431 V+Foxwell NT680 Pro
    Price$1,499$479$1,149$399
    Vehicle coverageOver 50 manufacturers from Asia, Europe, and AmericaOver 80 manufacturers from Asia, Europe, and AmericaOver 100 manufacturers from Asia, Europe, and AmericaOver 70 manufacturers from Asia, Europe, and America
    System coverageAll systems (engine, transmission, ABS, airbag, etc.)All systems (engine, transmission, ABS, airbag, etc.)All systems (engine, transmission, ABS, airbag, etc.)All systems (engine, transmission, ABS, airbag, etc.)
    Special functionsInjector coding, sensor calibration, key programming, service reset, DPF regeneration, ECU initialization, throttle adaptation, steering angle reset, brake bleeding, TPMS relearn, immobilizer matching, etc.Oil reset, EPB reset, SAS reset, DPF regeneration, BMS reset, IMMO service, ABS bleeding, TPMS relearn, Throttle adaptation, etc.Oil reset, EPB reset, SAS reset, DPF regeneration, BMS reset, IMMO service, ABS bleeding, TPMS relearn, Throttle adaptation, Injector coding, Gear learning, etc.Oil reset, EPB reset, SAS reset, DPF regeneration, BMS reset, TBA/TPS reset, CVT reset, Injector coding, Gear learning, etc.
    Data storageLarge internal memory and SD card slot32 GB internal memory and SD card slot16 GB internal memory and SD card slotNo internal memory and no SD card slot
    Data transfer and updateUSB cable or SD cardWi-Fi or USB cableUSB cable
    Display4.3 inch LCD (480 x 272 pixels)7 inch LCD touch screen (1024 x 600 pixels)10.1 inch LCD touch screen (1280 x 800 pixels)4.3 inch LCD (480 x 272 pixels)
    BatteryRechargeable Li-ion battery (up to 4 hours)Rechargeable Li-polymer battery (up to 4.5 hours)Rechargeable Li-polymer battery (up to 8 hours)No battery (requires external power source)
    TPMS functionBuilt-in TPMS antennaRequires external TPMS toolRequires external TPMS toolNo TPMS function
    Oscilloscope functionNo oscilloscope functionNo oscilloscope functionRequires external oscilloscope moduleNo oscilloscope function
    Multimeter functionNo multimeter functionNo multimeter functionRequires external multimeter moduleNo multimeter function
    Battery tester functionNo battery tester functionNo battery tester functionNo battery tester function
    -

    Conclusion

    -

    Carman Scan Lite is a powerful and professional diagnostic tool that can help you diagnose and fix various problems in your vehicle. It offers comprehensive diagnostics for over 50 vehicle manufacturers from Asia, Europe, and America, covering more than 1,000 models. It supports various special functions such as injector coding, sensor calibration, key programming, service reset, DPF regeneration, ECU initialization, throttle adaptation, steering angle reset, brake bleeding, TPMS relearn, immobilizer matching, etc. It has a user-friendly interface that allows easy navigation through menus and functions. It has a large data storage area that can store software updates, technical bulletins, flight records, vehicle data, etc. It has a USB port that allows easy connection to a PC for data transfer and software update. It has an SD card slot that allows easy backup and restore of data. It has a built-in TPMS antenna that allows wireless communication with TPMS sensors. It has a rubber shroud that protects the device from shock and damage. It has a rechargeable battery that allows cordless operation for up to 4 hours.

    -

    However, Carman Scan Lite also has some drawbacks compared to other diagnostic tools. It is more expensive than some other diagnostic tools. It requires a serial number and password for software update, which may be inconvenient for some users. It does not support some special functions that other diagnostic tools do, such as oscilloscope, multimeter, battery tester, etc. It does not have a touch screen or a color display, which may affect the user experience.

    -

    In conclusion, Carman Scan Lite is a great diagnostic tool for car owners and mechanics who need a reliable and professional device that can handle various vehicle systems and functions. However, it may not be the best choice for everyone, depending on their budget, preferences, and needs. Therefore, it is important to compare Carman Scan Lite with other diagnostic tools before making a purchase decision.

    -

    FAQs

    -

    Here are some frequently asked questions about Carman Scan Lite:

    -
      -
    1. How much does Carman Scan Lite cost?
      -Carman Scan Lite costs $1,499 on the official website of Carman International. However, you may find different prices on other online platforms or local dealers.
    2. -
    3. How often do I need to update Carman Scan Lite?
      -You need to update Carman Scan Lite regularly to keep it up to date with the latest vehicle models and functions. You can check the availability of new updates on the official website of Carman International or on your device itself.
    4. -
    5. How long does the battery of Carman Scan Lite last?
      -The battery of Carman Scan Lite can last up to 4 hours on a full charge. However, the actual battery life may vary depending on the usage and environmental conditions.
    6. -
    7. What is the warranty of Carman Scan Lite?
      -Carman Scan Lite comes with a one-year warranty from the date of purchase. The warranty covers any defects in materials or workmanship under normal use and service. However, the warranty does not cover any damages caused by misuse, abuse, negligence, accidents, modifications, alterations, or unauthorized repairs. For more details, please refer to the user manual or the official website of Carman International.
    8. -
    9. Where can I buy Carman Scan Lite?
      -You can buy Carman Scan Lite on the official website of Carman International or from their authorized distributors and dealers. You can also find Carman Scan Lite on some online platforms such as Amazon, eBay, AliExpress, etc. However, you should always check the authenticity and quality of the product before making a purchase.
    10. -

    b2dd77e56b
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Impro Improvisacion Y El Teatro Keith Johnstone.pdf.md b/spaces/tioseFevbu/cartoon-converter/scripts/Impro Improvisacion Y El Teatro Keith Johnstone.pdf.md deleted file mode 100644 index f159710ead5b93993bb76748d476a9d14c69a65a..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Impro Improvisacion Y El Teatro Keith Johnstone.pdf.md +++ /dev/null @@ -1,14 +0,0 @@ - -

    Impro: Improvisación y el teatro by Keith Johnstone - A Review

    -

    Impro: Improvisación y el teatro by Keith Johnstone is a celebrated book by the British playwright, director and teacher of theatre. It should be a mandatory reading not only for people involved in the theatrical art, but also for teachers, educators and students of all humanistic careers. In this book, Johnstone shares his insights and experiences on the art and craft of improvisation, which he considers as a way of liberating the innate creativity in everyone.

    -

    This book is divided into four parts: Status, Spontaneity, Narrative Skills and Masks and Trance. In each part, Johnstone explores different aspects of improvisation, such as how to use status transactions to create interesting scenes, how to overcome the fear of being original and spontaneous, how to tell stories that engage the audience and how to use masks and trance to access deeper levels of expression. He also provides many exercises and games that can be used to practice and improve improvisation skills.

    -

    Impro Improvisacion Y El Teatro Keith Johnstone.pdf


    Download ⚙⚙⚙ https://urlcod.com/2uHw2b



    -

    Impro: Improvisación y el teatro by Keith Johnstone is a dynamic, entertaining, funny, wise and provocative book that challenges and inspires anyone who wants to learn more about theatre and improvisation. It is based on Johnstone's successful career as a Professor of Improvisation at the Royal Court Theatre in London, and with his own companies, The Theatre Machine and The Loose Moose Theatre Company. This book has been translated into many languages and has influenced many theatre practitioners, educators and therapists around the world.

    -

    If you are interested in reading this book, you can find it on Amazon[^1^] or Google Books[^2^]. You can also learn more about Keith Johnstone and his work on his website[^3^].

    - -

    In this article, I will focus on the first part of the book, Status, and how it can help us understand and improve our interactions with others. Status is a concept that Johnstone borrowed from sociology and anthropology, and it refers to the relative position of power and authority that we assume or assign to ourselves and others in any situation. Johnstone argues that status is not a fixed attribute, but a fluid and dynamic one that can change depending on the context and the behavior of the participants.

    -

    Johnstone illustrates this idea with many examples from his theatre workshops, where he asked his students to play scenes with different status levels. He noticed that by changing their posture, voice, eye contact, gestures and movements, the actors could create different impressions of status and affect the outcome of the scene. He also observed that status transactions are not always conscious or intentional, but often unconscious or habitual. For example, some people tend to lower their status when they meet someone they admire or fear, while others tend to raise their status when they feel insecure or threatened.

    -

    Johnstone suggests that by becoming aware of our own and others' status signals, we can learn to control and manipulate them to achieve our goals and improve our relationships. He also warns us of the dangers of playing too high or too low status for too long, as it can lead to boredom, resentment or alienation. He advises us to find a balance between high and low status, and to be flexible and adaptable to different situations. He also encourages us to experiment with different status levels and see how they affect our feelings and reactions.

    -

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/bin/Activate.ps1 b/spaces/tjburns/ask_marcus_aurelius/.venv/bin/Activate.ps1 deleted file mode 100644 index eeea3583fa130d4702a05012a2103152daf51487..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/bin/Activate.ps1 +++ /dev/null @@ -1,247 +0,0 @@ -<# -.Synopsis -Activate a Python virtual environment for the current PowerShell session. - -.Description -Pushes the python executable for a virtual environment to the front of the -$Env:PATH environment variable and sets the prompt to signify that you are -in a Python virtual environment. Makes use of the command line switches as -well as the `pyvenv.cfg` file values present in the virtual environment. - -.Parameter VenvDir -Path to the directory that contains the virtual environment to activate. The -default value for this is the parent of the directory that the Activate.ps1 -script is located within. - -.Parameter Prompt -The prompt prefix to display when this virtual environment is activated. By -default, this prompt is the name of the virtual environment folder (VenvDir) -surrounded by parentheses and followed by a single space (ie. '(.venv) '). - -.Example -Activate.ps1 -Activates the Python virtual environment that contains the Activate.ps1 script. - -.Example -Activate.ps1 -Verbose -Activates the Python virtual environment that contains the Activate.ps1 script, -and shows extra information about the activation as it executes. - -.Example -Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv -Activates the Python virtual environment located in the specified location. - -.Example -Activate.ps1 -Prompt "MyPython" -Activates the Python virtual environment that contains the Activate.ps1 script, -and prefixes the current prompt with the specified string (surrounded in -parentheses) while the virtual environment is active. - -.Notes -On Windows, it may be required to enable this Activate.ps1 script by setting the -execution policy for the user. You can do this by issuing the following PowerShell -command: - -PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser - -For more information on Execution Policies: -https://go.microsoft.com/fwlink/?LinkID=135170 - -#> -Param( - [Parameter(Mandatory = $false)] - [String] - $VenvDir, - [Parameter(Mandatory = $false)] - [String] - $Prompt -) - -<# Function declarations --------------------------------------------------- #> - -<# -.Synopsis -Remove all shell session elements added by the Activate script, including the -addition of the virtual environment's Python executable from the beginning of -the PATH variable. - -.Parameter NonDestructive -If present, do not remove this function from the global namespace for the -session. - -#> -function global:deactivate ([switch]$NonDestructive) { - # Revert to original values - - # The prior prompt: - if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { - Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt - Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT - } - - # The prior PYTHONHOME: - if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { - Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME - Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME - } - - # The prior PATH: - if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { - Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH - Remove-Item -Path Env:_OLD_VIRTUAL_PATH - } - - # Just remove the VIRTUAL_ENV altogether: - if (Test-Path -Path Env:VIRTUAL_ENV) { - Remove-Item -Path env:VIRTUAL_ENV - } - - # Just remove VIRTUAL_ENV_PROMPT altogether. - if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) { - Remove-Item -Path env:VIRTUAL_ENV_PROMPT - } - - # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether: - if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) { - Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force - } - - # Leave deactivate function in the global namespace if requested: - if (-not $NonDestructive) { - Remove-Item -Path function:deactivate - } -} - -<# -.Description -Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the -given folder, and returns them in a map. - -For each line in the pyvenv.cfg file, if that line can be parsed into exactly -two strings separated by `=` (with any amount of whitespace surrounding the =) -then it is considered a `key = value` line. The left hand string is the key, -the right hand is the value. - -If the value starts with a `'` or a `"` then the first and last character is -stripped from the value before being captured. - -.Parameter ConfigDir -Path to the directory that contains the `pyvenv.cfg` file. -#> -function Get-PyVenvConfig( - [String] - $ConfigDir -) { - Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg" - - # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue). - $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue - - # An empty map will be returned if no config file is found. - $pyvenvConfig = @{ } - - if ($pyvenvConfigPath) { - - Write-Verbose "File exists, parse `key = value` lines" - $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath - - $pyvenvConfigContent | ForEach-Object { - $keyval = $PSItem -split "\s*=\s*", 2 - if ($keyval[0] -and $keyval[1]) { - $val = $keyval[1] - - # Remove extraneous quotations around a string value. - if ("'""".Contains($val.Substring(0, 1))) { - $val = $val.Substring(1, $val.Length - 2) - } - - $pyvenvConfig[$keyval[0]] = $val - Write-Verbose "Adding Key: '$($keyval[0])'='$val'" - } - } - } - return $pyvenvConfig -} - - -<# Begin Activate script --------------------------------------------------- #> - -# Determine the containing directory of this script -$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition -$VenvExecDir = Get-Item -Path $VenvExecPath - -Write-Verbose "Activation script is located in path: '$VenvExecPath'" -Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)" -Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)" - -# Set values required in priority: CmdLine, ConfigFile, Default -# First, get the location of the virtual environment, it might not be -# VenvExecDir if specified on the command line. -if ($VenvDir) { - Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values" -} -else { - Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir." - $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") - Write-Verbose "VenvDir=$VenvDir" -} - -# Next, read the `pyvenv.cfg` file to determine any required value such -# as `prompt`. -$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir - -# Next, set the prompt from the command line, or the config file, or -# just use the name of the virtual environment folder. -if ($Prompt) { - Write-Verbose "Prompt specified as argument, using '$Prompt'" -} -else { - Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value" - if ($pyvenvCfg -and $pyvenvCfg['prompt']) { - Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'" - $Prompt = $pyvenvCfg['prompt']; - } - else { - Write-Verbose " Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)" - Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'" - $Prompt = Split-Path -Path $venvDir -Leaf - } -} - -Write-Verbose "Prompt = '$Prompt'" -Write-Verbose "VenvDir='$VenvDir'" - -# Deactivate any currently active virtual environment, but leave the -# deactivate function in place. -deactivate -nondestructive - -# Now set the environment variable VIRTUAL_ENV, used by many tools to determine -# that there is an activated venv. -$env:VIRTUAL_ENV = $VenvDir - -if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) { - - Write-Verbose "Setting prompt to '$Prompt'" - - # Set the prompt to include the env name - # Make sure _OLD_VIRTUAL_PROMPT is global - function global:_OLD_VIRTUAL_PROMPT { "" } - Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT - New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt - - function global:prompt { - Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) " - _OLD_VIRTUAL_PROMPT - } - $env:VIRTUAL_ENV_PROMPT = $Prompt -} - -# Clear PYTHONHOME -if (Test-Path -Path Env:PYTHONHOME) { - Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME - Remove-Item -Path Env:PYTHONHOME -} - -# Add the venv to the PATH -Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH -$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH" diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_legacy.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_legacy.py deleted file mode 100644 index c5f0492ccbe9c727c835c12c84a1d8340366fa1e..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_legacy.py +++ /dev/null @@ -1,102 +0,0 @@ -import logging -import os.path -from typing import List, Optional - -from pip._internal.cli.spinners import open_spinner -from pip._internal.utils.setuptools_build import make_setuptools_bdist_wheel_args -from pip._internal.utils.subprocess import call_subprocess, format_command_args - -logger = logging.getLogger(__name__) - - -def format_command_result( - command_args: List[str], - command_output: str, -) -> str: - """Format command information for logging.""" - command_desc = format_command_args(command_args) - text = f"Command arguments: {command_desc}\n" - - if not command_output: - text += "Command output: None" - elif logger.getEffectiveLevel() > logging.DEBUG: - text += "Command output: [use --verbose to show]" - else: - if not command_output.endswith("\n"): - command_output += "\n" - text += f"Command output:\n{command_output}" - - return text - - -def get_legacy_build_wheel_path( - names: List[str], - temp_dir: str, - name: str, - command_args: List[str], - command_output: str, -) -> Optional[str]: - """Return the path to the wheel in the temporary build directory.""" - # Sort for determinism. - names = sorted(names) - if not names: - msg = ("Legacy build of wheel for {!r} created no files.\n").format(name) - msg += format_command_result(command_args, command_output) - logger.warning(msg) - return None - - if len(names) > 1: - msg = ( - "Legacy build of wheel for {!r} created more than one file.\n" - "Filenames (choosing first): {}\n" - ).format(name, names) - msg += format_command_result(command_args, command_output) - logger.warning(msg) - - return os.path.join(temp_dir, names[0]) - - -def build_wheel_legacy( - name: str, - setup_py_path: str, - source_dir: str, - global_options: List[str], - build_options: List[str], - tempd: str, -) -> Optional[str]: - """Build one unpacked package using the "legacy" build process. - - Returns path to wheel if successfully built. Otherwise, returns None. - """ - wheel_args = make_setuptools_bdist_wheel_args( - setup_py_path, - global_options=global_options, - build_options=build_options, - destination_dir=tempd, - ) - - spin_message = f"Building wheel for {name} (setup.py)" - with open_spinner(spin_message) as spinner: - logger.debug("Destination directory: %s", tempd) - - try: - output = call_subprocess( - wheel_args, - command_desc="python setup.py bdist_wheel", - cwd=source_dir, - spinner=spinner, - ) - except Exception: - spinner.finish("error") - logger.error("Failed building wheel for %s", name) - return None - - names = os.listdir(tempd) - wheel_path = get_legacy_build_wheel_path( - names=names, - temp_dir=tempd, - name=name, - command_args=wheel_args, - command_output=output, - ) - return wheel_path diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langturkishmodel.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langturkishmodel.py deleted file mode 100644 index 291857c25c83f91a151c1d7760e8e5e09c1ee238..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langturkishmodel.py +++ /dev/null @@ -1,4380 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -TURKISH_LANG_MODEL = { - 23: { # 'A' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 37: { # 'B' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 47: { # 'C' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 39: { # 'D' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 0, # 'ş' - }, - 29: { # 'E' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 52: { # 'F' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 1, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 2, # 'ş' - }, - 36: { # 'G' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 2, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 1, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 45: { # 'H' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 2, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 2, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 53: { # 'I' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 60: { # 'J' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 16: { # 'K' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 1, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 49: { # 'L' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 2, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 20: { # 'M' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 0, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 46: { # 'N' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 42: { # 'O' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 2, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 48: { # 'P' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 44: { # 'R' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, - 35: { # 'S' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 1, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 2, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 31: { # 'T' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 2, # 't' - 14: 2, # 'u' - 32: 1, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 51: { # 'U' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 1, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 38: { # 'V' - 23: 1, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 1, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 62: { # 'W' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 43: { # 'Y' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 0, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 1, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 1, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 56: { # 'Z' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 1: { # 'a' - 23: 3, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 1, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 21: { # 'b' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 3, # 'g' - 25: 1, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 2, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 28: { # 'c' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 2, # 'T' - 51: 2, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 3, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 1, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 1, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 2, # 'ş' - }, - 12: { # 'd' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 2: { # 'e' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 2, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 18: { # 'f' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 1, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 1, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 27: { # 'g' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 25: { # 'h' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 3: { # 'i' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 1, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 3, # 'g' - 25: 1, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 1, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 1, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 24: { # 'j' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 2, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 10: { # 'k' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 2, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 3, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 5: { # 'l' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 1, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 2, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 13: { # 'm' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 2, # 'u' - 32: 2, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 4: { # 'n' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 3, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 15: { # 'o' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 2, # 'L' - 20: 0, # 'M' - 46: 2, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 2, # 'İ' - 6: 3, # 'ı' - 40: 2, # 'Ş' - 19: 2, # 'ş' - }, - 26: { # 'p' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 1, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 7: { # 'r' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 1, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 1, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 3, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 8: { # 's' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 2, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 9: { # 't' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 2, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 2, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 14: { # 'u' - 23: 3, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 2, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 3, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 2, # 'Z' - 1: 2, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 2, # 'e' - 18: 2, # 'f' - 27: 3, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 2, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 32: { # 'v' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 1, # 'k' - 5: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 57: { # 'w' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 1, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 1, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 2, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 58: { # 'x' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 2, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 11: { # 'y' - 23: 1, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 2, # 'r' - 8: 1, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 3, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 22: { # 'z' - 23: 2, # 'A' - 37: 2, # 'B' - 47: 1, # 'C' - 39: 2, # 'D' - 29: 3, # 'E' - 52: 1, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 2, # 'N' - 42: 2, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 3, # 'T' - 51: 2, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 1, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 2, # 'e' - 18: 3, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 3, # 'y' - 22: 2, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 3, # 'ı' - 40: 1, # 'Ş' - 19: 2, # 'ş' - }, - 63: { # '·' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 54: { # 'Ç' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 1, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 0, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 3, # 'i' - 24: 0, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 2, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 2, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 50: { # 'Ö' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 2, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 2, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 1, # 'N' - 42: 2, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 1, # 'f' - 27: 1, # 'g' - 25: 1, # 'h' - 3: 2, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 2, # 'p' - 7: 3, # 'r' - 8: 1, # 's' - 9: 2, # 't' - 14: 0, # 'u' - 32: 1, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 2, # 'ü' - 30: 1, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 55: { # 'Ü' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 1, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 1, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 1, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 1, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 0, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 59: { # 'â' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 0, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 2, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 0, # 'ş' - }, - 33: { # 'ç' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 3, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 0, # 'Z' - 1: 0, # 'a' - 21: 3, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 2, # 'f' - 27: 1, # 'g' - 25: 3, # 'h' - 3: 3, # 'i' - 24: 0, # 'j' - 10: 3, # 'k' - 5: 0, # 'l' - 13: 0, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 3, # 't' - 14: 0, # 'u' - 32: 2, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 1, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 61: { # 'î' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 0, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 0, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 2, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 1, # 'j' - 10: 0, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 15: 0, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 1, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 1, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 1, # 'î' - 34: 0, # 'ö' - 17: 0, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 1, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 34: { # 'ö' - 23: 0, # 'A' - 37: 1, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 1, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 1, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 2, # 'h' - 3: 1, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 0, # 'r' - 8: 3, # 's' - 9: 1, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 1, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 2, # 'ğ' - 41: 1, # 'İ' - 6: 1, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 17: { # 'ü' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 0, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 1, # 'J' - 16: 1, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 0, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 0, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 0, # 'c' - 12: 1, # 'd' - 2: 3, # 'e' - 18: 1, # 'f' - 27: 2, # 'g' - 25: 0, # 'h' - 3: 1, # 'i' - 24: 1, # 'j' - 10: 2, # 'k' - 5: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 2, # 'p' - 7: 2, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 3, # 'u' - 32: 1, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 2, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 30: { # 'ğ' - 23: 0, # 'A' - 37: 2, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 1, # 'M' - 46: 2, # 'N' - 42: 2, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 0, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 2, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 0, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 2, # 'e' - 18: 0, # 'f' - 27: 0, # 'g' - 25: 0, # 'h' - 3: 0, # 'i' - 24: 3, # 'j' - 10: 1, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 1, # 'o' - 26: 0, # 'p' - 7: 1, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 2, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 2, # 'İ' - 6: 2, # 'ı' - 40: 2, # 'Ş' - 19: 1, # 'ş' - }, - 41: { # 'İ' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 2, # 'G' - 45: 2, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 0, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 0, # 'Z' - 1: 1, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 2, # 'd' - 2: 1, # 'e' - 18: 0, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 2, # 'i' - 24: 2, # 'j' - 10: 2, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 1, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 2, # 't' - 14: 0, # 'u' - 32: 0, # 'v' - 57: 1, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 1, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 1, # 'ö' - 17: 1, # 'ü' - 30: 2, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 1, # 'ş' - }, - 6: { # 'ı' - 23: 2, # 'A' - 37: 0, # 'B' - 47: 0, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 2, # 'J' - 16: 3, # 'K' - 49: 0, # 'L' - 20: 3, # 'M' - 46: 1, # 'N' - 42: 0, # 'O' - 48: 0, # 'P' - 44: 0, # 'R' - 35: 0, # 'S' - 31: 2, # 'T' - 51: 0, # 'U' - 38: 0, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 3, # 'a' - 21: 2, # 'b' - 28: 1, # 'c' - 12: 3, # 'd' - 2: 3, # 'e' - 18: 3, # 'f' - 27: 3, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 3, # 'j' - 10: 3, # 'k' - 5: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 15: 0, # 'o' - 26: 3, # 'p' - 7: 3, # 'r' - 8: 3, # 's' - 9: 3, # 't' - 14: 3, # 'u' - 32: 3, # 'v' - 57: 1, # 'w' - 58: 1, # 'x' - 11: 3, # 'y' - 22: 0, # 'z' - 63: 1, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 2, # 'ç' - 61: 0, # 'î' - 34: 0, # 'ö' - 17: 3, # 'ü' - 30: 0, # 'ğ' - 41: 0, # 'İ' - 6: 3, # 'ı' - 40: 0, # 'Ş' - 19: 0, # 'ş' - }, - 40: { # 'Ş' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 1, # 'D' - 29: 1, # 'E' - 52: 0, # 'F' - 36: 1, # 'G' - 45: 2, # 'H' - 53: 1, # 'I' - 60: 0, # 'J' - 16: 0, # 'K' - 49: 0, # 'L' - 20: 2, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 2, # 'P' - 44: 2, # 'R' - 35: 1, # 'S' - 31: 1, # 'T' - 51: 0, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 2, # 'Y' - 56: 1, # 'Z' - 1: 0, # 'a' - 21: 2, # 'b' - 28: 0, # 'c' - 12: 2, # 'd' - 2: 0, # 'e' - 18: 3, # 'f' - 27: 0, # 'g' - 25: 2, # 'h' - 3: 3, # 'i' - 24: 2, # 'j' - 10: 1, # 'k' - 5: 0, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 15: 2, # 'o' - 26: 0, # 'p' - 7: 3, # 'r' - 8: 2, # 's' - 9: 2, # 't' - 14: 1, # 'u' - 32: 3, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 2, # 'y' - 22: 0, # 'z' - 63: 0, # '·' - 54: 0, # 'Ç' - 50: 0, # 'Ö' - 55: 1, # 'Ü' - 59: 0, # 'â' - 33: 0, # 'ç' - 61: 0, # 'î' - 34: 2, # 'ö' - 17: 1, # 'ü' - 30: 2, # 'ğ' - 41: 0, # 'İ' - 6: 2, # 'ı' - 40: 1, # 'Ş' - 19: 2, # 'ş' - }, - 19: { # 'ş' - 23: 0, # 'A' - 37: 0, # 'B' - 47: 1, # 'C' - 39: 0, # 'D' - 29: 0, # 'E' - 52: 2, # 'F' - 36: 1, # 'G' - 45: 0, # 'H' - 53: 0, # 'I' - 60: 0, # 'J' - 16: 3, # 'K' - 49: 2, # 'L' - 20: 0, # 'M' - 46: 1, # 'N' - 42: 1, # 'O' - 48: 1, # 'P' - 44: 1, # 'R' - 35: 1, # 'S' - 31: 0, # 'T' - 51: 1, # 'U' - 38: 1, # 'V' - 62: 0, # 'W' - 43: 1, # 'Y' - 56: 0, # 'Z' - 1: 3, # 'a' - 21: 1, # 'b' - 28: 2, # 'c' - 12: 0, # 'd' - 2: 3, # 'e' - 18: 0, # 'f' - 27: 2, # 'g' - 25: 1, # 'h' - 3: 1, # 'i' - 24: 0, # 'j' - 10: 2, # 'k' - 5: 2, # 'l' - 13: 3, # 'm' - 4: 0, # 'n' - 15: 0, # 'o' - 26: 1, # 'p' - 7: 3, # 'r' - 8: 0, # 's' - 9: 0, # 't' - 14: 3, # 'u' - 32: 0, # 'v' - 57: 0, # 'w' - 58: 0, # 'x' - 11: 0, # 'y' - 22: 2, # 'z' - 63: 0, # '·' - 54: 1, # 'Ç' - 50: 2, # 'Ö' - 55: 0, # 'Ü' - 59: 0, # 'â' - 33: 1, # 'ç' - 61: 1, # 'î' - 34: 2, # 'ö' - 17: 0, # 'ü' - 30: 1, # 'ğ' - 41: 1, # 'İ' - 6: 1, # 'ı' - 40: 1, # 'Ş' - 19: 1, # 'ş' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -ISO_8859_9_TURKISH_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 255, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 255, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 255, # ' ' - 33: 255, # '!' - 34: 255, # '"' - 35: 255, # '#' - 36: 255, # '$' - 37: 255, # '%' - 38: 255, # '&' - 39: 255, # "'" - 40: 255, # '(' - 41: 255, # ')' - 42: 255, # '*' - 43: 255, # '+' - 44: 255, # ',' - 45: 255, # '-' - 46: 255, # '.' - 47: 255, # '/' - 48: 255, # '0' - 49: 255, # '1' - 50: 255, # '2' - 51: 255, # '3' - 52: 255, # '4' - 53: 255, # '5' - 54: 255, # '6' - 55: 255, # '7' - 56: 255, # '8' - 57: 255, # '9' - 58: 255, # ':' - 59: 255, # ';' - 60: 255, # '<' - 61: 255, # '=' - 62: 255, # '>' - 63: 255, # '?' - 64: 255, # '@' - 65: 23, # 'A' - 66: 37, # 'B' - 67: 47, # 'C' - 68: 39, # 'D' - 69: 29, # 'E' - 70: 52, # 'F' - 71: 36, # 'G' - 72: 45, # 'H' - 73: 53, # 'I' - 74: 60, # 'J' - 75: 16, # 'K' - 76: 49, # 'L' - 77: 20, # 'M' - 78: 46, # 'N' - 79: 42, # 'O' - 80: 48, # 'P' - 81: 69, # 'Q' - 82: 44, # 'R' - 83: 35, # 'S' - 84: 31, # 'T' - 85: 51, # 'U' - 86: 38, # 'V' - 87: 62, # 'W' - 88: 65, # 'X' - 89: 43, # 'Y' - 90: 56, # 'Z' - 91: 255, # '[' - 92: 255, # '\\' - 93: 255, # ']' - 94: 255, # '^' - 95: 255, # '_' - 96: 255, # '`' - 97: 1, # 'a' - 98: 21, # 'b' - 99: 28, # 'c' - 100: 12, # 'd' - 101: 2, # 'e' - 102: 18, # 'f' - 103: 27, # 'g' - 104: 25, # 'h' - 105: 3, # 'i' - 106: 24, # 'j' - 107: 10, # 'k' - 108: 5, # 'l' - 109: 13, # 'm' - 110: 4, # 'n' - 111: 15, # 'o' - 112: 26, # 'p' - 113: 64, # 'q' - 114: 7, # 'r' - 115: 8, # 's' - 116: 9, # 't' - 117: 14, # 'u' - 118: 32, # 'v' - 119: 57, # 'w' - 120: 58, # 'x' - 121: 11, # 'y' - 122: 22, # 'z' - 123: 255, # '{' - 124: 255, # '|' - 125: 255, # '}' - 126: 255, # '~' - 127: 255, # '\x7f' - 128: 180, # '\x80' - 129: 179, # '\x81' - 130: 178, # '\x82' - 131: 177, # '\x83' - 132: 176, # '\x84' - 133: 175, # '\x85' - 134: 174, # '\x86' - 135: 173, # '\x87' - 136: 172, # '\x88' - 137: 171, # '\x89' - 138: 170, # '\x8a' - 139: 169, # '\x8b' - 140: 168, # '\x8c' - 141: 167, # '\x8d' - 142: 166, # '\x8e' - 143: 165, # '\x8f' - 144: 164, # '\x90' - 145: 163, # '\x91' - 146: 162, # '\x92' - 147: 161, # '\x93' - 148: 160, # '\x94' - 149: 159, # '\x95' - 150: 101, # '\x96' - 151: 158, # '\x97' - 152: 157, # '\x98' - 153: 156, # '\x99' - 154: 155, # '\x9a' - 155: 154, # '\x9b' - 156: 153, # '\x9c' - 157: 152, # '\x9d' - 158: 151, # '\x9e' - 159: 106, # '\x9f' - 160: 150, # '\xa0' - 161: 149, # '¡' - 162: 148, # '¢' - 163: 147, # '£' - 164: 146, # '¤' - 165: 145, # '¥' - 166: 144, # '¦' - 167: 100, # '§' - 168: 143, # '¨' - 169: 142, # '©' - 170: 141, # 'ª' - 171: 140, # '«' - 172: 139, # '¬' - 173: 138, # '\xad' - 174: 137, # '®' - 175: 136, # '¯' - 176: 94, # '°' - 177: 80, # '±' - 178: 93, # '²' - 179: 135, # '³' - 180: 105, # '´' - 181: 134, # 'µ' - 182: 133, # '¶' - 183: 63, # '·' - 184: 132, # '¸' - 185: 131, # '¹' - 186: 130, # 'º' - 187: 129, # '»' - 188: 128, # '¼' - 189: 127, # '½' - 190: 126, # '¾' - 191: 125, # '¿' - 192: 124, # 'À' - 193: 104, # 'Á' - 194: 73, # 'Â' - 195: 99, # 'Ã' - 196: 79, # 'Ä' - 197: 85, # 'Å' - 198: 123, # 'Æ' - 199: 54, # 'Ç' - 200: 122, # 'È' - 201: 98, # 'É' - 202: 92, # 'Ê' - 203: 121, # 'Ë' - 204: 120, # 'Ì' - 205: 91, # 'Í' - 206: 103, # 'Î' - 207: 119, # 'Ï' - 208: 68, # 'Ğ' - 209: 118, # 'Ñ' - 210: 117, # 'Ò' - 211: 97, # 'Ó' - 212: 116, # 'Ô' - 213: 115, # 'Õ' - 214: 50, # 'Ö' - 215: 90, # '×' - 216: 114, # 'Ø' - 217: 113, # 'Ù' - 218: 112, # 'Ú' - 219: 111, # 'Û' - 220: 55, # 'Ü' - 221: 41, # 'İ' - 222: 40, # 'Ş' - 223: 86, # 'ß' - 224: 89, # 'à' - 225: 70, # 'á' - 226: 59, # 'â' - 227: 78, # 'ã' - 228: 71, # 'ä' - 229: 82, # 'å' - 230: 88, # 'æ' - 231: 33, # 'ç' - 232: 77, # 'è' - 233: 66, # 'é' - 234: 84, # 'ê' - 235: 83, # 'ë' - 236: 110, # 'ì' - 237: 75, # 'í' - 238: 61, # 'î' - 239: 96, # 'ï' - 240: 30, # 'ğ' - 241: 67, # 'ñ' - 242: 109, # 'ò' - 243: 74, # 'ó' - 244: 87, # 'ô' - 245: 102, # 'õ' - 246: 34, # 'ö' - 247: 95, # '÷' - 248: 81, # 'ø' - 249: 108, # 'ù' - 250: 76, # 'ú' - 251: 72, # 'û' - 252: 17, # 'ü' - 253: 6, # 'ı' - 254: 19, # 'ş' - 255: 107, # 'ÿ' -} - -ISO_8859_9_TURKISH_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-9", - language="Turkish", - char_to_order_map=ISO_8859_9_TURKISH_CHAR_TO_ORDER, - language_model=TURKISH_LANG_MODEL, - typical_positive_ratio=0.97029, - keep_ascii_letters=True, - alphabet="ABCDEFGHIJKLMNOPRSTUVYZabcdefghijklmnoprstuvyzÂÇÎÖÛÜâçîöûüĞğİıŞş", -) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/lexers/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/lexers/__init__.py deleted file mode 100644 index 3f404e4f747cc2446923642774ca9c44d224ee11..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/lexers/__init__.py +++ /dev/null @@ -1,345 +0,0 @@ -""" - pygments.lexers - ~~~~~~~~~~~~~~~ - - Pygments lexers. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re -import sys -import types -import fnmatch -from os.path import basename - -from pip._vendor.pygments.lexers._mapping import LEXERS -from pip._vendor.pygments.modeline import get_filetype_from_buffer -from pip._vendor.pygments.plugin import find_plugin_lexers -from pip._vendor.pygments.util import ClassNotFound, guess_decode - -COMPAT = { - 'Python3Lexer': 'PythonLexer', - 'Python3TracebackLexer': 'PythonTracebackLexer', -} - -__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class', - 'guess_lexer', 'load_lexer_from_file'] + list(LEXERS) + list(COMPAT) - -_lexer_cache = {} -_pattern_cache = {} - - -def _fn_matches(fn, glob): - """Return whether the supplied file name fn matches pattern filename.""" - if glob not in _pattern_cache: - pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob)) - return pattern.match(fn) - return _pattern_cache[glob].match(fn) - - -def _load_lexers(module_name): - """Load a lexer (and all others in the module too).""" - mod = __import__(module_name, None, None, ['__all__']) - for lexer_name in mod.__all__: - cls = getattr(mod, lexer_name) - _lexer_cache[cls.name] = cls - - -def get_all_lexers(plugins=True): - """Return a generator of tuples in the form ``(name, aliases, - filenames, mimetypes)`` of all know lexers. - - If *plugins* is true (the default), plugin lexers supplied by entrypoints - are also returned. Otherwise, only builtin ones are considered. - """ - for item in LEXERS.values(): - yield item[1:] - if plugins: - for lexer in find_plugin_lexers(): - yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes - - -def find_lexer_class(name): - """Lookup a lexer class by name. - - Return None if not found. - """ - if name in _lexer_cache: - return _lexer_cache[name] - # lookup builtin lexers - for module_name, lname, aliases, _, _ in LEXERS.values(): - if name == lname: - _load_lexers(module_name) - return _lexer_cache[name] - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if cls.name == name: - return cls - - -def find_lexer_class_by_name(_alias): - """Lookup a lexer class by alias. - - Like `get_lexer_by_name`, but does not instantiate the class. - - .. versionadded:: 2.2 - """ - if not _alias: - raise ClassNotFound('no lexer for alias %r found' % _alias) - # lookup builtin lexers - for module_name, name, aliases, _, _ in LEXERS.values(): - if _alias.lower() in aliases: - if name not in _lexer_cache: - _load_lexers(module_name) - return _lexer_cache[name] - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if _alias.lower() in cls.aliases: - return cls - raise ClassNotFound('no lexer for alias %r found' % _alias) - - -def get_lexer_by_name(_alias, **options): - """Get a lexer by an alias. - - Raises ClassNotFound if not found. - """ - if not _alias: - raise ClassNotFound('no lexer for alias %r found' % _alias) - - # lookup builtin lexers - for module_name, name, aliases, _, _ in LEXERS.values(): - if _alias.lower() in aliases: - if name not in _lexer_cache: - _load_lexers(module_name) - return _lexer_cache[name](**options) - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if _alias.lower() in cls.aliases: - return cls(**options) - raise ClassNotFound('no lexer for alias %r found' % _alias) - - -def load_lexer_from_file(filename, lexername="CustomLexer", **options): - """Load a lexer from a file. - - This method expects a file located relative to the current working - directory, which contains a Lexer class. By default, it expects the - Lexer to be name CustomLexer; you can specify your own class name - as the second argument to this function. - - Users should be very careful with the input, because this method - is equivalent to running eval on the input file. - - Raises ClassNotFound if there are any problems importing the Lexer. - - .. versionadded:: 2.2 - """ - try: - # This empty dict will contain the namespace for the exec'd file - custom_namespace = {} - with open(filename, 'rb') as f: - exec(f.read(), custom_namespace) - # Retrieve the class `lexername` from that namespace - if lexername not in custom_namespace: - raise ClassNotFound('no valid %s class found in %s' % - (lexername, filename)) - lexer_class = custom_namespace[lexername] - # And finally instantiate it with the options - return lexer_class(**options) - except OSError as err: - raise ClassNotFound('cannot read %s: %s' % (filename, err)) - except ClassNotFound: - raise - except Exception as err: - raise ClassNotFound('error when loading custom lexer: %s' % err) - - -def find_lexer_class_for_filename(_fn, code=None): - """Get a lexer for a filename. - - If multiple lexers match the filename pattern, use ``analyse_text()`` to - figure out which one is more appropriate. - - Returns None if not found. - """ - matches = [] - fn = basename(_fn) - for modname, name, _, filenames, _ in LEXERS.values(): - for filename in filenames: - if _fn_matches(fn, filename): - if name not in _lexer_cache: - _load_lexers(modname) - matches.append((_lexer_cache[name], filename)) - for cls in find_plugin_lexers(): - for filename in cls.filenames: - if _fn_matches(fn, filename): - matches.append((cls, filename)) - - if isinstance(code, bytes): - # decode it, since all analyse_text functions expect unicode - code = guess_decode(code) - - def get_rating(info): - cls, filename = info - # explicit patterns get a bonus - bonus = '*' not in filename and 0.5 or 0 - # The class _always_ defines analyse_text because it's included in - # the Lexer class. The default implementation returns None which - # gets turned into 0.0. Run scripts/detect_missing_analyse_text.py - # to find lexers which need it overridden. - if code: - return cls.analyse_text(code) + bonus, cls.__name__ - return cls.priority + bonus, cls.__name__ - - if matches: - matches.sort(key=get_rating) - # print "Possible lexers, after sort:", matches - return matches[-1][0] - - -def get_lexer_for_filename(_fn, code=None, **options): - """Get a lexer for a filename. - - If multiple lexers match the filename pattern, use ``analyse_text()`` to - figure out which one is more appropriate. - - Raises ClassNotFound if not found. - """ - res = find_lexer_class_for_filename(_fn, code) - if not res: - raise ClassNotFound('no lexer for filename %r found' % _fn) - return res(**options) - - -def get_lexer_for_mimetype(_mime, **options): - """Get a lexer for a mimetype. - - Raises ClassNotFound if not found. - """ - for modname, name, _, _, mimetypes in LEXERS.values(): - if _mime in mimetypes: - if name not in _lexer_cache: - _load_lexers(modname) - return _lexer_cache[name](**options) - for cls in find_plugin_lexers(): - if _mime in cls.mimetypes: - return cls(**options) - raise ClassNotFound('no lexer for mimetype %r found' % _mime) - - -def _iter_lexerclasses(plugins=True): - """Return an iterator over all lexer classes.""" - for key in sorted(LEXERS): - module_name, name = LEXERS[key][:2] - if name not in _lexer_cache: - _load_lexers(module_name) - yield _lexer_cache[name] - if plugins: - yield from find_plugin_lexers() - - -def guess_lexer_for_filename(_fn, _text, **options): - """ - Lookup all lexers that handle those filenames primary (``filenames``) - or secondary (``alias_filenames``). Then run a text analysis for those - lexers and choose the best result. - - usage:: - - >>> from pygments.lexers import guess_lexer_for_filename - >>> guess_lexer_for_filename('hello.html', '<%= @foo %>') - - >>> guess_lexer_for_filename('hello.html', '

    {{ title|e }}

    ') - - >>> guess_lexer_for_filename('style.css', 'a { color: }') - - """ - fn = basename(_fn) - primary = {} - matching_lexers = set() - for lexer in _iter_lexerclasses(): - for filename in lexer.filenames: - if _fn_matches(fn, filename): - matching_lexers.add(lexer) - primary[lexer] = True - for filename in lexer.alias_filenames: - if _fn_matches(fn, filename): - matching_lexers.add(lexer) - primary[lexer] = False - if not matching_lexers: - raise ClassNotFound('no lexer for filename %r found' % fn) - if len(matching_lexers) == 1: - return matching_lexers.pop()(**options) - result = [] - for lexer in matching_lexers: - rv = lexer.analyse_text(_text) - if rv == 1.0: - return lexer(**options) - result.append((rv, lexer)) - - def type_sort(t): - # sort by: - # - analyse score - # - is primary filename pattern? - # - priority - # - last resort: class name - return (t[0], primary[t[1]], t[1].priority, t[1].__name__) - result.sort(key=type_sort) - - return result[-1][1](**options) - - -def guess_lexer(_text, **options): - """Guess a lexer by strong distinctions in the text (eg, shebang).""" - - if not isinstance(_text, str): - inencoding = options.get('inencoding', options.get('encoding')) - if inencoding: - _text = _text.decode(inencoding or 'utf8') - else: - _text, _ = guess_decode(_text) - - # try to get a vim modeline first - ft = get_filetype_from_buffer(_text) - - if ft is not None: - try: - return get_lexer_by_name(ft, **options) - except ClassNotFound: - pass - - best_lexer = [0.0, None] - for lexer in _iter_lexerclasses(): - rv = lexer.analyse_text(_text) - if rv == 1.0: - return lexer(**options) - if rv > best_lexer[0]: - best_lexer[:] = (rv, lexer) - if not best_lexer[0] or best_lexer[1] is None: - raise ClassNotFound('no lexer matching the text found') - return best_lexer[1](**options) - - -class _automodule(types.ModuleType): - """Automatically import lexers.""" - - def __getattr__(self, name): - info = LEXERS.get(name) - if info: - _load_lexers(info[0]) - cls = _lexer_cache[info[1]] - setattr(self, name, cls) - return cls - if name in COMPAT: - return getattr(self, COMPAT[name]) - raise AttributeError(name) - - -oldmod = sys.modules[__name__] -newmod = _automodule(__name__) -newmod.__dict__.update(oldmod.__dict__) -sys.modules[__name__] = newmod -del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py deleted file mode 100644 index 8765b907d70c4a530bc90dc88f24b3df73473b01..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py +++ /dev/null @@ -1,36 +0,0 @@ -""" -This module provides means to detect the App Engine environment. -""" - -import os - - -def is_appengine(): - return is_local_appengine() or is_prod_appengine() - - -def is_appengine_sandbox(): - """Reports if the app is running in the first generation sandbox. - - The second generation runtimes are technically still in a sandbox, but it - is much less restrictive, so generally you shouldn't need to check for it. - see https://cloud.google.com/appengine/docs/standard/runtimes - """ - return is_appengine() and os.environ["APPENGINE_RUNTIME"] == "python27" - - -def is_local_appengine(): - return "APPENGINE_RUNTIME" in os.environ and os.environ.get( - "SERVER_SOFTWARE", "" - ).startswith("Development/") - - -def is_prod_appengine(): - return "APPENGINE_RUNTIME" in os.environ and os.environ.get( - "SERVER_SOFTWARE", "" - ).startswith("Google App Engine/") - - -def is_prod_appengine_mvms(): - """Deprecated.""" - return False diff --git a/spaces/tomg-group-umd/pez-dispenser/open_clip/transformer.py b/spaces/tomg-group-umd/pez-dispenser/open_clip/transformer.py deleted file mode 100644 index d73d7050cc5fb4c23a97c7a73e81c08b800e0880..0000000000000000000000000000000000000000 --- a/spaces/tomg-group-umd/pez-dispenser/open_clip/transformer.py +++ /dev/null @@ -1,487 +0,0 @@ -from collections import OrderedDict -import math -from typing import Callable, Optional, Sequence - -import torch -from torch import nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint - -from .utils import to_2tuple - - -class LayerNormFp32(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16 (by casting to float32 and back).""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - x = F.layer_norm(x.to(torch.float32), self.normalized_shape, self.weight, self.bias, self.eps) - return x.to(orig_type) - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm (with cast back to input dtype).""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) - return x.to(orig_type) - - -class QuickGELU(nn.Module): - # NOTE This is slower than nn.GELU or nn.SiLU and uses more GPU memory - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class LayerScale(nn.Module): - def __init__(self, dim, init_values=1e-5, inplace=False): - super().__init__() - self.inplace = inplace - self.gamma = nn.Parameter(init_values * torch.ones(dim)) - - def forward(self, x): - return x.mul_(self.gamma) if self.inplace else x * self.gamma - - -class PatchDropout(nn.Module): - """ - https://arxiv.org/abs/2212.00794 - """ - - def __init__(self, prob, exclude_first_token=True): - super().__init__() - assert 0 <= prob < 1. - self.prob = prob - self.exclude_first_token = exclude_first_token # exclude CLS token - - def forward(self, x): - if not self.training or self.prob == 0.: - return x - - if self.exclude_first_token: - cls_tokens, x = x[:, :1], x[:, 1:] - else: - cls_tokens = torch.jit.annotate(torch.Tensor, x[:, :1]) - - batch = x.size()[0] - num_tokens = x.size()[1] - - batch_indices = torch.arange(batch) - batch_indices = batch_indices[..., None] - - keep_prob = 1 - self.prob - num_patches_keep = max(1, int(num_tokens * keep_prob)) - - rand = torch.randn(batch, num_tokens) - patch_indices_keep = rand.topk(num_patches_keep, dim=-1).indices - - x = x[batch_indices, patch_indices_keep] - - if self.exclude_first_token: - x = torch.cat((cls_tokens, x), dim=1) - - return x - - -class Attention(nn.Module): - def __init__( - self, - dim, - num_heads=8, - qkv_bias=True, - scaled_cosine=False, - scale_heads=False, - logit_scale_max=math.log(1. / 0.01), - attn_drop=0., - proj_drop=0. - ): - super().__init__() - self.scaled_cosine = scaled_cosine - self.scale_heads = scale_heads - assert dim % num_heads == 0, 'dim should be divisible by num_heads' - self.num_heads = num_heads - self.head_dim = dim // num_heads - self.scale = self.head_dim ** -0.5 - self.logit_scale_max = logit_scale_max - - # keeping in_proj in this form (instead of nn.Linear) to match weight scheme of original - self.in_proj_weight = nn.Parameter(torch.randn((dim * 3, dim)) * self.scale) - if qkv_bias: - self.in_proj_bias = nn.Parameter(torch.zeros(dim * 3)) - else: - self.in_proj_bias = None - - if self.scaled_cosine: - self.logit_scale = nn.Parameter(torch.log(10 * torch.ones((num_heads, 1, 1)))) - else: - self.logit_scale = None - self.attn_drop = nn.Dropout(attn_drop) - if self.scale_heads: - self.head_scale = nn.Parameter(torch.ones((num_heads, 1, 1))) - else: - self.head_scale = None - self.out_proj = nn.Linear(dim, dim) - self.out_drop = nn.Dropout(proj_drop) - - def forward(self, x, attn_mask: Optional[torch.Tensor] = None): - L, N, C = x.shape - q, k, v = F.linear(x, self.in_proj_weight, self.in_proj_bias).chunk(3, dim=-1) - q = q.contiguous().view(L, N * self.num_heads, -1).transpose(0, 1) - k = k.contiguous().view(L, N * self.num_heads, -1).transpose(0, 1) - v = v.contiguous().view(L, N * self.num_heads, -1).transpose(0, 1) - - if self.logit_scale is not None: - attn = torch.bmm(F.normalize(q, dim=-1), F.normalize(k, dim=-1).transpose(-1, -2)) - logit_scale = torch.clamp(self.logit_scale, max=self.logit_scale_max).exp() - attn = attn.view(N, self.num_heads, L, L) * logit_scale - attn = attn.view(-1, L, L) - else: - q = q * self.scale - attn = torch.bmm(q, k.transpose(-1, -2)) - - if attn_mask is not None: - if attn_mask.dtype == torch.bool: - new_attn_mask = torch.zeros_like(attn_mask, dtype=q.dtype) - new_attn_mask.masked_fill_(attn_mask, float("-inf")) - attn_mask = new_attn_mask - attn += attn_mask - - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = torch.bmm(attn, v) - if self.head_scale is not None: - x = x.view(N, self.num_heads, L, C) * self.head_scale - x = x.view(-1, L, C) - x = x.transpose(0, 1).reshape(L, N, C) - x = self.out_proj(x) - x = self.out_drop(x) - return x - - -class ResidualAttentionBlock(nn.Module): - def __init__( - self, - d_model: int, - n_head: int, - mlp_ratio: float = 4.0, - ls_init_value: float = None, - act_layer: Callable = nn.GELU, - norm_layer: Callable = LayerNorm, - ): - super().__init__() - - self.ln_1 = norm_layer(d_model) - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ls_1 = LayerScale(d_model, ls_init_value) if ls_init_value is not None else nn.Identity() - - self.ln_2 = norm_layer(d_model) - mlp_width = int(d_model * mlp_ratio) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, mlp_width)), - ("gelu", act_layer()), - ("c_proj", nn.Linear(mlp_width, d_model)) - ])) - self.ls_2 = LayerScale(d_model, ls_init_value) if ls_init_value is not None else nn.Identity() - - def attention(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - attn_mask = attn_mask.to(x.dtype) if attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0] - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - x = x + self.ls_1(self.attention(self.ln_1(x), attn_mask=attn_mask)) - x = x + self.ls_2(self.mlp(self.ln_2(x))) - return x - - -class CustomResidualAttentionBlock(nn.Module): - def __init__( - self, - d_model: int, - n_head: int, - mlp_ratio: float = 4.0, - ls_init_value: float = None, - act_layer: Callable = nn.GELU, - norm_layer: Callable = LayerNorm, - scale_cosine_attn: bool = False, - scale_heads: bool = False, - scale_attn: bool = False, - scale_fc: bool = False, - ): - super().__init__() - - self.ln_1 = norm_layer(d_model) - self.attn = Attention( - d_model, n_head, - scaled_cosine=scale_cosine_attn, - scale_heads=scale_heads, - ) - self.ln_attn = norm_layer(d_model) if scale_attn else nn.Identity() - self.ls_1 = LayerScale(d_model, ls_init_value) if ls_init_value is not None else nn.Identity() - - self.ln_2 = norm_layer(d_model) - mlp_width = int(d_model * mlp_ratio) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, mlp_width)), - ('ln', norm_layer(mlp_width) if scale_fc else nn.Identity()), - ("gelu", act_layer()), - ("c_proj", nn.Linear(mlp_width, d_model)) - ])) - self.ls_2 = LayerScale(d_model, ls_init_value) if ls_init_value is not None else nn.Identity() - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - x = x + self.ls_1(self.ln_attn(self.attn(self.ln_1(x), attn_mask=attn_mask))) - x = x + self.ls_2(self.mlp(self.ln_2(x))) - return x - - -class Transformer(nn.Module): - def __init__( - self, - width: int, - layers: int, - heads: int, - mlp_ratio: float = 4.0, - ls_init_value: float = None, - act_layer: Callable = nn.GELU, - norm_layer: Callable = LayerNorm, - ): - super().__init__() - self.width = width - self.layers = layers - self.grad_checkpointing = False - - self.resblocks = nn.ModuleList([ - ResidualAttentionBlock( - width, heads, mlp_ratio, ls_init_value=ls_init_value, act_layer=act_layer, norm_layer=norm_layer) - for _ in range(layers) - ]) - - def get_cast_dtype(self) -> torch.dtype: - return self.resblocks[0].mlp.c_fc.weight.dtype - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - for r in self.resblocks: - if self.grad_checkpointing and not torch.jit.is_scripting(): - x = checkpoint(r, x, attn_mask) - else: - x = r(x, attn_mask=attn_mask) - return x - - -class VisionTransformer(nn.Module): - def __init__( - self, - image_size: int, - patch_size: int, - width: int, - layers: int, - heads: int, - mlp_ratio: float, - ls_init_value: float = None, - global_average_pool: bool = False, - output_dim: int = 512, - patch_dropout: float = 0., - act_layer: Callable = nn.GELU, - norm_layer: Callable = LayerNorm, - ): - super().__init__() - self.image_size = to_2tuple(image_size) - self.patch_size = to_2tuple(patch_size) - self.grid_size = (self.image_size[0] // self.patch_size[0], self.image_size[1] // self.patch_size[1]) - self.output_dim = output_dim - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn(self.grid_size[0] * self.grid_size[1] + 1, width)) - - # setting a patch_dropout of 0. would mean it is disabled and this function would be the identity fn - self.patch_dropout = PatchDropout(patch_dropout) if patch_dropout > 0. else nn.Identity() - - self.ln_pre = norm_layer(width) - self.transformer = Transformer( - width, - layers, - heads, - mlp_ratio, - ls_init_value=ls_init_value, - act_layer=act_layer, - norm_layer=norm_layer, - ) - - self.global_average_pool = global_average_pool - self.ln_post = norm_layer(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - self.init_parameters() - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - for param in self.parameters(): - param.requires_grad = False - - if unlocked_groups != 0: - groups = [ - [ - self.conv1, - self.class_embedding, - self.positional_embedding, - self.ln_pre, - ], - *self.transformer.resblocks[:-1], - [ - self.transformer.resblocks[-1], - self.ln_post, - ], - self.proj, - ] - - def _unlock(x): - if isinstance(x, Sequence): - for g in x: - _unlock(g) - else: - if isinstance(x, torch.nn.Parameter): - x.requires_grad = True - else: - for p in x.parameters(): - p.requires_grad = True - - _unlock(groups[-unlocked_groups:]) - - def init_parameters(self): - # FIXME OpenAI CLIP did not define an init for the VisualTransformer - # TODO experiment if default PyTorch init, below, or alternate init is best. - - # nn.init.normal_(self.class_embedding, std=self.scale) - # nn.init.normal_(self.positional_embedding, std=self.scale) - # - # proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - # attn_std = self.transformer.width ** -0.5 - # fc_std = (2 * self.transformer.width) ** -0.5 - # for block in self.transformer.resblocks: - # nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - # nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - # nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - # nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - # - # if self.text_projection is not None: - # nn.init.normal_(self.text_projection, std=self.scale) - pass - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.transformer.grad_checkpointing = enable - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat( - [self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), - x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - - # a patch_dropout of 0. would mean it is disabled and this function would do nothing but return what was passed in - x = self.patch_dropout(x) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - - if self.global_average_pool: - x = x.mean(dim=1) - else: - x = x[:, 0] - - x = self.ln_post(x) - - if self.proj is not None: - x = x @ self.proj - - return x - - -class TextTransformer(nn.Module): - - def __init__( - self, - context_length: int = 77, - vocab_size: int = 49408, - width: int = 512, - heads: int = 8, - layers: int = 12, - ls_init_value: float = None, - output_dim: int = 512, - act_layer: Callable = nn.GELU, - norm_layer: Callable = LayerNorm, - ): - super().__init__() - self.context_length = context_length - self.vocab_size = vocab_size - self.width = width - self.output_dim = output_dim - - self.token_embedding = nn.Embedding(vocab_size, width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, width)) - self.transformer = Transformer( - width=width, - layers=layers, - heads=heads, - ls_init_value=ls_init_value, - act_layer=act_layer, - norm_layer=norm_layer, - ) - self.ln_final = norm_layer(width) - self.text_projection = nn.Parameter(torch.empty(width, output_dim)) - - self.register_buffer('attn_mask', self.build_attention_mask(), persistent=False) - - self.init_parameters() - - def init_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.transformer.grad_checkpointing = enable - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def forward(self, text): - cast_dtype = self.transformer.get_cast_dtype() - - x = self.token_embedding(text).to(cast_dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.to(cast_dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x, attn_mask=self.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/.dev_scripts/gather_benchmark_metric.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/.dev_scripts/gather_benchmark_metric.py deleted file mode 100644 index d9eb3e4d32f2e33360714c1cc82e99f5d93e26c0..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/.dev_scripts/gather_benchmark_metric.py +++ /dev/null @@ -1,142 +0,0 @@ -import argparse -import glob -import os.path as osp - -import mmcv -from gather_models import get_final_results - -try: - import xlrd -except ImportError: - xlrd = None -try: - import xlutils - from xlutils.copy import copy -except ImportError: - xlutils = None - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Gather benchmarked models metric') - parser.add_argument( - 'root', - type=str, - help='root path of benchmarked models to be gathered') - parser.add_argument( - 'benchmark_json', type=str, help='json path of benchmark models') - parser.add_argument( - '--out', type=str, help='output path of gathered metrics to be stored') - parser.add_argument( - '--not-show', action='store_true', help='not show metrics') - parser.add_argument( - '--excel', type=str, help='input path of excel to be recorded') - parser.add_argument( - '--ncol', type=int, help='Number of column to be modified or appended') - - args = parser.parse_args() - return args - - -if __name__ == '__main__': - args = parse_args() - - if args.excel: - assert args.ncol, 'Please specify "--excel" and "--ncol" ' \ - 'at the same time' - if xlrd is None: - raise RuntimeError( - 'xlrd is not installed,' - 'Please use “pip install xlrd==1.2.0” to install') - if xlutils is None: - raise RuntimeError( - 'xlutils is not installed,' - 'Please use “pip install xlutils==2.0.0” to install') - readbook = xlrd.open_workbook(args.excel) - sheet = readbook.sheet_by_name('Sheet1') - sheet_info = {} - total_nrows = sheet.nrows - for i in range(3, sheet.nrows): - sheet_info[sheet.row_values(i)[0]] = i - xlrw = copy(readbook) - table = xlrw.get_sheet(0) - - root_path = args.root - metrics_out = args.out - benchmark_json_path = args.benchmark_json - model_configs = mmcv.load(benchmark_json_path)['models'] - - result_dict = {} - for config in model_configs: - config_name = osp.split(config)[-1] - config_name = osp.splitext(config_name)[0] - result_path = osp.join(root_path, config_name) - if osp.exists(result_path): - # 1 read config - cfg = mmcv.Config.fromfile(config) - total_epochs = cfg.runner.max_epochs - final_results = cfg.evaluation.metric - if not isinstance(final_results, list): - final_results = [final_results] - final_results_out = [] - for key in final_results: - if 'proposal_fast' in key: - final_results_out.append('AR@1000') # RPN - elif 'mAP' not in key: - final_results_out.append(key + '_mAP') - - # 2 determine whether total_epochs ckpt exists - ckpt_path = f'epoch_{total_epochs}.pth' - if osp.exists(osp.join(result_path, ckpt_path)): - log_json_path = list( - sorted(glob.glob(osp.join(result_path, '*.log.json'))))[-1] - - # 3 read metric - model_performance = get_final_results(log_json_path, - total_epochs, - final_results_out) - if model_performance is None: - print(f'log file error: {log_json_path}') - continue - for performance in model_performance: - if performance in ['AR@1000', 'bbox_mAP', 'segm_mAP']: - metric = round(model_performance[performance] * 100, 1) - model_performance[performance] = metric - result_dict[config] = model_performance - - # update and append excel content - if args.excel: - if 'AR@1000' in model_performance: - metrics = f'{model_performance["AR@1000"]}(AR@1000)' - elif 'segm_mAP' in model_performance: - metrics = f'{model_performance["bbox_mAP"]}/' \ - f'{model_performance["segm_mAP"]}' - else: - metrics = f'{model_performance["bbox_mAP"]}' - - row_num = sheet_info.get(config, None) - if row_num: - table.write(row_num, args.ncol, metrics) - else: - table.write(total_nrows, 0, config) - table.write(total_nrows, args.ncol, metrics) - total_nrows += 1 - - else: - print(f'{config} not exist: {ckpt_path}') - else: - print(f'not exist: {config}') - - # 4 save or print results - if metrics_out: - mmcv.mkdir_or_exist(metrics_out) - mmcv.dump(result_dict, osp.join(metrics_out, 'model_metric_info.json')) - if not args.not_show: - print('===================================') - for config_name, metrics in result_dict.items(): - print(config_name, metrics) - print('===================================') - if args.excel: - filename, sufflx = osp.splitext(args.excel) - xlrw.save(f'{filename}_o{sufflx}') - print(f'>>> Output {filename}_o{sufflx}') diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/datasets/api_wrappers/coco_api.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/datasets/api_wrappers/coco_api.py deleted file mode 100644 index 57077f9ba15afd35ef4bfca388b547bf6ae7b59d..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/datasets/api_wrappers/coco_api.py +++ /dev/null @@ -1,46 +0,0 @@ -# This file add snake case alias for coco api - -import warnings - -import pycocotools -from pycocotools.coco import COCO as _COCO -from pycocotools.cocoeval import COCOeval as _COCOeval - - -class COCO(_COCO): - """This class is almost the same as official pycocotools package. - - It implements some snake case function aliases. So that the COCO class has - the same interface as LVIS class. - """ - - def __init__(self, annotation_file=None): - if getattr(pycocotools, '__version__', '0') >= '12.0.2': - warnings.warn( - 'mmpycocotools is deprecated. Please install official pycocotools by "pip install pycocotools"', # noqa: E501 - UserWarning) - super().__init__(annotation_file=annotation_file) - self.img_ann_map = self.imgToAnns - self.cat_img_map = self.catToImgs - - def get_ann_ids(self, img_ids=[], cat_ids=[], area_rng=[], iscrowd=None): - return self.getAnnIds(img_ids, cat_ids, area_rng, iscrowd) - - def get_cat_ids(self, cat_names=[], sup_names=[], cat_ids=[]): - return self.getCatIds(cat_names, sup_names, cat_ids) - - def get_img_ids(self, img_ids=[], cat_ids=[]): - return self.getImgIds(img_ids, cat_ids) - - def load_anns(self, ids): - return self.loadAnns(ids) - - def load_cats(self, ids): - return self.loadCats(ids) - - def load_imgs(self, ids): - return self.loadImgs(ids) - - -# just for the ease of import -COCOeval = _COCOeval diff --git a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/setup.py b/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/setup.py deleted file mode 100644 index a24d541676407eee1bea271179ffd1d80c6a8e79..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/setup.py +++ /dev/null @@ -1,13 +0,0 @@ -from setuptools import setup, find_packages - -setup( - name='latent-diffusion', - version='0.0.1', - description='', - packages=find_packages(), - install_requires=[ - 'torch', - 'numpy', - 'tqdm', - ], -) \ No newline at end of file diff --git a/spaces/trttung1610/musicgen/audiocraft/utils/notebook.py b/spaces/trttung1610/musicgen/audiocraft/utils/notebook.py deleted file mode 100644 index 019b9d19e5bef976bedddf428fd25da42a8a9726..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/utils/notebook.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -try: - import IPython.display as ipd # type: ignore -except ImportError: - # Note in a notebook... - pass - - -import torch - - -def display_audio(samples: torch.Tensor, sample_rate: int): - """Renders an audio player for the given audio samples. - - Args: - samples (torch.Tensor): a Tensor of decoded audio samples - with shapes [B, C, T] or [C, T] - sample_rate (int): sample rate audio should be displayed with. - """ - assert samples.dim() == 2 or samples.dim() == 3 - - samples = samples.detach().cpu() - if samples.dim() == 2: - samples = samples[None, ...] - - for audio in samples: - ipd.display(ipd.Audio(audio, rate=sample_rate)) diff --git a/spaces/trysem/AnimeGANv2/README.md b/spaces/trysem/AnimeGANv2/README.md deleted file mode 100644 index 238a8e927e3d898be11593c1cc6894e8e4232315..0000000000000000000000000000000000000000 --- a/spaces/trysem/AnimeGANv2/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: AnimeGANv2 -emoji: ⚡ -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.0.5 -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/tvrsimhan/music-sep/README.md b/spaces/tvrsimhan/music-sep/README.md deleted file mode 100644 index 3a493fd8314ccbfede66b0826228fb4dc9bed219..0000000000000000000000000000000000000000 --- a/spaces/tvrsimhan/music-sep/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: Demucs -emoji: ⚡ -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -duplicated_from: akhaliq/demucs ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/umair007/ChatGPT-prompt-generator/app.py b/spaces/umair007/ChatGPT-prompt-generator/app.py deleted file mode 100644 index 5da2e5088053267553b6f5af9760a0a7d58c2a1f..0000000000000000000000000000000000000000 --- a/spaces/umair007/ChatGPT-prompt-generator/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import gradio as gr - -tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompts-bart-long") -model = AutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompts-bart-long", from_tf=True) - -def generate(prompt): - - batch = tokenizer(prompt, return_tensors="pt") - generated_ids = model.generate(batch["input_ids"], max_new_tokens=150) - output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) - return output[0] - -input_component = gr.Textbox(label = "Input a persona, e.g. photographer", value = "photographer") -output_component = gr.Textbox(label = "Prompt") -examples = [["photographer"], ["developer"]] -description = "This app generates ChatGPT prompts, it's based on a BART model trained on [this dataset](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts). 📓 Simply enter a persona that you want the prompt to be generated based on. 🧙🏻🧑🏻‍🚀🧑🏻‍🎨🧑🏻‍🔬🧑🏻‍💻🧑🏼‍🏫🧑🏽‍🌾" -gr.Interface(generate, inputs = input_component, outputs=output_component, examples=examples, title = "👨🏻‍🎤 ChatGPT Prompt Generator 👨🏻‍🎤", description=description).launch() diff --git a/spaces/umichVision/virtex-redcaps/virtex/modules/textual_heads.py b/spaces/umichVision/virtex-redcaps/virtex/modules/textual_heads.py deleted file mode 100644 index 5a5efb266e361e72ac6ca0e037116eed624da56b..0000000000000000000000000000000000000000 --- a/spaces/umichVision/virtex-redcaps/virtex/modules/textual_heads.py +++ /dev/null @@ -1,450 +0,0 @@ -r""" -A textual head accepts visual features from the visual backbone, and performs -task specific modeling (captioning, classification etc.) to predict an output -distribution over vocabulary tokens for one or multiple time-steps in the batch. -""" -import torch -from torch import nn -from typing import Optional - -from virtex.modules.embedding import WordAndPositionalEmbedding -from virtex.modules.transformer import ( - PreNormTransformerEncoderLayer, - PreNormTransformerDecoderLayer, -) - - -class TextualHead(nn.Module): - r""" - Base class for all textual heads. All child classes can simply inherit - from :class:`~torch.nn.Module`, however this is kept here for uniform - type annotations. - - Parameters - ---------- - visual_feature_size: int - Size (number of channels) of the input features from the visual backbone. - vocab_size: int - Number of tokens in the output vocabulary. - hidden_size: int - Size of the token embedding vectors, or hidden state vector of the - language model. - """ - - def __init__(self, visual_feature_size: int, vocab_size: int, hidden_size: int): - super().__init__() - self.visual_feature_size = visual_feature_size - self.vocab_size = vocab_size - self.hidden_size = hidden_size - - @property - def textual_feature_size(self): - r""" - Size of the last dimension of output right before the output linear - layer (which predicts a distribution over vocabulary tokens). This is - typically same as :attr:`hidden_size` for most modules. This property - is used to add more modules on top of this. - """ - return self.hidden_size - - -class LinearTextualHead(TextualHead): - r""" - A textual head containing a single linear layer projecting from the visual - feature size to the output vocabulary size. - - Parameters - ---------- - visual_feature_size: int - Size (number of channels) of the input features from the visual backbone. - vocab_size: int - Number of tokens in the output vocabulary. - """ - - def __init__(self, visual_feature_size: int, vocab_size: int, **kwargs): - # For API consistency. - hidden_size = visual_feature_size - super().__init__(visual_feature_size, vocab_size, hidden_size) - self.output = nn.Linear(visual_feature_size, vocab_size) - - def forward( - self, - visual_features: torch.Tensor, - caption_tokens: Optional[torch.Tensor] = None, - caption_lengths: Optional[torch.Tensor] = None, - ) -> torch.Tensor: - r""" - Project visual features directly to predict a distribution over - vocabulary tokens through a single linear layer. This textual head - ignores arguments ``caption_tokens`` and ``caption_lengths``, they - are here for API consistency. - - Parameters - ---------- - visual_features: torch.Tensor - A tensor of shape ``(batch_size, channels, height, width)`` containing - features from visual backbone. - - Returns - ------- - torch.Tensor - A tensor of shape ``(batch_size, vocab_size)`` containing output - vocabulary logits. - """ - - # Convert to NHWC and project visual features to textual feature size. - batch_size, channels, height, width = visual_features.size() - visual_features = visual_features.view(batch_size, channels, -1) - visual_features = visual_features.permute(0, 2, 1) - - # Perform global average pooling of visual features. - # shape: (batch_size, channels) - visual_features = visual_features.mean(dim=1) - - # shape: (batch_size, max_caption_length, vocab_size) - output_logits = self.output(visual_features) - return output_logits - - -class TransformerDecoderTextualHead(TextualHead): - r""" - A textual head composed of four main modules: (1) input projection (linear - layer) for visual features to match size with textual features, (2) word - and positional embedding for input captions, (3) a unidirectional transformer - decoder, and (4) and output projection (linear layer) to predict a - distribution over vocabulary tokens. The word embedding weights are tied - with output projection; the latter still has its own learnable bias. - - .. note:: - - For the "bicaptioning" pretraining task, our *textual head* (as defined - in the paper) must have two transformer decoders: one each to decode - caption in either direction. This class however will always have one - transformer per object. - - Refer :class:`~virtex.models.captioning.BidirectionalCaptioningModel` - source to understand how an object of this class is cloned, along with - tying embedding and output weights, for bicaptioning. - - Hence, while there are *two objects* of this class, it is pragmatically - a *single* textual head as a whole, according to the terminology used - in paper. - - Parameters - ---------- - visual_feature_size: int - Size (number of channels) of the input features from the visual backbone. - vocab_size: int - Number of tokens in the output vocabulary. - hidden_size: int - Size of the token embedding vectors, or hidden state vector of the - language model. - num_layers: int - Number of layers in the transformer. - attention_heads: int - Number of attention heads in the transformer. - feedforward_size: int - Size of feedforward layers in the transformer. - dropout: float, optional (default = 0.1) - Dropout probability for transformer (applied after layer normalization). - norm_type: str, optional (default = "post") - Type of transformer layer: pre-normalization (like GPT-2) or - post-normalization (like BERT). One of ``{"pre", "post"}``. - mask_future_positions: bool, optional (default = True) - Whether to mask future positions for self-attention over caption tokens. - This must be ``True`` for captioning (and bicaptioning) tasks to prevent - the language model from cheating, and ``False`` for masked language - modeling, as the self-attention should consider all tokens. - max_caption_length: int, optional (default = 30) - Maximum length of input captions; this is used to create a fixed - positional embedding lookup table. - padding_idx: int, optional (default = 0) - Token index of ``[PAD]`` token, word embedding for these tokens will - be a vector of zeroes (and not trainable). - """ - - def __init__( - self, - visual_feature_size: int, - vocab_size: int, - hidden_size: int, - num_layers: int, - attention_heads: int, - feedforward_size: int, - dropout: float = 0.1, - norm_type: str = "post", - mask_future_positions: bool = True, - max_caption_length: int = 30, - padding_idx: int = 0, - ): - super().__init__(visual_feature_size, vocab_size, hidden_size) - self.num_layers = num_layers - self.attention_heads = attention_heads - self.feedforward_size = feedforward_size - self.dropout = dropout - self.mask_future_positions = mask_future_positions - self.padding_idx = padding_idx - - self.visual_projection = nn.Linear( - visual_feature_size, self.textual_feature_size - ) - self.embedding = WordAndPositionalEmbedding( - self.vocab_size, - self.textual_feature_size, - dropout=dropout, - max_caption_length=max_caption_length, - padding_idx=padding_idx, - ) - # Make decoder layer depending on whether it's a Pre-Norm or Post-Norm. - LayerClass = ( - nn.TransformerDecoderLayer - if norm_type == "post" - else PreNormTransformerDecoderLayer - ) - _layer = LayerClass( - self.textual_feature_size, - self.attention_heads, - dim_feedforward=self.feedforward_size, - dropout=dropout, - activation="gelu", - ) - self.transformer = nn.TransformerDecoder(_layer, self.num_layers) - self.apply(self._init_weights) - - # Create an output linear layer and tie the input and output word - # embeddings to reduce parameters. - self.output = nn.Linear(self.textual_feature_size, vocab_size) - self.output.weight = self.embedding.words.weight - - @staticmethod - def _init_weights(module): - r"""Initialize weights like BERT - N(0.0, 0.02), bias = 0.""" - - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=0.02) - elif isinstance(module, nn.MultiheadAttention): - module.in_proj_weight.data.normal_(mean=0.0, std=0.02) - module.out_proj.weight.data.normal_(mean=0.0, std=0.02) - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=0.02) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def forward( - self, - visual_features: torch.Tensor, - caption_tokens: torch.Tensor, - caption_lengths: torch.Tensor, - ) -> torch.Tensor: - r""" - Given (projected) visual features from visual backbone and caption - tokens, predict the output logits for next time-step. - - Parameters - ---------- - visual_features: torch.Tensor - A tensor of shape ``(batch_size, channels, height, width)`` containing - features from visual backbone. - caption_tokens: torch.Tensor - A tensor of shape ``(batch_size, max_caption_length)`` of caption - tokens padded to the right by ``padding_idx``. - caption_lengths: torch.Tensor - A tensor of shape ``(batch_size, )`` containing lengths of caption - tokens in the batch. - - Returns - ------- - torch.Tensor - A tensor of shape ``(batch_size, max_caption_length, vocab_size)`` - containing output vocabulary logits for each time-step. - """ - - # Convert to NHWC and project visual features to textual feature size. - batch_size, channels, height, width = visual_features.size() - visual_features = visual_features.view(batch_size, channels, -1) - visual_features = visual_features.permute(0, 2, 1) - - # shape: (batch_size, height * width, textual_feature_size) - projected_visual_features = self.visual_projection(visual_features) - # Now visual and textual features are of same size. - - # Note that `max_caption_length` here may be less than the - # `max_caption_length` passed in `__init__`, but it does not matter. - batch_size, max_caption_length = caption_tokens.size() - - # Create a mask based on caption lengths, shape: (batch_size, ) - # Form a binary mask: it is True for padding positions. - # These positions will be ignored for multi-headed attention. - ones = torch.ones_like(caption_tokens) - caption_mask = caption_lengths.unsqueeze(1) < ones.cumsum(dim=1) - - # shape: (batch_size, max_caption_length, textual_feature_size) - caption_embeddings = self.embedding(caption_tokens) - - if self.mask_future_positions: - # An additive mask for masking the future (one direction). - unidirectional_mask = self._generate_future_mask( - max_caption_length, caption_embeddings.dtype, caption_embeddings.device - ) - else: - unidirectional_mask = None - - # We transpose the first two dimensions of tokens embeddings and visual - # features, as required by decoder. - caption_embeddings = caption_embeddings.transpose(0, 1) - projected_visual_features = projected_visual_features.transpose(0, 1) - - # shape: (max_caption_length, batch_size, hidden_size) - textual_features = self.transformer( - caption_embeddings, - projected_visual_features, - tgt_mask=unidirectional_mask, - tgt_key_padding_mask=caption_mask, - ) - # Undo the transpose and bring batch to dim 0. - # shape: (batch_size, max_caption_length, hidden_size) - textual_features = textual_features.transpose(0, 1) - - # shape: (batch_size, max_caption_length, vocab_size) - output_logits = self.output(textual_features) - return output_logits - - def _generate_future_mask( - self, size: int, dtype: torch.dtype, device: torch.device - ) -> torch.Tensor: - r""" - Generate a mask for "future" positions, useful when using this module - for language modeling. - - Parameters - ---------- - size: int - """ - # Default mask is for forward direction. Flip for backward direction. - mask = torch.triu( - torch.ones(size, size, device=device, dtype=dtype), diagonal=1 - ) - mask = mask.masked_fill(mask == 1, float("-inf")) - return mask - - -class TransformerEncoderTextualHead(TextualHead): - def __init__( - self, - visual_feature_size: int, - vocab_size: int, - hidden_size: int, - num_layers: int, - attention_heads: int, - feedforward_size: int, - dropout: float = 0.1, - norm_type: str = "pre", - mask_future_positions: bool = True, - max_caption_length: int = 30, - padding_idx: int = 0, - ): - super().__init__(visual_feature_size, vocab_size, hidden_size) - self.num_layers = num_layers - self.attention_heads = attention_heads - self.feedforward_size = feedforward_size - self.dropout = dropout - self.mask_future_positions = mask_future_positions - self.padding_idx = padding_idx - - self.embedding = WordAndPositionalEmbedding( - self.vocab_size, - self.textual_feature_size, - dropout=dropout, - max_caption_length=max_caption_length, - padding_idx=padding_idx, - ) - # Make decoder layer depending on whether it's a Pre-Norm or Post-Norm. - LayerClass = ( - nn.TransformerEncoderLayer - if norm_type == "post" - else PreNormTransformerEncoderLayer - ) - _layer = LayerClass( - self.textual_feature_size, - self.attention_heads, - dim_feedforward=self.feedforward_size, - dropout=dropout, - activation="gelu", - ) - self.transformer = nn.TransformerEncoder(_layer, self.num_layers) - - self.final_ln = nn.LayerNorm(self.textual_feature_size) - self._init_weights() - - def _init_weights(self): - nn.init.normal_(self.embedding.words.weight, std=0.02) - nn.init.normal_(self.embedding.positions.weight, std=0.01) - - proj_std = (self.hidden_size ** -0.5) * ((2 * self.num_layers) ** -0.5) - for layer in self.transformer.layers: - nn.init.normal_(layer.self_attn.in_proj_weight, std=self.hidden_size ** -0.5) - nn.init.normal_(layer.self_attn.out_proj.weight, std=proj_std) - nn.init.normal_(layer.linear1.weight, std=(2 * self.hidden_size) ** -0.5) - nn.init.normal_(layer.linear2.weight, std=proj_std) - - def forward( - self, - caption_tokens: torch.Tensor, - caption_lengths: torch.Tensor, - ) -> torch.Tensor: - - # Note that `max_caption_length` here may be less than the - # `max_caption_length` passed in `__init__`, but it does not matter. - batch_size, max_caption_length = caption_tokens.size() - - # Create a mask based on caption lengths, shape: (batch_size, ) - # Form a binary mask: it is True for padding positions. - # These positions will be ignored for multi-headed attention. - ones = torch.ones_like(caption_tokens) - caption_mask = caption_lengths.unsqueeze(1) < ones.cumsum(dim=1) - - # shape: (batch_size, max_caption_length, textual_feature_size) - caption_embeddings = self.embedding(caption_tokens) - - if self.mask_future_positions: - # An additive mask for masking the future (one direction). - unidirectional_mask = self._generate_future_mask( - max_caption_length, caption_embeddings.dtype, caption_embeddings.device - ) - else: - unidirectional_mask = None - - # We transpose the first two dimensions of tokens embeddings and visual - # features, as required by decoder. - caption_embeddings = caption_embeddings.transpose(0, 1) - - # shape: (max_caption_length, batch_size, hidden_size) - textual_features = self.transformer( - caption_embeddings, - mask=unidirectional_mask, - src_key_padding_mask=caption_mask, - ) - # Undo the transpose and bring batch to dim 0. - # shape: (batch_size, max_caption_length, hidden_size) - textual_features = textual_features.transpose(0, 1) - textual_features = self.final_ln(textual_features) - return textual_features - - @staticmethod - def _generate_future_mask( - size: int, dtype: torch.dtype, device: torch.device - ) -> torch.Tensor: - r""" - Generate a mask for "future" positions, useful when using this module - for language modeling. - - Parameters - ---------- - size: int - """ - # Default mask is for forward direction. Flip for backward direction. - mask = torch.triu( - torch.ones(size, size, device=device, dtype=dtype), diagonal=1 - ) - mask = mask.masked_fill(mask == 1, float("-inf")) - return mask diff --git a/spaces/umoubuton/atri-bert-vits2/commons.py b/spaces/umoubuton/atri-bert-vits2/commons.py deleted file mode 100644 index d3fa07f65b1681e1f469b04b2fe689b7c174eaaa..0000000000000000000000000000000000000000 --- a/spaces/umoubuton/atri-bert-vits2/commons.py +++ /dev/null @@ -1,160 +0,0 @@ -import math -import torch -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - layer = pad_shape[::-1] - pad_shape = [item for sublist in layer for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - layer = pad_shape[::-1] - pad_shape = [item for sublist in layer for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/unstructuredio/irs-manuals/download_data.py b/spaces/unstructuredio/irs-manuals/download_data.py deleted file mode 100644 index 94d59fff04228da96c08fcd10b7b5770281d53c4..0000000000000000000000000000000000000000 --- a/spaces/unstructuredio/irs-manuals/download_data.py +++ /dev/null @@ -1,44 +0,0 @@ -import sys -import urllib -import requests -from bs4 import BeautifulSoup -import re -import zipfile - - -def get_zip_urls(base="https://www.irs.gov/downloads/irm", start_page=1, max_page=74): - urls = [] - for page_num in range(start_page, max_page + 1): - url = f"{base}?page={page_num}" - response = requests.get(url) - html_content = response.text - soup = BeautifulSoup(html_content, "html.parser") - for link in soup.find_all("a", href=re.compile(r"\.zip$")): - urls.append(link.get("href")) - return urls - - -def download_and_unzip(urls, unzip_dir): - for zip_url in urls[:10]: - filename = zip_url.split("/")[-1] - urllib.request.urlretrieve(zip_url, filename) - with zipfile.ZipFile(filename, "r") as zip_ref: - for file_info in zip_ref.infolist(): - # check if the file has a PDF extension - if file_info.filename.lower().endswith(".pdf"): - # extract the file to the PDF directory - zip_ref.extract(file_info, unzip_dir) - - -if __name__ == "__main__": - base_url = sys.argv[1] - page_start = int(sys.argv[2]) - page_max = int(sys.argv[3]) - pdf_dir = sys.argv[4] - print(f"Grabbing zip urls from {base_url}") - zip_urls = get_zip_urls(base_url, page_start, page_max) - print( - f"Found {len(zip_urls)} zip urls, downloading and unzipping pdfs into {pdf_dir}" - ) - download_and_unzip(zip_urls, pdf_dir) - print(f"Finished unzipping") diff --git a/spaces/vagmi/isai/Dockerfile b/spaces/vagmi/isai/Dockerfile deleted file mode 100644 index 53a9c333ecf5fc3fd430e61a03e967aa04f146a2..0000000000000000000000000000000000000000 --- a/spaces/vagmi/isai/Dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -FROM --platform=arm64 python:3.10 - -ARG GRADIO_SERVER_PORT=7860 -ARG GRADIO_SERVER_NAME="0.0.0.0" - -ENV PYTHONFAULTHANDLER=1 \ - PYTHONUNBUFFERED=1 \ - PYTHONHASHSEED=random \ - PIP_NO_CACHE_DIR=1 \ - PIP_DISABLE_PIP_VERSION_CHECK=1 \ - PIP_DEFAULT_TIMEOUT=100 \ - GRADIO_SERVER_PORT=${GRADIO_SERVER_PORT} \ - GRADIO_SERVER_NAME=${GRADIO_SERVER_NAME} - -# Install Gradio dependency -RUN apt-get update && apt-get install -y ffmpeg - -WORKDIR /app -COPY requirements.txt /app - -# Strip out GPU packages as we will only use CPU -RUN sed -i '/nvidia\|triton/d' requirements.txt \ - && pip install -r requirements.txt - -COPY . /app - -EXPOSE $GRADIO_SERVER_PORT -CMD ["python", "/app/app.py"] diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/base_model.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/base_model.py deleted file mode 100644 index 5cf430239b47ec5ec07531263f26f5c24a2311cd..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/base_model.py +++ /dev/null @@ -1,16 +0,0 @@ -import torch - - -class BaseModel(torch.nn.Module): - def load(self, path): - """Load model from file. - - Args: - path (str): file path - """ - parameters = torch.load(path, map_location=torch.device('cpu')) - - if "optimizer" in parameters: - parameters = parameters["model"] - - self.load_state_dict(parameters) diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/nas/val.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/nas/val.py deleted file mode 100644 index 474cf6bd04d30563784fa21e469f91df53f0b3e0..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/nas/val.py +++ /dev/null @@ -1,25 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -import torch - -from ultralytics.yolo.utils import ops -from ultralytics.yolo.utils.ops import xyxy2xywh -from ultralytics.yolo.v8.detect import DetectionValidator - -__all__ = ['NASValidator'] - - -class NASValidator(DetectionValidator): - - def postprocess(self, preds_in): - """Apply Non-maximum suppression to prediction outputs.""" - boxes = xyxy2xywh(preds_in[0][0]) - preds = torch.cat((boxes, preds_in[0][1]), -1).permute(0, 2, 1) - return ops.non_max_suppression(preds, - self.args.conf, - self.args.iou, - labels=self.lb, - multi_label=False, - agnostic=self.args.single_cls, - max_det=self.args.max_det, - max_time_img=0.5) diff --git a/spaces/versus666/ml_message_moderation/README.md b/spaces/versus666/ml_message_moderation/README.md deleted file mode 100644 index e858f3f0fc7f97fd19a072d12e45ea6992e18143..0000000000000000000000000000000000000000 --- a/spaces/versus666/ml_message_moderation/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: ML message moderation -emoji: 🤳 📨 → ✅ -colorFrom: indigo -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -python_version: 3.9 -app_file: src/app.py -pinned: false ---- \ No newline at end of file diff --git a/spaces/vinayreddy10/gpt3/README.md b/spaces/vinayreddy10/gpt3/README.md deleted file mode 100644 index 250b2a9bf81ea8fa25234bd332bd0fedbf38b29a..0000000000000000000000000000000000000000 --- a/spaces/vinayreddy10/gpt3/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gpt3 -emoji: 🌍 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vivym/image-matting-app/ppmatting/models/backbone/gca_enc.py b/spaces/vivym/image-matting-app/ppmatting/models/backbone/gca_enc.py deleted file mode 100644 index 2afeb5df8c398d89ac1d4fe8e411571afebec5b6..0000000000000000000000000000000000000000 --- a/spaces/vivym/image-matting-app/ppmatting/models/backbone/gca_enc.py +++ /dev/null @@ -1,395 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# The gca code was heavily based on https://github.com/Yaoyi-Li/GCA-Matting -# and https://github.com/open-mmlab/mmediting - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F -from paddleseg.cvlibs import manager, param_init -from paddleseg.utils import utils - -from ppmatting.models.layers import GuidedCxtAtten - - -class ResNet_D(nn.Layer): - def __init__(self, - input_channels, - layers, - late_downsample=False, - pretrained=None): - - super().__init__() - - self.pretrained = pretrained - - self._norm_layer = nn.BatchNorm - self.inplanes = 64 - self.late_downsample = late_downsample - self.midplanes = 64 if late_downsample else 32 - self.start_stride = [1, 2, 1, 2] if late_downsample else [2, 1, 2, 1] - self.conv1 = nn.utils.spectral_norm( - nn.Conv2D( - input_channels, - 32, - kernel_size=3, - stride=self.start_stride[0], - padding=1, - bias_attr=False)) - self.conv2 = nn.utils.spectral_norm( - nn.Conv2D( - 32, - self.midplanes, - kernel_size=3, - stride=self.start_stride[1], - padding=1, - bias_attr=False)) - self.conv3 = nn.utils.spectral_norm( - nn.Conv2D( - self.midplanes, - self.inplanes, - kernel_size=3, - stride=self.start_stride[2], - padding=1, - bias_attr=False)) - self.bn1 = self._norm_layer(32) - self.bn2 = self._norm_layer(self.midplanes) - self.bn3 = self._norm_layer(self.inplanes) - self.activation = nn.ReLU() - self.layer1 = self._make_layer( - BasicBlock, 64, layers[0], stride=self.start_stride[3]) - self.layer2 = self._make_layer(BasicBlock, 128, layers[1], stride=2) - self.layer3 = self._make_layer(BasicBlock, 256, layers[2], stride=2) - self.layer_bottleneck = self._make_layer( - BasicBlock, 512, layers[3], stride=2) - - self.init_weight() - - def _make_layer(self, block, planes, block_num, stride=1): - if block_num == 0: - return nn.Sequential(nn.Identity()) - norm_layer = self._norm_layer - downsample = None - if stride != 1: - downsample = nn.Sequential( - nn.AvgPool2D(2, stride), - nn.utils.spectral_norm( - conv1x1(self.inplanes, planes * block.expansion)), - norm_layer(planes * block.expansion), ) - elif self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.utils.spectral_norm( - conv1x1(self.inplanes, planes * block.expansion, stride)), - norm_layer(planes * block.expansion), ) - - layers = [block(self.inplanes, planes, stride, downsample, norm_layer)] - self.inplanes = planes * block.expansion - for _ in range(1, block_num): - layers.append(block(self.inplanes, planes, norm_layer=norm_layer)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.activation(x) - x = self.conv2(x) - x = self.bn2(x) - x1 = self.activation(x) # N x 32 x 256 x 256 - x = self.conv3(x1) - x = self.bn3(x) - x2 = self.activation(x) # N x 64 x 128 x 128 - - x3 = self.layer1(x2) # N x 64 x 128 x 128 - x4 = self.layer2(x3) # N x 128 x 64 x 64 - x5 = self.layer3(x4) # N x 256 x 32 x 32 - x = self.layer_bottleneck(x5) # N x 512 x 16 x 16 - - return x, (x1, x2, x3, x4, x5) - - def init_weight(self): - - for layer in self.sublayers(): - if isinstance(layer, nn.Conv2D): - - if hasattr(layer, "weight_orig"): - param = layer.weight_orig - else: - param = layer.weight - param_init.xavier_uniform(param) - - elif isinstance(layer, (nn.BatchNorm, nn.SyncBatchNorm)): - param_init.constant_init(layer.weight, value=1.0) - param_init.constant_init(layer.bias, value=0.0) - - elif isinstance(layer, BasicBlock): - param_init.constant_init(layer.bn2.weight, value=0.0) - - if self.pretrained is not None: - utils.load_pretrained_model(self, self.pretrained) - - -@manager.MODELS.add_component -class ResShortCut_D(ResNet_D): - def __init__(self, - input_channels, - layers, - late_downsample=False, - pretrained=None): - super().__init__( - input_channels, - layers, - late_downsample=late_downsample, - pretrained=pretrained) - - self.shortcut_inplane = [input_channels, self.midplanes, 64, 128, 256] - self.shortcut_plane = [32, self.midplanes, 64, 128, 256] - - self.shortcut = nn.LayerList() - for stage, inplane in enumerate(self.shortcut_inplane): - self.shortcut.append( - self._make_shortcut(inplane, self.shortcut_plane[stage])) - - def _make_shortcut(self, inplane, planes): - return nn.Sequential( - nn.utils.spectral_norm( - nn.Conv2D( - inplane, planes, kernel_size=3, padding=1, - bias_attr=False)), - nn.ReLU(), - self._norm_layer(planes), - nn.utils.spectral_norm( - nn.Conv2D( - planes, planes, kernel_size=3, padding=1, bias_attr=False)), - nn.ReLU(), - self._norm_layer(planes)) - - def forward(self, x): - - out = self.conv1(x) - out = self.bn1(out) - out = self.activation(out) - out = self.conv2(out) - out = self.bn2(out) - x1 = self.activation(out) # N x 32 x 256 x 256 - out = self.conv3(x1) - out = self.bn3(out) - out = self.activation(out) - - x2 = self.layer1(out) # N x 64 x 128 x 128 - x3 = self.layer2(x2) # N x 128 x 64 x 64 - x4 = self.layer3(x3) # N x 256 x 32 x 32 - out = self.layer_bottleneck(x4) # N x 512 x 16 x 16 - - fea1 = self.shortcut[0](x) # input image and trimap - fea2 = self.shortcut[1](x1) - fea3 = self.shortcut[2](x2) - fea4 = self.shortcut[3](x3) - fea5 = self.shortcut[4](x4) - - return out, { - 'shortcut': (fea1, fea2, fea3, fea4, fea5), - 'image': x[:, :3, ...] - } - - -@manager.MODELS.add_component -class ResGuidedCxtAtten(ResNet_D): - def __init__(self, - input_channels, - layers, - late_downsample=False, - pretrained=None): - super().__init__( - input_channels, - layers, - late_downsample=late_downsample, - pretrained=pretrained) - self.input_channels = input_channels - self.shortcut_inplane = [input_channels, self.midplanes, 64, 128, 256] - self.shortcut_plane = [32, self.midplanes, 64, 128, 256] - - self.shortcut = nn.LayerList() - for stage, inplane in enumerate(self.shortcut_inplane): - self.shortcut.append( - self._make_shortcut(inplane, self.shortcut_plane[stage])) - - self.guidance_head = nn.Sequential( - nn.Pad2D( - 1, mode="reflect"), - nn.utils.spectral_norm( - nn.Conv2D( - 3, 16, kernel_size=3, padding=0, stride=2, - bias_attr=False)), - nn.ReLU(), - self._norm_layer(16), - nn.Pad2D( - 1, mode="reflect"), - nn.utils.spectral_norm( - nn.Conv2D( - 16, 32, kernel_size=3, padding=0, stride=2, - bias_attr=False)), - nn.ReLU(), - self._norm_layer(32), - nn.Pad2D( - 1, mode="reflect"), - nn.utils.spectral_norm( - nn.Conv2D( - 32, - 128, - kernel_size=3, - padding=0, - stride=2, - bias_attr=False)), - nn.ReLU(), - self._norm_layer(128)) - - self.gca = GuidedCxtAtten(128, 128) - - self.init_weight() - - def init_weight(self): - - for layer in self.sublayers(): - if isinstance(layer, nn.Conv2D): - initializer = nn.initializer.XavierUniform() - if hasattr(layer, "weight_orig"): - param = layer.weight_orig - else: - param = layer.weight - initializer(param, param.block) - - elif isinstance(layer, (nn.BatchNorm, nn.SyncBatchNorm)): - param_init.constant_init(layer.weight, value=1.0) - param_init.constant_init(layer.bias, value=0.0) - - elif isinstance(layer, BasicBlock): - param_init.constant_init(layer.bn2.weight, value=0.0) - - if self.pretrained is not None: - utils.load_pretrained_model(self, self.pretrained) - - def _make_shortcut(self, inplane, planes): - return nn.Sequential( - nn.utils.spectral_norm( - nn.Conv2D( - inplane, planes, kernel_size=3, padding=1, - bias_attr=False)), - nn.ReLU(), - self._norm_layer(planes), - nn.utils.spectral_norm( - nn.Conv2D( - planes, planes, kernel_size=3, padding=1, bias_attr=False)), - nn.ReLU(), - self._norm_layer(planes)) - - def forward(self, x): - - out = self.conv1(x) - out = self.bn1(out) - out = self.activation(out) - out = self.conv2(out) - out = self.bn2(out) - x1 = self.activation(out) # N x 32 x 256 x 256 - out = self.conv3(x1) - out = self.bn3(out) - out = self.activation(out) - - im_fea = self.guidance_head( - x[:, :3, ...]) # downsample origin image and extract features - if self.input_channels == 6: - unknown = F.interpolate( - x[:, 4:5, ...], scale_factor=1 / 8, mode='nearest') - else: - unknown = x[:, 3:, ...].equal(paddle.to_tensor([1.])) - unknown = paddle.cast(unknown, dtype='float32') - unknown = F.interpolate(unknown, scale_factor=1 / 8, mode='nearest') - - x2 = self.layer1(out) # N x 64 x 128 x 128 - x3 = self.layer2(x2) # N x 128 x 64 x 64 - x3 = self.gca(im_fea, x3, unknown) # contextual attention - x4 = self.layer3(x3) # N x 256 x 32 x 32 - out = self.layer_bottleneck(x4) # N x 512 x 16 x 16 - - fea1 = self.shortcut[0](x) # input image and trimap - fea2 = self.shortcut[1](x1) - fea3 = self.shortcut[2](x2) - fea4 = self.shortcut[3](x3) - fea5 = self.shortcut[4](x4) - - return out, { - 'shortcut': (fea1, fea2, fea3, fea4, fea5), - 'image_fea': im_fea, - 'unknown': unknown, - } - - -class BasicBlock(nn.Layer): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - downsample=None, - norm_layer=None): - super().__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = nn.utils.spectral_norm(conv3x3(inplanes, planes, stride)) - self.bn1 = norm_layer(planes) - self.activation = nn.ReLU() - self.conv2 = nn.utils.spectral_norm(conv3x3(planes, planes)) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.activation(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.activation(out) - - return out - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2D( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - groups=groups, - bias_attr=False, - dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2D( - in_planes, out_planes, kernel_size=1, stride=stride, bias_attr=False) diff --git a/spaces/volhack/vits-uma-genshin-honkai/app.py b/spaces/volhack/vits-uma-genshin-honkai/app.py deleted file mode 100644 index 92ddafdcd240434f58569b0e6964ef331a971dcf..0000000000000000000000000000000000000000 --- a/spaces/volhack/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,124 +0,0 @@ -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor -import torch - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model).to(device) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 500: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "#
    VITS语音在线合成demo\n" - "
    主要有赛马娘,原神中文,原神日语,崩坏3的音色
    " - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate") - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() \ No newline at end of file diff --git a/spaces/volhack/vits-uma-genshin-honkai/text/cleaners.py b/spaces/volhack/vits-uma-genshin-honkai/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/volhack/vits-uma-genshin-honkai/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i 0, 'at least one fusion should be used' - self.in_channels = in_channels - self.ratio = ratio - self.planes = int(in_channels * ratio) - self.pooling_type = pooling_type - self.fusion_types = fusion_types - if pooling_type == 'att': - self.conv_mask = nn.Conv2d(in_channels, 1, kernel_size=1) - self.softmax = nn.Softmax(dim=2) - else: - self.avg_pool = nn.AdaptiveAvgPool2d(1) - if 'channel_add' in fusion_types: - self.channel_add_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_add_conv = None - if 'channel_mul' in fusion_types: - self.channel_mul_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_mul_conv = None - self.reset_parameters() - - def reset_parameters(self): - if self.pooling_type == 'att': - kaiming_init(self.conv_mask, mode='fan_in') - self.conv_mask.inited = True - - if self.channel_add_conv is not None: - last_zero_init(self.channel_add_conv) - if self.channel_mul_conv is not None: - last_zero_init(self.channel_mul_conv) - - def spatial_pool(self, x): - batch, channel, height, width = x.size() - if self.pooling_type == 'att': - input_x = x - # [N, C, H * W] - input_x = input_x.view(batch, channel, height * width) - # [N, 1, C, H * W] - input_x = input_x.unsqueeze(1) - # [N, 1, H, W] - context_mask = self.conv_mask(x) - # [N, 1, H * W] - context_mask = context_mask.view(batch, 1, height * width) - # [N, 1, H * W] - context_mask = self.softmax(context_mask) - # [N, 1, H * W, 1] - context_mask = context_mask.unsqueeze(-1) - # [N, 1, C, 1] - context = torch.matmul(input_x, context_mask) - # [N, C, 1, 1] - context = context.view(batch, channel, 1, 1) - else: - # [N, C, 1, 1] - context = self.avg_pool(x) - - return context - - def forward(self, x): - # [N, C, 1, 1] - context = self.spatial_pool(x) - - out = x - if self.channel_mul_conv is not None: - # [N, C, 1, 1] - channel_mul_term = torch.sigmoid(self.channel_mul_conv(context)) - out = out * channel_mul_term - if self.channel_add_conv is not None: - # [N, C, 1, 1] - channel_add_term = self.channel_add_conv(context) - out = out + channel_add_term - - return out diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/image/geometric.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/image/geometric.py deleted file mode 100644 index cf97c201cb4e43796c911919d03fb26a07ed817d..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/image/geometric.py +++ /dev/null @@ -1,728 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers - -import cv2 -import numpy as np - -from ..utils import to_2tuple -from .io import imread_backend - -try: - from PIL import Image -except ImportError: - Image = None - - -def _scale_size(size, scale): - """Rescale a size by a ratio. - - Args: - size (tuple[int]): (w, h). - scale (float | tuple(float)): Scaling factor. - - Returns: - tuple[int]: scaled size. - """ - if isinstance(scale, (float, int)): - scale = (scale, scale) - w, h = size - return int(w * float(scale[0]) + 0.5), int(h * float(scale[1]) + 0.5) - - -cv2_interp_codes = { - 'nearest': cv2.INTER_NEAREST, - 'bilinear': cv2.INTER_LINEAR, - 'bicubic': cv2.INTER_CUBIC, - 'area': cv2.INTER_AREA, - 'lanczos': cv2.INTER_LANCZOS4 -} - -if Image is not None: - pillow_interp_codes = { - 'nearest': Image.NEAREST, - 'bilinear': Image.BILINEAR, - 'bicubic': Image.BICUBIC, - 'box': Image.BOX, - 'lanczos': Image.LANCZOS, - 'hamming': Image.HAMMING - } - - -def imresize(img, - size, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image to a given size. - - Args: - img (ndarray): The input image. - size (tuple[int]): Target size (w, h). - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if backend is None: - backend = imread_backend - if backend not in ['cv2', 'pillow']: - raise ValueError(f'backend: {backend} is not supported for resize.' - f"Supported backends are 'cv2', 'pillow'") - - if backend == 'pillow': - assert img.dtype == np.uint8, 'Pillow backend only support uint8 type' - pil_image = Image.fromarray(img) - pil_image = pil_image.resize(size, pillow_interp_codes[interpolation]) - resized_img = np.array(pil_image) - else: - resized_img = cv2.resize( - img, size, dst=out, interpolation=cv2_interp_codes[interpolation]) - if not return_scale: - return resized_img - else: - w_scale = size[0] / w - h_scale = size[1] / h - return resized_img, w_scale, h_scale - - -def imresize_to_multiple(img, - divisor, - size=None, - scale_factor=None, - keep_ratio=False, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image according to a given size or scale factor and then rounds - up the the resized or rescaled image size to the nearest value that can be - divided by the divisor. - - Args: - img (ndarray): The input image. - divisor (int | tuple): Resized image size will be a multiple of - divisor. If divisor is a tuple, divisor should be - (w_divisor, h_divisor). - size (None | int | tuple[int]): Target size (w, h). Default: None. - scale_factor (None | float | tuple[float]): Multiplier for spatial - size. Should match input size if it is a tuple and the 2D style is - (w_scale_factor, h_scale_factor). Default: None. - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. Default: False. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if size is not None and scale_factor is not None: - raise ValueError('only one of size or scale_factor should be defined') - elif size is None and scale_factor is None: - raise ValueError('one of size or scale_factor should be defined') - elif size is not None: - size = to_2tuple(size) - if keep_ratio: - size = rescale_size((w, h), size, return_scale=False) - else: - size = _scale_size((w, h), scale_factor) - - divisor = to_2tuple(divisor) - size = tuple([int(np.ceil(s / d)) * d for s, d in zip(size, divisor)]) - resized_img, w_scale, h_scale = imresize( - img, - size, - return_scale=True, - interpolation=interpolation, - out=out, - backend=backend) - if return_scale: - return resized_img, w_scale, h_scale - else: - return resized_img - - -def imresize_like(img, - dst_img, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image to the same size of a given image. - - Args: - img (ndarray): The input image. - dst_img (ndarray): The target image. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = dst_img.shape[:2] - return imresize(img, (w, h), return_scale, interpolation, backend=backend) - - -def rescale_size(old_size, scale, return_scale=False): - """Calculate the new size to be rescaled to. - - Args: - old_size (tuple[int]): The old size (w, h) of image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image size. - - Returns: - tuple[int]: The new rescaled image size. - """ - w, h = old_size - if isinstance(scale, (float, int)): - if scale <= 0: - raise ValueError(f'Invalid scale {scale}, must be positive.') - scale_factor = scale - elif isinstance(scale, tuple): - max_long_edge = max(scale) - max_short_edge = min(scale) - scale_factor = min(max_long_edge / max(h, w), - max_short_edge / min(h, w)) - else: - raise TypeError( - f'Scale must be a number or tuple of int, but got {type(scale)}') - - new_size = _scale_size((w, h), scale_factor) - - if return_scale: - return new_size, scale_factor - else: - return new_size - - -def imrescale(img, - scale, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image while keeping the aspect ratio. - - Args: - img (ndarray): The input image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - ndarray: The rescaled image. - """ - h, w = img.shape[:2] - new_size, scale_factor = rescale_size((w, h), scale, return_scale=True) - rescaled_img = imresize( - img, new_size, interpolation=interpolation, backend=backend) - if return_scale: - return rescaled_img, scale_factor - else: - return rescaled_img - - -def imflip(img, direction='horizontal'): - """Flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image. - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return np.flip(img, axis=1) - elif direction == 'vertical': - return np.flip(img, axis=0) - else: - return np.flip(img, axis=(0, 1)) - - -def imflip_(img, direction='horizontal'): - """Inplace flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image (inplace). - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return cv2.flip(img, 1, img) - elif direction == 'vertical': - return cv2.flip(img, 0, img) - else: - return cv2.flip(img, -1, img) - - -def imrotate(img, - angle, - center=None, - scale=1.0, - border_value=0, - interpolation='bilinear', - auto_bound=False): - """Rotate an image. - - Args: - img (ndarray): Image to be rotated. - angle (float): Rotation angle in degrees, positive values mean - clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. - scale (float): Isotropic scale factor. - border_value (int): Border value. - interpolation (str): Same as :func:`resize`. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. - - Returns: - ndarray: The rotated image. - """ - if center is not None and auto_bound: - raise ValueError('`auto_bound` conflicts with `center`') - h, w = img.shape[:2] - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - assert isinstance(center, tuple) - - matrix = cv2.getRotationMatrix2D(center, -angle, scale) - if auto_bound: - cos = np.abs(matrix[0, 0]) - sin = np.abs(matrix[0, 1]) - new_w = h * sin + w * cos - new_h = h * cos + w * sin - matrix[0, 2] += (new_w - w) * 0.5 - matrix[1, 2] += (new_h - h) * 0.5 - w = int(np.round(new_w)) - h = int(np.round(new_h)) - rotated = cv2.warpAffine( - img, - matrix, (w, h), - flags=cv2_interp_codes[interpolation], - borderValue=border_value) - return rotated - - -def bbox_clip(bboxes, img_shape): - """Clip bboxes to fit the image shape. - - Args: - bboxes (ndarray): Shape (..., 4*k) - img_shape (tuple[int]): (height, width) of the image. - - Returns: - ndarray: Clipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - cmin = np.empty(bboxes.shape[-1], dtype=bboxes.dtype) - cmin[0::2] = img_shape[1] - 1 - cmin[1::2] = img_shape[0] - 1 - clipped_bboxes = np.maximum(np.minimum(bboxes, cmin), 0) - return clipped_bboxes - - -def bbox_scaling(bboxes, scale, clip_shape=None): - """Scaling bboxes w.r.t the box center. - - Args: - bboxes (ndarray): Shape(..., 4). - scale (float): Scaling factor. - clip_shape (tuple[int], optional): If specified, bboxes that exceed the - boundary will be clipped according to the given shape (h, w). - - Returns: - ndarray: Scaled bboxes. - """ - if float(scale) == 1.0: - scaled_bboxes = bboxes.copy() - else: - w = bboxes[..., 2] - bboxes[..., 0] + 1 - h = bboxes[..., 3] - bboxes[..., 1] + 1 - dw = (w * (scale - 1)) * 0.5 - dh = (h * (scale - 1)) * 0.5 - scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1) - if clip_shape is not None: - return bbox_clip(scaled_bboxes, clip_shape) - else: - return scaled_bboxes - - -def imcrop(img, bboxes, scale=1.0, pad_fill=None): - """Crop image patches. - - 3 steps: scale the bboxes -> clip bboxes -> crop and pad. - - Args: - img (ndarray): Image to be cropped. - bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes. - scale (float, optional): Scale ratio of bboxes, the default value - 1.0 means no padding. - pad_fill (Number | list[Number]): Value to be filled for padding. - Default: None, which means no padding. - - Returns: - list[ndarray] | ndarray: The cropped image patches. - """ - chn = 1 if img.ndim == 2 else img.shape[2] - if pad_fill is not None: - if isinstance(pad_fill, (int, float)): - pad_fill = [pad_fill for _ in range(chn)] - assert len(pad_fill) == chn - - _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes - scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32) - clipped_bbox = bbox_clip(scaled_bboxes, img.shape) - - patches = [] - for i in range(clipped_bbox.shape[0]): - x1, y1, x2, y2 = tuple(clipped_bbox[i, :]) - if pad_fill is None: - patch = img[y1:y2 + 1, x1:x2 + 1, ...] - else: - _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :]) - if chn == 1: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1) - else: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn) - patch = np.array( - pad_fill, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - x_start = 0 if _x1 >= 0 else -_x1 - y_start = 0 if _y1 >= 0 else -_y1 - w = x2 - x1 + 1 - h = y2 - y1 + 1 - patch[y_start:y_start + h, x_start:x_start + w, - ...] = img[y1:y1 + h, x1:x1 + w, ...] - patches.append(patch) - - if bboxes.ndim == 1: - return patches[0] - else: - return patches - - -def impad(img, - *, - shape=None, - padding=None, - pad_val=0, - padding_mode='constant'): - """Pad the given image to a certain shape or pad on all sides with - specified padding mode and padding value. - - Args: - img (ndarray): Image to be padded. - shape (tuple[int]): Expected padding shape (h, w). Default: None. - padding (int or tuple[int]): Padding on each border. If a single int is - provided this is used to pad all borders. If tuple of length 2 is - provided this is the padding on left/right and top/bottom - respectively. If a tuple of length 4 is provided this is the - padding for the left, top, right and bottom borders respectively. - Default: None. Note that `shape` and `padding` can not be both - set. - pad_val (Number | Sequence[Number]): Values to be filled in padding - areas when padding_mode is 'constant'. Default: 0. - padding_mode (str): Type of padding. Should be: constant, edge, - reflect or symmetric. Default: constant. - - - constant: pads with a constant value, this value is specified - with pad_val. - - edge: pads with the last value at the edge of the image. - - reflect: pads with reflection of image without repeating the - last value on the edge. For example, padding [1, 2, 3, 4] - with 2 elements on both sides in reflect mode will result - in [3, 2, 1, 2, 3, 4, 3, 2]. - - symmetric: pads with reflection of image repeating the last - value on the edge. For example, padding [1, 2, 3, 4] with - 2 elements on both sides in symmetric mode will result in - [2, 1, 1, 2, 3, 4, 4, 3] - - Returns: - ndarray: The padded image. - """ - - assert (shape is not None) ^ (padding is not None) - if shape is not None: - padding = (0, 0, shape[1] - img.shape[1], shape[0] - img.shape[0]) - - # check pad_val - if isinstance(pad_val, tuple): - assert len(pad_val) == img.shape[-1] - elif not isinstance(pad_val, numbers.Number): - raise TypeError('pad_val must be a int or a tuple. ' - f'But received {type(pad_val)}') - - # check padding - if isinstance(padding, tuple) and len(padding) in [2, 4]: - if len(padding) == 2: - padding = (padding[0], padding[1], padding[0], padding[1]) - elif isinstance(padding, numbers.Number): - padding = (padding, padding, padding, padding) - else: - raise ValueError('Padding must be a int or a 2, or 4 element tuple.' - f'But received {padding}') - - # check padding mode - assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'] - - border_type = { - 'constant': cv2.BORDER_CONSTANT, - 'edge': cv2.BORDER_REPLICATE, - 'reflect': cv2.BORDER_REFLECT_101, - 'symmetric': cv2.BORDER_REFLECT - } - img = cv2.copyMakeBorder( - img, - padding[1], - padding[3], - padding[0], - padding[2], - border_type[padding_mode], - value=pad_val) - - return img - - -def impad_to_multiple(img, divisor, pad_val=0): - """Pad an image to ensure each edge to be multiple to some number. - - Args: - img (ndarray): Image to be padded. - divisor (int): Padded image edges will be multiple to divisor. - pad_val (Number | Sequence[Number]): Same as :func:`impad`. - - Returns: - ndarray: The padded image. - """ - pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor - pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor - return impad(img, shape=(pad_h, pad_w), pad_val=pad_val) - - -def cutout(img, shape, pad_val=0): - """Randomly cut out a rectangle from the original img. - - Args: - img (ndarray): Image to be cutout. - shape (int | tuple[int]): Expected cutout shape (h, w). If given as a - int, the value will be used for both h and w. - pad_val (int | float | tuple[int | float]): Values to be filled in the - cut area. Defaults to 0. - - Returns: - ndarray: The cutout image. - """ - - channels = 1 if img.ndim == 2 else img.shape[2] - if isinstance(shape, int): - cut_h, cut_w = shape, shape - else: - assert isinstance(shape, tuple) and len(shape) == 2, \ - f'shape must be a int or a tuple with length 2, but got type ' \ - f'{type(shape)} instead.' - cut_h, cut_w = shape - if isinstance(pad_val, (int, float)): - pad_val = tuple([pad_val] * channels) - elif isinstance(pad_val, tuple): - assert len(pad_val) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(pad_val), channels) - else: - raise TypeError(f'Invalid type {type(pad_val)} for `pad_val`') - - img_h, img_w = img.shape[:2] - y0 = np.random.uniform(img_h) - x0 = np.random.uniform(img_w) - - y1 = int(max(0, y0 - cut_h / 2.)) - x1 = int(max(0, x0 - cut_w / 2.)) - y2 = min(img_h, y1 + cut_h) - x2 = min(img_w, x1 + cut_w) - - if img.ndim == 2: - patch_shape = (y2 - y1, x2 - x1) - else: - patch_shape = (y2 - y1, x2 - x1, channels) - - img_cutout = img.copy() - patch = np.array( - pad_val, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - img_cutout[y1:y2, x1:x2, ...] = patch - - return img_cutout - - -def _get_shear_matrix(magnitude, direction='horizontal'): - """Generate the shear matrix for transformation. - - Args: - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - - Returns: - ndarray: The shear matrix with dtype float32. - """ - if direction == 'horizontal': - shear_matrix = np.float32([[1, magnitude, 0], [0, 1, 0]]) - elif direction == 'vertical': - shear_matrix = np.float32([[1, 0, 0], [magnitude, 1, 0]]) - return shear_matrix - - -def imshear(img, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear an image. - - Args: - img (ndarray): Image to be sheared with format (h, w) - or (h, w, c). - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The sheared image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`') - shear_matrix = _get_shear_matrix(magnitude, direction) - sheared = cv2.warpAffine( - img, - shear_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. shearing masks whose channels large - # than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return sheared - - -def _get_translate_matrix(offset, direction='horizontal'): - """Generate the translate matrix. - - Args: - offset (int | float): The offset used for translate. - direction (str): The translate direction, either - "horizontal" or "vertical". - - Returns: - ndarray: The translate matrix with dtype float32. - """ - if direction == 'horizontal': - translate_matrix = np.float32([[1, 0, offset], [0, 1, 0]]) - elif direction == 'vertical': - translate_matrix = np.float32([[1, 0, 0], [0, 1, offset]]) - return translate_matrix - - -def imtranslate(img, - offset, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Translate an image. - - Args: - img (ndarray): Image to be translated with format - (h, w) or (h, w, c). - offset (int | float): The offset used for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The translated image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`.') - translate_matrix = _get_translate_matrix(offset, direction) - translated = cv2.warpAffine( - img, - translate_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. translating masks whose channels - # large than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return translated diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py deleted file mode 100644 index f9a72592be47b534ce22573775fd5a7e8e86d72d..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class MlflowLoggerHook(LoggerHook): - - def __init__(self, - exp_name=None, - tags=None, - log_model=True, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True): - """Class to log metrics and (optionally) a trained model to MLflow. - - It requires `MLflow`_ to be installed. - - Args: - exp_name (str, optional): Name of the experiment to be used. - Default None. - If not None, set the active experiment. - If experiment does not exist, an experiment with provided name - will be created. - tags (dict of str: str, optional): Tags for the current run. - Default None. - If not None, set tags for the current run. - log_model (bool, optional): Whether to log an MLflow artifact. - Default True. - If True, log runner.model as an MLflow artifact - for the current run. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. _MLflow: - https://www.mlflow.org/docs/latest/index.html - """ - super(MlflowLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_mlflow() - self.exp_name = exp_name - self.tags = tags - self.log_model = log_model - - def import_mlflow(self): - try: - import mlflow - import mlflow.pytorch as mlflow_pytorch - except ImportError: - raise ImportError( - 'Please run "pip install mlflow" to install mlflow') - self.mlflow = mlflow - self.mlflow_pytorch = mlflow_pytorch - - @master_only - def before_run(self, runner): - super(MlflowLoggerHook, self).before_run(runner) - if self.exp_name is not None: - self.mlflow.set_experiment(self.exp_name) - if self.tags is not None: - self.mlflow.set_tags(self.tags) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - self.mlflow.log_metrics(tags, step=self.get_iter(runner)) - - @master_only - def after_run(self, runner): - if self.log_model: - self.mlflow_pytorch.log_model(runner.model, 'models') diff --git a/spaces/wanghuoto/gogoai/src/lib/bots/bing/tts.ts b/spaces/wanghuoto/gogoai/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/wanghuoto/gogoai/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/wonoqo/AlphaGPT/app.py b/spaces/wonoqo/AlphaGPT/app.py deleted file mode 100644 index 1e10327ce23a5fc15476871038c21ee4296bef73..0000000000000000000000000000000000000000 --- a/spaces/wonoqo/AlphaGPT/app.py +++ /dev/null @@ -1,899 +0,0 @@ -import io -import os -import ssl -from contextlib import closing -from typing import Optional, Tuple -import datetime - -import boto3 -import gradio as gr -import requests - -# UNCOMMENT TO USE WHISPER -import warnings -import whisper - -from langchain import ConversationChain, LLMChain - -from langchain.agents import load_tools, initialize_agent -from langchain.chains.conversation.memory import ConversationBufferMemory -from langchain.llms import OpenAI, OpenAIChat -from threading import Lock - -# Console to variable -from io import StringIO -import sys -import re - -from openai.error import AuthenticationError, InvalidRequestError, RateLimitError - -# Pertains to Express-inator functionality -from langchain.prompts import PromptTemplate - -from polly_utils import PollyVoiceData, NEURAL_ENGINE -from azure_utils import AzureVoiceData - -# Pertains to question answering functionality -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.text_splitter import CharacterTextSplitter -from langchain.vectorstores.faiss import FAISS -from langchain.docstore.document import Document -from langchain.chains.question_answering import load_qa_chain - -news_api_key = os.environ["NEWS_API_KEY"] -tmdb_bearer_token = os.environ["TMDB_BEARER_TOKEN"] - -TOOLS_LIST = ['serpapi', 'wolfram-alpha', 'pal-math', - 'pal-colored-objects'] # 'google-search','news-api','tmdb-api','open-meteo-api' -TOOLS_DEFAULT_LIST = ['serpapi'] -BUG_FOUND_MSG = "Congratulations, you've found a bug in this application!" -# AUTH_ERR_MSG = "Please paste your OpenAI key from openai.com to use this application. It is not necessary to hit a button or key after pasting it." -AUTH_ERR_MSG = "Please paste your OpenAI key from openai.com to use this application. " -MAX_TOKENS = 512 - -LOOPING_TALKING_HEAD = "videos/Masahiro.mp4" -TALKING_HEAD_WIDTH = "192" -MAX_TALKING_HEAD_TEXT_LENGTH = 155 - -# Pertains to Express-inator functionality -NUM_WORDS_DEFAULT = 0 -MAX_WORDS = 400 -FORMALITY_DEFAULT = "N/A" -TEMPERATURE_DEFAULT = 0.5 -EMOTION_DEFAULT = "N/A" -LANG_LEVEL_DEFAULT = "N/A" -TRANSLATE_TO_DEFAULT = "N/A" -LITERARY_STYLE_DEFAULT = "N/A" -PROMPT_TEMPLATE = PromptTemplate( - input_variables=["original_words", "num_words", "formality", "emotions", "lang_level", "translate_to", - "literary_style"], - template="Restate {num_words}{formality}{emotions}{lang_level}{translate_to}{literary_style}the following: \n{original_words}\n", -) - -FORCE_TRANSLATE_DEFAULT = True -USE_GPT4_DEFAULT = False - -POLLY_VOICE_DATA = PollyVoiceData() -AZURE_VOICE_DATA = AzureVoiceData() - -# Pertains to WHISPER functionality -WHISPER_DETECT_LANG = "Detect language" - -# UNCOMMENT TO USE WHISPER -warnings.filterwarnings("ignore") -WHISPER_MODEL = whisper.load_model("tiny") -print("WHISPER_MODEL", WHISPER_MODEL) - - -# UNCOMMENT TO USE WHISPER -def transcribe(aud_inp, whisper_lang): - if aud_inp is None: - return "" - aud = whisper.load_audio(aud_inp) - aud = whisper.pad_or_trim(aud) - mel = whisper.log_mel_spectrogram(aud).to(WHISPER_MODEL.device) - _, probs = WHISPER_MODEL.detect_language(mel) - options = whisper.DecodingOptions() - if whisper_lang != WHISPER_DETECT_LANG: - whisper_lang_code = POLLY_VOICE_DATA.get_whisper_lang_code(whisper_lang) - options = whisper.DecodingOptions(language=whisper_lang_code) - result = whisper.decode(WHISPER_MODEL, mel, options) - print("result.text", result.text) - result_text = "" - if result and result.text: - result_text = result.text - return result_text - - -# Temporarily address Wolfram Alpha SSL certificate issue -ssl._create_default_https_context = ssl._create_unverified_context - - -# TEMPORARY FOR TESTING -def transcribe_dummy(aud_inp_tb, whisper_lang): - if aud_inp_tb is None: - return "" - # aud = whisper.load_audio(aud_inp) - # aud = whisper.pad_or_trim(aud) - # mel = whisper.log_mel_spectrogram(aud).to(WHISPER_MODEL.device) - # _, probs = WHISPER_MODEL.detect_language(mel) - # options = whisper.DecodingOptions() - # options = whisper.DecodingOptions(language="ja") - # result = whisper.decode(WHISPER_MODEL, mel, options) - result_text = "Whisper will detect language" - if whisper_lang != WHISPER_DETECT_LANG: - whisper_lang_code = POLLY_VOICE_DATA.get_whisper_lang_code(whisper_lang) - result_text = f"Whisper will use lang code: {whisper_lang_code}" - print("result_text", result_text) - return aud_inp_tb - - -# Pertains to Express-inator functionality -def transform_text(desc, express_chain, num_words, formality, - anticipation_level, joy_level, trust_level, - fear_level, surprise_level, sadness_level, disgust_level, anger_level, - lang_level, translate_to, literary_style, force_translate): - num_words_prompt = "" - if num_words and int(num_words) != 0: - num_words_prompt = "using up to " + str(num_words) + " words, " - - # Change some arguments to lower case - formality = formality.lower() - anticipation_level = anticipation_level.lower() - joy_level = joy_level.lower() - trust_level = trust_level.lower() - fear_level = fear_level.lower() - surprise_level = surprise_level.lower() - sadness_level = sadness_level.lower() - disgust_level = disgust_level.lower() - anger_level = anger_level.lower() - - formality_str = "" - if formality != "n/a": - formality_str = "in a " + formality + " manner, " - - # put all emotions into a list - emotions = [] - if anticipation_level != "n/a": - emotions.append(anticipation_level) - if joy_level != "n/a": - emotions.append(joy_level) - if trust_level != "n/a": - emotions.append(trust_level) - if fear_level != "n/a": - emotions.append(fear_level) - if surprise_level != "n/a": - emotions.append(surprise_level) - if sadness_level != "n/a": - emotions.append(sadness_level) - if disgust_level != "n/a": - emotions.append(disgust_level) - if anger_level != "n/a": - emotions.append(anger_level) - - emotions_str = "" - if len(emotions) > 0: - if len(emotions) == 1: - emotions_str = "with emotion of " + emotions[0] + ", " - else: - emotions_str = "with emotions of " + ", ".join(emotions[:-1]) + " and " + emotions[-1] + ", " - - lang_level_str = "" - if lang_level != LANG_LEVEL_DEFAULT: - lang_level_str = "at a level that a person in " + lang_level + " can easily comprehend, " if translate_to == TRANSLATE_TO_DEFAULT else "" - - translate_to_str = "" - if translate_to != TRANSLATE_TO_DEFAULT and (force_translate or lang_level != LANG_LEVEL_DEFAULT): - translate_to_str = "translated to " + translate_to + ( - "" if lang_level == LANG_LEVEL_DEFAULT else " at a level that a person in " + lang_level + " can easily comprehend") + ", " - - literary_style_str = "" - if literary_style != LITERARY_STYLE_DEFAULT: - if literary_style == "Prose": - literary_style_str = "as prose, " - if literary_style == "Story": - literary_style_str = "as a story, " - elif literary_style == "Summary": - literary_style_str = "as a summary, " - elif literary_style == "Outline": - literary_style_str = "as an outline numbers and lower case letters, " - elif literary_style == "Bullets": - literary_style_str = "as bullet points using bullets, " - elif literary_style == "Poetry": - literary_style_str = "as a poem, " - elif literary_style == "Haiku": - literary_style_str = "as a haiku, " - elif literary_style == "Limerick": - literary_style_str = "as a limerick, " - elif literary_style == "Rap": - literary_style_str = "as a rap, " - elif literary_style == "Joke": - literary_style_str = "as a very funny joke with a setup and punchline, " - elif literary_style == "Knock-knock": - literary_style_str = "as a very funny knock-knock joke, " - elif literary_style == "FAQ": - literary_style_str = "as a FAQ with several questions and answers, " - - formatted_prompt = PROMPT_TEMPLATE.format( - original_words=desc, - num_words=num_words_prompt, - formality=formality_str, - emotions=emotions_str, - lang_level=lang_level_str, - translate_to=translate_to_str, - literary_style=literary_style_str - ) - - trans_instr = num_words_prompt + formality_str + emotions_str + lang_level_str + translate_to_str + literary_style_str - if express_chain and len(trans_instr.strip()) > 0: - generated_text = express_chain.run( - {'original_words': desc, 'num_words': num_words_prompt, 'formality': formality_str, - 'emotions': emotions_str, 'lang_level': lang_level_str, 'translate_to': translate_to_str, - 'literary_style': literary_style_str}).strip() - else: - print("Not transforming text") - generated_text = desc - - # replace all newlines with
    in generated_text - generated_text = generated_text.replace("\n", "\n\n") - - prompt_plus_generated = "GPT prompt: " + formatted_prompt + "\n\n" + generated_text - - print("\n==== date/time: " + str(datetime.datetime.now() - datetime.timedelta(hours=5)) + " ====") - print("prompt_plus_generated: " + prompt_plus_generated) - - return generated_text - - -def load_chain(tools_list, llm): - chain = None - express_chain = None - memory = None - if llm: - print("\ntools_list", tools_list) - tool_names = tools_list - tools = load_tools(tool_names, llm=llm, news_api_key=news_api_key, tmdb_bearer_token=tmdb_bearer_token) - - memory = ConversationBufferMemory(memory_key="chat_history") - - chain = initialize_agent(tools, llm, agent="conversational-react-description", verbose=True, memory=memory) - express_chain = LLMChain(llm=llm, prompt=PROMPT_TEMPLATE, verbose=True) - return chain, express_chain, memory - - -def set_openai_api_key(api_key, use_gpt4): - """Set the api key and return chain. - If no api_key, then None is returned. - """ - if api_key and api_key.startswith("sk-") and len(api_key) > 50: - os.environ["OPENAI_API_KEY"] = api_key - print("\n\n ++++++++++++++ Setting OpenAI API key ++++++++++++++ \n\n") - print(str(datetime.datetime.now()) + ": Before OpenAI, OPENAI_API_KEY length: " + str( - len(os.environ["OPENAI_API_KEY"]))) - - if use_gpt4: - llm = OpenAIChat(temperature=0, max_tokens=MAX_TOKENS, model_name="gpt-4") - print("Trying to use llm OpenAIChat with gpt-4") - else: - print("Trying to use llm OpenAI with text-davinci-003") - llm = OpenAI(temperature=0, max_tokens=MAX_TOKENS, model_name="text-davinci-003") - - print(str(datetime.datetime.now()) + ": After OpenAI, OPENAI_API_KEY length: " + str( - len(os.environ["OPENAI_API_KEY"]))) - chain, express_chain, memory = load_chain(TOOLS_DEFAULT_LIST, llm) - - # Pertains to question answering functionality - embeddings = OpenAIEmbeddings() - - if use_gpt4: - qa_chain = load_qa_chain(OpenAIChat(temperature=0, model_name="gpt-4"), chain_type="stuff") - print("Trying to use qa_chain OpenAIChat with gpt-4") - else: - print("Trying to use qa_chain OpenAI with text-davinci-003") - qa_chain = load_qa_chain(OpenAI(temperature=0, model_name="text-davinci-003"), chain_type="stuff") - - print(str(datetime.datetime.now()) + ": After load_chain, OPENAI_API_KEY length: " + str( - len(os.environ["OPENAI_API_KEY"]))) - os.environ["OPENAI_API_KEY"] = "" - return chain, express_chain, llm, embeddings, qa_chain, memory, use_gpt4 - return None, None, None, None, None, None, None - - -def run_chain(chain, inp, capture_hidden_text): - output = "" - hidden_text = None - if capture_hidden_text: - error_msg = None - tmp = sys.stdout - hidden_text_io = StringIO() - sys.stdout = hidden_text_io - - try: - output = chain.run(input=inp) - except AuthenticationError as ae: - error_msg = AUTH_ERR_MSG + str(datetime.datetime.now()) + ". " + str(ae) - print("error_msg", error_msg) - except RateLimitError as rle: - error_msg = "\n\nRateLimitError: " + str(rle) - except ValueError as ve: - pass - # error_msg = "\n\nValueError: " + str(ve) - except InvalidRequestError as ire: - error_msg = "\n\nInvalidRequestError: " + str(ire) - except Exception as e: - error_msg = "\n\n" + BUG_FOUND_MSG + ":\n\n" + str(e) - - sys.stdout = tmp - hidden_text = hidden_text_io.getvalue() - - # remove escape characters from hidden_text - hidden_text = re.sub(r'\x1b[^m]*m', '', hidden_text) - - # remove "Entering new AgentExecutor chain..." from hidden_text - hidden_text = re.sub(r"Entering new AgentExecutor chain...\n", "", hidden_text) - - # remove "Finished chain." from hidden_text - hidden_text = re.sub(r"Finished chain.", "", hidden_text) - - # Add newline after "Thought:" "Action:" "Observation:" "Input:" and "AI:" - hidden_text = re.sub(r"Thought:", "\n\nThought:", hidden_text) - hidden_text = re.sub(r"Action:", "\n\nAction:", hidden_text) - hidden_text = re.sub(r"Observation:", "\n\nObservation:", hidden_text) - hidden_text = re.sub(r"Input:", "\n\nInput:", hidden_text) - hidden_text = re.sub(r"AI:", "\n\nAI:", hidden_text) - - if error_msg: - hidden_text += error_msg - - print("hidden_text: ", hidden_text) - else: - try: - output = chain.run(input=inp) - except AuthenticationError as ae: - output = AUTH_ERR_MSG + str(datetime.datetime.now()) + ". " + str(ae) - print("output", output) - except RateLimitError as rle: - output = "\n\nRateLimitError: " + str(rle) - except ValueError as ve: - pass - # output = "\n\nValueError: " + str(ve) - except InvalidRequestError as ire: - output = "\n\nInvalidRequestError: " + str(ire) - except Exception as e: - output = "\n\n" + BUG_FOUND_MSG + ":\n\n" + str(e) - - return output, hidden_text - - -def reset_memory(history, memory): - memory.clear() - history = [] - return history, history, memory - - -class ChatWrapper: - - def __init__(self): - self.lock = Lock() - - def __call__( - self, api_key: str, inp: str, history: Optional[Tuple[str, str]], chain: Optional[ConversationChain], - trace_chain: bool, speak_text: bool, talking_head: bool, monologue: bool, express_chain: Optional[LLMChain], - num_words, formality, anticipation_level, joy_level, trust_level, - fear_level, surprise_level, sadness_level, disgust_level, anger_level, - lang_level, translate_to, literary_style, qa_chain, docsearch, use_embeddings, force_translate - ): - """Execute the chat functionality.""" - self.lock.acquire() - try: - print("\n==== date/time: " + str(datetime.datetime.now()) + " ====") - print("inp: " + inp) - print("trace_chain: ", trace_chain) - print("speak_text: ", speak_text) - print("talking_head: ", talking_head) - print("monologue: ", monologue) - history = history or [] - # If chain is None, that is because no API key was provided. - output = "Please paste your OpenAI key from openai.com to use this app. " + str(datetime.datetime.now()) - hidden_text = output - - if chain: - # Set OpenAI key - import openai - openai.api_key = api_key - if not monologue: - if use_embeddings: - if inp and inp.strip() != "": - if docsearch: - docs = docsearch.similarity_search(inp) - output = str(qa_chain.run(input_documents=docs, question=inp)) - else: - output, hidden_text = "Please supply some text in the the Embeddings tab.", None - else: - output, hidden_text = "What's on your mind?", None - else: - output, hidden_text = run_chain(chain, inp, capture_hidden_text=trace_chain) - else: - output, hidden_text = inp, None - - output = transform_text(output, express_chain, num_words, formality, anticipation_level, joy_level, - trust_level, - fear_level, surprise_level, sadness_level, disgust_level, anger_level, - lang_level, translate_to, literary_style, force_translate) - - text_to_display = output - if trace_chain: - text_to_display = hidden_text + "\n\n" + output - history.append((inp, text_to_display)) - - html_video, temp_file, html_audio, temp_aud_file = None, None, None, None - if speak_text: - if talking_head: - if len(output) <= MAX_TALKING_HEAD_TEXT_LENGTH: - html_video, temp_file = do_html_video_speak(output, translate_to) - else: - temp_file = LOOPING_TALKING_HEAD - html_video = create_html_video(temp_file, TALKING_HEAD_WIDTH) - html_audio, temp_aud_file = do_html_audio_speak(output, translate_to) - else: - html_audio, temp_aud_file = do_html_audio_speak(output, translate_to) - else: - if talking_head: - temp_file = LOOPING_TALKING_HEAD - html_video = create_html_video(temp_file, TALKING_HEAD_WIDTH) - else: - # html_audio, temp_aud_file = do_html_audio_speak(output, translate_to) - # html_video = create_html_video(temp_file, "128") - pass - - except Exception as e: - raise e - finally: - self.lock.release() - return history, history, html_video, temp_file, html_audio, temp_aud_file, "" - # return history, history, html_audio, temp_aud_file, "" - - -chat = ChatWrapper() - - -def do_html_audio_speak(words_to_speak, polly_language): - polly_client = boto3.Session( - aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"], - aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"], - region_name=os.environ["AWS_DEFAULT_REGION"] - ).client('polly') - - # voice_id, language_code, engine = POLLY_VOICE_DATA.get_voice(polly_language, "Female") - voice_id, language_code, engine = POLLY_VOICE_DATA.get_voice(polly_language, "Male") - if not voice_id: - # voice_id = "Joanna" - voice_id = "Matthew" - language_code = "en-US" - engine = NEURAL_ENGINE - response = polly_client.synthesize_speech( - Text=words_to_speak, - OutputFormat='mp3', - VoiceId=voice_id, - LanguageCode=language_code, - Engine=engine - ) - - html_audio = '
    no audio
    ' - - # Save the audio stream returned by Amazon Polly on Lambda's temp directory - if "AudioStream" in response: - with closing(response["AudioStream"]) as stream: - # output = os.path.join("/tmp/", "speech.mp3") - - try: - with open('audios/tempfile.mp3', 'wb') as f: - f.write(stream.read()) - temp_aud_file = gr.File("audios/tempfile.mp3") - temp_aud_file_url = "/file=" + temp_aud_file.value['name'] - html_audio = f'' - except IOError as error: - # Could not write to file, exit gracefully - print(error) - return None, None - else: - # The response didn't contain audio data, exit gracefully - print("Could not stream audio") - return None, None - - return html_audio, "audios/tempfile.mp3" - - -def create_html_video(file_name, width): - temp_file_url = "/file=" + tmp_file.value['name'] - html_video = f'' - return html_video - - -def do_html_video_speak(words_to_speak, azure_language): - azure_voice = AZURE_VOICE_DATA.get_voice(azure_language, "Male") - if not azure_voice: - azure_voice = "en-US-ChristopherNeural" - - headers = {"Authorization": f"Bearer {os.environ['EXHUMAN_API_KEY']}"} - body = { - 'bot_name': 'Masahiro', - 'bot_response': words_to_speak, - 'azure_voice': azure_voice, - 'azure_style': 'friendly', - 'animation_pipeline': 'high_speed', - } - api_endpoint = "https://api.exh.ai/animations/v1/generate_lipsync" - res = requests.post(api_endpoint, json=body, headers=headers) - print("res.status_code: ", res.status_code) - - html_video = '
    no video
    ' - if isinstance(res.content, bytes): - response_stream = io.BytesIO(res.content) - print("len(res.content)): ", len(res.content)) - - with open('videos/tempfile.mp4', 'wb') as f: - f.write(response_stream.read()) - temp_file = gr.File("videos/tempfile.mp4") - temp_file_url = "/file=" + temp_file.value['name'] - html_video = f'' - else: - print('video url unknown') - return html_video, "videos/tempfile.mp4" - - -def update_selected_tools(widget, state, llm): - if widget: - state = widget - chain, express_chain, memory = load_chain(state, llm) - return state, llm, chain, express_chain - - -def update_talking_head(widget, state): - if widget: - state = widget - - video_html_talking_head = create_html_video(LOOPING_TALKING_HEAD, TALKING_HEAD_WIDTH) - return state, video_html_talking_head - else: - # return state, create_html_video(LOOPING_TALKING_HEAD, "32") - return None, "
    "
    -
    -
    -def update_foo(widget, state):
    -    if widget:
    -        state = widget
    -        return state
    -
    -
    -# Pertains to question answering functionality
    -def update_embeddings(embeddings_text, embeddings, qa_chain):
    -    if embeddings_text:
    -        text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    -        texts = text_splitter.split_text(embeddings_text)
    -
    -        docsearch = FAISS.from_texts(texts, embeddings)
    -        print("Embeddings updated")
    -        return docsearch
    -
    -
    -# Pertains to question answering functionality
    -def update_use_embeddings(widget, state):
    -    if widget:
    -        state = widget
    -        return state
    -
    -
    -with gr.Blocks(css=".gradio-container {background-color: lightgray}") as block:
    -    llm_state = gr.State()
    -    history_state = gr.State()
    -    chain_state = gr.State()
    -    express_chain_state = gr.State()
    -    tools_list_state = gr.State(TOOLS_DEFAULT_LIST)
    -    trace_chain_state = gr.State(False)
    -    speak_text_state = gr.State(False)
    -    talking_head_state = gr.State(True)
    -    monologue_state = gr.State(False)  # Takes the input and repeats it back to the user, optionally transforming it.
    -    force_translate_state = gr.State(FORCE_TRANSLATE_DEFAULT)  #
    -    memory_state = gr.State()
    -
    -    # Pertains to Express-inator functionality
    -    num_words_state = gr.State(NUM_WORDS_DEFAULT)
    -    formality_state = gr.State(FORMALITY_DEFAULT)
    -    anticipation_level_state = gr.State(EMOTION_DEFAULT)
    -    joy_level_state = gr.State(EMOTION_DEFAULT)
    -    trust_level_state = gr.State(EMOTION_DEFAULT)
    -    fear_level_state = gr.State(EMOTION_DEFAULT)
    -    surprise_level_state = gr.State(EMOTION_DEFAULT)
    -    sadness_level_state = gr.State(EMOTION_DEFAULT)
    -    disgust_level_state = gr.State(EMOTION_DEFAULT)
    -    anger_level_state = gr.State(EMOTION_DEFAULT)
    -    lang_level_state = gr.State(LANG_LEVEL_DEFAULT)
    -    translate_to_state = gr.State(TRANSLATE_TO_DEFAULT)
    -    literary_style_state = gr.State(LITERARY_STYLE_DEFAULT)
    -
    -    # Pertains to WHISPER functionality
    -    whisper_lang_state = gr.State(WHISPER_DETECT_LANG)
    -
    -    # Pertains to question answering functionality
    -    embeddings_state = gr.State()
    -    qa_chain_state = gr.State()
    -    docsearch_state = gr.State()
    -    use_embeddings_state = gr.State(False)
    -
    -    use_gpt4_state = gr.State(USE_GPT4_DEFAULT)
    -
    -    with gr.Tab("Chat"):
    -        with gr.Row():
    -            with gr.Column():
    -                gr.HTML(
    -                    """
    GPT + WolframAlpha + Whisper
    -

    Hit Enter after pasting your OpenAI API key.

    -
    If you have GPT-4 access, optionally select it in Settings tab.
    """) - - openai_api_key_textbox = gr.Textbox(placeholder="Paste your OpenAI API key (sk-...) and hit Enter", - show_label=False, lines=1, type='password') - - with gr.Row(): - with gr.Column(scale=1, min_width=TALKING_HEAD_WIDTH, visible=True): - speak_text_cb = gr.Checkbox(label="Enable speech", value=False) - speak_text_cb.change(update_foo, inputs=[speak_text_cb, speak_text_state], - outputs=[speak_text_state]) - - my_file = gr.File(label="Upload a file", type="file", visible=False) - tmp_file = gr.File(LOOPING_TALKING_HEAD, visible=False) - # tmp_file_url = "/file=" + tmp_file.value['name'] - htm_video = create_html_video(LOOPING_TALKING_HEAD, TALKING_HEAD_WIDTH) - video_html = gr.HTML(htm_video) - - # my_aud_file = gr.File(label="Audio file", type="file", visible=True) - tmp_aud_file = gr.File("audios/tempfile.mp3", visible=False) - tmp_aud_file_url = "/file=" + tmp_aud_file.value['name'] - htm_audio = f'' - audio_html = gr.HTML(htm_audio) - - with gr.Column(scale=7): - chatbot = gr.Chatbot() - - with gr.Row(): - message = gr.Textbox(label="What's on your mind??", - placeholder="What's the answer to life, the universe, and everything?", - lines=1) - submit = gr.Button(value="Send", variant="secondary").style(full_width=False) - - # UNCOMMENT TO USE WHISPER - with gr.Row(): - audio_comp = gr.Microphone(source="microphone", type="filepath", label="Just say it!", - interactive=True, streaming=False) - audio_comp.change(transcribe, inputs=[audio_comp, whisper_lang_state], outputs=[message]) - - # TEMPORARY FOR TESTING - # with gr.Row(): - # audio_comp_tb = gr.Textbox(label="Just say it!", lines=1) - # audio_comp_tb.submit(transcribe_dummy, inputs=[audio_comp_tb, whisper_lang_state], outputs=[message]) - - gr.Examples( - examples=["How many people live in Canada?", - "What is 2 to the 30th power?", - "If x+y=10 and x-y=4, what are x and y?", - "How much did it rain in SF today?", - "Get me information about the movie 'Avatar'", - "What are the top tech headlines in the US?", - "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses - " - "if I remove all the pairs of sunglasses from the desk, how many purple items remain on it?"], - inputs=message - ) - - with gr.Tab("Settings"): - tools_cb_group = gr.CheckboxGroup(label="Tools:", choices=TOOLS_LIST, - value=TOOLS_DEFAULT_LIST) - tools_cb_group.change(update_selected_tools, - inputs=[tools_cb_group, tools_list_state, llm_state], - outputs=[tools_list_state, llm_state, chain_state, express_chain_state]) - - trace_chain_cb = gr.Checkbox(label="Show reasoning chain in chat bubble", value=False) - trace_chain_cb.change(update_foo, inputs=[trace_chain_cb, trace_chain_state], - outputs=[trace_chain_state]) - - force_translate_cb = gr.Checkbox(label="Force translation to selected Output Language", - value=FORCE_TRANSLATE_DEFAULT) - force_translate_cb.change(update_foo, inputs=[force_translate_cb, force_translate_state], - outputs=[force_translate_state]) - - # speak_text_cb = gr.Checkbox(label="Speak text from agent", value=False) - # speak_text_cb.change(update_foo, inputs=[speak_text_cb, speak_text_state], - # outputs=[speak_text_state]) - - talking_head_cb = gr.Checkbox(label="Show talking head", value=True) - talking_head_cb.change(update_talking_head, inputs=[talking_head_cb, talking_head_state], - outputs=[talking_head_state, video_html]) - - monologue_cb = gr.Checkbox(label="Babel fish mode (translate/restate what you enter, no conversational agent)", - value=False) - monologue_cb.change(update_foo, inputs=[monologue_cb, monologue_state], - outputs=[monologue_state]) - - use_gpt4_cb = gr.Checkbox(label="Use GPT-4 (experimental) if your OpenAI API has access to it", - value=USE_GPT4_DEFAULT) - use_gpt4_cb.change(set_openai_api_key, - inputs=[openai_api_key_textbox, use_gpt4_cb], - outputs=[chain_state, express_chain_state, llm_state, embeddings_state, - qa_chain_state, memory_state, use_gpt4_state]) - - reset_btn = gr.Button(value="Reset chat", variant="secondary").style(full_width=False) - reset_btn.click(reset_memory, inputs=[history_state, memory_state], - outputs=[chatbot, history_state, memory_state]) - - with gr.Tab("Whisper STT"): - whisper_lang_radio = gr.Radio(label="Whisper speech-to-text language:", choices=[ - WHISPER_DETECT_LANG, "Arabic", "Arabic (Gulf)", "Catalan", "Chinese (Cantonese)", "Chinese (Mandarin)", - "Danish", "Dutch", "English (Australian)", "English (British)", "English (Indian)", "English (New Zealand)", - "English (South African)", "English (US)", "English (Welsh)", "Finnish", "French", "French (Canadian)", - "German", "German (Austrian)", "Georgian", "Hindi", "Icelandic", "Indonesian", "Italian", "Japanese", - "Korean", "Norwegian", "Polish", - "Portuguese (Brazilian)", "Portuguese (European)", "Romanian", "Russian", "Spanish (European)", - "Spanish (Mexican)", "Spanish (US)", "Swedish", "Turkish", "Ukrainian", "Welsh"], - value=WHISPER_DETECT_LANG) - - whisper_lang_radio.change(update_foo, - inputs=[whisper_lang_radio, whisper_lang_state], - outputs=[whisper_lang_state]) - - with gr.Tab("Output Language"): - lang_level_radio = gr.Radio(label="Language level:", choices=[ - LANG_LEVEL_DEFAULT, "1st grade", "2nd grade", "3rd grade", "4th grade", "5th grade", "6th grade", - "7th grade", "8th grade", "9th grade", "10th grade", "11th grade", "12th grade", "University"], - value=LANG_LEVEL_DEFAULT) - lang_level_radio.change(update_foo, inputs=[lang_level_radio, lang_level_state], - outputs=[lang_level_state]) - - translate_to_radio = gr.Radio(label="Language:", choices=[ - TRANSLATE_TO_DEFAULT, "Arabic", "Arabic (Gulf)", "Catalan", "Chinese (Cantonese)", "Chinese (Mandarin)", - "Danish", "Dutch", "English (Australian)", "English (British)", "English (Indian)", "English (New Zealand)", - "English (South African)", "English (US)", "English (Welsh)", "Finnish", "French", "French (Canadian)", - "German", "German (Austrian)", "Georgian", "Hindi", "Icelandic", "Indonesian", "Italian", "Japanese", - "Korean", "Norwegian", "Polish", - "Portuguese (Brazilian)", "Portuguese (European)", "Romanian", "Russian", "Spanish (European)", - "Spanish (Mexican)", "Spanish (US)", "Swedish", "Turkish", "Ukrainian", "Welsh", - "emojis", "Gen Z slang", "how the stereotypical Karen would say it", "Klingon", "Neanderthal", - "Pirate", "Strange Planet expospeak technical talk", "Yoda"], - value=TRANSLATE_TO_DEFAULT) - - translate_to_radio.change(update_foo, - inputs=[translate_to_radio, translate_to_state], - outputs=[translate_to_state]) - - with gr.Tab("Formality"): - formality_radio = gr.Radio(label="Formality:", - choices=[FORMALITY_DEFAULT, "Casual", "Polite", "Honorific"], - value=FORMALITY_DEFAULT) - formality_radio.change(update_foo, - inputs=[formality_radio, formality_state], - outputs=[formality_state]) - - with gr.Tab("Lit Style"): - literary_style_radio = gr.Radio(label="Literary style:", choices=[ - LITERARY_STYLE_DEFAULT, "Prose", "Story", "Summary", "Outline", "Bullets", "Poetry", "Haiku", "Limerick", - "Rap", - "Joke", "Knock-knock", "FAQ"], - value=LITERARY_STYLE_DEFAULT) - - literary_style_radio.change(update_foo, - inputs=[literary_style_radio, literary_style_state], - outputs=[literary_style_state]) - - with gr.Tab("Emotions"): - anticipation_level_radio = gr.Radio(label="Anticipation level:", - choices=[EMOTION_DEFAULT, "Interest", "Anticipation", "Vigilance"], - value=EMOTION_DEFAULT) - anticipation_level_radio.change(update_foo, - inputs=[anticipation_level_radio, anticipation_level_state], - outputs=[anticipation_level_state]) - - joy_level_radio = gr.Radio(label="Joy level:", - choices=[EMOTION_DEFAULT, "Serenity", "Joy", "Ecstasy"], - value=EMOTION_DEFAULT) - joy_level_radio.change(update_foo, - inputs=[joy_level_radio, joy_level_state], - outputs=[joy_level_state]) - - trust_level_radio = gr.Radio(label="Trust level:", - choices=[EMOTION_DEFAULT, "Acceptance", "Trust", "Admiration"], - value=EMOTION_DEFAULT) - trust_level_radio.change(update_foo, - inputs=[trust_level_radio, trust_level_state], - outputs=[trust_level_state]) - - fear_level_radio = gr.Radio(label="Fear level:", - choices=[EMOTION_DEFAULT, "Apprehension", "Fear", "Terror"], - value=EMOTION_DEFAULT) - fear_level_radio.change(update_foo, - inputs=[fear_level_radio, fear_level_state], - outputs=[fear_level_state]) - - surprise_level_radio = gr.Radio(label="Surprise level:", - choices=[EMOTION_DEFAULT, "Distraction", "Surprise", "Amazement"], - value=EMOTION_DEFAULT) - surprise_level_radio.change(update_foo, - inputs=[surprise_level_radio, surprise_level_state], - outputs=[surprise_level_state]) - - sadness_level_radio = gr.Radio(label="Sadness level:", - choices=[EMOTION_DEFAULT, "Pensiveness", "Sadness", "Grief"], - value=EMOTION_DEFAULT) - sadness_level_radio.change(update_foo, - inputs=[sadness_level_radio, sadness_level_state], - outputs=[sadness_level_state]) - - disgust_level_radio = gr.Radio(label="Disgust level:", - choices=[EMOTION_DEFAULT, "Boredom", "Disgust", "Loathing"], - value=EMOTION_DEFAULT) - disgust_level_radio.change(update_foo, - inputs=[disgust_level_radio, disgust_level_state], - outputs=[disgust_level_state]) - - anger_level_radio = gr.Radio(label="Anger level:", - choices=[EMOTION_DEFAULT, "Annoyance", "Anger", "Rage"], - value=EMOTION_DEFAULT) - anger_level_radio.change(update_foo, - inputs=[anger_level_radio, anger_level_state], - outputs=[anger_level_state]) - - with gr.Tab("Max Words"): - num_words_slider = gr.Slider(label="Max number of words to generate (0 for don't care)", - value=NUM_WORDS_DEFAULT, minimum=0, maximum=MAX_WORDS, step=10) - num_words_slider.change(update_foo, - inputs=[num_words_slider, num_words_state], - outputs=[num_words_state]) - - with gr.Tab("Embeddings"): - embeddings_text_box = gr.Textbox(label="Enter text for embeddings and hit Create:", - lines=20) - - with gr.Row(): - use_embeddings_cb = gr.Checkbox(label="Use embeddings", value=False) - use_embeddings_cb.change(update_use_embeddings, inputs=[use_embeddings_cb, use_embeddings_state], - outputs=[use_embeddings_state]) - - embeddings_text_submit = gr.Button(value="Create", variant="secondary").style(full_width=False) - embeddings_text_submit.click(update_embeddings, - inputs=[embeddings_text_box, embeddings_state, qa_chain_state], - outputs=[docsearch_state]) - - gr.HTML("""
    - - Duplicate Space - Powered by LangChain 🦜️🔗 -
    """) - - - message.submit(chat, inputs=[openai_api_key_textbox, message, history_state, chain_state, trace_chain_state, - speak_text_state, talking_head_state, monologue_state, - express_chain_state, num_words_state, formality_state, - anticipation_level_state, joy_level_state, trust_level_state, fear_level_state, - surprise_level_state, sadness_level_state, disgust_level_state, anger_level_state, - lang_level_state, translate_to_state, literary_style_state, - qa_chain_state, docsearch_state, use_embeddings_state, - force_translate_state], - outputs=[chatbot, history_state, video_html, my_file, audio_html, tmp_aud_file, message]) - - submit.click(chat, inputs=[openai_api_key_textbox, message, history_state, chain_state, trace_chain_state, - speak_text_state, talking_head_state, monologue_state, - express_chain_state, num_words_state, formality_state, - anticipation_level_state, joy_level_state, trust_level_state, fear_level_state, - surprise_level_state, sadness_level_state, disgust_level_state, anger_level_state, - lang_level_state, translate_to_state, literary_style_state, - qa_chain_state, docsearch_state, use_embeddings_state, - force_translate_state], - outputs=[chatbot, history_state, video_html, my_file, audio_html, tmp_aud_file, message]) - - openai_api_key_textbox.change(set_openai_api_key, - inputs=[openai_api_key_textbox, use_gpt4_state], - outputs=[chain_state, express_chain_state, llm_state, embeddings_state, - qa_chain_state, memory_state, use_gpt4_state]) - openai_api_key_textbox.submit(set_openai_api_key, - inputs=[openai_api_key_textbox, use_gpt4_state], - outputs=[chain_state, express_chain_state, llm_state, embeddings_state, - qa_chain_state, memory_state, use_gpt4_state]) - -block.launch(debug=True) diff --git a/spaces/wy213/213a/src/components/providers.tsx b/spaces/wy213/213a/src/components/providers.tsx deleted file mode 100644 index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/components/providers.tsx +++ /dev/null @@ -1,15 +0,0 @@ -'use client' - -import * as React from 'react' -import { ThemeProvider as NextThemesProvider } from 'next-themes' -import { ThemeProviderProps } from 'next-themes/dist/types' - -import { TooltipProvider } from '@/components/ui/tooltip' - -export function Providers({ children, ...props }: ThemeProviderProps) { - return ( - - {children} - - ) -} diff --git a/spaces/xfys/yolov5_tracking/README.md b/spaces/xfys/yolov5_tracking/README.md deleted file mode 100644 index 011e44f45de2b389338d40ed2a4489e3f26f2d50..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Yolov5 Tracking -emoji: 🐨 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xiaofenglingreal/Remove-Animation-Figures-Background/app.py b/spaces/xiaofenglingreal/Remove-Animation-Figures-Background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/xiaofenglingreal/Remove-Animation-Figures-Background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/xiaoxuezi/spleeter/spleeter/types1.py b/spaces/xiaoxuezi/spleeter/spleeter/types1.py deleted file mode 100644 index 0fa6f7835f6b466aee1c3a1ef52c480ea780a93e..0000000000000000000000000000000000000000 --- a/spaces/xiaoxuezi/spleeter/spleeter/types1.py +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/env python -# coding: utf8 - -""" Custom types definition. """ - -from typing import Any, Tuple - -# pyright: reportMissingImports=false -# pylint: disable=import-error -import numpy as np - -# pylint: enable=import-error - -AudioDescriptor: type = Any -Signal: type = Tuple[np.ndarray, float] diff --git a/spaces/yangtommy6/Computer_Vision_Project/README.md b/spaces/yangtommy6/Computer_Vision_Project/README.md deleted file mode 100644 index 100f50610d506d5cbf9575102671048e4041912e..0000000000000000000000000000000000000000 --- a/spaces/yangtommy6/Computer_Vision_Project/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Computer_Vision_Project -emoji: 👀 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Hi! This is a computer vision app built by a python library, fastai. -I designed and built a simple model that is able to recognize 45 of the most common animals listed here: "https://www.brownelltravel.com/blog/animals-around-the-world/". -If you upload a picture that's not an animal, the AI will try to tell you what this picture is. -I have found that it's hard for the AI to recognize more than 1 main object in the picture, so I will have to redesign and train the model later. -This is just a demo of a computer vision AI. \ No newline at end of file diff --git a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/archs/rrdbnet_arch.py b/spaces/ygangang/CodeFormer/CodeFormer/basicsr/archs/rrdbnet_arch.py deleted file mode 100644 index 49a2d6c204557cba53ada7550deb587541855cfb..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/archs/rrdbnet_arch.py +++ /dev/null @@ -1,119 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F - -from basicsr.utils.registry import ARCH_REGISTRY -from .arch_util import default_init_weights, make_layer, pixel_unshuffle - - -class ResidualDenseBlock(nn.Module): - """Residual Dense Block. - - Used in RRDB block in ESRGAN. - - Args: - num_feat (int): Channel number of intermediate features. - num_grow_ch (int): Channels for each growth. - """ - - def __init__(self, num_feat=64, num_grow_ch=32): - super(ResidualDenseBlock, self).__init__() - self.conv1 = nn.Conv2d(num_feat, num_grow_ch, 3, 1, 1) - self.conv2 = nn.Conv2d(num_feat + num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv3 = nn.Conv2d(num_feat + 2 * num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv4 = nn.Conv2d(num_feat + 3 * num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv5 = nn.Conv2d(num_feat + 4 * num_grow_ch, num_feat, 3, 1, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - # initialization - default_init_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1) - - def forward(self, x): - x1 = self.lrelu(self.conv1(x)) - x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1))) - x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1))) - x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1))) - x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1)) - # Emperically, we use 0.2 to scale the residual for better performance - return x5 * 0.2 + x - - -class RRDB(nn.Module): - """Residual in Residual Dense Block. - - Used in RRDB-Net in ESRGAN. - - Args: - num_feat (int): Channel number of intermediate features. - num_grow_ch (int): Channels for each growth. - """ - - def __init__(self, num_feat, num_grow_ch=32): - super(RRDB, self).__init__() - self.rdb1 = ResidualDenseBlock(num_feat, num_grow_ch) - self.rdb2 = ResidualDenseBlock(num_feat, num_grow_ch) - self.rdb3 = ResidualDenseBlock(num_feat, num_grow_ch) - - def forward(self, x): - out = self.rdb1(x) - out = self.rdb2(out) - out = self.rdb3(out) - # Emperically, we use 0.2 to scale the residual for better performance - return out * 0.2 + x - - -@ARCH_REGISTRY.register() -class RRDBNet(nn.Module): - """Networks consisting of Residual in Residual Dense Block, which is used - in ESRGAN. - - ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. - - We extend ESRGAN for scale x2 and scale x1. - Note: This is one option for scale 1, scale 2 in RRDBNet. - We first employ the pixel-unshuffle (an inverse operation of pixelshuffle to reduce the spatial size - and enlarge the channel size before feeding inputs into the main ESRGAN architecture. - - Args: - num_in_ch (int): Channel number of inputs. - num_out_ch (int): Channel number of outputs. - num_feat (int): Channel number of intermediate features. - Default: 64 - num_block (int): Block number in the trunk network. Defaults: 23 - num_grow_ch (int): Channels for each growth. Default: 32. - """ - - def __init__(self, num_in_ch, num_out_ch, scale=4, num_feat=64, num_block=23, num_grow_ch=32): - super(RRDBNet, self).__init__() - self.scale = scale - if scale == 2: - num_in_ch = num_in_ch * 4 - elif scale == 1: - num_in_ch = num_in_ch * 16 - self.conv_first = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1) - self.body = make_layer(RRDB, num_block, num_feat=num_feat, num_grow_ch=num_grow_ch) - self.conv_body = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - # upsample - self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - if self.scale == 2: - feat = pixel_unshuffle(x, scale=2) - elif self.scale == 1: - feat = pixel_unshuffle(x, scale=4) - else: - feat = x - feat = self.conv_first(feat) - body_feat = self.conv_body(self.body(feat)) - feat = feat + body_feat - # upsample - feat = self.lrelu(self.conv_up1(F.interpolate(feat, scale_factor=2, mode='nearest'))) - feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest'))) - out = self.conv_last(self.lrelu(self.conv_hr(feat))) - return out \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/albert/configuration_albert.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/albert/configuration_albert.py deleted file mode 100644 index cacc0499035c19280307b1c132719670d2f628e7..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/albert/configuration_albert.py +++ /dev/null @@ -1,178 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" ALBERT model configuration""" -from collections import OrderedDict -from typing import Mapping - -from ...configuration_utils import PretrainedConfig -from ...onnx import OnnxConfig - - -ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "albert-base-v1": "https://huggingface.co/albert-base-v1/resolve/main/config.json", - "albert-large-v1": "https://huggingface.co/albert-large-v1/resolve/main/config.json", - "albert-xlarge-v1": "https://huggingface.co/albert-xlarge-v1/resolve/main/config.json", - "albert-xxlarge-v1": "https://huggingface.co/albert-xxlarge-v1/resolve/main/config.json", - "albert-base-v2": "https://huggingface.co/albert-base-v2/resolve/main/config.json", - "albert-large-v2": "https://huggingface.co/albert-large-v2/resolve/main/config.json", - "albert-xlarge-v2": "https://huggingface.co/albert-xlarge-v2/resolve/main/config.json", - "albert-xxlarge-v2": "https://huggingface.co/albert-xxlarge-v2/resolve/main/config.json", -} - - -class AlbertConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`AlbertModel`] or a [`TFAlbertModel`]. It is used - to instantiate an ALBERT model according to the specified arguments, defining the model architecture. Instantiating - a configuration with the defaults will yield a similar configuration to that of the ALBERT - [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - vocab_size (`int`, *optional*, defaults to 30000): - Vocabulary size of the ALBERT model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`AlbertModel`] or [`TFAlbertModel`]. - embedding_size (`int`, *optional*, defaults to 128): - Dimensionality of vocabulary embeddings. - hidden_size (`int`, *optional*, defaults to 4096): - Dimensionality of the encoder layers and the pooler layer. - num_hidden_layers (`int`, *optional*, defaults to 12): - Number of hidden layers in the Transformer encoder. - num_hidden_groups (`int`, *optional*, defaults to 1): - Number of groups for the hidden layers, parameters in the same group are shared. - num_attention_heads (`int`, *optional*, defaults to 64): - Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (`int`, *optional*, defaults to 16384): - The dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. - inner_group_num (`int`, *optional*, defaults to 1): - The number of inner repetition of attention and ffn. - hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu_new"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"silu"` and `"gelu_new"` are supported. - hidden_dropout_prob (`float`, *optional*, defaults to 0): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0): - The dropout ratio for the attention probabilities. - max_position_embeddings (`int`, *optional*, defaults to 512): - The maximum sequence length that this model might ever be used with. Typically set this to something large - (e.g., 512 or 1024 or 2048). - type_vocab_size (`int`, *optional*, defaults to 2): - The vocabulary size of the `token_type_ids` passed when calling [`AlbertModel`] or [`TFAlbertModel`]. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-12): - The epsilon used by the layer normalization layers. - classifier_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout ratio for attached classifiers. - position_embedding_type (`str`, *optional*, defaults to `"absolute"`): - Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For - positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to - [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). - For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models - with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). - pad_token_id (`int`, *optional*, defaults to 0): - Padding token id. - bos_token_id (`int`, *optional*, defaults to 2): - Beginning of stream token id. - eos_token_id (`int`, *optional*, defaults to 3): - End of stream token id. - - Examples: - - ```python - >>> from transformers import AlbertConfig, AlbertModel - - >>> # Initializing an ALBERT-xxlarge style configuration - >>> albert_xxlarge_configuration = AlbertConfig() - - >>> # Initializing an ALBERT-base style configuration - >>> albert_base_configuration = AlbertConfig( - ... hidden_size=768, - ... num_attention_heads=12, - ... intermediate_size=3072, - ... ) - - >>> # Initializing a model (with random weights) from the ALBERT-base style configuration - >>> model = AlbertModel(albert_xxlarge_configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "albert" - - def __init__( - self, - vocab_size=30000, - embedding_size=128, - hidden_size=4096, - num_hidden_layers=12, - num_hidden_groups=1, - num_attention_heads=64, - intermediate_size=16384, - inner_group_num=1, - hidden_act="gelu_new", - hidden_dropout_prob=0, - attention_probs_dropout_prob=0, - max_position_embeddings=512, - type_vocab_size=2, - initializer_range=0.02, - layer_norm_eps=1e-12, - classifier_dropout_prob=0.1, - position_embedding_type="absolute", - pad_token_id=0, - bos_token_id=2, - eos_token_id=3, - **kwargs, - ): - super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs) - - self.vocab_size = vocab_size - self.embedding_size = embedding_size - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_hidden_groups = num_hidden_groups - self.num_attention_heads = num_attention_heads - self.inner_group_num = inner_group_num - self.hidden_act = hidden_act - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.type_vocab_size = type_vocab_size - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - self.classifier_dropout_prob = classifier_dropout_prob - self.position_embedding_type = position_embedding_type - - -# Copied from transformers.models.bert.configuration_bert.BertOnnxConfig with Roberta->Albert -class AlbertOnnxConfig(OnnxConfig): - @property - def inputs(self) -> Mapping[str, Mapping[int, str]]: - if self.task == "multiple-choice": - dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"} - else: - dynamic_axis = {0: "batch", 1: "sequence"} - return OrderedDict( - [ - ("input_ids", dynamic_axis), - ("attention_mask", dynamic_axis), - ("token_type_ids", dynamic_axis), - ] - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/chinese_clip/convert_chinese_clip_original_pytorch_to_hf.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/chinese_clip/convert_chinese_clip_original_pytorch_to_hf.py deleted file mode 100644 index 02c4b7b754b295016c23b114213d1dd0353363e1..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/chinese_clip/convert_chinese_clip_original_pytorch_to_hf.py +++ /dev/null @@ -1,134 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The OFA-Sys Team Authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse - -import torch - -from transformers import ChineseCLIPConfig, ChineseCLIPModel - - -def copy_attn_layer(hf_attn_layer, pt_weights, prefix): - q_proj, k_proj, v_proj = pt_weights[f"{prefix}.in_proj_weight"].chunk(3, dim=0) - q_proj_bias, k_proj_bias, v_proj_bias = pt_weights[f"{prefix}.in_proj_bias"].chunk(3, dim=0) - - out_proj_weights = pt_weights[f"{prefix}.out_proj.weight"] - out_proj_bias = pt_weights[f"{prefix}.out_proj.bias"] - - hf_attn_layer.q_proj.weight.data = q_proj - hf_attn_layer.q_proj.bias.data = q_proj_bias - - hf_attn_layer.k_proj.weight.data = k_proj - hf_attn_layer.k_proj.bias.data = k_proj_bias - - hf_attn_layer.v_proj.weight.data = v_proj - hf_attn_layer.v_proj.bias.data = v_proj_bias - - hf_attn_layer.out_proj.weight.data = out_proj_weights - hf_attn_layer.out_proj.bias.data = out_proj_bias - - -def copy_mlp(hf_mlp, pt_weights, prefix): - copy_linear(hf_mlp.fc1, pt_weights, f"{prefix}.c_fc") - copy_linear(hf_mlp.fc2, pt_weights, f"{prefix}.c_proj") - - -def copy_linear(hf_linear, pt_weights, prefix): - hf_linear.weight.data = pt_weights[f"{prefix}.weight"].data - hf_linear.bias.data = pt_weights[f"{prefix}.bias"].data - - -def copy_layer(hf_layer, pt_weights, prefix): - # copy layer norms - copy_linear(hf_layer.layer_norm1, pt_weights, f"{prefix}.ln_1") - copy_linear(hf_layer.layer_norm2, pt_weights, f"{prefix}.ln_2") - - # copy MLP - copy_mlp(hf_layer.mlp, pt_weights, f"{prefix}.mlp") - - # copy attn - copy_attn_layer(hf_layer.self_attn, pt_weights, f"{prefix}.attn") - - -def copy_layers(hf_layers, pt_weights, prefix): - for layer_id, hf_layer in enumerate(hf_layers): - copy_layer(hf_layer, pt_weights, f"{prefix}.{layer_id}") - - -def copy_text_model_and_projection(hf_model, pt_weights): - # copy projection - hf_model.text_projection.weight.data = pt_weights["text_projection"].data.T - - # copy text encoder - for name, param in hf_model.text_model.named_parameters(): - param.data = pt_weights[f"bert.{name}"].data - - -def copy_vision_model_and_projection(hf_model, pt_weights): - # copy projection - hf_model.visual_projection.weight.data = pt_weights["visual.proj"].data.T - - # copy layer norms - copy_linear(hf_model.vision_model.pre_layrnorm, pt_weights, "visual.ln_pre") - copy_linear(hf_model.vision_model.post_layernorm, pt_weights, "visual.ln_post") - - # copy embeddings - hf_model.vision_model.embeddings.patch_embedding.weight.data = pt_weights["visual.conv1.weight"].data - hf_model.vision_model.embeddings.class_embedding.data = pt_weights["visual.class_embedding"].data - hf_model.vision_model.embeddings.position_embedding.weight.data = pt_weights["visual.positional_embedding"].data - - # copy encoder - copy_layers(hf_model.vision_model.encoder.layers, pt_weights, "visual.transformer.resblocks") - - -@torch.no_grad() -def convert_chinese_clip_checkpoint(checkpoint_path, pytorch_dump_folder_path, config_path=None): - """ - Copy/paste/tweak model's weights to transformers design. - """ - - assert config_path is not None, "Please specify the ChineseCLIP model config of the corresponding model size." - config = ChineseCLIPConfig.from_pretrained(config_path) - - hf_model = ChineseCLIPModel(config).eval() - - pt_weights = torch.load(checkpoint_path, map_location="cpu")["state_dict"] - pt_weights = {(name[7:] if name.startswith("module.") else name): value for name, value in pt_weights.items()} - - copy_text_model_and_projection(hf_model, pt_weights) - copy_vision_model_and_projection(hf_model, pt_weights) - hf_model.logit_scale.data = pt_weights["logit_scale"].data - - hf_model.save_pretrained(pytorch_dump_folder_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - "--pytorch_dump_folder_path", - default=None, - type=str, - help="Path to the output folder storing converted hf PyTorch model.", - ) - parser.add_argument( - "--checkpoint_path", default=None, type=str, help="Path to original github format ChineseCLIP checkpoint." - ) - parser.add_argument( - "--config_path", default=None, required=True, type=str, help="Path to hf config.json of model to convert." - ) - args = parser.parse_args() - - convert_chinese_clip_checkpoint(args.checkpoint_path, args.pytorch_dump_folder_path, args.config_path) - print("The conversion is finished!") diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/modeling_deit.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/modeling_deit.py deleted file mode 100644 index 38c28dbbedc669fe2b490a37ef3518f6a346912b..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/modeling_deit.py +++ /dev/null @@ -1,904 +0,0 @@ -# coding=utf-8 -# Copyright 2021 Facebook AI Research (FAIR), Ross Wightman, The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch DeiT model.""" - - -import collections.abc -import math -from dataclasses import dataclass -from typing import Optional, Set, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BaseModelOutput, - BaseModelOutputWithPooling, - ImageClassifierOutput, - MaskedImageModelingOutput, -) -from ...modeling_utils import PreTrainedModel -from ...pytorch_utils import find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_deit import DeiTConfig - - -logger = logging.get_logger(__name__) - -# General docstring -_CONFIG_FOR_DOC = "DeiTConfig" - -# Base docstring -_CHECKPOINT_FOR_DOC = "facebook/deit-base-distilled-patch16-224" -_EXPECTED_OUTPUT_SHAPE = [1, 198, 768] - -# Image classification docstring -_IMAGE_CLASS_CHECKPOINT = "facebook/deit-base-distilled-patch16-224" -_IMAGE_CLASS_EXPECTED_OUTPUT = "tabby, tabby cat" - - -DEIT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "facebook/deit-base-distilled-patch16-224", - # See all DeiT models at https://huggingface.co/models?filter=deit -] - - -class DeiTEmbeddings(nn.Module): - """ - Construct the CLS token, distillation token, position and patch embeddings. Optionally, also the mask token. - """ - - def __init__(self, config: DeiTConfig, use_mask_token: bool = False) -> None: - super().__init__() - - self.cls_token = nn.Parameter(torch.zeros(1, 1, config.hidden_size)) - self.distillation_token = nn.Parameter(torch.zeros(1, 1, config.hidden_size)) - self.mask_token = nn.Parameter(torch.zeros(1, 1, config.hidden_size)) if use_mask_token else None - self.patch_embeddings = DeiTPatchEmbeddings(config) - num_patches = self.patch_embeddings.num_patches - self.position_embeddings = nn.Parameter(torch.zeros(1, num_patches + 2, config.hidden_size)) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, pixel_values: torch.Tensor, bool_masked_pos: Optional[torch.BoolTensor] = None) -> torch.Tensor: - embeddings = self.patch_embeddings(pixel_values) - batch_size, seq_length, _ = embeddings.size() - - if bool_masked_pos is not None: - mask_tokens = self.mask_token.expand(batch_size, seq_length, -1) - # replace the masked visual tokens by mask_tokens - mask = bool_masked_pos.unsqueeze(-1).type_as(mask_tokens) - embeddings = embeddings * (1.0 - mask) + mask_tokens * mask - - cls_tokens = self.cls_token.expand(batch_size, -1, -1) - distillation_tokens = self.distillation_token.expand(batch_size, -1, -1) - embeddings = torch.cat((cls_tokens, distillation_tokens, embeddings), dim=1) - embeddings = embeddings + self.position_embeddings - embeddings = self.dropout(embeddings) - return embeddings - - -class DeiTPatchEmbeddings(nn.Module): - """ - This class turns `pixel_values` of shape `(batch_size, num_channels, height, width)` into the initial - `hidden_states` (patch embeddings) of shape `(batch_size, seq_length, hidden_size)` to be consumed by a - Transformer. - """ - - def __init__(self, config): - super().__init__() - image_size, patch_size = config.image_size, config.patch_size - num_channels, hidden_size = config.num_channels, config.hidden_size - - image_size = image_size if isinstance(image_size, collections.abc.Iterable) else (image_size, image_size) - patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size) - num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) - self.image_size = image_size - self.patch_size = patch_size - self.num_channels = num_channels - self.num_patches = num_patches - - self.projection = nn.Conv2d(num_channels, hidden_size, kernel_size=patch_size, stride=patch_size) - - def forward(self, pixel_values: torch.Tensor) -> torch.Tensor: - batch_size, num_channels, height, width = pixel_values.shape - if num_channels != self.num_channels: - raise ValueError( - "Make sure that the channel dimension of the pixel values match with the one set in the configuration." - ) - if height != self.image_size[0] or width != self.image_size[1]: - raise ValueError( - f"Input image size ({height}*{width}) doesn't match model ({self.image_size[0]}*{self.image_size[1]})." - ) - x = self.projection(pixel_values).flatten(2).transpose(1, 2) - return x - - -# Copied from transformers.models.vit.modeling_vit.ViTSelfAttention with ViT->DeiT -class DeiTSelfAttention(nn.Module): - def __init__(self, config: DeiTConfig) -> None: - super().__init__() - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size {config.hidden_size,} is not a multiple of the number of attention " - f"heads {config.num_attention_heads}." - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - - def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor: - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, hidden_states, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False - ) -> Union[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor]]: - mixed_query_layer = self.query(hidden_states) - - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - query_layer = self.transpose_for_scores(mixed_query_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - return outputs - - -# Copied from transformers.models.vit.modeling_vit.ViTSelfOutput with ViT->DeiT -class DeiTSelfOutput(nn.Module): - """ - The residual connection is defined in DeiTLayer instead of here (as is the case with other models), due to the - layernorm applied before each block. - """ - - def __init__(self, config: DeiTConfig) -> None: - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - - return hidden_states - - -# Copied from transformers.models.vit.modeling_vit.ViTAttention with ViT->DeiT -class DeiTAttention(nn.Module): - def __init__(self, config: DeiTConfig) -> None: - super().__init__() - self.attention = DeiTSelfAttention(config) - self.output = DeiTSelfOutput(config) - self.pruned_heads = set() - - def prune_heads(self, heads: Set[int]) -> None: - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.attention.num_attention_heads, self.attention.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.attention.query = prune_linear_layer(self.attention.query, index) - self.attention.key = prune_linear_layer(self.attention.key, index) - self.attention.value = prune_linear_layer(self.attention.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.attention.num_attention_heads = self.attention.num_attention_heads - len(heads) - self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, - output_attentions: bool = False, - ) -> Union[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor]]: - self_outputs = self.attention(hidden_states, head_mask, output_attentions) - - attention_output = self.output(self_outputs[0], hidden_states) - - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -# Copied from transformers.models.vit.modeling_vit.ViTIntermediate with ViT->DeiT -class DeiTIntermediate(nn.Module): - def __init__(self, config: DeiTConfig) -> None: - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - - return hidden_states - - -# Copied from transformers.models.vit.modeling_vit.ViTOutput with ViT->DeiT -class DeiTOutput(nn.Module): - def __init__(self, config: DeiTConfig) -> None: - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - - hidden_states = hidden_states + input_tensor - - return hidden_states - - -# Copied from transformers.models.vit.modeling_vit.ViTLayer with ViT->DeiT -class DeiTLayer(nn.Module): - """This corresponds to the Block class in the timm implementation.""" - - def __init__(self, config: DeiTConfig) -> None: - super().__init__() - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = DeiTAttention(config) - self.intermediate = DeiTIntermediate(config) - self.output = DeiTOutput(config) - self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, - output_attentions: bool = False, - ) -> Union[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor]]: - self_attention_outputs = self.attention( - self.layernorm_before(hidden_states), # in DeiT, layernorm is applied before self-attention - head_mask, - output_attentions=output_attentions, - ) - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - # first residual connection - hidden_states = attention_output + hidden_states - - # in DeiT, layernorm is also applied after self-attention - layer_output = self.layernorm_after(hidden_states) - layer_output = self.intermediate(layer_output) - - # second residual connection is done here - layer_output = self.output(layer_output, hidden_states) - - outputs = (layer_output,) + outputs - - return outputs - - -# Copied from transformers.models.vit.modeling_vit.ViTEncoder with ViT->DeiT -class DeiTEncoder(nn.Module): - def __init__(self, config: DeiTConfig) -> None: - super().__init__() - self.config = config - self.layer = nn.ModuleList([DeiTLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ) -> Union[tuple, BaseModelOutput]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - layer_head_mask, - ) - else: - layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - ) - - -class DeiTPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = DeiTConfig - base_model_prefix = "deit" - main_input_name = "pixel_values" - supports_gradient_checkpointing = True - _no_split_modules = ["DeiTLayer"] - - def _init_weights(self, module: Union[nn.Linear, nn.Conv2d, nn.LayerNorm]) -> None: - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Conv2d)): - # Upcast the input in `fp32` and cast it back to desired `dtype` to avoid - # `trunc_normal_cpu` not implemented in `half` issues - module.weight.data = nn.init.trunc_normal_( - module.weight.data.to(torch.float32), mean=0.0, std=self.config.initializer_range - ).to(module.weight.dtype) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module: DeiTEncoder, value: bool = False) -> None: - if isinstance(module, DeiTEncoder): - module.gradient_checkpointing = value - - -DEIT_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it - as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`DeiTConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -DEIT_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`DeiTImageProcessor.__call__`] for details. - - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare DeiT Model transformer outputting raw hidden-states without any specific head on top.", - DEIT_START_DOCSTRING, -) -class DeiTModel(DeiTPreTrainedModel): - def __init__(self, config: DeiTConfig, add_pooling_layer: bool = True, use_mask_token: bool = False) -> None: - super().__init__(config) - self.config = config - - self.embeddings = DeiTEmbeddings(config, use_mask_token=use_mask_token) - self.encoder = DeiTEncoder(config) - - self.layernorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.pooler = DeiTPooler(config) if add_pooling_layer else None - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self) -> DeiTPatchEmbeddings: - return self.embeddings.patch_embeddings - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(DEIT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPooling, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def forward( - self, - pixel_values: Optional[torch.Tensor] = None, - bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*): - Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - # TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?) - expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype - if pixel_values.dtype != expected_dtype: - pixel_values = pixel_values.to(expected_dtype) - - embedding_output = self.embeddings(pixel_values, bool_masked_pos=bool_masked_pos) - - encoder_outputs = self.encoder( - embedding_output, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - sequence_output = self.layernorm(sequence_output) - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - head_outputs = (sequence_output, pooled_output) if pooled_output is not None else (sequence_output,) - return head_outputs + encoder_outputs[1:] - - return BaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -# Copied from transformers.models.vit.modeling_vit.ViTPooler with ViT->DeiT -class DeiTPooler(nn.Module): - def __init__(self, config: DeiTConfig): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states): - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. - first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -@add_start_docstrings( - """DeiT Model with a decoder on top for masked image modeling, as proposed in [SimMIM](https://arxiv.org/abs/2111.09886). - - - - Note that we provide a script to pre-train this model on custom data in our [examples - directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). - - - """, - DEIT_START_DOCSTRING, -) -class DeiTForMaskedImageModeling(DeiTPreTrainedModel): - def __init__(self, config: DeiTConfig) -> None: - super().__init__(config) - - self.deit = DeiTModel(config, add_pooling_layer=False, use_mask_token=True) - - self.decoder = nn.Sequential( - nn.Conv2d( - in_channels=config.hidden_size, - out_channels=config.encoder_stride**2 * config.num_channels, - kernel_size=1, - ), - nn.PixelShuffle(config.encoder_stride), - ) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(DEIT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=MaskedImageModelingOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - pixel_values: Optional[torch.Tensor] = None, - bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[tuple, MaskedImageModelingOutput]: - r""" - bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`): - Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). - - Returns: - - Examples: - ```python - >>> from transformers import AutoImageProcessor, DeiTForMaskedImageModeling - >>> import torch - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") - >>> model = DeiTForMaskedImageModeling.from_pretrained("facebook/deit-base-distilled-patch16-224") - - >>> num_patches = (model.config.image_size // model.config.patch_size) ** 2 - >>> pixel_values = image_processor(images=image, return_tensors="pt").pixel_values - >>> # create random boolean mask of shape (batch_size, num_patches) - >>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool() - - >>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos) - >>> loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction - >>> list(reconstructed_pixel_values.shape) - [1, 3, 224, 224] - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.deit( - pixel_values, - bool_masked_pos=bool_masked_pos, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - # Reshape to (batch_size, num_channels, height, width) - sequence_output = sequence_output[:, 1:-1] - batch_size, sequence_length, num_channels = sequence_output.shape - height = width = int(sequence_length**0.5) - sequence_output = sequence_output.permute(0, 2, 1).reshape(batch_size, num_channels, height, width) - - # Reconstruct pixel values - reconstructed_pixel_values = self.decoder(sequence_output) - - masked_im_loss = None - if bool_masked_pos is not None: - size = self.config.image_size // self.config.patch_size - bool_masked_pos = bool_masked_pos.reshape(-1, size, size) - mask = ( - bool_masked_pos.repeat_interleave(self.config.patch_size, 1) - .repeat_interleave(self.config.patch_size, 2) - .unsqueeze(1) - .contiguous() - ) - reconstruction_loss = nn.functional.l1_loss(pixel_values, reconstructed_pixel_values, reduction="none") - masked_im_loss = (reconstruction_loss * mask).sum() / (mask.sum() + 1e-5) / self.config.num_channels - - if not return_dict: - output = (reconstructed_pixel_values,) + outputs[1:] - return ((masked_im_loss,) + output) if masked_im_loss is not None else output - - return MaskedImageModelingOutput( - loss=masked_im_loss, - reconstruction=reconstructed_pixel_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - DeiT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of - the [CLS] token) e.g. for ImageNet. - """, - DEIT_START_DOCSTRING, -) -class DeiTForImageClassification(DeiTPreTrainedModel): - def __init__(self, config: DeiTConfig) -> None: - super().__init__(config) - - self.num_labels = config.num_labels - self.deit = DeiTModel(config, add_pooling_layer=False) - - # Classifier head - self.classifier = nn.Linear(config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(DEIT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=ImageClassifierOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[tuple, ImageClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, DeiTForImageClassification - >>> import torch - >>> from PIL import Image - >>> import requests - - >>> torch.manual_seed(3) # doctest: +IGNORE_RESULT - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> # note: we are loading a DeiTForImageClassificationWithTeacher from the hub here, - >>> # so the head will be randomly initialized, hence the predictions will be random - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") - >>> model = DeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224") - - >>> inputs = image_processor(images=image, return_tensors="pt") - >>> outputs = model(**inputs) - >>> logits = outputs.logits - >>> # model predicts one of the 1000 ImageNet classes - >>> predicted_class_idx = logits.argmax(-1).item() - >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) - Predicted class: magpie - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.deit( - pixel_values, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.classifier(sequence_output[:, 0, :]) - # we don't use the distillation token - - loss = None - if labels is not None: - labels = labels.to(logits.device) - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - if not return_dict: - output = (logits,) + outputs[1:] - return ((loss,) + output) if loss is not None else output - - return ImageClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@dataclass -class DeiTForImageClassificationWithTeacherOutput(ModelOutput): - """ - Output type of [`DeiTForImageClassificationWithTeacher`]. - - Args: - logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`): - Prediction scores as the average of the cls_logits and distillation logits. - cls_logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`): - Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the - class token). - distillation_logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`): - Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the - distillation token). - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer - plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in - the self-attention heads. - """ - - logits: torch.FloatTensor = None - cls_logits: torch.FloatTensor = None - distillation_logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@add_start_docstrings( - """ - DeiT Model transformer with image classification heads on top (a linear layer on top of the final hidden state of - the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. - - .. warning:: - - This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet - supported. - """, - DEIT_START_DOCSTRING, -) -class DeiTForImageClassificationWithTeacher(DeiTPreTrainedModel): - def __init__(self, config: DeiTConfig) -> None: - super().__init__(config) - - self.num_labels = config.num_labels - self.deit = DeiTModel(config, add_pooling_layer=False) - - # Classifier heads - self.cls_classifier = ( - nn.Linear(config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() - ) - self.distillation_classifier = ( - nn.Linear(config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() - ) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(DEIT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=DeiTForImageClassificationWithTeacherOutput, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def forward( - self, - pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[tuple, DeiTForImageClassificationWithTeacherOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.deit( - pixel_values, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - cls_logits = self.cls_classifier(sequence_output[:, 0, :]) - distillation_logits = self.distillation_classifier(sequence_output[:, 1, :]) - - # during inference, return the average of both classifier predictions - logits = (cls_logits + distillation_logits) / 2 - - if not return_dict: - output = (logits, cls_logits, distillation_logits) + outputs[1:] - return output - - return DeiTForImageClassificationWithTeacherOutput( - logits=logits, - cls_logits=cls_logits, - distillation_logits=distillation_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/esm/openfold_utils/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/esm/openfold_utils/__init__.py deleted file mode 100644 index 02a8c149ae320dd9b045edc5df31760a4eebefd9..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/esm/openfold_utils/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .chunk_utils import chunk_layer -from .data_transforms import make_atom14_masks -from .feats import atom14_to_atom37, frames_and_literature_positions_to_atom14_pos, torsion_angles_to_frames -from .loss import compute_predicted_aligned_error, compute_tm -from .protein import Protein as OFProtein -from .protein import to_pdb -from .rigid_utils import Rigid, Rotation -from .tensor_utils import dict_multimap, flatten_final_dims, permute_final_dims diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/catalog.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/catalog.py deleted file mode 100644 index 45c110c19508f23921b9033cdaf0aa8056f0c125..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/catalog.py +++ /dev/null @@ -1,236 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging -import types -from collections import UserDict -from typing import List - -from detectron2.utils.logger import log_first_n - -__all__ = ["DatasetCatalog", "MetadataCatalog", "Metadata"] - - -class _DatasetCatalog(UserDict): - """ - A global dictionary that stores information about the datasets and how to obtain them. - - It contains a mapping from strings - (which are names that identify a dataset, e.g. "coco_2014_train") - to a function which parses the dataset and returns the samples in the - format of `list[dict]`. - - The returned dicts should be in Detectron2 Dataset format (See DATASETS.md for details) - if used with the data loader functionalities in `data/build.py,data/detection_transform.py`. - - The purpose of having this catalog is to make it easy to choose - different datasets, by just using the strings in the config. - """ - - def register(self, name, func): - """ - Args: - name (str): the name that identifies a dataset, e.g. "coco_2014_train". - func (callable): a callable which takes no arguments and returns a list of dicts. - It must return the same results if called multiple times. - """ - assert callable(func), "You must register a function with `DatasetCatalog.register`!" - assert name not in self, "Dataset '{}' is already registered!".format(name) - self[name] = func - - def get(self, name): - """ - Call the registered function and return its results. - - Args: - name (str): the name that identifies a dataset, e.g. "coco_2014_train". - - Returns: - list[dict]: dataset annotations. - """ - try: - f = self[name] - except KeyError as e: - raise KeyError( - "Dataset '{}' is not registered! Available datasets are: {}".format( - name, ", ".join(list(self.keys())) - ) - ) from e - return f() - - def list(self) -> List[str]: - """ - List all registered datasets. - - Returns: - list[str] - """ - return list(self.keys()) - - def remove(self, name): - """ - Alias of ``pop``. - """ - self.pop(name) - - def __str__(self): - return "DatasetCatalog(registered datasets: {})".format(", ".join(self.keys())) - - __repr__ = __str__ - - -DatasetCatalog = _DatasetCatalog() -DatasetCatalog.__doc__ = ( - _DatasetCatalog.__doc__ - + """ - .. automethod:: detectron2.data.catalog.DatasetCatalog.register - .. automethod:: detectron2.data.catalog.DatasetCatalog.get -""" -) - - -class Metadata(types.SimpleNamespace): - """ - A class that supports simple attribute setter/getter. - It is intended for storing metadata of a dataset and make it accessible globally. - - Examples: - :: - # somewhere when you load the data: - MetadataCatalog.get("mydataset").thing_classes = ["person", "dog"] - - # somewhere when you print statistics or visualize: - classes = MetadataCatalog.get("mydataset").thing_classes - """ - - # the name of the dataset - # set default to N/A so that `self.name` in the errors will not trigger getattr again - name: str = "N/A" - - _RENAMED = { - "class_names": "thing_classes", - "dataset_id_to_contiguous_id": "thing_dataset_id_to_contiguous_id", - "stuff_class_names": "stuff_classes", - } - - def __getattr__(self, key): - if key in self._RENAMED: - log_first_n( - logging.WARNING, - "Metadata '{}' was renamed to '{}'!".format(key, self._RENAMED[key]), - n=10, - ) - return getattr(self, self._RENAMED[key]) - - # "name" exists in every metadata - if len(self.__dict__) > 1: - raise AttributeError( - "Attribute '{}' does not exist in the metadata of dataset '{}'. Available " - "keys are {}.".format(key, self.name, str(self.__dict__.keys())) - ) - else: - raise AttributeError( - f"Attribute '{key}' does not exist in the metadata of dataset '{self.name}': " - "metadata is empty." - ) - - def __setattr__(self, key, val): - if key in self._RENAMED: - log_first_n( - logging.WARNING, - "Metadata '{}' was renamed to '{}'!".format(key, self._RENAMED[key]), - n=10, - ) - setattr(self, self._RENAMED[key], val) - - # Ensure that metadata of the same name stays consistent - try: - oldval = getattr(self, key) - assert oldval == val, ( - "Attribute '{}' in the metadata of '{}' cannot be set " - "to a different value!\n{} != {}".format(key, self.name, oldval, val) - ) - except AttributeError: - super().__setattr__(key, val) - - def as_dict(self): - """ - Returns all the metadata as a dict. - Note that modifications to the returned dict will not reflect on the Metadata object. - """ - return copy.copy(self.__dict__) - - def set(self, **kwargs): - """ - Set multiple metadata with kwargs. - """ - for k, v in kwargs.items(): - setattr(self, k, v) - return self - - def get(self, key, default=None): - """ - Access an attribute and return its value if exists. - Otherwise return default. - """ - try: - return getattr(self, key) - except AttributeError: - return default - - -class _MetadataCatalog(UserDict): - """ - MetadataCatalog is a global dictionary that provides access to - :class:`Metadata` of a given dataset. - - The metadata associated with a certain name is a singleton: once created, the - metadata will stay alive and will be returned by future calls to ``get(name)``. - - It's like global variables, so don't abuse it. - It's meant for storing knowledge that's constant and shared across the execution - of the program, e.g.: the class names in COCO. - """ - - def get(self, name): - """ - Args: - name (str): name of a dataset (e.g. coco_2014_train). - - Returns: - Metadata: The :class:`Metadata` instance associated with this name, - or create an empty one if none is available. - """ - assert len(name) - r = super().get(name, None) - if r is None: - r = self[name] = Metadata(name=name) - return r - - def list(self): - """ - List all registered metadata. - - Returns: - list[str]: keys (names of datasets) of all registered metadata - """ - return list(self.keys()) - - def remove(self, name): - """ - Alias of ``pop``. - """ - self.pop(name) - - def __str__(self): - return "MetadataCatalog(registered metadata: {})".format(", ".join(self.keys())) - - __repr__ = __str__ - - -MetadataCatalog = _MetadataCatalog() -MetadataCatalog.__doc__ = ( - _MetadataCatalog.__doc__ - + """ - .. automethod:: detectron2.data.catalog.MetadataCatalog.get -""" -) diff --git a/spaces/yunfei0710/gpt-academic/docs/README_EN.md b/spaces/yunfei0710/gpt-academic/docs/README_EN.md deleted file mode 100644 index 65af23d7b2c989107a664d7bd3ef88cf7e55c7f7..0000000000000000000000000000000000000000 --- a/spaces/yunfei0710/gpt-academic/docs/README_EN.md +++ /dev/null @@ -1,322 +0,0 @@ -> **Note** -> -> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct. -> -> When installing dependencies, **please strictly select the versions** specified in requirements.txt. -> -> `pip install -r requirements.txt` - -# GPT Academic Optimization (GPT Academic) - -**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. -To translate this project to arbitary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).** - -> Note: -> -> 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**! -> 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). With version iteration, you can also click on related function plugins at any time to call GPT to regenerate the project's self-analysis report. Common questions are summarized in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation). -> 3. This project is compatible with and encourages trying domestic large language models such as chatglm, RWKV, Pangu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. When temporarily changing `API_KEY`, enter the temporary `API_KEY` in the input area and press enter to submit, which will take effect. - -
    - -Function | Description ---- | --- -One-click polishing | Supports one-click polishing and one-click searching for grammar errors in papers. -One-click Chinese-English translation | One-click Chinese-English translation. -One-click code interpretation | Displays, explains, generates, and adds comments to code. -[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys. -Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions), plug-ins support [hot update](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). -[Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project -[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/... -Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts. -Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers. -Batch annotation generation | [Function plug-in] One-click batch generation of function annotations. -Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the five languages above? -Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running. -[PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded) -[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click. -[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plug-in] Given any Google Scholar search page URL, let GPT help you [write relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) -Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated. -Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting. -Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click. -Start Dark Gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme. -[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right? -More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/) -More new feature displays (image generation, etc.)…… | See the end of this document for more... -
    - -- New interface (modify the LAYOUT option in `config.py` to switch between "left and right layout" and "up and down layout") -
    - -
    - All buttons are dynamically generated by reading `functional.py`, and you can add custom functions freely to unleash the power of clipboard. -
    - -
    - -- polishing/correction -
    - -
    - -- If the output contains formulas, they will be displayed in both `tex` and render form, making it easy to copy and read. -
    - -
    - -- Tired of reading the project code? ChatGPT can explain it all. -
    - -
    - -- Multiple large language models are mixed, such as ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4. -
    - -
    - ---- -# Installation -## Method 1: Directly running (Windows, Linux or MacOS) - -1. Download the project -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure the API_KEY - -Configure the API KEY in `config.py`, [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py` and use the configurations in it to override the same configurations in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configurations in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your private information more secure. P.S. The project also supports configuring most options through `environment variables`. Please refer to the format of `docker-compose` file when writing. Reading priority: `environment variables` > `config_private.py` > `config.py`) - - -3. Install the dependencies -```sh -# (Option I: If familiar with python) (python version 3.9 or above, the newer the better), note: use official pip source or Ali pip source, temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: If not familiar with python) Use anaconda, the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # create anaconda environment -conda activate gptac_venv # activate anaconda environment -python -m pip install -r requirements.txt # this step is the same as pip installation -``` - -
    If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand -

    - -[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiar with Python + used Pytorch + computer configuration is strong enough): -```sh -# [Optional Step I] Support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: if you encounter the "Call ChatGLM fail cannot load ChatGLM parameters" error, refer to this: 1: The default installation above is torch + cpu version, to use cuda, you need to uninstall torch and reinstall torch + cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code = True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional Step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the root directory of the project - -# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file includes the expected models. Currently supported models are as follows (the jittorllms series only supports the docker solution for the time being): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -

    -
    - - - -4. Run it -```sh -python main.py -```5. Test Function Plugin -``` -- Test function plugin template function (ask GPT what happened today in history), based on which you can implement more complex functions as a template - Click "[Function Plugin Template Demo] Today in History" -``` - -## Installation - Method 2: Using Docker - -1. ChatGPT Only (Recommended for Most People) - -``` sh -git clone https://github.com/binary-husky/chatgpt_academic.git # Download project -cd chatgpt_academic # Enter path -nano config.py # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc. -docker build -t gpt-academic . # Install - -#(Last step - option 1) In a Linux environment, use `--net=host` for convenience and speed. -docker run --rm -it --net=host gpt-academic -#(Last step - option 2) On macOS/windows environment, only -p option can be used to expose the container's port (e.g. 50923) to the port of the main machine. -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (Requires Docker Knowledge) - -``` sh -# Modify docker-compose.yml, delete Plan 1 and Plan 3, and keep Plan 2. Modify the configuration of Plan 2 in docker-compose.yml, refer to the comments in it for configuration. -docker-compose up -``` - -3. ChatGPT + LLAMA + Pangu + RWKV (Requires Docker Knowledge) - -``` sh -# Modify docker-compose.yml, delete Plan 1 and Plan 2, and keep Plan 3. Modify the configuration of Plan 3 in docker-compose.yml, refer to the comments in it for configuration. -docker-compose up -``` - -## Installation - Method 3: Other Deployment Options - -1. How to Use Reverse Proxy URL/Microsoft Cloud Azure API -Configure API_URL_REDIRECT according to the instructions in 'config.py'. - -2. Deploy to a Remote Server (Requires Knowledge and Experience with Cloud Servers) -Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. Using WSL2 (Windows Subsystem for Linux) -Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. How to Run Under a Subdomain (e.g. `http://localhost/subpath`) -Please visit [FastAPI Running Instructions](docs/WithFastapi.md) - -5. Using docker-compose to Run -Read the docker-compose.yml and follow the prompts. - ---- -# Advanced Usage -## Custom New Shortcut Buttons / Custom Function Plugins - -1. Custom New Shortcut Buttons (Academic Hotkey) -Open `core_functional.py` with any text editor, add an entry as follows and restart the program. (If the button has been successfully added and is visible, the prefix and suffix can be hot-modified without having to restart the program.) -For example, -``` -"Super English-to-Chinese": { - # Prefix, which will be added before your input. For example, used to describe your requests, such as translation, code explanation, polishing, etc. - "Prefix": "Please translate the following content into Chinese and then use a markdown table to explain the proprietary terms that appear in the text:\n\n", - - # Suffix, which is added after your input. For example, with the prefix, your input content can be surrounded by quotes. - "Suffix": "", -}, -``` -
    - -
    - -2. Custom Function Plugins - -Write powerful function plugins to perform any task you can think of, even those you cannot think of. -The difficulty of plugin writing and debugging in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plug-in functions based on the template we provide. -For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). - ---- -# Latest Update -## New Feature Dynamics -1. Conversation saving function. Call `Save current conversation` in the function plugin area to save the current conversation as a readable and recoverable HTML file. In addition, call `Load conversation history archive` in the function plugin area (dropdown menu) to restore previous sessions. Tip: Clicking `Load conversation history archive` without specifying a file will display the cached history of HTML archives, and clicking `Delete all local conversation history` will delete all HTML archive caches. - -
    - -
    - - -2. Report generation. Most plugins will generate work reports after execution. - -
    - - - -
    - - -3. Modular function design with simple interfaces that support powerful functions. - -
    - - -
    - - -4. This is an open-source project that can "self-translate". - -
    - -
    - -5. Translating other open-source projects is a piece of cake. - -
    - -
    - -
    - -
    - -6. A small feature decorated with [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default, need to modify `config.py`). - -
    - -
    - -7. Added MOSS large language model support. -
    - -
    - -8. OpenAI image generation. -
    - -
    - -9. OpenAI audio parsing and summarization. -
    - -
    - -10. Full-text proofreading and error correction of LaTeX. -
    - -
    - - -## Versions: -- version 3.5(Todo): Use natural language to call all function plugins of this project (high priority). -- version 3.4(Todo): Improve multi-threading support for chatglm local large models. -- version 3.3: +Internet information integration function. -- version 3.2: Function plugin supports more parameter interfaces (save conversation function, interpretation of any language code + simultaneous inquiry of any LLM combination). -- version 3.1: Support simultaneous inquiry of multiple GPT models! Support api2d, and support load balancing of multiple apikeys. -- version 3.0: Support chatglm and other small LLM models. -- version 2.6: Refactored plugin structure, improved interactivity, and added more plugins. -- version 2.5: Self-updating, solving the problem of text overflow and token overflow when summarizing large engineering source codes. -- version 2.4: (1) Added PDF full-text translation function; (2) Added the function of switching the position of the input area; (3) Added vertical layout option; (4) Optimized multi-threading function plugins. -- version 2.3: Enhanced multi-threading interactivity. -- version 2.2: Function plugin supports hot reloading. -- version 2.1: Collapsible layout. -- version 2.0: Introduction of modular function plugins. -- version 1.0: Basic functions. - -gpt_academic Developer QQ Group-2: 610599535 - -- Known Issues - - Some browser translation plugins interfere with the front-end operation of this software. - - Both high and low versions of gradio can lead to various exceptions. - -## Reference and Learning - -``` -Many other excellent designs have been referenced in the code, mainly including: - -# Project 1: THU ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# Project 2: THU JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# Project 3: Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# Project 4: ChuanhuChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Project 5: ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# More: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/yunfei0710/gpt-academic/docs/README_RS.md b/spaces/yunfei0710/gpt-academic/docs/README_RS.md deleted file mode 100644 index 5ba5fcccc30db520d38e21950e2f7cfc03d324c5..0000000000000000000000000000000000000000 --- a/spaces/yunfei0710/gpt-academic/docs/README_RS.md +++ /dev/null @@ -1,278 +0,0 @@ -> **Note** -> -> Этот файл самовыражения автоматически генерируется модулем перевода markdown в этом проекте и может быть не на 100% правильным. -> -# GPT Академическая оптимизация (GPT Academic) - -**Если вам нравится этот проект, пожалуйста, поставьте ему звезду. Если вы придумали более полезные языковые ярлыки или функциональные плагины, не стесняйтесь открывать issue или pull request. -Чтобы перевести этот проект на произвольный язык с помощью GPT, ознакомьтесь и запустите [`multi_language.py`](multi_language.py) (экспериментальный). - -> **Примечание** -> -> 1. Обратите внимание, что только функциональные плагины (кнопки), помеченные **красным цветом**, поддерживают чтение файлов, некоторые плагины находятся в **выпадающем меню** в области плагинов. Кроме того, мы с наивысшим приоритетом рады и обрабатываем pull requests для любых новых плагинов! -> -> 2. В каждом файле проекта функциональность описана в документе самоанализа [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). С каждой итерацией выполнения версии вы можете в любое время вызвать повторное создание отчета о самоанализе этого проекта, щелкнув соответствующий функциональный плагин и вызвав GPT. Вопросы сборки описаны в [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Метод установки](#installation). -> -> 3. Этот проект совместим и поощряет использование китайских языковых моделей chatglm и RWKV, пангу и т. Д. Поддержка нескольких api-key, которые могут существовать одновременно, может быть указан в файле конфигурации, например `API_KEY="openai-key1,openai-key2,api2d-key3"`. Если требуется временно изменить `API_KEY`, введите временный `API_KEY` в области ввода и нажмите клавишу Enter, чтобы он вступил в силу. - -> **Примечание** -> -> При установке зависимостей строго выбирайте версии, **указанные в файле requirements.txt**. -> -> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`## Задание - -Вы профессиональный переводчик научных статей. - -Переведите этот файл в формате Markdown на русский язык. Не изменяйте существующие команды Markdown, ответьте только переведенными результатами. - -## Результат - -Функция | Описание ---- | --- -Однокнопочный стиль | Поддержка однокнопочного стиля и поиска грамматических ошибок в научных статьях -Однокнопочный перевод на английский и китайский | Однокнопочный перевод на английский и китайский -Однокнопочное объяснение кода | Показ кода, объяснение его, генерация кода, комментирование кода -[Настройка быстрых клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настройки быстрых клавиш -Модульный дизайн | Поддержка пользовательских функциональных плагинов мощных [функциональных плагинов](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions), плагины поддерживают [горячую замену](https://github.com/binary-husky/chatgpt_academic/wiki/Function-Plug-in-Guide) -[Анализ своей программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] [Однокнопочный просмотр](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academicProject-Self-analysis-Report) исходного кода этого проекта -[Анализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] Однокнопочный анализ дерева других проектов Python/C/C++/Java/Lua/... -Чтение статей, [перевод](https://www.bilibili.com/video/BV1KT411x7Wn) статей | [Функциональный плагин] Однокнопочное чтение полного текста научных статей и генерация резюме -Полный перевод [LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) и совершенствование | [Функциональный плагин] Однокнопочный перевод или совершенствование LaTeX статьи -Автоматическое комментирование | [Функциональный плагин] Однокнопочное автоматическое генерирование комментариев функций -[Перевод](https://www.bilibili.com/video/BV1yo4y157jV/) Markdown на английский и китайский | [Функциональный плагин] Вы видели обе версии файлов [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) для этих 5 языков? -Отчет о чат-анализе | [Функциональный плагин] После запуска будет автоматически сгенерировано сводное извещение -Функция перевода полного текста [PDF-статьи](https://www.bilibili.com/video/BV1KT411x7Wn) | [Функциональный плагин] Извлечение заголовка и резюме [PDF-статьи](https://www.bilibili.com/video/BV1KT411x7Wn) и перевод всего документа (многопоточность) -[Arxiv Helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Функциональный плагин] Введите URL статьи на arxiv и одним щелчком мыши переведите резюме и загрузите PDF -[Google Scholar Integration Helper](https://www.bilibili.com/video/BV19L411U7ia) | [Функциональный плагин] При заданном любом URL страницы поиска в Google Scholar позвольте gpt вам помочь [написать обзор](https://www.bilibili.com/video/BV1GP411U7Az/) -Сбор Интернет-информации + GPT | [Функциональный плагин] Однокнопочный [запрос информации из Интернета GPT](https://www.bilibili.com/video/BV1om4y127ck), затем ответьте на вопрос, чтобы информация не устарела никогда -Отображение формул / изображений / таблиц | Может одновременно отображать формулы в [формате Tex и рендеринге](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), поддерживает формулы, подсвечивает код -Поддержка функций с многопоточностью | Поддержка многопоточного вызова chatgpt, однокнопочная обработка [больших объемов текста](https://www.bilibili.com/video/BV1FT411H7c5/) или программ -Темная тема gradio для запуска приложений | Добавьте ```/?__theme=dark``` после URL в браузере, чтобы переключиться на темную тему -[Поддержка нескольких моделей LLM](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) | Они одновременно обслуживаются GPT3.5, GPT4, [Clear ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) -Подключение нескольких новых моделей LLM, поддержка деплоя[huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Подключение интерфейса Newbing (новый Bing), подключение поддержки [LLaMA](https://github.com/facebookresearch/llama), поддержка [RWKV](https://github.com/BlinkDL/ChatRWKV) и [Pangu α](https://openi.org.cn/pangu/) -Больше новых функций (генерация изображения и т. д.) | См. на конце этого файла…- All buttons are dynamically generated by reading functional.py, and custom functions can be freely added to liberate the clipboard -
    - -
    - -- Revision/Correction -
    - -
    - -- If the output contains formulas, they will be displayed in both tex and rendered form for easy copying and reading -
    - -
    - -- Don't feel like looking at project code? Show the entire project directly in chatgpt -
    - -
    - -- Mixing multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
    - -
    - ---- -# Installation -## Installation-Method 1: Run directly (Windows, Linux or MacOS) - -1. Download the project -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure API_KEY - -In `config.py`, configure API KEY and other settings, [special network environment settings] (https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check whether there is a secret configuration file named `config_private.py` and use the configuration in it to replace the same name in` config.py`. Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git, which can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Priority of read: `environment variable`>`config_private.py`>`config.py`) - - -3. Install dependencies -```sh -# (Option I: If familiar with Python)(Python version 3.9 or above, the newer the better), note: use the official pip source or the aliyun pip source, temporary switching source method: python -m pip install -r requirements.txt - i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: If unfamiliar with Python)Use Anaconda, the steps are also similar (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # create an Anaconda environment -conda activate gptac_venv # activate Anaconda environment -python -m pip install -r requirements.txt # This step is the same as the pip installation -``` - -
    If you need to support Tsinghua ChatGLM/Fudan MOSS as backend, click here to expand -

    - -[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as backend, you need to install more dependencies (prerequisites: familiar with Python + have used Pytorch + computer configuration is strong): -```sh -# [Optional step I] Support Tsinghua ChatGLM. Tsinghua ChatGLM note: If you encounter the "Call ChatGLM fail cannot load ChatGLM parameters normally" error, refer to the following: 1: The default installation above is torch+cpu version, and cuda is used Need to uninstall torch and reinstall torch+cuda; 2: If you cannot load the model due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) Modify to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note that when executing this line of code, you must be in the project root path - -# [Optional step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently, all supported models are as follows (the jittorllms series currently only supports the docker solution): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -

    -
    - - - -4. Run -```sh -python main.py -```5. Testing Function Plugin -``` -- Testing function plugin template function (requires GPT to answer what happened in history today), you can use this function as a template to implement more complex functions - Click "[Function plugin Template Demo] On this day in history" -``` - -## Installation - Method 2: Using Docker - -1. ChatGPT only (recommended for most people) - -``` sh -git clone https://github.com/binary-husky/chatgpt_academic.git # download the project -cd chatgpt_academic # enter the path -nano config.py # edit config.py with any text editor to configure "Proxy", "API_KEY", and "WEB_PORT" (eg 50923) -docker build -t gpt-academic . # install - -# (Last step-Option 1) In a Linux environment, using `--net=host` is more convenient and faster -docker run --rm -it --net=host gpt-academic -# (Last step-Option 2) In macOS/windows environment, only -p option can be used to expose the port on the container (eg 50923) to the port on the host -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker) - -``` sh -# Edit docker-compose.yml, delete solutions 1 and 3, and keep solution 2. Modify the configuration of solution 2 in docker-compose.yml, refer to the comments in it -docker-compose up -``` - -3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker) -``` sh -# Edit docker-compose.yml, delete solutions 1 and 2, and keep solution 3. Modify the configuration of solution 3 in docker-compose.yml, refer to the comments in it -docker-compose up -``` - - -## Installation Method 3: Other Deployment Methods - -1. How to use reverse proxy URL/Microsoft Azure API -Configure API_URL_REDIRECT according to the instructions in `config.py`. - -2. Remote Cloud Server Deployment (Requires Knowledge and Experience of Cloud Servers) -Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. Using WSL2 (Windows Subsystem for Linux subsystem) -Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. How to run at the secondary URL (such as `http://localhost/subpath`) -Please visit [FastAPI Operation Instructions](docs/WithFastapi.md) - -5. Using docker-compose to run -Please read docker-compose.yml and follow the prompts to operate. - ---- -# Advanced Usage -## Customize new convenient buttons / custom function plugins - -1. Customize new convenient buttons (academic shortcuts) -Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, both prefixes and suffixes can be hot-modified without having to restart the program.) -For example: -``` -"Super English to Chinese": { - # Prefix, will be added before your input. For example, describe your requirements, such as translation, code interpretation, polishing, etc. - "Prefix": "Please translate the following content into Chinese, and then explain each proper noun that appears in the text with a markdown table:\n\n", - - # Suffix, will be added after your input. For example, with the prefix, you can enclose your input content in quotes. - "Suffix": "", -}, -``` -
    - -
    - -2. Custom function plugin - -Write powerful function plugins to perform any task you can and can't imagine. -The difficulty of debugging and writing plugins in this project is very low. As long as you have a certain knowledge of python, you can implement your own plugin function by imitating the template we provide. -Please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) for details. - ---- -# Latest Update -## New feature dynamic - -1. Сохранение диалогов. Вызовите "Сохранить текущий диалог" в разделе функций-плагина, чтобы сохранить текущий диалог как файл HTML, который можно прочитать и восстановить. Кроме того, вызовите «Загрузить архив истории диалога» в меню функций-плагина, чтобы восстановить предыдущую сессию. Совет: если нажать кнопку "Загрузить исторический архив диалога" без указания файла, можно просмотреть кэш исторических файлов HTML. Щелкните "Удалить все локальные записи истории диалогов", чтобы удалить все файловые кэши HTML. - -2. Создание отчетов. Большинство плагинов создают рабочий отчет после завершения выполнения. -  -3. Модульный дизайн функций, простой интерфейс, но сильный функционал. - -4. Это проект с открытым исходным кодом, который может «сам переводить себя». - -5. Перевод других проектов с открытым исходным кодом - это не проблема. - -6. Мелкие функции декорирования [live2d](https://github.com/fghrsh/live2d_demo) (по умолчанию отключены, нужно изменить `config.py`). - -7. Поддержка большой языковой модели MOSS. - -8. Генерация изображений с помощью OpenAI. - -9. Анализ и подведение итогов аудиофайлов с помощью OpenAI. - -10. Полный цикл проверки правописания с использованием LaTeX. - -## Версии: -- Версия 3.5 (Todo): использование естественного языка для вызова функций-плагинов проекта (высокий приоритет) -- Версия 3.4 (Todo): улучшение многопоточной поддержки локальных больших моделей чата. -- Версия 3.3: добавлена функция объединения интернет-информации. -- Версия 3.2: функции-плагины поддерживают большое количество параметров (сохранение диалогов, анализирование любого языка программирования и одновременное запрос LLM-групп). -- Версия 3.1: поддержка одновременного запроса нескольких моделей GPT! Поддержка api2d, сбалансированное распределение нагрузки по нескольким ключам api. -- Версия 3.0: поддержка chatglm и других небольших LLM. -- Версия 2.6: перестройка структуры плагинов, улучшение интерактивности, добавлено больше плагинов. -- Версия 2.5: автоматическое обновление для решения проблемы длинного текста и переполнения токенов при обработке больших проектов. -- Версия 2.4: (1) добавлена функция полного перевода PDF; (2) добавлена функция переключения положения ввода; (3) добавлена опция вертикального макета; (4) оптимизация многопоточности плагинов. -- Версия 2.3: улучшение многопоточной интерактивности. -- Версия 2.2: функции-плагины поддерживают горячую перезагрузку. -- Версия 2.1: раскрывающийся макет. -- Версия 2.0: использование модульных функций-плагинов. -- Версия 1.0: базовые функции. - -gpt_academic Разработчик QQ-группы-2: 610599535 - -- Известные проблемы - - Некоторые плагины перевода в браузерах мешают работе фронтенда этого программного обеспечения - - Высокая или низкая версия gradio может вызвать множество исключений - -## Ссылки и учебные материалы - -``` -Мы использовали многие концепты кода из других отличных проектов, включая: - -# Проект 1: Qinghua ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# Проект 2: Qinghua JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# Проект 3: Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# Проект 4: Chuanhu ChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Проект 5: ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# Больше: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/zeno-ml/translation-report/config.py b/spaces/zeno-ml/translation-report/config.py deleted file mode 100644 index a6ee539fee4e1b3b02b5360e6887e09579f7c38b..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-report/config.py +++ /dev/null @@ -1,150 +0,0 @@ -"""Config for analyzing GPT-MT.""" - -from __future__ import annotations - -from collections.abc import Callable -from dataclasses import dataclass - -from zeno_build.evaluation.text_features.capitalization import input_capital_char_ratio -from zeno_build.evaluation.text_features.exact_match import avg_exact_match, exact_match -from zeno_build.evaluation.text_features.frequency import output_max_word_freq -from zeno_build.evaluation.text_features.length import ( - doc_context_length, - input_length, - label_length, - output_length, -) -from zeno_build.evaluation.text_metrics.critique import ( - avg_bert_score, - avg_chrf, - avg_comet, - avg_length_ratio, - bert_score, - chrf, - comet, - length_ratio, -) -from zeno_build.experiments import search_space - -from modeling import remove_leading_language - -lang_pairs: dict[str, list[str]] = { - # All language pairs used in any experiment - "all_lang_pairs": [ - "csen", - "deen", - "defr", - "encs", - "ende", - "enha", - "enis", - "enja", - "enru", - "enuk", - "enzh", - "frde", - "haen", - "isen", - "jaen", - "ruen", - "uken", - "zhen", - ], - # Language pairs used in the experiments on a limited number of language pairs - "limited_lang_pairs": [ - "deen", - "defr", - "ende", - "enru", - "enzh", - "frde", - "ruen", - "zhen", - ], -} - -# The search space for the main experiments -main_space = search_space.CombinatorialSearchSpace( - { - "lang_pairs": search_space.Constant("all_lang_pairs"), - "model_preset": search_space.Categorical( - [ - "text-davinci-003-zeroshot", - "text-davinci-003-RR-1-shot", - "text-davinci-003-RR-5-shot", - "text-davinci-003-QR-1-shot", - "text-davinci-003-QR-5-shot", - "gpt-3.5-turbo-0301-zeroshot", - "gpt-4-0314-zeroshot", - "gpt-4-0314-zeroshot-postprocess", - "MS-Translator", - "google-cloud", - "wmt-best", - ] - ), - } -) - - -@dataclass(frozen=True) -class GptMtConfig: - """Config for gpt-MT models.""" - - path: str - base_model: str - prompt_strategy: str | None = None - prompt_shots: int | None = None - post_processors: list[Callable[[str], str]] | None = None - - -# The details of each model -model_configs = { - "text-davinci-003-RR-1-shot": GptMtConfig( - "text-davinci-003/RR/1-shot", "text-davinci-003", "RR", 1 - ), - "text-davinci-003-RR-5-shot": GptMtConfig( - "text-davinci-003/RR/5-shot", "text-davinci-003", "RR", 5 - ), - "text-davinci-003-QR-1-shot": GptMtConfig( - "text-davinci-003/QR/1-shot", "text-davinci-003", "QR", 1 - ), - "text-davinci-003-QR-5-shot": GptMtConfig( - "text-davinci-003/QR/5-shot", "text-davinci-003", "QR", 5 - ), - "text-davinci-003-zeroshot": GptMtConfig( - "text-davinci-003/zeroshot", "text-davinci-003", None, 0 - ), - "gpt-3.5-turbo-0301-zeroshot": GptMtConfig( - "gpt-3.5-turbo-0301/zeroshot", "gpt-3.5-turbo-0301", None, 0 - ), - "gpt-4-0314-zeroshot": GptMtConfig("gpt-4-0314/zeroshot", "gpt-4-0314", None, 0), - "gpt-4-0314-zeroshot-postprocess": GptMtConfig( - "gpt-4-0314/zeroshot", "gpt-4-0314", None, 0, [remove_leading_language] - ), - "MS-Translator": GptMtConfig("MS-Translator", "MS-Translator"), - "google-cloud": GptMtConfig("google-cloud", "google-cloud"), - "wmt-best": GptMtConfig("wmt-best", "wmt-best"), -} - -sweep_distill_functions = [chrf] -sweep_metric_function = avg_chrf - -# The functions used for Zeno visualization -zeno_distill_and_metric_functions = [ - output_length, - input_length, - label_length, - doc_context_length, - input_capital_char_ratio, - output_max_word_freq, - chrf, - comet, - length_ratio, - bert_score, - exact_match, - avg_chrf, - avg_comet, - avg_length_ratio, - avg_bert_score, - avg_exact_match, -] diff --git a/spaces/zhangyd/bingo/cloudflare/worker.js b/spaces/zhangyd/bingo/cloudflare/worker.js deleted file mode 100644 index 228fbfa4d445712c8a42e460f8304b7e0ccfe94f..0000000000000000000000000000000000000000 --- a/spaces/zhangyd/bingo/cloudflare/worker.js +++ /dev/null @@ -1,9 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/zideliu/styledrop/timm/optim/radam.py b/spaces/zideliu/styledrop/timm/optim/radam.py deleted file mode 100644 index 9987a334460286b1a6c8ec6d57ee023596a74219..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/optim/radam.py +++ /dev/null @@ -1,152 +0,0 @@ -"""RAdam Optimizer. -Implementation lifted from: https://github.com/LiyuanLucasLiu/RAdam -Paper: `On the Variance of the Adaptive Learning Rate and Beyond` - https://arxiv.org/abs/1908.03265 -""" -import math -import torch -from torch.optim.optimizer import Optimizer, required - - -class RAdam(Optimizer): - - def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0): - defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay) - self.buffer = [[None, None, None] for ind in range(10)] - super(RAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - super(RAdam, self).__setstate__(state) - - def step(self, closure=None): - - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError('RAdam does not support sparse gradients') - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32) - - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - state['step'] += 1 - buffered = self.buffer[int(state['step'] % 10)] - if state['step'] == buffered[0]: - N_sma, step_size = buffered[1], buffered[2] - else: - buffered[0] = state['step'] - beta2_t = beta2 ** state['step'] - N_sma_max = 2 / (1 - beta2) - 1 - N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t) - buffered[1] = N_sma - - # more conservative since it's an approximated value - if N_sma >= 5: - step_size = group['lr'] * math.sqrt( - (1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / ( - N_sma_max - 2)) / (1 - beta1 ** state['step']) - else: - step_size = group['lr'] / (1 - beta1 ** state['step']) - buffered[2] = step_size - - if group['weight_decay'] != 0: - p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32) - - # more conservative since it's an approximated value - if N_sma >= 5: - denom = exp_avg_sq.sqrt().add_(group['eps']) - p_data_fp32.addcdiv_(-step_size, exp_avg, denom) - else: - p_data_fp32.add_(-step_size, exp_avg) - - p.data.copy_(p_data_fp32) - - return loss - - -class PlainRAdam(Optimizer): - - def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0): - defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay) - - super(PlainRAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - super(PlainRAdam, self).__setstate__(state) - - def step(self, closure=None): - - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError('RAdam does not support sparse gradients') - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32) - - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - state['step'] += 1 - beta2_t = beta2 ** state['step'] - N_sma_max = 2 / (1 - beta2) - 1 - N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t) - - if group['weight_decay'] != 0: - p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32) - - # more conservative since it's an approximated value - if N_sma >= 5: - step_size = group['lr'] * math.sqrt( - (1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / ( - N_sma_max - 2)) / (1 - beta1 ** state['step']) - denom = exp_avg_sq.sqrt().add_(group['eps']) - p_data_fp32.addcdiv_(-step_size, exp_avg, denom) - else: - step_size = group['lr'] / (1 - beta1 ** state['step']) - p_data_fp32.add_(-step_size, exp_avg) - - p.data.copy_(p_data_fp32) - - return loss diff --git a/spaces/zihanch/zihan/README.md b/spaces/zihanch/zihan/README.md deleted file mode 100644 index ecbea9880c0ac2671fb7ea8944daf11a726a8d95..0000000000000000000000000000000000000000 --- a/spaces/zihanch/zihan/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Zihan -emoji: 💩 -colorFrom: green -colorTo: gray -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zonglin03/White-box-Cartoonization/wbc/guided_filter.py b/spaces/zonglin03/White-box-Cartoonization/wbc/guided_filter.py deleted file mode 100644 index fd019d145efc7f308cd96de90f4e7b648f6820b4..0000000000000000000000000000000000000000 --- a/spaces/zonglin03/White-box-Cartoonization/wbc/guided_filter.py +++ /dev/null @@ -1,87 +0,0 @@ -import tensorflow as tf -import numpy as np - - - - -def tf_box_filter(x, r): - k_size = int(2*r+1) - ch = x.get_shape().as_list()[-1] - weight = 1/(k_size**2) - box_kernel = weight*np.ones((k_size, k_size, ch, 1)) - box_kernel = np.array(box_kernel).astype(np.float32) - output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME') - return output - - - -def guided_filter(x, y, r, eps=1e-2): - - x_shape = tf.shape(x) - #y_shape = tf.shape(y) - - N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r) - - mean_x = tf_box_filter(x, r) / N - mean_y = tf_box_filter(y, r) / N - cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y - var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf_box_filter(A, r) / N - mean_b = tf_box_filter(b, r) / N - - output = mean_A * x + mean_b - - return output - - - -def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8): - - #assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4 - - lr_x_shape = tf.shape(lr_x) - #lr_y_shape = tf.shape(lr_y) - hr_x_shape = tf.shape(hr_x) - - N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r) - - mean_x = tf_box_filter(lr_x, r) / N - mean_y = tf_box_filter(lr_y, r) / N - cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y - var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf.image.resize_images(A, hr_x_shape[1: 3]) - mean_b = tf.image.resize_images(b, hr_x_shape[1: 3]) - - output = mean_A * hr_x + mean_b - - return output - - -if __name__ == '__main__': - import cv2 - from tqdm import tqdm - - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - #input_superpixel = tf.placeholder(tf.float32, [16, 256, 256, 3]) - output = guided_filter(input_photo, input_photo, 5, eps=1) - image = cv2.imread('output_figure1/cartoon2.jpg') - image = image/127.5 - 1 - image = np.expand_dims(image, axis=0) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - sess.run(tf.global_variables_initializer()) - - out = sess.run(output, feed_dict={input_photo: image}) - out = (np.squeeze(out)+1)*127.5 - out = np.clip(out, 0, 255).astype(np.uint8) - cv2.imwrite('output_figure1/cartoon2_filter.jpg', out)