diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/Avatar Friday Patcher V1.1 __HOT__.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/Avatar Friday Patcher V1.1 __HOT__.md deleted file mode 100644 index deab4b50df68137193797b766f2db1964d235cb3..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/Avatar Friday Patcher V1.1 __HOT__.md +++ /dev/null @@ -1,84 +0,0 @@ -## Avatar Friday Patcher V1.1 - - - - - - ![Avatar Friday Patcher V1.1 __HOT__](https://cdn.shopify.com/s/files/1/1207/0358/products/logo_back_patch_1200x1200.png?v\u003d1560967230) - - - - - -**Download ✅ [https://jinyurl.com/2tA014](https://jinyurl.com/2tA014)** - - - - - - - - - - - - - -# Avatar Friday Patcher v1.1: How to Fix Common Issues and Enjoy the Game - - - -If you are a fan of the Avatar franchise, you might have been eagerly waiting for the release of Avatar Friday, the new open-world RPG game based on the popular movie and TV series. However, some players have reported experiencing various issues with the game, such as crashes, glitches, low FPS, and missing features. Fortunately, there is a solution: Avatar Friday Patcher v1.1. - - - -Avatar Friday Patcher v1.1 is a fan-made mod that aims to improve the performance and stability of the game, as well as add some missing features and enhancements. The patcher is easy to use and compatible with most versions of the game. Here are some of the benefits of using Avatar Friday Patcher v1.1: - - - -- Fixes crashes and freezes that occur randomly or at certain points in the game. - -- Optimizes the graphics settings and reduces the CPU and GPU load, resulting in higher FPS and smoother gameplay. - -- Enables full-screen mode and custom resolutions, allowing you to play the game in your preferred display settings. - -- Adds missing features such as subtitles, controller support, achievements, and cloud saves. - -- Enhances the game's visuals and audio quality, making the world of Pandora more immersive and realistic. - -- Fixes bugs and glitches that affect the gameplay, such as broken quests, missing items, clipping issues, and more. - - - -To use Avatar Friday Patcher v1.1, you need to download it from the official website or a trusted source. Then, you need to extract the files to your game folder and run the patcher.exe file. The patcher will automatically detect your game version and apply the necessary changes. You can also customize some of the options according to your preferences. Once the patching process is done, you can launch the game and enjoy it without any problems. - - - -Avatar Friday Patcher v1.1 is a must-have mod for anyone who wants to play Avatar Friday without any hassle. It will make your gaming experience more enjoyable and satisfying. Download it today and see for yourself! - - - -Avatar Friday Patcher v1.1 is not only a mod that fixes and improves the game, but also a mod that adds new content and features. Here are some of the additional things you can do with Avatar Friday Patcher v1.1: - - - -- Explore new areas and locations that were not included in the original game, such as the Floating Mountains, the Tree of Souls, and the Hallelujah Mountains. - -- Interact with new characters and factions that have their own stories and quests, such as the Na'vi clans, the RDA soldiers, and the wildlife researchers. - -- Customize your avatar's appearance and skills, choosing from different races, genders, hairstyles, outfits, weapons, and abilities. 
- -- Collect and craft new items and resources, such as plants, minerals, artifacts, and equipment. - -- Ride and tame various creatures that inhabit Pandora, such as banshees, direhorses, thanators, and more. - - - -Avatar Friday Patcher v1.1 is a mod that transforms Avatar Friday into a more complete and satisfying game. It is compatible with most of the other mods available for the game, so you can mix and match them to create your own unique experience. If you are looking for a way to enhance your Avatar Friday adventure, you should definitely give Avatar Friday Patcher v1.1 a try! - - 145887f19f - - - - - diff --git a/spaces/1gistliPinn/ChatGPT4/Crystal-Cs4280-Cm-Ep-Sound-Card-Driver-FOR-WINDOWS-7181.md b/spaces/1gistliPinn/ChatGPT4/Crystal-Cs4280-Cm-Ep-Sound-Card-Driver-FOR-WINDOWS-7181.md deleted file mode 100644 index ce0d6c2924d56f4cd750adde6aba2a263d3eed81..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Crystal-Cs4280-Cm-Ep-Sound-Card-Driver-FOR-WINDOWS-7181.md +++ /dev/null @@ -1,65 +0,0 @@ -Crystal Cs4280 Cm Ep Sound Card Driver FOR WINDOWS 7.181 - - - -DOWNLOAD === [https://gohhs.com/2tvp6s](https://gohhs.com/2tvp6s) - - - - - - - - - -Here is a possible title and article for your keyword: - -How to Install Crystal Cs4280 Cm Ep Sound Card Driver for Windows 7.181 - -If you have a Crystal Cs4280 Cm Ep sound card and you want to use it with Windows 7.181, you may need to install a driver to make it work properly. A driver is a software that allows your computer to communicate with your hardware devices. Without a driver, your sound card may not function correctly or at all. - -In this article, we will show you how to download and install the Crystal Cs4280 Cm Ep sound card driver for Windows 7.181 in a few easy steps. We will also provide some tips on how to troubleshoot common issues that may arise during or after the installation process. - -Step 1: Download the Crystal Cs4280 Cm Ep Sound Card Driver - -The first step is to download the Crystal Cs4280 Cm Ep sound card driver from a reliable source. You can use one of the following links to download the driver file: - - -Crystal Digital cs4280-cm Drivers Download - Solvusoft [^1^] -Crystal CS4280/CS4614/CS4624 Sound Driver | Crystal Semiconductors [^2^] -Crystal Audio Drivers Cs4280-Cm | Audio-Digital.net [^3^] -Crystal Cs4280 Cm Driver Download Win7 [^4^] - - -Make sure you choose the correct version of the driver that matches your operating system and your sound card model. The file name should be something like d1265070.rar or Crystal_CS4281.zip. - -Save the file to a location where you can easily find it later, such as your desktop or downloads folder. - -Step 2: Extract the Crystal Cs4280 Cm Ep Sound Card Driver File - -The next step is to extract the contents of the driver file that you downloaded. The file is compressed in a .rar or .zip format, so you will need a software that can open and extract these types of files. You can use one of the following programs: - - -WinRAR -7-Zip -PeaZip - - -Right-click on the driver file and select "Extract Here" or "Extract to" from the menu. Choose a destination folder where you want to extract the files, such as your desktop or downloads folder. - -You should see a folder with the name of the driver file, such as d1265070 or Crystal_CS4281. Open this folder and look for a file named setup.exe or install.exe. This is the executable file that will install the driver on your computer. 
- -Step 3: Install the Crystal Cs4280 Cm Ep Sound Card Driver - -The final step is to run the setup.exe or install.exe file that you extracted in the previous step. Double-click on this file and follow the instructions on the screen to complete the installation process. - -You may need to agree to some terms and conditions, choose a language and a destination folder, and restart your computer after the installation is finished. - -Once the installation is done, you should be able to use your Crystal Cs4280 Cm Ep sound card with Windows 7.181 without any problems. - -Troubleshooting Tips - -If you encounter any issues during or after installing dfd1c89656 - - - diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Fifa 11 World Cup Patch Update V1.rar How to Get the Most Out of Your Fifa 11 Game.md b/spaces/1gistliPinn/ChatGPT4/Examples/Fifa 11 World Cup Patch Update V1.rar How to Get the Most Out of Your Fifa 11 Game.md deleted file mode 100644 index d8b9c50fa78e23a587d5630b72c13045cbda4b0c..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Fifa 11 World Cup Patch Update V1.rar How to Get the Most Out of Your Fifa 11 Game.md +++ /dev/null @@ -1,7 +0,0 @@ - -

Then, from 21st November to 18th December, a new "live" World Cup mode will be updated during the group and knockout stages, letting you play a single-player tournament along with real-world fixtures and squads for each game. You can also replay any past fixture to rewrite history and improve on England's inevitable real-world disappointment.

-

EA has released the FIFA 23 patch 1.04 details for PC, PS4, and Xbox One. According to the official FIFA 23 patch notes, the latest update added the FIFA World Cup 2022 to the game. Apart from this, FIFA 23 update 1.04 also includes stability fixes.

-

Fifa 11 World Cup Patch Update V1.rar


Download Zip === https://imgfil.com/2uxY43



-

This patch is based on FIFA 17, and will 'update' FIFA 11 to the 2016-17 season. The squads (player stats, team tactics, ...) are exactly the same as the FIFA 17 EA squad updates. The graphics (kits, shoes, ...) are mostly from FIFA 17, combined with files from FIFA Online 3 & FIFA 16 mods (FIFA Online 3 has updated 2014-15 --FIFA 15-- graphics).

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Blue 3 ft. Radio Weasel - Where You Are - Free MP3 and Lyrics Download.md b/spaces/1phancelerku/anime-remove-background/Blue 3 ft. Radio Weasel - Where You Are - Free MP3 and Lyrics Download.md deleted file mode 100644 index be17fc817ce44d5280180d5ca1ad7e6ac3cd4d77..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Blue 3 ft. Radio Weasel - Where You Are - Free MP3 and Lyrics Download.md +++ /dev/null @@ -1,105 +0,0 @@ - -

How to Download Blue 3's "Where You Are" as an MP3 File

-

If you love Ugandan music, you probably know Blue 3, the girl group that rose to fame in 2005 after winning a talent show. And you probably know their hit song "Where You Are", which features Radio and Weasel, another popular duo in the Ugandan music scene.

-

"Where You Are" is a catchy and romantic song that blends Afrobeat, R&B, and dancehall genres. It has over 20,000 views on YouTube and has been praised by critics and fans alike.

-

download blue 3 where you are mp3


DOWNLOAD 🆗 https://jinyurl.com/2uNSAg



-

But what if you want to listen to this song offline, without any interruptions or ads? What if you want to save some storage space on your device and still enjoy the high-quality sound of this song?

-

The answer is simple: download "Where You Are" as an MP3 file.

-

In this article, we'll show you how to do that in a few easy steps. We'll also give you some alternative sources where you can download this song as an MP3 file.

-

So let's get started!

-

What is Blue 3 and "Where You Are"?

-

Blue 3 was a Ugandan girl group that consisted of Jackie Chandiru, Lillian Mbabazi, and Cindy Sanyu. They formed in 2005 after winning a talent show called Coca-Cola Popstars.

-

The group released their debut album "Hitaji" in 2006, which featured songs like "Burrn", "Ndayila", and "Hitaji". They also collaborated with other Ugandan artists like Bebe Cool, Jose Chameleone, and Bobi Wine.

-

"Where You Are" was one of their most successful songs, released in 2008. It featured Radio and Weasel, who were part of the Goodlyfe Crew at the time. The song was a love ballad that expressed the desire to be with someone no matter where they are.

-

The song was well-received by both fans and critics, who praised its catchy melody, smooth vocals, and sweet lyrics. It also won several awards, including Song of the Year at the Pearl of Africa Music Awards in 2008.

-


-

Why Download "Where You Are" as an MP3 File?

-

Downloading "Where You Are" as an MP3 file has many advantages over streaming it online or playing it from a CD. Here are some of them:

- -

As you can see, downloading "Where You Are" as an MP3 file is a smart and convenient way to enjoy this song anytime, anywhere.

-

How to Download "Where You Are" as an MP3 File from YouTube

-

One of the easiest ways to download "Where You Are" as an MP3 file is to use YouTube, where you can find the official video of the song. Here are the steps you need to follow:

-
    -
  1. Go to YouTube and search for "Blue 3 Where You Are".
  2. -
  3. Select the video that has the title "Blue 3 ft Radio & Weasel - Where You Are (Official Video)" and has over 20,000 views. This is the official video of the song.
  4. -
  5. Copy the URL of the video from the address bar of your browser.
  6. -
  7. Go to a website that can convert YouTube videos into MP3 files, such as ytmp3.cc, y2mate.com, or onlinevideoconverter.com.
  8. -
  9. Paste the URL of the video into the input box of the website and click on "Convert" or "Download".
  10. -
  11. Wait for a few seconds until the conversion is done and then click on "Download" or "Save" to save the MP3 file to your device.
  12. -
-

Congratulations! You have successfully downloaded "Where You Are" as an MP3 file from YouTube. You can now play it on your device or transfer it to another device.

-

How to Download "Where You Are" as an MP3 File from Other Sources

-

If you don't want to use YouTube or you want to explore other sources where you can download "Where You Are" as an MP3 file, here are some options you can try:

- - - - - - - - - - - - - - - - - -
SourceHow to Download
SoundCloudGo to soundcloud.com and search for "Blue 3 Where You Are". Select the track that has the title "Blue 3 ft Radio & Weasel - Where You Are (Official Audio)" and has over 1,000 plays. This is the official audio of the song. Click on the "More" button below the track and then click on "Download file". Save the MP3 file to your device.
SpotifyGo to spotify.com and sign up for a free account or log in if you already have one. Search for "Blue 3 Where You Are". Select the track that has the title "Where You Are (feat. Radio & Weasel)" and has over 10,000 streams. This is the official track of the song. Click on the "..." button next to the track and then click on "Save to Your Library". Go to your library and find the track under "Liked Songs". Click on the "..." button again and then click on "Download". Wait for the download to finish and then play the MP3 file on your device.
iTunesGo to itunes.apple.com and search for "Blue 3 Where You Are". Select the track that has the title "Where You Are (feat. Radio & Weasel)" and has a price of $0.99. This is the official track of the song. Click on the "Buy" button and enter your payment details. After purchasing, go to your library and find the track under "Purchased". Click on the "Download" button and save the MP3 file to your device.
-

Conclusion

-

In this article, we have shown you how to download Blue 3's song "Where You Are" as an MP3 file from various sources. We have also explained why downloading this song as an MP3 file is a good idea.

-

"Where You Are" is a beautiful song that deserves to be listened to over and over again. By downloading it as an MP3 file, you can enjoy it offline, without ads, and with high-quality sound.

-

So what are you waiting for? Download "Where You Are" as an MP3 file today and enjoy this Ugandan masterpiece!

-

FAQs

-

Here are some frequently asked questions and answers about downloading "Where You Are" as an MP3 file:

-

Q: Is it legal to download "Where You Are" as an MP3 file?

-

A: It depends on where you download it from and how you use it. If you download it from a source that has permission from the artists or the record label, or if you use it for personal and non-commercial purposes, then it is legal. However, if you download it from a source that does not have permission or if you use it for commercial or public purposes, then it is illegal. You should always respect the intellectual property rights of the creators and follow the terms and conditions of the source you download from.

-

Q: How can I play "Where You Are" as an MP3 file on my device?

-

A: Once you have downloaded "Where You Are" as an MP3 file, you can play it on any device that supports MP3 playback. For example, you can play it on your smartphone using the default music player app or any other app that can play MP3 files. You can also play it on your laptop or desktop computer using a program like Windows Media Player, VLC Media Player, or iTunes. You can also transfer it to an MP3 player or a USB drive and play it on any compatible device.

-

Q: How can I share "Where You Are" as an MP3 file with my friends?

-

A: If you want to share "Where You Are" as an MP3 file with your friends, you can do so in several ways. For example, you can send it to them via email, WhatsApp, Telegram, or any other messaging app. You can also upload it to a cloud service like Google Drive, Dropbox, or OneDrive and share the link with them. You can also burn it to a CD or copy it to a USB drive and give it to them physically. However, you should always make sure that you have permission from the artists or the record label before sharing their music with others.

-

Q: How can I support Blue 3 and their music?

-

A: If you love Blue 3 and their music, you can support them in various ways. For example, you can buy their albums or songs from official sources like iTunes, Spotify, or Amazon. You can also stream their music from legal platforms like YouTube, SoundCloud, or Deezer. You can also follow them on social media like Facebook, Twitter, or Instagram and show them some love and appreciation. You can also attend their concerts or events if they are available in your area. By supporting Blue 3 and their music, you are helping them to continue making amazing songs for their fans.

-

Q: Where can I find more information about Blue 3 and their music?

-

A: If you want to find more information about Blue 3 and their music, you can visit their official website at www.blue3music.com. There you can find their biography, discography, news, photos, videos, and contact details. You can also check out their Wikipedia page at https://en.wikipedia.org/wiki/Blue_3_(group) for more facts and history about them. You can also search for them on Google or any other search engine for more articles and reviews about them.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Geometry Dash Lite APK for Android 2.3 and Enjoy Rhythm-based Action Platforming!.md b/spaces/1phancelerku/anime-remove-background/Download Geometry Dash Lite APK for Android 2.3 and Enjoy Rhythm-based Action Platforming!.md deleted file mode 100644 index 71739db3c12fdb14051fb2d5cbd7033ce54f5c2a..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Geometry Dash Lite APK for Android 2.3 and Enjoy Rhythm-based Action Platforming!.md +++ /dev/null @@ -1,106 +0,0 @@ - -

Geometry Dash Lite: A Rhythm-Based Action Platformer for Android 2.3

-

If you are looking for a fun and challenging game that will test your reflexes and timing, you might want to try Geometry Dash Lite. Geometry Dash Lite is a free version of the popular game Geometry Dash, which is a rhythm-based action platformer that has millions of fans around the world. In this article, we will tell you what Geometry Dash Lite is, what features it offers, and how to download and install it on your Android device running version 2.3 or higher.

-

geometry dash lite apk android 2.3


Download ✵✵✵ https://jinyurl.com/2uNRjJ



-

What is Geometry Dash Lite?

-

Geometry Dash Lite is a game developed by RobTop Games AB, a Swedish game studio that specializes in creating addictive and colorful games. Geometry Dash Lite is a simplified version of Geometry Dash, which has more levels, soundtracks, achievements, and an online level editor. However, Geometry Dash Lite still offers plenty of fun and challenge for casual and hardcore gamers alike.

-

Features of Geometry Dash Lite

-

Geometry Dash Lite has many features that make it an enjoyable and engaging game. Here are some of them:

-

Rhythm-based action platforming

-

The core gameplay of Geometry Dash Lite is based on jumping, flying, and flipping your way through dangerous passages and spiky obstacles. You have to tap the screen at the right moment to avoid crashing and losing. The game is synchronized with the music, so you have to follow the rhythm and the beat to succeed. The game is fast-paced and requires quick reflexes and concentration.

-

Customization options

-

You can customize your character in Geometry Dash Lite by unlocking new icons and colors. You can also choose from different vehicles, such as rockets, gravity balls, UFOs, and more. You can mix and match different combinations to create your own unique style.

-

Various game modes and levels

-

Geometry Dash Lite has several game modes to keep you entertained for hours. You can play the normal mode, where you have to complete the levels in order. You can also play the practice mode, where you can set checkpoints and practice your skills. You can also play the challenge mode, where you have to complete random levels with increasing difficulty. The game has 13 levels in total, each with its own soundtrack and theme.

-

How to download and install Geometry Dash Lite apk for Android 2.3?

-

If you want to play Geometry Dash Lite on your Android device running version 2.3 or higher, you will need to download and install the apk file of the game. An apk file is a package file that contains all the necessary files and data for an app to run on your device. Here are the requirements and steps to download and install Geometry Dash Lite apk:

-

Requirements for Geometry Dash Lite apk

-

Before you download and install Geometry Dash Lite apk, you need to make sure that your device meets the following requirements:

-

Android version

-

Your device must have Android version 2.3 or higher to run Geometry Dash Lite apk. You can check your device's Android version by going to Settings > About phone > Software information.

-

-

Storage space

-

You need to have enough free storage space on your device to download and install Geometry Dash Lite apk. The size of the apk file is about 50 MB, so you need at least 100 MB of free space to avoid any errors or issues.

-

Permissions

You also need to grant some permissions to Geometry Dash Lite apk to run properly on your device. The permissions are:

- -

You can review and manage these permissions by going to Settings > Apps > Geometry Dash Lite > Permissions.

-

Steps to download and install Geometry Dash Lite apk

-

After you have checked the requirements, you can follow these steps to download and install Geometry Dash Lite apk on your device:

-

Download the apk file from a trusted source

-

The first step is to download the apk file of Geometry Dash Lite from a reliable and secure source. You can use your browser or a third-party app store to find and download the apk file. However, you need to be careful and avoid any malicious or fake links that might harm your device or steal your data. You can use this link to download the latest version of Geometry Dash Lite apk from APKPure, a trusted and verified app store.

-

Enable unknown sources in your device settings

-

The next step is to enable unknown sources in your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on. You might see a warning message that installing apps from unknown sources might be risky, but you can ignore it if you trust the source of the apk file.

-

Locate and install the apk file

-

The third step is to locate and install the apk file on your device. You can use a file manager app or your browser's downloads folder to find the apk file. Once you find it, tap on it and follow the instructions on the screen to install it. You might see a confirmation message asking if you want to install this app; just tap on Install and wait for the process to finish.

-

Launch and enjoy the game

-

The final step is to launch and enjoy the game. You can find the Geometry Dash Lite icon on your home screen or app drawer. Tap on it and start playing the game. You can adjust the settings, choose a level, customize your character, and have fun with the rhythm-based action platforming.
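If you prefer to handle the install from a computer instead, the same APK can be sideloaded over USB with Android's adb tool. The snippet below is only a minimal sketch, not part of the original guide: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the downloaded file is named geometry-dash-lite.apk (a hypothetical name; use whatever your file is actually called).

```python
import subprocess

# Hypothetical file name - replace it with the APK you actually downloaded.
APK_PATH = "geometry-dash-lite.apk"

# List connected devices first; an empty or "unauthorized" entry means the
# phone is not ready to accept the install yet.
subprocess.run(["adb", "devices"], check=True)

# "-r" tells adb to replace an existing installation instead of failing.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```

Either route ends the same way: the Geometry Dash Lite icon appears on your home screen or app drawer, ready to launch.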

-

Conclusion

-

Geometry Dash Lite is a great game for anyone who loves music, action, and challenge. It is a free version of Geometry Dash, which has more features and content. However, Geometry Dash Lite still offers plenty of fun and excitement for casual and hardcore gamers alike. You can download and install Geometry Dash Lite apk on your Android device running version 2.3 or higher by following the steps we have explained in this article. We hope you enjoy playing Geometry Dash Lite and have a blast with the rhythm-based action platforming.

-

FAQs

- -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Euphoria Season 1 Download Where to Find the Full Episodes Online.md b/spaces/1phancelerku/anime-remove-background/Euphoria Season 1 Download Where to Find the Full Episodes Online.md deleted file mode 100644 index e52ada2bef4bf65124b6f06fd4edeafc69e7eefd..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Euphoria Season 1 Download Where to Find the Full Episodes Online.md +++ /dev/null @@ -1,131 +0,0 @@ - -

Download Euphoria Season 1 Reddit: How to Watch the Hit HBO Series Online

-

If you are looking for a way to download Euphoria season 1 via Reddit, you are not alone. Euphoria is one of the most popular and acclaimed shows of recent years, and many people want to watch it online. But how can you download Euphoria season 1 from Reddit safely and legally? And what are the pros and cons of doing so? In this article, we will answer these questions and more.

-

download euphoria season 1 reddit


Download File ✺✺✺ https://jinyurl.com/2uNJMt



-

What is Euphoria and why should you watch it?

-

Euphoria is a drama series that follows a group of high-school students as they navigate a minefield of drugs, sex, identity, trauma, social media, love and friendship in today's increasingly unstable world. The show stars Zendaya as Rue, a 17-year-old drug addict who falls in love with Jules, a transgender girl played by Hunter Schafer. The show also features other talented actors such as Sydney Sweeney, Maude Apatow, Jacob Elordi, Alexa Demie, Barbie Ferreira, Algee Smith, Storm Reid, Angus Cloud, Eric Dane, Nika King and Colman Domingo.

-

A brief summary of Euphoria season 1

-

Euphoria season 1 consists of eight episodes that aired on HBO from June to August 2019. The season also has two special episodes that were released in December 2020 and January 2021. Here is a brief summary of what happens in each episode:

- -

A comparison table of the three file sharing apps

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
AppSpeedData usageSecurityFeatures
XenderUp to 40 MB/sNo data needed6-digit key or QR codeVideo and music player, downloader, video to MP3 converter, social media downloader, game center
ZapyaUp to 10 MB/sNo data needed6-digit key or QR codeGIF viewer, phone clone, offline chat, group sharing, QR code sharing, web share
Files by GoogleUp to 480 MbpsNo data needed for offline sharing, data needed for online sharing and backupEnd-to-end encryption for online sharing, PIN or pattern lock for offline sharingCleaner, backup, offline media player, nearby share, smart recommendations
-

As you can see, each app has its own advantages and disadvantages. You can choose the one that suits your needs and preferences best. However, if you want a simple, fast, and secure file sharing app that also helps you manage your device storage and files, then Files by Google might be the best option for you.

-

I hope this article has helped you learn more about SHAREit 2016 APK and its alternatives. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

-
-
\ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_chatglm.py b/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_chatglm.py deleted file mode 100644 index 9d035de9889f378d3582ac0a57af596b14786fc7..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_chatglm.py +++ /dev/null @@ -1,161 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "ChatGLM尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,ChatGLM消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.chatglm_model = None - self.chatglm_tokenizer = None - self.info = "" - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import sentencepiece - self.info = "依赖检测通过" - self.success = True - except: - self.info = "缺少ChatGLM的依赖,如果要使用ChatGLM,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_chatglm.txt`安装ChatGLM的依赖。" - self.success = False - - def ready(self): - return self.chatglm_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - retry = 0 - while True: - try: - if self.chatglm_model is None: - self.chatglm_tokenizer = AutoTokenizer.from_pretrained("fb700/chatglm-fitness-RLHF", trust_remote_code=True) - device, = get_conf('LOCAL_MODEL_DEVICE') - if device=='cpu': - self.chatglm_model = AutoModel.from_pretrained("fb700/chatglm-fitness-RLHF", trust_remote_code=True).float() - else: - self.chatglm_model = AutoModel.from_pretrained("fb700/chatglm-fitness-RLHF", trust_remote_code=True).half().quantize(8).cuda() - self.chatglm_model = self.chatglm_model.eval() - break - else: - break - except: - retry += 1 - if retry > 3: - self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。') - raise RuntimeError("不能正常加载ChatGLM的参数!") - - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - # 收到消息,开始请求 - try: - for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs): - self.child.send(response) - # # 中途接收可能的终止指令(如果有的话) - # if self.child.poll(): - # command = self.child.recv() - # if command == '[Terminate]': break - except: - from toolbox import trimmed_format_exc - self.child.send('[Local Message] Call ChatGLM fail.' 
+ '\n```\n' + trimmed_format_exc() + '\n```\n') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global glm_handle -glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glm_handle.info - if not glm_handle.success: - error = glm_handle.info - glm_handle = None - raise RuntimeError(error) - - # chatglm 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - history_feedin.append(["What can I do?", sys_prompt]) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not glm_handle.success: - glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - history_feedin.append(["What can I do?", system_prompt] ) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收chatglm的回复 - response = "[Local Message]: 等待ChatGLM响应中 ..." - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待ChatGLM响应中 ...": - response = "[Local Message]: ChatGLM响应异常 ..." 
- history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/fcakyon/sahi-yolox/README.md b/spaces/fcakyon/sahi-yolox/README.md deleted file mode 100644 index b2d1466eef3b02ad0255628e0d188e96aa93bc23..0000000000000000000000000000000000000000 --- a/spaces/fcakyon/sahi-yolox/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Small Object Detection with YOLOX -emoji: 🚀 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -# Configuration -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Candy Crush Saga APK The Best Way to Experience the Delicious Puzzle Adventure.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Candy Crush Saga APK The Best Way to Experience the Delicious Puzzle Adventure.md deleted file mode 100644 index 6162c287bd90881bfc23679417da034ca3baf12f..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Candy Crush Saga APK The Best Way to Experience the Delicious Puzzle Adventure.md +++ /dev/null @@ -1,61 +0,0 @@ -
-

Candy Crush Saga APKMirror: A Sweet and Addictive Puzzle Game

-

If you are looking for a fun and challenging puzzle game to play on your mobile device, you might want to check out Candy Crush Saga. Candy Crush Saga is one of the most popular and successful games of all time, with over a billion downloads and millions of players around the world. In this article, we will tell you everything you need to know about Candy Crush Saga APKMirror, including its features, tips and tricks, and alternatives.

-

What is Candy Crush Saga and why is it popular?

-

Candy Crush Saga is a free-to-play tile-matching game developed by King. The game was released in 2012 for Facebook, and later for iOS, Android, Windows Phone, and Windows 10. The game is a variation of their browser game Candy Crush, which was inspired by the classic game Bejeweled.

-

candy crush saga apkmirror


Downloadhttps://gohhs.com/2uPqfA



-

The game's premise is simple: you have to match three or more candies of the same color in a row to clear them from the board and score points. You have to complete different objectives in each level, such as reaching a target score, clearing all the jelly blocks, collecting ingredients, or beating the clock. The game has thousands of levels, each with its own layout, obstacles, and challenges.

-

Candy Crush Saga is popular because it is easy to play but hard to master. It has colorful graphics, catchy music, and satisfying sound effects. It also has a social element, as you can connect with your Facebook friends and compare your scores, send and receive lives, and help each other out. The game is constantly updated with new features and events, keeping it fresh and exciting.

-


-

What are the main features of Candy Crush Saga and how to play it?

-

Candy Crush Saga has many features that make it fun and addictive. Here are some of them:

- -

To play Candy Crush Saga APKMirror on your mobile device, you need to download the APK file from [1](https://www.apkmirror.com/apk/king/candy-crush-saga/). An APK file is an Android application package that contains all the files needed to install an app on your device. To install an APK file on your device, you need to enable unknown sources in your security settings. If you run into problems, you can tap on the help center icon, then tap on the contact us button. You can also visit their website at [2](https://king.com/contact).

-

How do I uninstall Candy Crush Saga from my device?

-

You can uninstall Candy Crush Saga from your device by going to your device settings, then tapping on apps, then tapping on Candy Crush Saga, then tapping on uninstall. You can also long-press on the app icon and drag it to the trash bin.

-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Scary Teacher 3D and Explore the Open World Style Interactive House.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Scary Teacher 3D and Explore the Open World Style Interactive House.md deleted file mode 100644 index 8ec3ab9418209974db903ac26599e6fcd42bceaa..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Scary Teacher 3D and Explore the Open World Style Interactive House.md +++ /dev/null @@ -1,112 +0,0 @@ -
-

Scary Teacher 3D: How to Download and Play This Fun Prank Game on Your Device

-

Do you enjoy playing pranks on your friends and family? Do you have a grudge against your mean teacher who always gives you a hard time? If yes, then you might want to try Scary Teacher 3D, a hilarious and thrilling game that lets you sneak into your teacher's house and make her life miserable. In this article, we will tell you what Scary Teacher 3D is, how to download it on your device, and how to play it with some tips and tricks. We will also suggest some alternatives to Scary Teacher 3D in case you want more prank games. So, let's get started!

-

What is Scary Teacher 3D?

-

Scary Teacher 3D is an action and simulation game developed by Z & K Games for mobile platforms. It has been downloaded over 100 million times on Google Play Store and has a rating of 4.2 out of 5 stars. It is also available on App Store for iOS devices, where it has a rating of 4.4 out of 5 stars. The game is suitable for kids of all ages, but it also has some horror themes that might scare some players.

-

scary teacher 3d download app store


Download File ★★★★★ https://gohhs.com/2uPuwV



-

The Story and the Gameplay

-

The game revolves around a genius girl and her worst high school teacher, Miss T, who has been threatening kids, handing out physical punishment, and at times outright torturing them. The girl decides to teach her a lesson by scaring her and ruining her day. She sneaks into Miss T's house, which has 15 rooms, each hiding an unsolved mystery. She pulls off various pranks and frees the pets held in the teacher's custody, all while avoiding getting caught. She also recovers the victims' photos, threatened pets, a chocolate cake, and chocolates. The game features an open-world-style interactive house, where the player can explore different rooms and find items to use for pranks. The house also has a basement that hides something surprising.

-

The Features and the Graphics

-

The game has many features that make it fun and exciting to play. Some of them are:

- -

The game also has impressive graphics that create a realistic and immersive experience for the player. The game uses Unity WebGL technology, which allows for smooth animations and high-quality visuals. The game also has sound effects and music that add to the atmosphere of the game.

-

How to Download Scary Teacher 3D on Your Device?

-

If you want to download Scary Teacher 3D on your device, you can follow these simple steps:

-

For iOS Users

-
    -
  1. Go to App Store on your device or click [here](^1^) to access the game's page.
  2. -
  3. Tap on the "Get" button to download the game for free.
  4. -
  5. Wait for the download to finish and then tap on the game icon to launch it.
  6. -
  7. Enjoy playing Scary Teacher 3D!
  8. -
-

For Android Users

-
    -
  1. Go to Google Play Store on your device or click [here](^2^) to access the game's page.
  2. -
  3. Tap on the " Install" button to download the game for free.
  4. -
  5. Wait for the download to finish and then tap on the game icon to launch it.
  6. -
  7. Enjoy playing Scary Teacher 3D!
  8. -
-

How to Play Scary Teacher 3D?

-

Playing Scary Teacher 3D is easy and fun. You just need to follow these basic steps:

- -

Tips and Tricks to Prank Your Creepy Teacher

-

If you want to prank your teacher like a pro, you might want to use some of these tips and tricks:

- -

Alternatives to Scary Teacher 3D

-

If you love pranking games, you might also want to check out some of these alternatives to Scary Teacher 3D:

-

- - - - - - - -
NameDescription
Scary Neighbor 3DA game where you prank your scary neighbor who lives next door.
Hello NeighborA game where you sneak into your neighbor's house and find out his secrets.
GrannyA game where you escape from a creepy granny who wants to kill you.
Kick The BuddyA game where you unleash your anger on a ragdoll buddy with various weapons.
Troll Face QuestA game where you solve funny puzzles and troll popular characters.
-

Conclusion

-

Scary Teacher 3D is a fun and exciting game that lets you prank your mean teacher and make her pay for her crimes. You can download it for free on your iOS or Android device and enjoy its amazing graphics, features, and gameplay. You can also try some of the alternatives to Scary Teacher 3D if you want more prank games. So, what are you waiting for? Download Scary Teacher 3D now and have a blast!

-

FAQs

-

Here are some of the frequently asked questions about Scary Teacher 3D:

-
    -
  1. How many levels are there in Scary Teacher 3D?
  2. -

    There are currently 15 levels in Scary Teacher 3D, each with a different prank and a different room. The developers might add more levels in the future updates.

    -
  3. How can I get more coins in Scary Teacher 3D?
  4. -

    You can get more coins in Scary Teacher 3D by completing levels, watching ads, or buying them with real money. You can use coins to buy items or unlock new levels.

    -
  5. Is Scary Teacher 3D offline or online?
  6. -

    Scary Teacher 3D is an offline game, which means you can play it without an internet connection. However, some features like watching ads or buying coins might require an internet connection.

    -
  7. Is Scary Teacher 3D scary or funny?
  8. -

    Scary Teacher 3D is both scary and funny, depending on your perspective. The game has some horror themes and jump scares that might frighten some players, but it also has a lot of humor and comedy that might make you laugh. The game is not meant to be taken seriously, but rather as a fun way to prank your teacher.

    -
  9. Can I play Scary Teacher 3D on PC or Mac?
  10. -

    Scary Teacher 3D is designed for mobile devices, but you can also play it on PC or Mac using an emulator. An emulator is a software that mimics the functions of a mobile device on your computer. You can download an emulator like BlueStacks or NoxPlayer and then install Scary Teacher 3D from the emulator's app store. However, playing Scary Teacher 3D on PC or Mac might not be as smooth as playing it on your phone or tablet.

    -

-
-
\ No newline at end of file diff --git a/spaces/firsk/ai_otto/models.py b/spaces/firsk/ai_otto/models.py deleted file mode 100644 index dd9e0c087357ecfc5a1548eddb5a30d77d2b5bf5..0000000000000000000000000000000000000000 --- a/spaces/firsk/ai_otto/models.py +++ /dev/null @@ -1,986 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages - - -class DurationDiscriminator(nn.Module): # vits2 - def __init__( - self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0 - ): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d( - in_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d( - filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d( - 2 * filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d( - filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential(nn.Linear(filter_channels, 1), nn.Sigmoid()) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - - -class TransformerCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = ( - attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - isflow=True, - gin_channels=self.gin_channels, - ) - if share_parameter - else None - ) 
- - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer( - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout, - filter_channels, - mean_only=True, - wn_sharing_parameter=self.wn, - gin_channels=self.gin_channels, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class StochasticDurationPredictor(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - ): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append( - modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3) - ) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv( - filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout - ) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append( - modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3) - ) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv( - filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout - ) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = ( - torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) - * x_mask - ) - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum( - (F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2] - ) - logq = ( - torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q**2)) * x_mask, [1, 2]) - - logdet_tot_q - ) - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = ( - torch.sum(0.5 * (math.log(2 * math.pi) + (z**2)) * x_mask, [1, 2]) - - logdet_tot - ) - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = ( - torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) - * 
noise_scale - ) - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__( - self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0 - ): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d( - in_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d( - filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__( - self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0, - ): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels**-0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels**-0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - self.ja_bert_proj = nn.Conv1d(768, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, ja_bert, g=None): - bert_emb = self.bert_proj(bert).transpose(1, 2) - ja_bert_emb = self.ja_bert_proj(ja_bert).transpose(1, 2) - x = ( - self.emb(x) - + self.tone_emb(tone) - + self.language_emb(language) - + bert_emb - + ja_bert_emb - ) * math.sqrt( - self.hidden_channels - ) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = 
n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for layer in self.ups: - remove_weight_norm(layer) - for layer in self.resblocks: - 
layer.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm is False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for layer in self.convs: - x = layer(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm is False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for layer in self.convs: - x = layer(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class ReferenceEncoder(nn.Module): - """ - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - """ - - def __init__(self, spec_channels, gin_channels=0): - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [ - weight_norm( - nn.Conv2d( - in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), 
- padding=(1, 1), - ) - ) - for i in range(K) - ] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) # noqa: E501 - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU( - input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True, - ) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer=4, - n_layers_trans_flow=6, - flow_share_parameter=False, - use_transformer_flow=True, - **kwargs - ): - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get( - "use_spk_conditioned_encoder", True - ) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder( - n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - if use_transformer_flow: - self.flow = TransformerCouplingBlock( - inter_channels, 
- hidden_channels, - filter_channels, - n_heads, - n_layers_trans_flow, - 5, - p_dropout, - n_flow_layer, - gin_channels=gin_channels, - share_parameter=flow_share_parameter, - ) - else: - self.flow = ResidualCouplingBlock( - inter_channels, - hidden_channels, - 5, - 1, - n_flow_layer, - gin_channels=gin_channels, - ) - self.sdp = StochasticDurationPredictor( - hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels - ) - self.dp = DurationPredictor( - hidden_channels, 256, 3, 0.5, gin_channels=gin_channels - ) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert, ja_bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p( - x, x_lengths, tone, language, bert, ja_bert, g=g - ) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum( - -0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True - ) # [b, 1, t_s] - neg_cent2 = torch.matmul( - -0.5 * (z_p**2).transpose(1, 2), s_p_sq_r - ) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul( - z_p.transpose(1, 2), (m_p * s_p_sq_r) - ) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum( - -0.5 * (m_p**2) * s_p_sq_r, [1], keepdim=True - ) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = ( - torch.std(neg_cent) - * torch.randn_like(neg_cent) - * self.current_mas_noise_scale - ) - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = ( - monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)) - .unsqueeze(1) - .detach() - ) - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum( - x_mask - ) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return ( - o, - l_length, - attn, - ids_slice, - x_mask, - y_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - (x, logw, logw_), - ) - - def infer( - self, - x, - x_lengths, - sid, - tone, - language, - bert, - ja_bert, - noise_scale=0.667, - length_scale=1, - noise_scale_w=0.8, - max_len=None, - sdp_ratio=0, - y=None, - ): - # x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p( - x, x_lengths, tone, language, bert, ja_bert, g=g - ) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * ( - sdp_ratio - ) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 
1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to( - x_mask.dtype - ) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose( - 1, 2 - ) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose( - 1, 2 - ) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" "b/spaces/fkhuggingme/gpt-academic/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" deleted file mode 100644 index a564f21d231cd65c29b539573929ca5d2df63203..0000000000000000000000000000000000000000 --- "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" +++ /dev/null @@ -1,54 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - -def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - - i_say = f'请对下面的程序文件做一个概述,并对文件中的所有函数生成注释,使用markdown表格输出结果,文件名是{os.path.relpath(fp, project_folder)},文件内容是 ```{file_content}```' - i_say_show_user = f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述,并对文件中的所有函数生成注释: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - i_say, i_say_show_user, llm_kwargs, chatbot, history=[], sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - if not fast_debug: - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] - - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, 
chatbot, history, system_prompt) diff --git a/spaces/flax-community/t5-vae/t5_vae_flax_alt/README.md b/spaces/flax-community/t5-vae/t5_vae_flax_alt/README.md deleted file mode 100644 index b45db5cdddac281e3fffbdb06e87dd68a03d4985..0000000000000000000000000000000000000000 --- a/spaces/flax-community/t5-vae/t5_vae_flax_alt/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# t5-vae-flax - -Model code for running a T5-VAE with flax. diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/empty.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/empty.py deleted file mode 100644 index 53357a1ca411a9c92d57b7fcfbcc431b5b1ab14c..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/empty.py +++ /dev/null @@ -1,92 +0,0 @@ -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - -class EmptyEnv(MiniGridEnv): - """ - Empty grid environment, no obstacles, sparse reward - """ - - def __init__( - self, - size=8, - agent_start_pos=(1,1), - agent_start_dir=0, - ): - self.agent_start_pos = agent_start_pos - self.agent_start_dir = agent_start_dir - - super().__init__( - grid_size=size, - max_steps=4*size*size, - # Set this to True for maximum speed - see_through_walls=True - ) - - def _gen_grid(self, width, height): - # Create an empty grid - self.grid = Grid(width, height) - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # Place a goal square in the bottom-right corner - self.put_obj(Goal(), width - 2, height - 2) - - # Place the agent - if self.agent_start_pos is not None: - self.agent_pos = self.agent_start_pos - self.agent_dir = self.agent_start_dir - else: - self.place_agent() - - self.mission = "get to the green goal square" - -class EmptyEnv5x5(EmptyEnv): - def __init__(self, **kwargs): - super().__init__(size=5, **kwargs) - -class EmptyRandomEnv5x5(EmptyEnv): - def __init__(self): - super().__init__(size=5, agent_start_pos=None) - -class EmptyEnv6x6(EmptyEnv): - def __init__(self, **kwargs): - super().__init__(size=6, **kwargs) - -class EmptyRandomEnv6x6(EmptyEnv): - def __init__(self): - super().__init__(size=6, agent_start_pos=None) - -class EmptyEnv16x16(EmptyEnv): - def __init__(self, **kwargs): - super().__init__(size=16, **kwargs) - -register( - id='MiniGrid-Empty-5x5-v0', - entry_point='gym_minigrid.envs:EmptyEnv5x5' -) - -register( - id='MiniGrid-Empty-Random-5x5-v0', - entry_point='gym_minigrid.envs:EmptyRandomEnv5x5' -) - -register( - id='MiniGrid-Empty-6x6-v0', - entry_point='gym_minigrid.envs:EmptyEnv6x6' -) - -register( - id='MiniGrid-Empty-Random-6x6-v0', - entry_point='gym_minigrid.envs:EmptyRandomEnv6x6' -) - -register( - id='MiniGrid-Empty-8x8-v0', - entry_point='gym_minigrid.envs:EmptyEnv' -) - -register( - id='MiniGrid-Empty-16x16-v0', - entry_point='gym_minigrid.envs:EmptyEnv16x16' -) diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/imitationcasestudyenvs.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/imitationcasestudyenvs.py deleted file mode 100644 index e2c05e7226cb74beb37f49e2a4f56961386a36ed..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/imitationcasestudyenvs.py +++ /dev/null @@ -1,224 +0,0 @@ -from gym_minigrid.social_ai_envs.socialaiparamenv import SocialAIParamEnv -from gym_minigrid.parametric_env import * -from 
gym_minigrid.register import register - -import inspect, importlib - -# for used for automatic registration of environments -defined_classes = [name for name, _ in inspect.getmembers(importlib.import_module(__name__), inspect.isclass)] - - -# Emulation case study (table 2) - -# emulation without distractor -# training -class EEmulationNoDistrInformationSeekingParamEnv(SocialAIParamEnv): - - def construct_tree(self): - tree = ParameterTree() - - env_type_nd = tree.add_node("Env_type", type="param") - - # Information seeking - inf_seeking_nd = tree.add_node("Information_seeking", parent=env_type_nd, type="value") - - prag_fr_compl_nd = tree.add_node("Pragmatic_frame_complexity", parent=inf_seeking_nd, type="param") - tree.add_node("Eye_contact", parent=prag_fr_compl_nd, type="value") - - # scaffolding - scaffolding_nd = tree.add_node("Scaffolding", parent=inf_seeking_nd, type="param") - scaffolding_N_nd = tree.add_node("N", parent=scaffolding_nd, type="value") - - cue_type_nd = tree.add_node("Cue_type", parent=scaffolding_N_nd, type="param") - tree.add_node("Emulation", parent=cue_type_nd, type="value") - - problem_nd = tree.add_node("Problem", parent=inf_seeking_nd, type="param") - - boxes_nd = tree.add_node("Boxes", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=boxes_nd, type="param") - tree.add_node("1", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=boxes_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - switches_nd = tree.add_node("Switches", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=switches_nd, type="param") - tree.add_node("1", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=switches_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - generators_nd = tree.add_node("Generators", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=generators_nd, type="param") - tree.add_node("1", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=generators_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - levers_nd = tree.add_node("Levers", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=levers_nd, type="param") - tree.add_node("1", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=levers_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - doors_nd = tree.add_node("Marble", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=doors_nd, type="param") - tree.add_node("1", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=doors_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - return tree - -# testing -class EEmulationNoDistrDoorsInformationSeekingParamEnv(SocialAIParamEnv): - - def construct_tree(self): - tree = ParameterTree() - - env_type_nd = tree.add_node("Env_type", type="param") - - # Information seeking - inf_seeking_nd = tree.add_node("Information_seeking", parent=env_type_nd, type="value") - - prag_fr_compl_nd = tree.add_node("Pragmatic_frame_complexity", parent=inf_seeking_nd, type="param") - tree.add_node("Eye_contact", parent=prag_fr_compl_nd, type="value") - - # scaffolding - scaffolding_nd = tree.add_node("Scaffolding", parent=inf_seeking_nd, type="param") - scaffolding_N_nd = tree.add_node("N", parent=scaffolding_nd, type="value") - - cue_type_nd = tree.add_node("Cue_type", parent=scaffolding_N_nd, type="param") - 
tree.add_node("Emulation", parent=cue_type_nd, type="value") - - problem_nd = tree.add_node("Problem", parent=inf_seeking_nd, type="param") - - marble_nd = tree.add_node("Doors", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=marble_nd, type="param") - tree.add_node("1", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=marble_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - return tree - - - -# emulation with a distractor - -# training -class EEmulationDistrInformationSeekingParamEnv(SocialAIParamEnv): - - def construct_tree(self): - tree = ParameterTree() - - env_type_nd = tree.add_node("Env_type", type="param") - - # Information seeking - inf_seeking_nd = tree.add_node("Information_seeking", parent=env_type_nd, type="value") - - prag_fr_compl_nd = tree.add_node("Pragmatic_frame_complexity", parent=inf_seeking_nd, type="param") - tree.add_node("Eye_contact", parent=prag_fr_compl_nd, type="value") - - # scaffolding - scaffolding_nd = tree.add_node("Scaffolding", parent=inf_seeking_nd, type="param") - scaffolding_N_nd = tree.add_node("N", parent=scaffolding_nd, type="value") - - cue_type_nd = tree.add_node("Cue_type", parent=scaffolding_N_nd, type="param") - tree.add_node("Emulation", parent=cue_type_nd, type="value") - - problem_nd = tree.add_node("Problem", parent=inf_seeking_nd, type="param") - - boxes_nd = tree.add_node("Boxes", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=boxes_nd, type="param") - tree.add_node("2", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=boxes_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - switches_nd = tree.add_node("Switches", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=switches_nd, type="param") - tree.add_node("2", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=switches_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - generators_nd = tree.add_node("Generators", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=generators_nd, type="param") - tree.add_node("2", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=generators_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - levers_nd = tree.add_node("Levers", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=levers_nd, type="param") - tree.add_node("2", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=levers_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - doors_nd = tree.add_node("Marble", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=doors_nd, type="param") - tree.add_node("2", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=doors_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - return tree - -# testing -class EEmulationDistrDoorsInformationSeekingParamEnv(SocialAIParamEnv): - - def construct_tree(self): - tree = ParameterTree() - - env_type_nd = tree.add_node("Env_type", type="param") - - # Information seeking - inf_seeking_nd = tree.add_node("Information_seeking", parent=env_type_nd, type="value") - - prag_fr_compl_nd = tree.add_node("Pragmatic_frame_complexity", parent=inf_seeking_nd, type="param") - tree.add_node("Eye_contact", parent=prag_fr_compl_nd, type="value") - - # scaffolding - scaffolding_nd = tree.add_node("Scaffolding", parent=inf_seeking_nd, 
type="param") - scaffolding_N_nd = tree.add_node("N", parent=scaffolding_nd, type="value") - - cue_type_nd = tree.add_node("Cue_type", parent=scaffolding_N_nd, type="param") - tree.add_node("Emulation", parent=cue_type_nd, type="value") - - problem_nd = tree.add_node("Problem", parent=inf_seeking_nd, type="param") - - doors_nd = tree.add_node("Doors", parent=problem_nd, type="value") - version_nd = tree.add_node("N", parent=doors_nd, type="param") - tree.add_node("2", parent=version_nd, type="value") - peer_nd = tree.add_node("Peer", parent=doors_nd, type="param") - tree.add_node("Y", parent=peer_nd, type="value") - - return tree - - -# automatic registration of environments -defined_classes_ = [name for name, _ in inspect.getmembers(importlib.import_module(__name__), inspect.isclass)] - -envs = list(set(defined_classes_) - set(defined_classes)) -assert all([e.endswith("Env") for e in envs]) - -for env in envs: - try: - register( - id='SocialAI-{}-v1'.format(env), - entry_point='gym_minigrid.social_ai_envs:{}'.format(env) - ) - except: - print(f"Env : {env} registratoin failed.") - exit() - - -distr_emulation_test_set = [ - # "SocialAI-EEmulationDistrBoxesInformationSeekingParamEnv-v1", - # "SocialAI-EEmulationDistrSwitchesInformationSeekingParamEnv-v1", - # "SocialAI-EEmulationDistrMarbleInformationSeekingParamEnv-v1", - # "SocialAI-EEmulationDistrGeneratorsInformationSeekingParamEnv-v1", - # "SocialAI-EEmulationDistrLeversInformationSeekingParamEnv-v1", - "SocialAI-EEmulationDistrDoorsInformationSeekingParamEnv-v1", -] - -no_distr_emulation_test_set = [ - # "SocialAI-EEmulationNoDistrBoxesInformationSeekingParamEnv-v1", - # "SocialAI-EEmulationNoDistrSwitchesInformationSeekingParamEnv-v1", - # "SocialAI-EEmulationNoDistrMarbleInformationSeekingParamEnv-v1", - # "SocialAI-EEmulationNoDistrGeneratorsInformationSeekingParamEnv-v1", - # "SocialAI-EEmulationNoDistrLeversInformationSeekingParamEnv-v1", - "SocialAI-EEmulationNoDistrDoorsInformationSeekingParamEnv-v1", -] diff --git a/spaces/golda/Churn_pred/prediction.py b/spaces/golda/Churn_pred/prediction.py deleted file mode 100644 index 0abcf9fc2470dbd454903626538d856d7e0b3217..0000000000000000000000000000000000000000 --- a/spaces/golda/Churn_pred/prediction.py +++ /dev/null @@ -1,82 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -from tensorflow.keras.models import load_model -import pickle -import json - - -# Load All Files -with open('final_pipeline.pkl', 'rb') as file_1: - final_pipeline = pickle.load(file_1) - -with open('num_skew.txt', 'r') as file_2: - num_skew = json.load(file_2) - -with open('num_norm.txt', 'r') as file_3: - num_norm = json.load(file_3) - -with open('cat_columns.txt', 'r') as file_4: - cat_columns = json.load(file_4) - -model_ann = load_model('churn.h5') - -def run(): - with st.form(key='form_prediksi_konsumen_minggat'): - age = st.number_input('age', min_value=18, max_value=80, step=1, help='Usia pelanggan?') - gender = st.radio('gender', ('F','M'), index=1, help='jenis kelamin?') - region_category = st.selectbox('region_category', ('City','Village', 'Town'), index=0, help='tempat konsumen?') - membership_category = st.selectbox('membership_category', ('Basic Membership','Gold Membership', 'No Membership','Platinum Membership', 'Premium Membership', 'Silver Membership'), index=0, help='jenis member konsumen?') - joined_through_referral = st.selectbox('joined_through_referral', ('No','Yes'), index=0, help='Masuk dengan referensi?') - preferred_offer_types = 
st.selectbox('preferred_offer_types', ('Credit/Debit Card Offers','Gift Vouchers/Coupons', 'Without Offers'), index=0, help='type of offer?') - medium_of_operation = st.selectbox('medium_of_operation', ('Both','Desktop', 'Smartphone'), index=0, help='mengakses toko dengan?') - used_special_discount = st.selectbox('used_special_discount', ('No','Yes'), index=0, help='Belanja dengan diskon?') - offer_application_preference = st.selectbox('offer_application_preference', ('Yes','No'), index=0, help='memberi tahu bila ada penawaran khusus?') - past_complaint = st.selectbox('past_complaint', ('Yes','No'), index=0, help='pernah komplain?') - internet_option = st.selectbox('internet_option', ('Wi-Fi','Mobile_Data', 'Fiber_Optic'), index=0, help='online menggunakan?') - complaint_status = st.selectbox('complaint_status', ('Not Applicable','Unsolved', 'Solved', 'Solved in Follow-up', 'No Information Available'), index=0, help='status komplain?') - feedback = st.selectbox('feedback', ('No reason specified','Poor Customer Service', 'Poor Product Quality', 'Poor Website', 'Products always in Stock', 'Quality Customer Care', 'Reasonable Price', 'Too many ads', 'User Friendly Website'), index=0, help='type of feedback?') - - days_since_last_login = st.slider('days_since_last_login', -999, 26, 0) - avg_time_spent = st.slider('avg_time_spent', 0, 3000, 0) - avg_transaction_value = st.slider('avg_transaction_value', 900, 850000, 50000) - avg_frequency_login_days = st.slider('avg_frequency_login_days', 0, 73, 5) - points_in_wallet = st.slider('points_in_wallet', 0, 2000, 1) - - submitted = st.form_submit_button('Predict') - - - data_inf = { - 'age': age, - 'gender': gender, - 'region_category': region_category, - 'membership_category': membership_category, - 'joined_through_referral': joined_through_referral, - 'preferred_offer_types': preferred_offer_types, - 'medium_of_operation': medium_of_operation, - 'used_special_discount': used_special_discount, - 'offer_application_preference': offer_application_preference, - 'past_complaint': past_complaint, - 'complaint_status': complaint_status, - 'feedback': feedback, - 'days_since_last_login': days_since_last_login, - 'avg_time_spent': avg_time_spent, - 'avg_transaction_value': avg_transaction_value, - 'avg_frequency_login_days': avg_frequency_login_days, - 'points_in_wallet': points_in_wallet, - 'internet_option': internet_option - } - - data_inf = pd.DataFrame([data_inf]) - data_inf - - if submitted: - y_pred_inf = final_pipeline.transform(data_inf) - - y_pred_inf = model_ann.predict(y_pred_inf) - y_pred_inf = np.where(y_pred_inf >= 0.5, 'ya', 'tidak') - - st.write('# apakah pelanggan anda pergi? \n', y_pred_inf) - -if __name__== '__main__': - run() \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1 Perbandingan dengan Metode Lain.md b/spaces/gotiQspiryo/whisper-ui/examples/Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1 Perbandingan dengan Metode Lain.md deleted file mode 100644 index 600a5c4fb2af89f7badce0be470c12668d94a54b..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1 Perbandingan dengan Metode Lain.md +++ /dev/null @@ -1,5 +0,0 @@ - -

The Web-Based Tourist Attraction Application is a web application built to help users find the locations of all tourist attractions in Bulukumba Regency, the case study of this work. The application includes an admin login, destination data displayed on maps, a comment forum, attraction management, and more.

-

Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1


Download File: https://urlgoal.com/2uyNfq



-
-
\ No newline at end of file diff --git a/spaces/gwang-kim/DATID-3D/eg3d/datid3d_data_gen.py b/spaces/gwang-kim/DATID-3D/eg3d/datid3d_data_gen.py deleted file mode 100644 index 497a1fd7d489493eb73ccb9768e346d3afa6f0a0..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/datid3d_data_gen.py +++ /dev/null @@ -1,204 +0,0 @@ - -import sys, os -sys.path.append(os.getcwd()) -from os.path import join as opj -import zipfile -import json -import pickle -from tqdm import tqdm -import argparse - -import numpy as np -import torch -import torch.nn.functional as F -from torch import autocast -from torchvision.transforms import ToPILImage -from diffusers import StableDiffusionImg2ImgPipeline, PNDMScheduler -from camera_utils import LookAtPoseSampler, FOV_to_intrinsics - - - -def parse_args(): - """Parse input arguments.""" - parser = argparse.ArgumentParser(description='Pose-aware dataset generation') - parser.add_argument('--strength', default=0.7, type=float) - parser.add_argument('--prompt', type=str) - parser.add_argument('--data_type', default='ffhq', type=str) # ffhq, cat - parser.add_argument('--guidance_scale', default=8, type=float) - parser.add_argument('--num_images', default=1000, type=int) - parser.add_argument('--sd_model_id', default='stabilityai/stable-diffusion-2-1-base', type=str) - parser.add_argument('--num_inference_steps', default=30, type=int) - parser.add_argument('--ffhq_eg3d_path', default='pretrained/ffhqrebalanced512-128.pkl', type=str) - parser.add_argument('--cat_eg3d_path', default='pretrained/afhqcats512-128.pkl', type=str) - parser.add_argument('--ffhq_pivot', default=0.2, type=float) - parser.add_argument('--cat_pivot', default=0.05, type=float) - parser.add_argument('--pitch_range', default=0.3, type=float) - parser.add_argument('--yaw_range', default=0.3, type=float) - parser.add_argument('--name_tag', default='', type=str) - parser.add_argument('--seed', default=15, type=int) - - args = parser.parse_args() - return args - -def make_zip(base_dir, prompt, data_type='ffhq', name_tag=''): - base_dir = os.path.abspath(base_dir) - - owd = os.path.abspath(os.getcwd()) - os.chdir(base_dir) - - json_path = opj(base_dir, "dataset.json") - - zip_path = opj(base_dir, f'data_{data_type}_{prompt.replace(" ", "_")}{name_tag}.zip') - zip_file = zipfile.ZipFile(zip_path, "w") - - with open(json_path, 'r') as file: - data = json.load(file) - zip_file.write(os.path.relpath(json_path, base_dir), compress_type=zipfile.ZIP_STORED) - - for label in data['labels']: - trg_img_path = label[0] - zip_file.write(trg_img_path, compress_type=zipfile.ZIP_STORED) - - zip_file.close() - os.chdir(owd) - -def pts2pil(pts): - pts = (pts + 1) / 2 - pts[pts > 1] = 1 - pts[pts < 0] = 0 - return ToPILImage()(pts[0]) - -if __name__ == '__main__': - args = parse_args() - - device = "cuda" - torch.manual_seed(args.seed) - np.random.seed(args.seed) - - data_type = args.data_type - prompt = args.prompt - strength = args.strength - guidance_scale = args.guidance_scale - num_inference_steps = args.num_inference_steps - num_images = args.num_images - name_tag = args.name_tag - - # 3DG options - ffhq_eg3d_path = args.ffhq_eg3d_path - cat_eg3d_path = args.cat_eg3d_path - cat_pivot = args.cat_pivot - ffhq_pivot = args.ffhq_pivot - pitch_range = args.pitch_range - yaw_range = args.yaw_range - num_frames = 240 - truncation_psi = 0.7 - truncation_cutoff = 14 - fov_deg = 18.837 - ft_img_size = 512 - - # Load 3DG - eg3d_path = None - if data_type == 'ffhq': - eg3d_path = args.ffhq_eg3d_path - pivot 
= ffhq_pivot - elif data_type == 'cat': - eg3d_path = args.cat_eg3d_path - pivot = cat_pivot - - with open(eg3d_path, 'rb') as f: - G = pickle.load(f)['G_ema'].to(device) # torch.nn.Module - G.train() - for param in G.parameters(): - param.requires_grad_(True) - - # SD options - model_id = args.sd_model_id - negative_prompt = None - eta = 0.0 - batch_size = 1 - model_inversion = False - - # Load SD - pipe = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - revision="fp16", - torch_dtype=torch.float16, - use_auth_token=True, - scheduler=PNDMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", - num_train_timesteps=1000, set_alpha_to_one=False, steps_offset=1, skip_prk_steps=1), - ).to(device) - pipe.safety_checker = None - print('SD model is loaded') - - # Outputs directory - base_dir = opj(f'./exp_data/data_{data_type}_{prompt.replace(" ", "_")}{name_tag}') - - src_img_dir = opj(base_dir, "src_imgs") - trg_img_dir = opj(base_dir, "trg_imgs") - - os.makedirs('exp_data', exist_ok=True) - os.makedirs(base_dir, exist_ok=True) - os.makedirs(src_img_dir, exist_ok=True) - os.makedirs(trg_img_dir, exist_ok=True) - labels = [] - - # Fine-tuning 3D generator - for i in tqdm(range(num_images)): - G.eval() - z = torch.from_numpy(np.random.randn(batch_size, G.z_dim)).to(device) - intrinsics = FOV_to_intrinsics(fov_deg, device=device) - - with torch.no_grad(): - yaw_idx = np.random.randint(num_frames) - pitch_idx = np.random.randint(num_frames) - - cam_pivot = torch.tensor([0, 0, pivot], device=device) - cam_radius = G.rendering_kwargs.get('avg_camera_radius', 2.7) - cam2world_pose = LookAtPoseSampler.sample(np.pi / 2 + yaw_range * np.sin(2 * np.pi * yaw_idx / num_frames), - np.pi / 2 - 0.05 + pitch_range * np.cos( - 2 * np.pi * pitch_idx / num_frames), - cam_pivot, radius=cam_radius, device=device, - batch_size=batch_size) - conditioning_cam2world_pose = LookAtPoseSampler.sample(np.pi / 2, np.pi / 2, cam_pivot, radius=cam_radius, - device=device, batch_size=batch_size) - camera_params = torch.cat([cam2world_pose.reshape(-1, 16), intrinsics.reshape(-1, 9).repeat(batch_size, 1)], - 1) - conditioning_params = torch.cat( - [conditioning_cam2world_pose.reshape(-1, 16), intrinsics.reshape(-1, 9).repeat(batch_size, 1)], 1) - - ws = G.mapping(z, conditioning_params, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff) - - img_pts = G.synthesis(ws, camera_params)['image'] - - src_img_pts = img_pts.detach() - src_img_pts = F.interpolate(src_img_pts, (ft_img_size, ft_img_size), mode='bilinear', align_corners=False) - with autocast("cuda"): - trg_img_pil = pipe(prompt=prompt, - image=src_img_pts, - strength=strength, - guidance_scale=guidance_scale, - num_inference_steps=num_inference_steps, - )['images'][0] - - src_idx = f'{i:05d}_src.png' - trg_idx = f'{i:05d}_trg.png' - - src_img_pil_path = opj(src_img_dir, src_idx) - trg_img_pil_path = opj(trg_img_dir, trg_idx) - - src_img_pil = pts2pil(src_img_pts.cpu()) - - src_img_pil.save(src_img_pil_path) - trg_img_pil.save(trg_img_pil_path) - - label = [trg_img_pil_path.replace(base_dir, '').replace('/trg_', 'trg_'), camera_params[0].tolist()] - - labels.append(label) - - - json_path = opj(base_dir, "dataset.json") - json_data = {'labels': labels} - with open(json_path, 'w') as outfile: - json.dump(json_data, outfile, indent=4) - - make_zip(base_dir, prompt, data_type, name_tag) diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/generate.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/generate.py deleted 
file mode 100644 index a8b7d55e6d190c193e427bd8d623c583b2dcdeda..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/generate.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - - -## this script is for generating images from pre-trained network based on StyleGAN1 (TensorFlow) and StyleGAN2-ada (PyTorch) ## - -import os -import click -import dnnlib -import numpy as np -import PIL.Image -import legacy -from typing import List, Optional - -""" -Generate images using pretrained network pickle. -Examples: - -\b -# Generate human full-body images without truncation -python generate.py --outdir=outputs/generate/stylegan_human_v2_1024 --trunc=1 --seeds=1,3,5,7 \\ - --network=pretrained_models/stylegan_human_v2_1024.pkl --version 2 - -\b -# Generate human full-body images with truncation -python generate.py --outdir=outputs/generate/stylegan_human_v2_1024 --trunc=0.8 --seeds=0-100\\ - --network=pretrained_models/stylegan_human_v2_1024.pkl --version 2 - -# \b -# Generate human full-body images using stylegan V1 -# python generate.py --outdir=outputs/generate/stylegan_human_v1_1024 \\ -# --network=pretrained_models/stylegan_human_v1_1024.pkl --version 1 -""" - - -@click.command() -@click.pass_context -@click.option('--network', 'network_pkl', help='Network pickle filename', required=True) -@click.option('--seeds', type=legacy.num_range, help='List of random seeds') -@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=1, show_default=True) -@click.option('--noise-mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True) -@click.option('--outdir', help='Where to save the output images', default='outputs/generate/', type=str, required=True, metavar='DIR') -@click.option('--version', help="stylegan version, 1, 2 or 3", type=int, default=2) -def generate_images( - ctx: click.Context, - network_pkl: str, - seeds: Optional[List[int]], - truncation_psi: float, - noise_mode: str, - outdir: str, - version: int -): - - print('Loading networks from "%s"...' % network_pkl) - if version == 1: - import dnnlib.tflib as tflib - tflib.init_tf() - G, D, Gs = legacy.load_pkl(network_pkl) - - else: - import torch - device = torch.device('cuda') - with dnnlib.util.open_url(network_pkl) as f: - G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore - os.makedirs(outdir, exist_ok=True) - - if seeds is None: - ctx.fail('--seeds option is required.') - - # Generate images. - target_z = np.array([]) - target_w = np.array([]) - latent_out = outdir.replace('/images/', '') - for seed_idx, seed in enumerate(seeds): - if seed % 5000 == 0: - print('Generating image for seed %d (%d/%d) ...' % - (seed, seed_idx, len(seeds))) - - if version == 1: # stylegan v1 - z = np.random.RandomState(seed).randn(1, Gs.input_shape[1]) - # Generate image. 
- fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) - if noise_mode == 'const': - randomize_noise = False - else: - randomize_noise = True - images = Gs.run(z, None, truncation_psi=truncation_psi, - randomize_noise=randomize_noise, output_transform=fmt) - PIL.Image.fromarray(images[0], 'RGB').save( - f'{outdir}/seed{seed:04d}.png') - - else: # stylegan v2/v3 - label = torch.zeros([1, G.c_dim], device=device) - z = torch.from_numpy(np.random.RandomState( - seed).randn(1, G.z_dim)).to(device) - if target_z.size == 0: - target_z = z.cpu() - else: - target_z = np.append(target_z, z.cpu(), axis=0) - - w = G.mapping(z, label, truncation_psi=truncation_psi) - img = G.synthesis(w, noise_mode=noise_mode, force_fp32=True) - if target_w.size == 0: - target_w = w.cpu() - else: - target_w = np.append(target_w, w.cpu(), axis=0) - - img = (img.permute(0, 2, 3, 1) * 127.5 + - 128).clamp(0, 255).to(torch.uint8) - PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save( - f'{outdir}/seed{seed:04d}.png') - # print(target_z) - # print(target_z.shape,target_w.shape) - - -# ---------------------------------------------------------------------------- - -if __name__ == "__main__": - generate_images() - -# ---------------------------------------------------------------------------- diff --git a/spaces/hands012/gpt-academic/docs/README_EN.md b/spaces/hands012/gpt-academic/docs/README_EN.md deleted file mode 100644 index 65af23d7b2c989107a664d7bd3ef88cf7e55c7f7..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/docs/README_EN.md +++ /dev/null @@ -1,322 +0,0 @@ -> **Note** -> -> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct. -> -> When installing dependencies, **please strictly select the versions** specified in requirements.txt. -> -> `pip install -r requirements.txt` - -# GPT Academic Optimization (GPT Academic) - -**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. -To translate this project to arbitary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).** - -> Note: -> -> 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**! -> 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). With version iteration, you can also click on related function plugins at any time to call GPT to regenerate the project's self-analysis report. Common questions are summarized in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation). -> 3. This project is compatible with and encourages trying domestic large language models such as chatglm, RWKV, Pangu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. When temporarily changing `API_KEY`, enter the temporary `API_KEY` in the input area and press enter to submit, which will take effect. - -
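To make note 3 concrete, the multi-key setup is nothing more than a comma-separated string in the configuration file. A minimal sketch, with placeholder keys (the option name `API_KEY` is the one quoted above):

```python
# config.py (or config_private.py) -- sketch only; the keys below are placeholders
API_KEY = "openai-key1,openai-key2,api2d-key3"  # several keys in one comma-separated string
```

A temporary key typed into the input area still takes precedence for the current session, as described above.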
- -Function | Description ---- | --- -One-click polishing | Supports one-click polishing and one-click searching for grammar errors in papers. -One-click Chinese-English translation | One-click Chinese-English translation. -One-click code interpretation | Displays, explains, generates, and adds comments to code. -[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys. -Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions), plug-ins support [hot update](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). -[Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project -[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/... -Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts. -Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers. -Batch annotation generation | [Function plug-in] One-click batch generation of function annotations. -Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the five languages above? -Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running. -[PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded) -[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click. -[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plug-in] Given any Google Scholar search page URL, let GPT help you [write relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) -Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated. -Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting. -Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click. -Start Dark Gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme. 
-[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right? -More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/) -More new feature displays (image generation, etc.)…… | See the end of this document for more... -
- -- New interface (modify the LAYOUT option in `config.py` to switch between "left and right layout" and "up and down layout") -
- -
- All buttons are dynamically generated by reading `functional.py`; you can freely add custom functions and free yourself from constant copy-pasting. -
- -
- -- polishing/correction -
- -
- -- If the output contains formulas, they will be displayed in both `tex` and rendered form, making them easy to copy and read. -
- -
- -- Tired of reading the project code? ChatGPT can explain it all. -
- -
- -- Multiple large language models can be mixed and called together, such as ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4. -
- -
- ---- -# Installation -## Method 1: Directly running (Windows, Linux or MacOS) - -1. Download the project -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure the API_KEY - -Configure the API KEY in `config.py`, [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py` and use the configurations in it to override the same configurations in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configurations in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your private information more secure. P.S. The project also supports configuring most options through `environment variables`. Please refer to the format of `docker-compose` file when writing. Reading priority: `environment variables` > `config_private.py` > `config.py`) - - -3. Install the dependencies -```sh -# (Option I: If familiar with python) (python version 3.9 or above, the newer the better), note: use official pip source or Ali pip source, temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: If not familiar with python) Use anaconda, the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # create anaconda environment -conda activate gptac_venv # activate anaconda environment -python -m pip install -r requirements.txt # this step is the same as pip installation -``` - -
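As a reference for step 2 above, a minimal sketch of what a `config_private.py` override could look like is shown below. Only `API_KEY` and `WEB_PORT` are option names taken from this document; the values are placeholders, and any other option you want to override should be copied verbatim from your own `config.py`.

```python
# config_private.py -- personal overrides, not tracked by git.
# Effective reading priority: environment variables > config_private.py > config.py
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxx"   # placeholder, replace with your real key
WEB_PORT = 50923                      # placeholder port for the web UI
```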
If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand -

- -[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiar with Python + used Pytorch + computer configuration is strong enough): -```sh -# [Optional Step I] Support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: if you encounter the "Call ChatGLM fail cannot load ChatGLM parameters" error, refer to this: 1: The default installation above is torch + cpu version, to use cuda, you need to uninstall torch and reinstall torch + cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code = True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional Step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the root directory of the project - -# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file includes the expected models. Currently supported models are as follows (the jittorllms series only supports the docker solution for the time being): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -
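To make the ChatGLM precision remark in Optional Step I concrete, the edit in `request_llm/bridge_chatglm.py` amounts to swapping the checkpoint name, roughly as sketched below; whether any other lines in that file need the same `-int4` suffix is an assumption to verify against the actual source.

```python
# Sketch of the precision change described in Optional Step I above
from transformers import AutoTokenizer

# before (default full-precision checkpoint):
# tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# after (quantized int4 checkpoint for machines with limited local resources):
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
```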

-
- - - -4. Run it -```sh -python main.py -```5. Test Function Plugin -``` -- Test function plugin template function (ask GPT what happened today in history), based on which you can implement more complex functions as a template - Click "[Function Plugin Template Demo] Today in History" -``` - -## Installation - Method 2: Using Docker - -1. ChatGPT Only (Recommended for Most People) - -``` sh -git clone https://github.com/binary-husky/chatgpt_academic.git # Download project -cd chatgpt_academic # Enter path -nano config.py # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc. -docker build -t gpt-academic . # Install - -#(Last step - option 1) In a Linux environment, use `--net=host` for convenience and speed. -docker run --rm -it --net=host gpt-academic -#(Last step - option 2) On macOS/windows environment, only -p option can be used to expose the container's port (e.g. 50923) to the port of the main machine. -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (Requires Docker Knowledge) - -``` sh -# Modify docker-compose.yml, delete Plan 1 and Plan 3, and keep Plan 2. Modify the configuration of Plan 2 in docker-compose.yml, refer to the comments in it for configuration. -docker-compose up -``` - -3. ChatGPT + LLAMA + Pangu + RWKV (Requires Docker Knowledge) - -``` sh -# Modify docker-compose.yml, delete Plan 1 and Plan 2, and keep Plan 3. Modify the configuration of Plan 3 in docker-compose.yml, refer to the comments in it for configuration. -docker-compose up -``` - -## Installation - Method 3: Other Deployment Options - -1. How to Use Reverse Proxy URL/Microsoft Cloud Azure API -Configure API_URL_REDIRECT according to the instructions in 'config.py'. - -2. Deploy to a Remote Server (Requires Knowledge and Experience with Cloud Servers) -Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. Using WSL2 (Windows Subsystem for Linux) -Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. How to Run Under a Subdomain (e.g. `http://localhost/subpath`) -Please visit [FastAPI Running Instructions](docs/WithFastapi.md) - -5. Using docker-compose to Run -Read the docker-compose.yml and follow the prompts. - ---- -# Advanced Usage -## Custom New Shortcut Buttons / Custom Function Plugins - -1. Custom New Shortcut Buttons (Academic Hotkey) -Open `core_functional.py` with any text editor, add an entry as follows and restart the program. (If the button has been successfully added and is visible, the prefix and suffix can be hot-modified without having to restart the program.) -For example, -``` -"Super English-to-Chinese": { - # Prefix, which will be added before your input. For example, used to describe your requests, such as translation, code explanation, polishing, etc. - "Prefix": "Please translate the following content into Chinese and then use a markdown table to explain the proprietary terms that appear in the text:\n\n", - - # Suffix, which is added after your input. For example, with the prefix, your input content can be surrounded by quotes. - "Suffix": "", -}, -``` -
- -
- -2. Custom Function Plugins - -Write powerful function plugins to perform any task you can think of, even those you cannot think of. -Writing and debugging plugins in this project is easy: as long as you have basic Python knowledge, you can implement your own plug-in functions based on the template we provide. -For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). - ---- -# Latest Update -## New Feature Dynamics -1. Conversation saving. Call `Save current conversation` in the function plugin area to save the current conversation as a readable and recoverable HTML file. In addition, call `Load conversation history archive` in the function plugin area (dropdown menu) to restore previous sessions. Tip: clicking `Load conversation history archive` without specifying a file will display the cached HTML archives, and clicking `Delete all local conversation history` will delete all HTML archive caches. - -
- -
- - -2. Report generation. Most plugins will generate work reports after execution. - -
- - - -
- -3. Modular function design, with simple interfaces that support powerful functionality. - -
- - -
- - -4. This is an open-source project that can "self-translate". - -
- -
- -5. Translating other open-source projects is a piece of cake. - -
- -
- -
- -
- -6. A small decorative feature powered by [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default; enabling it requires modifying `config.py`). -
- -
- -7. Added MOSS large language model support. -
- -
- -8. OpenAI image generation. -
- -
- -9. OpenAI audio parsing and summarization. -
- -
- -10. Full-text proofreading and error correction of LaTeX. -
- -
- - -## Versions: -- version 3.5(Todo): Use natural language to call all function plugins of this project (high priority). -- version 3.4(Todo): Improve multi-threading support for chatglm local large models. -- version 3.3: +Internet information integration function. -- version 3.2: Function plugin supports more parameter interfaces (save conversation function, interpretation of any language code + simultaneous inquiry of any LLM combination). -- version 3.1: Support simultaneous inquiry of multiple GPT models! Support api2d, and support load balancing of multiple apikeys. -- version 3.0: Support chatglm and other small LLM models. -- version 2.6: Refactored plugin structure, improved interactivity, and added more plugins. -- version 2.5: Self-updating, solving the problem of text overflow and token overflow when summarizing large engineering source codes. -- version 2.4: (1) Added PDF full-text translation function; (2) Added the function of switching the position of the input area; (3) Added vertical layout option; (4) Optimized multi-threading function plugins. -- version 2.3: Enhanced multi-threading interactivity. -- version 2.2: Function plugin supports hot reloading. -- version 2.1: Collapsible layout. -- version 2.0: Introduction of modular function plugins. -- version 1.0: Basic functions. - -gpt_academic Developer QQ Group-2: 610599535 - -- Known Issues - - Some browser translation plugins interfere with the front-end operation of this software. - - Both high and low versions of gradio can lead to various exceptions. - -## Reference and Learning - -``` -Many other excellent designs have been referenced in the code, mainly including: - -# Project 1: THU ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# Project 2: THU JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# Project 3: Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# Project 4: ChuanhuChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Project 5: ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# More: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/haofeixu/unimatch/unimatch/transformer.py b/spaces/haofeixu/unimatch/unimatch/transformer.py deleted file mode 100644 index 4878e23a64f6609b1bf10740b0a794d8da836c31..0000000000000000000000000000000000000000 --- a/spaces/haofeixu/unimatch/unimatch/transformer.py +++ /dev/null @@ -1,294 +0,0 @@ -import torch -import torch.nn as nn - -from .attention import (single_head_full_attention, single_head_split_window_attention, - single_head_full_attention_1d, single_head_split_window_attention_1d) -from .utils import generate_shift_window_attn_mask, generate_shift_window_attn_mask_1d - - -class TransformerLayer(nn.Module): - def __init__(self, - d_model=128, - nhead=1, - no_ffn=False, - ffn_dim_expansion=4, - ): - super(TransformerLayer, self).__init__() - - self.dim = d_model - self.nhead = nhead - self.no_ffn = no_ffn - - # multi-head attention - self.q_proj = nn.Linear(d_model, d_model, bias=False) - self.k_proj = nn.Linear(d_model, d_model, bias=False) - self.v_proj = nn.Linear(d_model, d_model, bias=False) - - self.merge = nn.Linear(d_model, d_model, bias=False) - - self.norm1 = nn.LayerNorm(d_model) - - # no ffn after self-attn, with ffn after cross-attn - if not self.no_ffn: - in_channels = d_model * 2 - self.mlp = nn.Sequential( - nn.Linear(in_channels, in_channels * ffn_dim_expansion, bias=False), - nn.GELU(), - nn.Linear(in_channels * ffn_dim_expansion, d_model, 
bias=False), - ) - - self.norm2 = nn.LayerNorm(d_model) - - def forward(self, source, target, - height=None, - width=None, - shifted_window_attn_mask=None, - shifted_window_attn_mask_1d=None, - attn_type='swin', - with_shift=False, - attn_num_splits=None, - ): - # source, target: [B, L, C] - query, key, value = source, target, target - - # for stereo: 2d attn in self-attn, 1d attn in cross-attn - is_self_attn = (query - key).abs().max() < 1e-6 - - # single-head attention - query = self.q_proj(query) # [B, L, C] - key = self.k_proj(key) # [B, L, C] - value = self.v_proj(value) # [B, L, C] - - if attn_type == 'swin' and attn_num_splits > 1: # self, cross-attn: both swin 2d - if self.nhead > 1: - # we observe that multihead attention slows down the speed and increases the memory consumption - # without bringing obvious performance gains and thus the implementation is removed - raise NotImplementedError - else: - message = single_head_split_window_attention(query, key, value, - num_splits=attn_num_splits, - with_shift=with_shift, - h=height, - w=width, - attn_mask=shifted_window_attn_mask, - ) - - elif attn_type == 'self_swin2d_cross_1d': # self-attn: swin 2d, cross-attn: full 1d - if self.nhead > 1: - raise NotImplementedError - else: - if is_self_attn: - if attn_num_splits > 1: - message = single_head_split_window_attention(query, key, value, - num_splits=attn_num_splits, - with_shift=with_shift, - h=height, - w=width, - attn_mask=shifted_window_attn_mask, - ) - else: - # full 2d attn - message = single_head_full_attention(query, key, value) # [N, L, C] - - else: - # cross attn 1d - message = single_head_full_attention_1d(query, key, value, - h=height, - w=width, - ) - - elif attn_type == 'self_swin2d_cross_swin1d': # self-attn: swin 2d, cross-attn: swin 1d - if self.nhead > 1: - raise NotImplementedError - else: - if is_self_attn: - if attn_num_splits > 1: - # self attn shift window - message = single_head_split_window_attention(query, key, value, - num_splits=attn_num_splits, - with_shift=with_shift, - h=height, - w=width, - attn_mask=shifted_window_attn_mask, - ) - else: - # full 2d attn - message = single_head_full_attention(query, key, value) # [N, L, C] - else: - if attn_num_splits > 1: - assert shifted_window_attn_mask_1d is not None - # cross attn 1d shift - message = single_head_split_window_attention_1d(query, key, value, - num_splits=attn_num_splits, - with_shift=with_shift, - h=height, - w=width, - attn_mask=shifted_window_attn_mask_1d, - ) - else: - message = single_head_full_attention_1d(query, key, value, - h=height, - w=width, - ) - - else: - message = single_head_full_attention(query, key, value) # [B, L, C] - - message = self.merge(message) # [B, L, C] - message = self.norm1(message) - - if not self.no_ffn: - message = self.mlp(torch.cat([source, message], dim=-1)) - message = self.norm2(message) - - return source + message - - -class TransformerBlock(nn.Module): - """self attention + cross attention + FFN""" - - def __init__(self, - d_model=128, - nhead=1, - ffn_dim_expansion=4, - ): - super(TransformerBlock, self).__init__() - - self.self_attn = TransformerLayer(d_model=d_model, - nhead=nhead, - no_ffn=True, - ffn_dim_expansion=ffn_dim_expansion, - ) - - self.cross_attn_ffn = TransformerLayer(d_model=d_model, - nhead=nhead, - ffn_dim_expansion=ffn_dim_expansion, - ) - - def forward(self, source, target, - height=None, - width=None, - shifted_window_attn_mask=None, - shifted_window_attn_mask_1d=None, - attn_type='swin', - with_shift=False, - attn_num_splits=None, - ): - # 
source, target: [B, L, C] - - # self attention - source = self.self_attn(source, source, - height=height, - width=width, - shifted_window_attn_mask=shifted_window_attn_mask, - attn_type=attn_type, - with_shift=with_shift, - attn_num_splits=attn_num_splits, - ) - - # cross attention and ffn - source = self.cross_attn_ffn(source, target, - height=height, - width=width, - shifted_window_attn_mask=shifted_window_attn_mask, - shifted_window_attn_mask_1d=shifted_window_attn_mask_1d, - attn_type=attn_type, - with_shift=with_shift, - attn_num_splits=attn_num_splits, - ) - - return source - - -class FeatureTransformer(nn.Module): - def __init__(self, - num_layers=6, - d_model=128, - nhead=1, - ffn_dim_expansion=4, - ): - super(FeatureTransformer, self).__init__() - - self.d_model = d_model - self.nhead = nhead - - self.layers = nn.ModuleList([ - TransformerBlock(d_model=d_model, - nhead=nhead, - ffn_dim_expansion=ffn_dim_expansion, - ) - for i in range(num_layers)]) - - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, feature0, feature1, - attn_type='swin', - attn_num_splits=None, - **kwargs, - ): - - b, c, h, w = feature0.shape - assert self.d_model == c - - feature0 = feature0.flatten(-2).permute(0, 2, 1) # [B, H*W, C] - feature1 = feature1.flatten(-2).permute(0, 2, 1) # [B, H*W, C] - - # 2d attention - if 'swin' in attn_type and attn_num_splits > 1: - # global and refine use different number of splits - window_size_h = h // attn_num_splits - window_size_w = w // attn_num_splits - - # compute attn mask once - shifted_window_attn_mask = generate_shift_window_attn_mask( - input_resolution=(h, w), - window_size_h=window_size_h, - window_size_w=window_size_w, - shift_size_h=window_size_h // 2, - shift_size_w=window_size_w // 2, - device=feature0.device, - ) # [K*K, H/K*W/K, H/K*W/K] - else: - shifted_window_attn_mask = None - - # 1d attention - if 'swin1d' in attn_type and attn_num_splits > 1: - window_size_w = w // attn_num_splits - - # compute attn mask once - shifted_window_attn_mask_1d = generate_shift_window_attn_mask_1d( - input_w=w, - window_size_w=window_size_w, - shift_size_w=window_size_w // 2, - device=feature0.device, - ) # [K, W/K, W/K] - else: - shifted_window_attn_mask_1d = None - - # concat feature0 and feature1 in batch dimension to compute in parallel - concat0 = torch.cat((feature0, feature1), dim=0) # [2B, H*W, C] - concat1 = torch.cat((feature1, feature0), dim=0) # [2B, H*W, C] - - for i, layer in enumerate(self.layers): - concat0 = layer(concat0, concat1, - height=h, - width=w, - attn_type=attn_type, - with_shift='swin' in attn_type and attn_num_splits > 1 and i % 2 == 1, - attn_num_splits=attn_num_splits, - shifted_window_attn_mask=shifted_window_attn_mask, - shifted_window_attn_mask_1d=shifted_window_attn_mask_1d, - ) - - # update feature1 - concat1 = torch.cat(concat0.chunk(chunks=2, dim=0)[::-1], dim=0) - - feature0, feature1 = concat0.chunk(chunks=2, dim=0) # [B, H*W, C] - - # reshape back - feature0 = feature0.view(b, h, w, c).permute(0, 3, 1, 2).contiguous() # [B, C, H, W] - feature1 = feature1.view(b, h, w, c).permute(0, 3, 1, 2).contiguous() # [B, C, H, W] - - return feature0, feature1 diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/evonorm.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/evonorm.py deleted file mode 100644 index d439c43b4b90452f6bf9afaca857bd8dd5be3bba..0000000000000000000000000000000000000000 --- 
a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/evonorm.py +++ /dev/null @@ -1,40 +0,0 @@ -import torch -import torch.nn as nn - - -class EvoNorm2d(nn.Module): - __constants__ = ['num_features', 'eps', 'nonlinearity'] - - def __init__(self, num_features, eps=1e-5, nonlinearity=True, group=32): - super(EvoNorm2d, self).__init__() - - self.num_features = num_features - self.eps = eps - self.nonlinearity = nonlinearity - self.group = group - - self.weight = nn.Parameter(torch.Tensor(1, num_features, 1, 1)) - self.bias = nn.Parameter(torch.Tensor(1, num_features, 1, 1)) - if self.nonlinearity: - self.v = nn.Parameter(torch.Tensor(1, num_features, 1, 1)) - - self.reset_parameters() - - def reset_parameters(self): - nn.init.ones_(self.weight) - nn.init.zeros_(self.bias) - if self.nonlinearity: - nn.init.ones_(self.v) - - def group_std(self, x, groups=32): - N, C, H, W = x.shape - x = torch.reshape(x, (N, groups, C // groups, H, W)) - std = torch.std(x, (3, 4), keepdim=True) - return torch.reshape(std + self.eps, (N, C, 1, 1)) - - def forward(self, x): - if self.nonlinearity: - num = x * torch.sigmoid(self.v * x) - return num / self.group_std(x, self.group) * self.weight + self.bias - else: - return x * self.weight + self.bias \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/GETTING_STARTED.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/GETTING_STARTED.md deleted file mode 100644 index acaf13f02c906b45ffc2f49ee5a0ce01d82b4786..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/GETTING_STARTED.md +++ /dev/null @@ -1,79 +0,0 @@ -## Getting Started with Detectron2 - -This document provides a brief intro of the usage of builtin command-line tools in detectron2. - -For a tutorial that involves actual coding with the API, -see our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) -which covers how to run inference with an -existing model, and how to train a builtin model on a custom dataset. - -For more advanced tutorials, refer to our [documentation](https://detectron2.readthedocs.io/tutorials/extend.html). - - -### Inference Demo with Pre-trained Models - -1. Pick a model and its config file from - [model zoo](MODEL_ZOO.md), - for example, `mask_rcnn_R_50_FPN_3x.yaml`. -2. We provide `demo.py` that is able to run builtin standard models. Run it with: -``` -cd demo/ -python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - --input input1.jpg input2.jpg \ - [--other-options] - --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl -``` -The configs are made for training, therefore we need to specify `MODEL.WEIGHTS` to a model from model zoo for evaluation. -This command will run the inference and show visualizations in an OpenCV window. - -For details of the command line arguments, see `demo.py -h` or look at its source code -to understand its behavior. Some common arguments are: -* To run __on your webcam__, replace `--input files` with `--webcam`. -* To run __on a video__, replace `--input files` with `--video-input video.mp4`. -* To run __on cpu__, add `MODEL.DEVICE cpu` after `--opts`. -* To save outputs to a directory (for images) or a file (for webcam or video), use `--output`. 
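Putting the arguments listed above together, a sketch of the webcam and video variants might look like the following; the config file and weights URL are those from the example above, while `video.mp4` and `output.mkv` are placeholder file names.

```
cd demo/
# Webcam inference on CPU
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
  --webcam \
  --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl MODEL.DEVICE cpu

# Video inference, saving the visualized result to a file
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
  --video-input video.mp4 --output output.mkv \
  --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
```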
- - -### Training & Evaluation in Command Line - -We provide a script in "tools/{,plain_}train_net.py", that is made to train -all the configs provided in detectron2. -You may want to use it as a reference to write your own training script. - -To train a model with "train_net.py", first -setup the corresponding datasets following -[datasets/README.md](./datasets/README.md), -then run: -``` -cd tools/ -./train_net.py --num-gpus 8 \ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml -``` - -The configs are made for 8-GPU training. -To train on 1 GPU, you may need to [change some parameters](https://arxiv.org/abs/1706.02677), e.g.: -``` -./train_net.py \ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \ - --num-gpus 1 SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 -``` - -For most models, CPU training is not supported. - -To evaluate a model's performance, use -``` -./train_net.py \ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \ - --eval-only MODEL.WEIGHTS /path/to/checkpoint_file -``` -For more options, see `./train_net.py -h`. - -### Use Detectron2 APIs in Your Code - -See our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) -to learn how to use detectron2 APIs to: -1. run inference with an existing model -2. train a builtin model on a custom dataset - -See [detectron2/projects](https://github.com/facebookresearch/detectron2/tree/master/projects) -for more ways to build your project on detectron2. diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/distributed_sampler.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/distributed_sampler.py deleted file mode 100644 index 4ac57bbd10519be99114155d717802deac53e8fb..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/distributed_sampler.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import itertools -import math -from collections import defaultdict -from typing import Optional -import torch -from torch.utils.data.sampler import Sampler - -from detectron2.utils import comm - - -class TrainingSampler(Sampler): - """ - In training, we only care about the "infinite stream" of training data. - So this sampler produces an infinite stream of indices and - all workers cooperate to correctly shuffle the indices and sample different indices. - - The samplers in each worker effectively produces `indices[worker_id::num_workers]` - where `indices` is an infinite stream of indices consisting of - `shuffle(range(size)) + shuffle(range(size)) + ...` (if shuffle is True) - or `range(size) + range(size) + ...` (if shuffle is False) - """ - - def __init__(self, size: int, shuffle: bool = True, seed: Optional[int] = None): - """ - Args: - size (int): the total number of data of the underlying dataset to sample from - shuffle (bool): whether to shuffle the indices or not - seed (int): the initial seed of the shuffle. Must be the same - across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). 
- """ - self._size = size - assert size > 0 - self._shuffle = shuffle - if seed is None: - seed = comm.shared_random_seed() - self._seed = int(seed) - - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - - def __iter__(self): - start = self._rank - yield from itertools.islice(self._infinite_indices(), start, None, self._world_size) - - def _infinite_indices(self): - g = torch.Generator() - g.manual_seed(self._seed) - while True: - if self._shuffle: - yield from torch.randperm(self._size, generator=g) - else: - yield from torch.arange(self._size) - - -class RepeatFactorTrainingSampler(Sampler): - """ - Similar to TrainingSampler, but suitable for training on class imbalanced data - like LVIS. In each epoch, an image may appear multiple times based on its "repeat - factor". The repeat factor for an image is a function of the frequency the rarest - category labeled in that image. The "frequency of category c" in [0, 1] is defined - as the fraction of images in the training set (without repeats) in which category c - appears. - - See :paper:`lvis` (>= v2) Appendix B.2. - """ - - def __init__(self, dataset_dicts, repeat_thresh, shuffle=True, seed=None): - """ - Args: - dataset_dicts (list[dict]): annotations in Detectron2 dataset format. - repeat_thresh (float): frequency threshold below which data is repeated. - shuffle (bool): whether to shuffle the indices or not - seed (int): the initial seed of the shuffle. Must be the same - across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). - """ - self._shuffle = shuffle - if seed is None: - seed = comm.shared_random_seed() - self._seed = int(seed) - - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - - # Get fractional repeat factors and split into whole number (_int_part) - # and fractional (_frac_part) parts. - rep_factors = self._get_repeat_factors(dataset_dicts, repeat_thresh) - self._int_part = torch.trunc(rep_factors) - self._frac_part = rep_factors - self._int_part - - def _get_repeat_factors(self, dataset_dicts, repeat_thresh): - """ - Compute (fractional) per-image repeat factors. - - Args: - See __init__. - - Returns: - torch.Tensor: the i-th element is the repeat factor for the dataset image - at index i. - """ - # 1. For each category c, compute the fraction of images that contain it: f(c) - category_freq = defaultdict(int) - for dataset_dict in dataset_dicts: # For each image (without repeats) - cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]} - for cat_id in cat_ids: - category_freq[cat_id] += 1 - num_images = len(dataset_dicts) - for k, v in category_freq.items(): - category_freq[k] = v / num_images - - # 2. For each category c, compute the category-level repeat factor: - # r(c) = max(1, sqrt(t / f(c))) - category_rep = { - cat_id: max(1.0, math.sqrt(repeat_thresh / cat_freq)) - for cat_id, cat_freq in category_freq.items() - } - - # 3. For each image I, compute the image-level repeat factor: - # r(I) = max_{c in I} r(c) - rep_factors = [] - for dataset_dict in dataset_dicts: - cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]} - rep_factor = max({category_rep[cat_id] for cat_id in cat_ids}) - rep_factors.append(rep_factor) - - return torch.tensor(rep_factors, dtype=torch.float32) - - def _get_epoch_indices(self, generator): - """ - Create a list of dataset indices (with repeats) to use for one epoch. 
- - Args: - generator (torch.Generator): pseudo random number generator used for - stochastic rounding. - - Returns: - torch.Tensor: list of dataset indices to use in one epoch. Each index - is repeated based on its calculated repeat factor. - """ - # Since repeat factors are fractional, we use stochastic rounding so - # that the target repeat factor is achieved in expectation over the - # course of training - rands = torch.rand(len(self._frac_part), generator=generator) - rep_factors = self._int_part + (rands < self._frac_part).float() - # Construct a list of indices in which we repeat images as specified - indices = [] - for dataset_index, rep_factor in enumerate(rep_factors): - indices.extend([dataset_index] * int(rep_factor.item())) - return torch.tensor(indices, dtype=torch.int64) - - def __iter__(self): - start = self._rank - yield from itertools.islice(self._infinite_indices(), start, None, self._world_size) - - def _infinite_indices(self): - g = torch.Generator() - g.manual_seed(self._seed) - while True: - # Sample indices with repeats determined by stochastic rounding; each - # "epoch" may have a slightly different size due to the rounding. - indices = self._get_epoch_indices(g) - if self._shuffle: - randperm = torch.randperm(len(indices), generator=g) - yield from indices[randperm] - else: - yield from indices - - -class InferenceSampler(Sampler): - """ - Produce indices for inference. - Inference needs to run on the __exact__ set of samples, - therefore when the total number of samples is not divisible by the number of workers, - this sampler produces different number of samples on different workers. - """ - - def __init__(self, size: int): - """ - Args: - size (int): the total number of data of the underlying dataset to sample from - """ - self._size = size - assert size > 0 - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - - shard_size = (self._size - 1) // self._world_size + 1 - begin = shard_size * self._rank - end = min(shard_size * (self._rank + 1), self._size) - self._local_indices = range(begin, end) - - def __iter__(self): - yield from self._local_indices - - def __len__(self): - return len(self._local_indices) diff --git a/spaces/hasibzunair/fifa-tryon-demo/models/networks.py b/spaces/hasibzunair/fifa-tryon-demo/models/networks.py deleted file mode 100644 index d2c86bc137f372b289df75b6e9213ea4b6c6a98d..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/models/networks.py +++ /dev/null @@ -1,1776 +0,0 @@ -import torch -import os -import torch.nn as nn -import functools -from torch.autograd import Variable -import numpy as np -import torch.nn.functional as F -import math -import torch -import itertools -import numpy as np -import torch.nn as nn -import torch.nn.functional as F -from grid_sample import grid_sample -from torch.autograd import Variable -from tps_grid_gen import TPSGridGen - - -############################################################################### -# Functions -############################################################################### -def weights_init(m): - classname = m.__class__.__name__ - if classname.find('Conv2d') != -1: - m.weight.data.normal_(0.0, 0.02) - elif classname.find('BatchNorm2d') != -1: - m.weight.data.normal_(1.0, 0.02) - m.bias.data.fill_(0) - - -def get_norm_layer(norm_type='instance'): - if norm_type == 'batch': - norm_layer = functools.partial(nn.BatchNorm2d, affine=True) - elif norm_type == 'instance': - norm_layer = functools.partial(nn.InstanceNorm2d, 
affine=False) - else: - raise NotImplementedError('normalization layer [%s] is not found' % norm_type) - return norm_layer - - -def define_G(input_nc, output_nc, ngf, netG, L=1, S=1, n_downsample_global=3, n_blocks_global=9, n_local_enhancers=1, - n_blocks_local=3, norm='instance', gpu_ids=[]): - norm_layer = get_norm_layer(norm_type=norm) - if netG == 'global': - netG = GlobalGenerator(input_nc, output_nc, L, S, ngf, n_downsample_global, n_blocks_global, norm_layer) - elif netG == 'local': - netG = LocalEnhancer(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, - n_local_enhancers, n_blocks_local, norm_layer) - else: - raise ('generator not implemented!') - print(netG) - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - netG.cuda(gpu_ids[0]) - netG.apply(weights_init) - return netG - - -def define_Unet(input_nc, gpu_ids=[]): - netG = Unet(input_nc) - netG.cuda(gpu_ids[0]) - netG.apply(weights_init) - return netG - - -def define_UnetMask(input_nc, gpu_ids=[]): - netG = UnetMask(input_nc,output_nc=4) - netG.cuda(gpu_ids[0]) - netG.apply(weights_init) - return netG - -def define_Refine(input_nc, output_nc, gpu_ids=[]): - netG = Refine(input_nc, output_nc) - netG.cuda(gpu_ids[0]) - netG.apply(weights_init) - return netG - -#################################################### -def define_Refine_ResUnet(input_nc, output_nc, gpu_ids=[]): - #ipdb.set_trace() - netG = Refine_ResUnet_New(input_nc, output_nc) #norm_layer=nn.InstanceNorm2d - #ipdb.set_trace() - netG.cuda(gpu_ids[0]) - netG.apply(weights_init) - return netG -#################################################### - -def define_D(input_nc, ndf, n_layers_D, norm='instance', use_sigmoid=False, num_D=1, getIntermFeat=False, gpu_ids=[]): - norm_layer = get_norm_layer(norm_type=norm) - netD = MultiscaleDiscriminator(input_nc, ndf, n_layers_D, norm_layer, use_sigmoid, num_D, getIntermFeat) - print(netD) - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - netD.cuda(gpu_ids[0]) - netD.apply(weights_init) - return netD - - -def define_VAE(input_nc, gpu_ids=[]): - netVAE = VAE(19, 32, 32, 1024) - print(netVAE) - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - netVAE.cuda(gpu_ids[0]) - return netVAE - - -def define_B(input_nc, output_nc, ngf, n_downsample_global=3, n_blocks_global=3, norm='instance', gpu_ids=[]): - norm_layer = get_norm_layer(norm_type=norm) - netB = BlendGenerator(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, norm_layer) - print(netB) - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - netB.cuda(gpu_ids[0]) - netB.apply(weights_init) - return netB - - -def define_partial_enc(input_nc, gpu_ids=[]): - net = PartialConvEncoder(input_nc) - print(net) - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - net.cuda(gpu_ids[0]) - net.apply(weights_init) - return net - - -def define_conv_enc(input_nc, gpu_ids=[]): - net = ConvEncoder(input_nc) - print(net) - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - net.cuda(gpu_ids[0]) - net.apply(weights_init) - return net - - -def define_AttG(output_nc, gpu_ids=[]): - net = AttGenerator(output_nc) - print(net) - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - net.cuda(gpu_ids[0]) - net.apply(weights_init) - return net - - -def print_network(net): - if isinstance(net, list): - net = net[0] - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - print(net) - print('Total number of parameters: %d' % num_params) - - 
-############################################################################## -# Losses -############################################################################## -class GANLoss(nn.Module): - def __init__(self, use_lsgan=True, target_real_label=1.0, target_fake_label=0.0, - tensor=torch.FloatTensor): - super(GANLoss, self).__init__() - self.real_label = target_real_label - self.fake_label = target_fake_label - self.real_label_var = None - self.fake_label_var = None - self.Tensor = tensor - if use_lsgan: - self.loss = nn.MSELoss() - else: - self.loss = nn.BCELoss() - - def get_target_tensor(self, input, target_is_real): - target_tensor = None - if target_is_real: - create_label = ((self.real_label_var is None) or - (self.real_label_var.numel() != input.numel())) - if create_label: - real_tensor = self.Tensor(input.size()).fill_(self.real_label) - self.real_label_var = Variable(real_tensor, requires_grad=False) - target_tensor = self.real_label_var - else: - create_label = ((self.fake_label_var is None) or - (self.fake_label_var.numel() != input.numel())) - if create_label: - fake_tensor = self.Tensor(input.size()).fill_(self.fake_label) - self.fake_label_var = Variable(fake_tensor, requires_grad=False) - target_tensor = self.fake_label_var - return target_tensor - - def __call__(self, input, target_is_real): - if isinstance(input[0], list): - loss = 0 - for input_i in input: - pred = input_i[-1] - target_tensor = self.get_target_tensor(pred, target_is_real) - loss += self.loss(pred, target_tensor) - return loss - else: - target_tensor = self.get_target_tensor(input[-1], target_is_real) - return self.loss(input[-1], target_tensor) - - -class VGGLossWarp(nn.Module): - def __init__(self, gpu_ids): - super(VGGLossWarp, self).__init__() - self.vgg = Vgg19().cuda() - self.criterion = nn.L1Loss() - self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0] - - def forward(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - loss += self.weights[4] * self.criterion(x_vgg[4], y_vgg[4].detach()) - return loss - - -class VGGLoss(nn.Module): - def __init__(self, gpu_ids): - super(VGGLoss, self).__init__() - self.vgg = Vgg19().cuda() - self.criterion = nn.L1Loss() - self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0] - - def forward(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - for i in range(len(x_vgg)): - loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach()) - return loss - - def warp(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - loss += self.weights[4] * self.criterion(x_vgg[4], y_vgg[4].detach()) - return loss - - -class StyleLoss(nn.Module): - def __init__(self, gpu_ids): - super(StyleLoss, self).__init__() - self.vgg = Vgg19().cuda() - self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0] - - def forward(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - for i in range(len(x_vgg)): - N, C, H, W = x_vgg[i].shape - for n in range(N): - phi_x = x_vgg[i][n] - phi_y = y_vgg[i][n] - phi_x = phi_x.reshape(C, H * W) - phi_y = phi_y.reshape(C, H * W) - G_x = torch.matmul(phi_x, phi_x.t()) / (C * H * W) - G_y = torch.matmul(phi_y, phi_y.t()) / (C * H * W) - loss += torch.sqrt(torch.mean((G_x - G_y) ** 2)) * self.weights[i] - return loss - - -############################################################################## -# Generator -############################################################################## - -class PartialConvEncoder(nn.Module): - def __init__(self, input_nc, ngf=32, 
norm_layer=nn.BatchNorm2d): - super(PartialConvEncoder, self).__init__() - activation = nn.ReLU(True) - self.pad1 = nn.ReflectionPad2d(3) - self.partial_conv1 = PartialConv(input_nc, ngf, kernel_size=7) - self.norm_layer1 = norm_layer(ngf) - self.activation = activation - ##down sample - mult = 2 ** 0 - self.down1 = PartialConv(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1) - self.norm_layer2 = norm_layer(ngf * mult * 2) - mult = 2 ** 1 - self.down2 = PartialConv(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1) - self.norm_layer3 = norm_layer(ngf * mult * 2) - - mult = 2 ** 2 - self.down3 = PartialConv(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1) - self.norm_layer4 = norm_layer(ngf * mult * 2) - - mult = 2 ** 3 - self.down4 = PartialConv(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1) - self.norm_layer5 = norm_layer(ngf * mult * 2) - - def forward(self, input, mask): - input = self.pad1(input) - mask = self.pad1(mask) - input, mask = self.partial_conv1(input, mask) - input = self.norm_layer1(input) - input = self.activation(input) - - input, mask = self.down1(input, mask) - input = self.norm_layer2(input) - input = self.activation(input) - input, mask = self.down2(input, mask) - input = self.norm_layer3(input) - input = self.activation(input) - input, mask = self.down3(input, mask) - input = self.norm_layer4(input) - input = self.activation(input) - input, mask = self.down4(input, mask) - input = self.norm_layer5(input) - input = self.activation(input) - return input - - -class ConvEncoder(nn.Module): - def __init__(self, input_nc, ngf=32, n_downsampling=4, n_blocks=4, norm_layer=nn.BatchNorm2d, - padding_type='reflect'): - super(ConvEncoder, self).__init__() - activation = nn.ReLU(True) - # print("input_nc",input_nc) - model = [nn.ReflectionPad2d(3), nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), norm_layer(ngf), activation] - ### downsample - for i in range(n_downsampling): - stride = 2 - - mult = 2 ** i - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=stride, padding=1), - norm_layer(ngf * mult * 2), activation] - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class AttGenerator(nn.Module): - def __init__(self, output_nc, ngf=32, n_blocks=4, n_downsampling=4, padding_type='reflect'): - super(AttGenerator, self).__init__() - mult = 2 ** n_downsampling - model = [] - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult * 2, norm_type='in', padding_type=padding_type)] - - self.model = nn.Sequential(*model) - self.upsampling = [] - self.out_channels = [] - self.AttNorm = [] - ##upsampling - norm_layer = nn.BatchNorm2d - activation = nn.ReLU(True) - - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - up_module = [nn.ConvTranspose2d(ngf * mult * 2, int(ngf * mult / 2) * 2, kernel_size=3, stride=2, padding=1, - output_padding=1), - norm_layer(int(ngf * mult / 2) * 2), activation - ] - up_module = nn.Sequential(*up_module) - self.upsampling += [up_module] - self.out_channels += [int(ngf * mult / 2) * 2] - self.upsampling = nn.Sequential(*self.upsampling) - - # - self.AttNorm += [AttentionNorm(5, self.out_channels[0], 2, 4)] - self.AttNorm += [AttentionNorm(5, self.out_channels[1], 2, 2)] - self.AttNorm += [AttentionNorm(5, self.out_channels[2], 1, 2)] - self.AttNorm += [AttentionNorm(5, self.out_channels[3], 1, 1)] - self.AttNorm = nn.Sequential(*self.AttNorm) - self.last_conv = [nn.ReflectionPad2d(3), nn.Conv2d(ngf * 2, 
output_nc, kernel_size=7, padding=0), nn.Tanh()] - self.last_conv = nn.Sequential(*self.last_conv) - - def forward(self, input, unattended): - up = self.model(unattended) - for i in range(4): - # print(i) - up = self.upsampling[i](up) - if i == 3: - break; - up = self.AttNorm[i](input, up) - return self.last_conv(up) - - -class PartialConv(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, stride=1, - padding=0, dilation=1, groups=1, bias=True): - super(PartialConv, self).__init__() - self.input_conv = nn.Conv2d(in_channels, out_channels, kernel_size, - stride, padding, dilation, groups, bias) - self.mask_conv = nn.Conv2d(in_channels, out_channels, kernel_size, - stride, padding, dilation, groups, False) - self.input_conv.apply(weights_init) - - torch.nn.init.constant_(self.mask_conv.weight, 1.0) - - # mask is not updated - for param in self.mask_conv.parameters(): - param.requires_grad = False - - def forward(self, input, mask): - # http://masc.cs.gmu.edu/wiki/partialconv - # C(X) = W^T * X + b, C(0) = b, D(M) = 1 * M + 0 = sum(M) - # W^T* (M .* X) / sum(M) + b = [C(M .* X) – C(0)] / D(M) + C(0) - - output = self.input_conv(input * mask) - if self.input_conv.bias is not None: - output_bias = self.input_conv.bias.view(1, -1, 1, 1).expand_as( - output) - else: - output_bias = torch.zeros_like(output) - - with torch.no_grad(): - output_mask = self.mask_conv(mask) - - no_update_holes = output_mask == 0 - mask_sum = output_mask.masked_fill_(no_update_holes, 1.0) - - output_pre = (output - output_bias) / mask_sum + output_bias - output = output_pre.masked_fill_(no_update_holes, 0.0) - - new_mask = torch.ones_like(output) - new_mask = new_mask.masked_fill_(no_update_holes, 0.0) - - return output, new_mask - - -class AttentionNorm(nn.Module): - def __init__(self, ref_channels, out_channels, first_rate, second_rate): - super(AttentionNorm, self).__init__() - self.first = first_rate - self.second = second_rate - mid_channels = int(out_channels / 2) - self.conv_1time_f = nn.Conv2d(ref_channels, mid_channels, kernel_size=3, stride=1, padding=1) - self.conv_2times_f = nn.Conv2d(ref_channels, mid_channels, kernel_size=3, stride=2, padding=1) - self.conv_4times_f = nn.Conv2d(ref_channels, mid_channels, kernel_size=3, stride=4, padding=1) - - self.conv_1time_s = nn.Conv2d(mid_channels, out_channels, kernel_size=3, stride=1, padding=1) - self.conv_2times_s = nn.Conv2d(mid_channels, out_channels, kernel_size=3, stride=2, padding=1) - self.conv_4times_s = nn.Conv2d(mid_channels, out_channels, kernel_size=3, stride=4, padding=1) - - self.conv_1time_m = nn.Conv2d(mid_channels, out_channels, kernel_size=3, stride=1, padding=1) - self.conv_2times_m = nn.Conv2d(mid_channels, out_channels, kernel_size=3, stride=2, padding=1) - self.conv_4times_m = nn.Conv2d(mid_channels, out_channels, kernel_size=3, stride=4, padding=1) - self.norm = nn.BatchNorm2d(out_channels) - self.conv = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - - def forward(self, input, unattended): - # attention weights - # print(input.shape,unattended.shape) - if self.first == 1: - input = self.conv_1time_f(input) - elif self.first == 2: - input = self.conv_2times_f(input) - elif self.first == 4: - input = self.conv_4times_f(input) - mask = None - if self.second == 1: - bias = self.conv_1time_s(input) - mask = self.conv_1time_m(input) - elif self.second == 2: - bias = self.conv_2times_s(input) - mask = self.conv_2times_m(input) - elif self.second == 4: - bias = self.conv_4times_s(input) - mask = 
self.conv_4times_m(input) - mask = torch.sigmoid(mask) - attended = self.norm(unattended) - # print(attended.shape,mask.shape,bias.shape) - attended = attended * mask + bias - attended = torch.relu(attended) - attended = self.conv(attended) - output = attended + unattended - return output -class UnetMask(nn.Module): - def __init__(self, input_nc, output_nc=3): - super(UnetMask, self).__init__() - self.stn = STNNet() - nl = nn.InstanceNorm2d - self.conv1 = nn.Sequential(*[nn.Conv2d(input_nc, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU()]) - self.pool1 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv2 = nn.Sequential(*[nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU(), - nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU()]) - self.pool2 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv3 = nn.Sequential(*[nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU(), - nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU()]) - self.pool3 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv4 = nn.Sequential(*[nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU(), - nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU()]) - self.drop4 = nn.Dropout(0.5) - self.pool4 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv5 = nn.Sequential(*[nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding=1), nl(1024), nn.ReLU(), - nn.Conv2d(1024, 1024, kernel_size=3, stride=1, padding=1), nl(1024), nn.ReLU()]) - self.drop5 = nn.Dropout(0.5) - - self.up6 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(1024, 512, kernel_size=3, stride=1, padding=1), nl(512), - nn.ReLU()]) - - self.conv6 = nn.Sequential(*[nn.Conv2d(1024, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU(), - nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU()]) - self.up7 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1), nl(256), - nn.ReLU()]) - self.conv7 = nn.Sequential(*[nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU(), - nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU()]) - - self.up8 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1), nl(128), - nn.ReLU()]) - - self.conv8 = nn.Sequential(*[nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU(), - nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU()]) - - self.up9 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), nl(64), - nn.ReLU()]) - - self.conv9 = nn.Sequential(*[nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, output_nc, kernel_size=3, stride=1, padding=1) - ]) - - def forward(self, input, refer, mask,grid): - - - input, warped_mask,rx,ry,cx,cy,grid = self.stn(input, torch.cat([mask, refer, input], 1), mask,grid) - # print(input.shape) - - - conv1 = self.conv1(torch.cat([refer.detach(), input.detach()], 1)) - pool1 = self.pool1(conv1) - - conv2 = self.conv2(pool1) - pool2 = self.pool2(conv2) - - conv3 = self.conv3(pool2) - pool3 = self.pool3(conv3) - - conv4 = self.conv4(pool3) - drop4 = self.drop4(conv4) - pool4 = 
self.pool4(drop4) - - conv5 = self.conv5(pool4) - drop5 = self.drop5(conv5) - - up6 = self.up6(drop5) - conv6 = self.conv6(torch.cat([drop4, up6], 1)) - - up7 = self.up7(conv6) - conv7 = self.conv7(torch.cat([conv3, up7], 1)) - - up8 = self.up8(conv7) - conv8 = self.conv8(torch.cat([conv2, up8], 1)) - - up9 = self.up9(conv8) - conv9 = self.conv9(torch.cat([conv1, up9], 1)) - return conv9, input, warped_mask,grid - -class Unet(nn.Module): - def __init__(self, input_nc, output_nc=3): - super(Unet, self).__init__() - self.stn = STNNet() - nl = nn.InstanceNorm2d - self.conv1 = nn.Sequential(*[nn.Conv2d(input_nc, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU()]) - self.pool1 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv2 = nn.Sequential(*[nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU(), - nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU()]) - self.pool2 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv3 = nn.Sequential(*[nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU(), - nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU()]) - self.pool3 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv4 = nn.Sequential(*[nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU(), - nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU()]) - self.drop4 = nn.Dropout(0.5) - self.pool4 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv5 = nn.Sequential(*[nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding=1), nl(1024), nn.ReLU(), - nn.Conv2d(1024, 1024, kernel_size=3, stride=1, padding=1), nl(1024), nn.ReLU()]) - self.drop5 = nn.Dropout(0.5) - - self.up6 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(1024, 512, kernel_size=3, stride=1, padding=1), nl(512), - nn.ReLU()]) - - self.conv6 = nn.Sequential(*[nn.Conv2d(1024, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU(), - nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU()]) - self.up7 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1), nl(256), - nn.ReLU()]) - self.conv7 = nn.Sequential(*[nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU(), - nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU()]) - - self.up8 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1), nl(128), - nn.ReLU()]) - - self.conv8 = nn.Sequential(*[nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU(), - nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU()]) - - self.up9 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), nl(64), - nn.ReLU()]) - - self.conv9 = nn.Sequential(*[nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, output_nc, kernel_size=3, stride=1, padding=1) - ]) - - def forward(self, input, refer, mask): - input, warped_mask,rx,ry,cx,cy = self.stn(input, torch.cat([mask, refer, input], 1), mask) - # print(input.shape) - - conv1 = self.conv1(torch.cat([refer.detach(), input.detach()], 1)) - pool1 = self.pool1(conv1) - - conv2 = self.conv2(pool1) - pool2 = self.pool2(conv2) - - conv3 = self.conv3(pool2) - 
pool3 = self.pool3(conv3) - - conv4 = self.conv4(pool3) - drop4 = self.drop4(conv4) - pool4 = self.pool4(drop4) - - conv5 = self.conv5(pool4) - drop5 = self.drop5(conv5) - - up6 = self.up6(drop5) - conv6 = self.conv6(torch.cat([drop4, up6], 1)) - - up7 = self.up7(conv6) - conv7 = self.conv7(torch.cat([conv3, up7], 1)) - - up8 = self.up8(conv7) - conv8 = self.conv8(torch.cat([conv2, up8], 1)) - - up9 = self.up9(conv8) - conv9 = self.conv9(torch.cat([conv1, up9], 1)) - return conv9, input, warped_mask,rx,ry,cx,cy - - def refine(self, input): - conv1 = self.conv1(input) - pool1 = self.pool1(conv1) - - conv2 = self.conv2(pool1) - pool2 = self.pool2(conv2) - - conv3 = self.conv3(pool2) - pool3 = self.pool3(conv3) - - conv4 = self.conv4(pool3) - drop4 = self.drop4(conv4) - pool4 = self.pool4(drop4) - - conv5 = self.conv5(pool4) - drop5 = self.drop5(conv5) - - up6 = self.up6(drop5) - conv6 = self.conv6(torch.cat([drop4, up6], 1)) - - up7 = self.up7(conv6) - conv7 = self.conv7(torch.cat([conv3, up7], 1)) - - up8 = self.up8(conv7) - conv8 = self.conv8(torch.cat([conv2, up8], 1)) - - up9 = self.up9(conv8) - conv9 = self.conv9(torch.cat([conv1, up9], 1)) - return conv9 - - -class Refine(nn.Module): - def __init__(self, input_nc, output_nc=3): - super(Refine, self).__init__() - nl = nn.InstanceNorm2d - self.conv1 = nn.Sequential(*[nn.Conv2d(input_nc, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU()]) - self.pool1 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv2 = nn.Sequential(*[nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU(), - nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU()]) - self.pool2 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv3 = nn.Sequential(*[nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU(), - nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU()]) - self.pool3 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv4 = nn.Sequential(*[nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU(), - nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU()]) - self.drop4 = nn.Dropout(0.5) - self.pool4 = nn.MaxPool2d(kernel_size=(2, 2)) - - self.conv5 = nn.Sequential(*[nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding=1), nl(1024), nn.ReLU(), - nn.Conv2d(1024, 1024, kernel_size=3, stride=1, padding=1), nl(1024), nn.ReLU()]) - self.drop5 = nn.Dropout(0.5) - - self.up6 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(1024, 512, kernel_size=3, stride=1, padding=1), nl(512), - nn.ReLU()]) - - self.conv6 = nn.Sequential(*[nn.Conv2d(1024, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU(), - nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nl(512), nn.ReLU()]) - self.up7 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1), nl(256), - nn.ReLU()]) - self.conv7 = nn.Sequential(*[nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU(), - nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nl(256), nn.ReLU()]) - - self.up8 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1), nl(128), - nn.ReLU()]) - - self.conv8 = nn.Sequential(*[nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU(), - nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nl(128), nn.ReLU()]) - - 
self.up9 = nn.Sequential( - *[nn.UpsamplingNearest2d(scale_factor=2), nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), nl(64), - nn.ReLU()]) - - self.conv9 = nn.Sequential(*[nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nl(64), nn.ReLU(), - nn.Conv2d(64, output_nc, kernel_size=3, stride=1, padding=1) - ]) - - def refine(self, input): - conv1 = self.conv1(input) - pool1 = self.pool1(conv1) - - conv2 = self.conv2(pool1) - pool2 = self.pool2(conv2) - - conv3 = self.conv3(pool2) - pool3 = self.pool3(conv3) - - conv4 = self.conv4(pool3) - drop4 = self.drop4(conv4) - pool4 = self.pool4(drop4) - - conv5 = self.conv5(pool4) - drop5 = self.drop5(conv5) - - up6 = self.up6(drop5) - conv6 = self.conv6(torch.cat([drop4, up6], 1)) - - up7 = self.up7(conv6) - conv7 = self.conv7(torch.cat([conv3, up7], 1)) - - up8 = self.up8(conv7) - conv8 = self.conv8(torch.cat([conv2, up8], 1)) - - up9 = self.up9(conv8) - conv9 = self.conv9(torch.cat([conv1, up9], 1)) - return conv9 - - -###### ResUnet new -class ResidualBlock(nn.Module): - def __init__(self, in_features=64, norm_layer=nn.BatchNorm2d): - super(ResidualBlock, self).__init__() - self.relu = nn.ReLU(True) - if norm_layer == None: - self.block = nn.Sequential( - nn.Conv2d(in_features, in_features, 3, 1, 1, bias=False), - nn.ReLU(inplace=True), - nn.Conv2d(in_features, in_features, 3, 1, 1, bias=False), - ) - else: - self.block = nn.Sequential( - nn.Conv2d(in_features, in_features, 3, 1, 1, bias=False), - norm_layer(in_features), - nn.ReLU(inplace=True), - nn.Conv2d(in_features, in_features, 3, 1, 1, bias=False), - norm_layer(in_features) - ) - - def forward(self, x): - residual = x - out = self.block(x) - out += residual - out = self.relu(out) - return out - - -class Refine_ResUnet_New(nn.Module): - def __init__(self, input_nc, output_nc, num_downs=5, ngf=32, - norm_layer=nn.BatchNorm2d, use_dropout=False): - super(Refine_ResUnet_New, self).__init__() - # construct unet structure - unet_block = ResUnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) - - for i in range(num_downs - 5): - unet_block = ResUnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout) - unet_block = ResUnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = ResUnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = ResUnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = ResUnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) - - self.model = unet_block - - def refine(self, input): - return self.model(input) - - -# Defines the submodule with skip connection. 
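# Each block wraps an inner submodule with a strided 3x3 conv plus two residual blocks on the
# way down, and a nearest-neighbour upsample plus a 3x3 conv (and residual blocks) on the way
# up; except at the outermost level, the block's input is concatenated with its output to form
# the skip connection: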
-# X -------------------identity---------------------- X -# |-- downsampling -- |submodule| -- upsampling --| -class ResUnetSkipConnectionBlock(nn.Module): - def __init__(self, outer_nc, inner_nc, input_nc=None, - submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False): - super(ResUnetSkipConnectionBlock, self).__init__() - self.outermost = outermost - use_bias = norm_layer == nn.InstanceNorm2d - - if input_nc is None: - input_nc = outer_nc - downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=3, - stride=2, padding=1, bias=use_bias) - # add two resblock - res_downconv = [ResidualBlock(inner_nc, norm_layer), ResidualBlock(inner_nc, norm_layer)] - res_upconv = [ResidualBlock(outer_nc, norm_layer), ResidualBlock(outer_nc, norm_layer)] - - downrelu = nn.ReLU(True) - uprelu = nn.ReLU(True) - if norm_layer != None: - downnorm = norm_layer(inner_nc) - upnorm = norm_layer(outer_nc) - - if outermost: - upsample = nn.Upsample(scale_factor=2, mode='nearest') - upconv = nn.Conv2d(inner_nc * 2, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - down = [downconv, downrelu] + res_downconv - up = [upsample, upconv] - model = down + [submodule] + up - elif innermost: - upsample = nn.Upsample(scale_factor=2, mode='nearest') - upconv = nn.Conv2d(inner_nc, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - down = [downconv, downrelu] + res_downconv - if norm_layer == None: - up = [upsample, upconv, uprelu] + res_upconv - else: - up = [upsample, upconv, upnorm, uprelu] + res_upconv - model = down + up - else: - upsample = nn.Upsample(scale_factor=2, mode='nearest') - upconv = nn.Conv2d(inner_nc*2, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - if norm_layer == None: - down = [downconv, downrelu] + res_downconv - up = [upsample, upconv, uprelu] + res_upconv - else: - down = [downconv, downnorm, downrelu] + res_downconv - up = [upsample, upconv, upnorm, uprelu] + res_upconv - - if use_dropout: - model = down + [submodule] + up + [nn.Dropout(0.5)] - else: - model = down + [submodule] + up - - self.model = nn.Sequential(*model) - - def forward(self, x): - if self.outermost: - return self.model(x) - else: - return torch.cat([x, self.model(x)], 1) -################## - - -class GlobalGenerator(nn.Module): - def __init__(self, input_nc, output_nc, L, S, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect'): - assert (n_blocks >= 0) - super(GlobalGenerator, self).__init__() - activation = nn.ReLU(True) - - model = [nn.ReflectionPad2d(3), nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), norm_layer(ngf), activation] - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1), - norm_layer(ngf * mult * 2), activation] - - ### resnet blocks - mult = 2 ** n_downsampling - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, norm_type='adain', padding_type=padding_type)] - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, - output_padding=1), - norm_layer(int(ngf * mult / 2)), activation] - model += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - self.model = nn.Sequential(*model) - - # style encoder - self.enc_style = StyleEncoder(5, S, 16, self.get_num_adain_params(self.model), norm='none', activ='relu', - pad_type='reflect') - # label 
encoder - self.enc_label = LabelEncoder(5, L, 16, 64, norm='none', activ='relu', pad_type='reflect') - - def assign_adain_params(self, adain_params, model): - # assign the adain_params to the AdaIN layers in model - for m in model.modules(): - if m.__class__.__name__ == "AdaptiveInstanceNorm2d": - mean = adain_params[:, :m.num_features] - std = adain_params[:, m.num_features:2 * m.num_features] - m.bias = mean.contiguous().view(-1) - m.weight = std.contiguous().view(-1) - if adain_params.size(1) > 2 * m.num_features: - adain_params = adain_params[:, 2 * m.num_features:] - - def get_num_adain_params(self, model): - # return the number of AdaIN parameters needed by the model - num_adain_params = 0 - for m in model.modules(): - if m.__class__.__name__ == "AdaptiveInstanceNorm2d": - num_adain_params += 2 * m.num_features - return num_adain_params - - def forward(self, input, input_ref, image_ref): - fea1, fea2 = self.enc_label(input_ref) - adain_params = self.enc_style((image_ref, fea1, fea2)) - self.assign_adain_params(adain_params, self.model) - return self.model(input) - - -class BlendGenerator(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=3, norm_layer=nn.BatchNorm2d, - padding_type='reflect'): - assert (n_blocks >= 0) - super(BlendGenerator, self).__init__() - activation = nn.ReLU(True) - - model = [nn.ReflectionPad2d(3), nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), norm_layer(ngf), activation] - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1), - norm_layer(ngf * mult * 2), activation] - - ### resnet blocks - mult = 2 ** n_downsampling - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, norm_type='in', padding_type=padding_type)] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, - output_padding=1), - norm_layer(int(ngf * mult / 2)), activation] - model += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), nn.Sigmoid()] - self.model = nn.Sequential(*model) - - def forward(self, input1, input2): - m = self.model(torch.cat([input1, input2], 1)) - return input1 * m + input2 * (1 - m), m - - # Define the Multiscale Discriminator. 
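# It applies the same PatchGAN discriminator (NLayerDiscriminator below) at num_D image scales,
# average-pooling the input between scales and returning one prediction (or a list of
# intermediate features) per scale. A minimal, illustrative usage sketch -- the 6-channel input
# (condition concatenated with image) is an assumption, not taken from this file:
#
#   netD = MultiscaleDiscriminator(input_nc=6, ndf=64, n_layers=3, num_D=3)
#   preds = netD(torch.cat([condition, fake_image], 1))  # list with one entry per scale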
- - -class MultiscaleDiscriminator(nn.Module): - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, - use_sigmoid=False, num_D=3, getIntermFeat=False): - super(MultiscaleDiscriminator, self).__init__() - self.num_D = num_D - self.n_layers = n_layers - self.getIntermFeat = getIntermFeat - - for i in range(num_D): - netD = NLayerDiscriminator(input_nc, ndf, n_layers, norm_layer, use_sigmoid, getIntermFeat) - if getIntermFeat: - for j in range(n_layers + 2): - setattr(self, 'scale' + str(i) + '_layer' + str(j), getattr(netD, 'model' + str(j))) - else: - setattr(self, 'layer' + str(i), netD.model) - - self.downsample = nn.AvgPool2d(3, stride=2, padding=[1, 1], count_include_pad=False) - - def singleD_forward(self, model, input): - if self.getIntermFeat: - result = [input] - for i in range(len(model)): - result.append(model[i](result[-1])) - return result[1:] - else: - return [model(input)] - - def forward(self, input): - num_D = self.num_D - result = [] - input_downsampled = input - for i in range(num_D): - if self.getIntermFeat: - model = [getattr(self, 'scale' + str(num_D - 1 - i) + '_layer' + str(j)) for j in - range(self.n_layers + 2)] - else: - model = getattr(self, 'layer' + str(num_D - 1 - i)) - result.append(self.singleD_forward(model, input_downsampled)) - if i != (num_D - 1): - input_downsampled = self.downsample(input_downsampled) - return result - - -# Define the PatchGAN discriminator with the specified arguments. -class NLayerDiscriminator(nn.Module): - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, use_sigmoid=False, getIntermFeat=False): - super(NLayerDiscriminator, self).__init__() - self.getIntermFeat = getIntermFeat - self.n_layers = n_layers - - kw = 4 - padw = int(np.ceil((kw - 1.0) / 2)) - sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]] - - nf = ndf - for n in range(1, n_layers): - nf_prev = nf - nf = min(nf * 2, 512) - sequence += [[ - nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw), - norm_layer(nf), nn.LeakyReLU(0.2, True) - ]] - - nf_prev = nf - nf = min(nf * 2, 512) - sequence += [[ - nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw), - norm_layer(nf), - nn.LeakyReLU(0.2, True) - ]] - - sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]] - - if use_sigmoid: - sequence += [[nn.Sigmoid()]] - - if getIntermFeat: - for n in range(len(sequence)): - setattr(self, 'model' + str(n), nn.Sequential(*sequence[n])) - else: - sequence_stream = [] - for n in range(len(sequence)): - sequence_stream += sequence[n] - self.model = nn.Sequential(*sequence_stream) - - def forward(self, input): - if self.getIntermFeat: - res = [input] - for n in range(self.n_layers + 2): - model = getattr(self, 'model' + str(n)) - res.append(model(res[-1])) - return res[1:] - else: - return self.model(input) - - -from torchvision import models - - -class Vgg19(torch.nn.Module): - def __init__(self, requires_grad=False): - super(Vgg19, self).__init__() - vgg = models.vgg19(pretrained=False) - vgg_pretrained_features = vgg.features - self.vgg = vgg - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - 
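            # slice1-slice5 end at relu1_1, relu2_1, relu3_1, relu4_1 and relu5_1 of VGG-19,
            # the feature layers typically used for perceptual (VGG) losses.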
self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5(h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out - - def extract(self, x): - x = self.vgg.features(x) - x = self.vgg.avgpool(x) - return x - - -# Define the MaskVAE -class VAE(nn.Module): - def __init__(self, nc, ngf, ndf, latent_variable_size): - super(VAE, self).__init__() - # self.cuda = True - self.nc = nc - self.ngf = ngf - self.ndf = ndf - self.latent_variable_size = latent_variable_size - - # encoder - self.e1 = nn.Conv2d(nc, ndf, 4, 2, 1) - self.bn1 = nn.BatchNorm2d(ndf) - - self.e2 = nn.Conv2d(ndf, ndf * 2, 4, 2, 1) - self.bn2 = nn.BatchNorm2d(ndf * 2) - - self.e3 = nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1) - self.bn3 = nn.BatchNorm2d(ndf * 4) - - self.e4 = nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1) - self.bn4 = nn.BatchNorm2d(ndf * 8) - - self.e5 = nn.Conv2d(ndf * 8, ndf * 16, 4, 2, 1) - self.bn5 = nn.BatchNorm2d(ndf * 16) - - self.e6 = nn.Conv2d(ndf * 16, ndf * 32, 4, 2, 1) - self.bn6 = nn.BatchNorm2d(ndf * 32) - - self.e7 = nn.Conv2d(ndf * 32, ndf * 64, 4, 2, 1) - self.bn7 = nn.BatchNorm2d(ndf * 64) - - self.fc1 = nn.Linear(ndf * 64 * 4 * 4, latent_variable_size) - self.fc2 = nn.Linear(ndf * 64 * 4 * 4, latent_variable_size) - - # decoder - self.d1 = nn.Linear(latent_variable_size, ngf * 64 * 4 * 4) - - self.up1 = nn.UpsamplingNearest2d(scale_factor=2) - self.pd1 = nn.ReplicationPad2d(1) - self.d2 = nn.Conv2d(ngf * 64, ngf * 32, 3, 1) - self.bn8 = nn.BatchNorm2d(ngf * 32, 1.e-3) - - self.up2 = nn.UpsamplingNearest2d(scale_factor=2) - self.pd2 = nn.ReplicationPad2d(1) - self.d3 = nn.Conv2d(ngf * 32, ngf * 16, 3, 1) - self.bn9 = nn.BatchNorm2d(ngf * 16, 1.e-3) - - self.up3 = nn.UpsamplingNearest2d(scale_factor=2) - self.pd3 = nn.ReplicationPad2d(1) - self.d4 = nn.Conv2d(ngf * 16, ngf * 8, 3, 1) - self.bn10 = nn.BatchNorm2d(ngf * 8, 1.e-3) - - self.up4 = nn.UpsamplingNearest2d(scale_factor=2) - self.pd4 = nn.ReplicationPad2d(1) - self.d5 = nn.Conv2d(ngf * 8, ngf * 4, 3, 1) - self.bn11 = nn.BatchNorm2d(ngf * 4, 1.e-3) - - self.up5 = nn.UpsamplingNearest2d(scale_factor=2) - self.pd5 = nn.ReplicationPad2d(1) - self.d6 = nn.Conv2d(ngf * 4, ngf * 2, 3, 1) - self.bn12 = nn.BatchNorm2d(ngf * 2, 1.e-3) - - self.up6 = nn.UpsamplingNearest2d(scale_factor=2) - self.pd6 = nn.ReplicationPad2d(1) - self.d7 = nn.Conv2d(ngf * 2, ngf, 3, 1) - self.bn13 = nn.BatchNorm2d(ngf, 1.e-3) - - self.up7 = nn.UpsamplingNearest2d(scale_factor=2) - self.pd7 = nn.ReplicationPad2d(1) - self.d8 = nn.Conv2d(ngf, nc, 3, 1) - - self.leakyrelu = nn.LeakyReLU(0.2) - self.relu = nn.ReLU() - # self.sigmoid = nn.Sigmoid() - self.maxpool = nn.MaxPool2d((2, 2), (2, 2)) - - def encode(self, x): - h1 = self.leakyrelu(self.bn1(self.e1(x))) - h2 = self.leakyrelu(self.bn2(self.e2(h1))) - h3 = self.leakyrelu(self.bn3(self.e3(h2))) - h4 = self.leakyrelu(self.bn4(self.e4(h3))) - h5 = self.leakyrelu(self.bn5(self.e5(h4))) - h6 = self.leakyrelu(self.bn6(self.e6(h5))) - h7 = self.leakyrelu(self.bn7(self.e7(h6))) - h7 = h7.view(-1, self.ndf * 64 * 4 * 4) - return self.fc1(h7), self.fc2(h7) - - def reparametrize(self, mu, logvar): - 
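        # Reparameterisation trick: draw eps ~ N(0, I) and return mu + eps * exp(0.5 * logvar),
        # which keeps the sampling step differentiable w.r.t. mu and logvar.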
std = logvar.mul(0.5).exp_() - # if self.cuda: - eps = torch.cuda.FloatTensor(std.size()).normal_() - # else: - # eps = torch.FloatTensor(std.size()).normal_() - eps = Variable(eps) - return eps.mul(std).add_(mu) - - def decode(self, z): - h1 = self.relu(self.d1(z)) - h1 = h1.view(-1, self.ngf * 64, 4, 4) - h2 = self.leakyrelu(self.bn8(self.d2(self.pd1(self.up1(h1))))) - h3 = self.leakyrelu(self.bn9(self.d3(self.pd2(self.up2(h2))))) - h4 = self.leakyrelu(self.bn10(self.d4(self.pd3(self.up3(h3))))) - h5 = self.leakyrelu(self.bn11(self.d5(self.pd4(self.up4(h4))))) - h6 = self.leakyrelu(self.bn12(self.d6(self.pd5(self.up5(h5))))) - h7 = self.leakyrelu(self.bn13(self.d7(self.pd6(self.up6(h6))))) - return self.d8(self.pd7(self.up7(h7))) - - def get_latent_var(self, x): - mu, logvar = self.encode(x) - z = self.reparametrize(mu, logvar) - return z, mu, logvar.mul(0.5).exp_() - - def forward(self, x): - mu, logvar = self.encode(x) - z = self.reparametrize(mu, logvar) - res = self.decode(z) - - return res, x, mu, logvar - - -# style encode part -class StyleEncoder(nn.Module): - def __init__(self, n_downsample, input_dim, dim, style_dim, norm, activ, pad_type): - super(StyleEncoder, self).__init__() - self.model = [] - self.model_middle = [] - self.model_last = [] - self.model += [ConvBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type=pad_type)] - for i in range(2): - self.model += [ConvBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)] - dim *= 2 - for i in range(n_downsample - 2): - self.model_middle += [ConvBlock(dim, dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)] - self.model_last += [nn.AdaptiveAvgPool2d(1)] # global average pooling - self.model_last += [nn.Conv2d(dim, style_dim, 1, 1, 0)] - - self.model = nn.Sequential(*self.model) - self.model_middle = nn.Sequential(*self.model_middle) - self.model_last = nn.Sequential(*self.model_last) - - self.output_dim = dim - - self.sft1 = SFTLayer() - self.sft2 = SFTLayer() - - def forward(self, x): - fea = self.model(x[0]) - fea = self.sft1((fea, x[1])) - fea = self.model_middle(fea) - fea = self.sft2((fea, x[2])) - return self.model_last(fea) - - -# label encode part -class LabelEncoder(nn.Module): - def __init__(self, n_downsample, input_dim, dim, style_dim, norm, activ, pad_type): - super(LabelEncoder, self).__init__() - self.model = [] - self.model_last = [nn.ReLU()] - self.model += [ConvBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type=pad_type)] - self.model += [ConvBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)] - dim *= 2 - self.model += [ConvBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation='none', pad_type=pad_type)] - dim *= 2 - for i in range(n_downsample - 3): - self.model_last += [ConvBlock(dim, dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)] - self.model_last += [ConvBlock(dim, dim, 4, 2, 1, norm=norm, activation='none', pad_type=pad_type)] - self.model = nn.Sequential(*self.model) - self.model_last = nn.Sequential(*self.model_last) - self.output_dim = dim - - def forward(self, x): - fea = self.model(x) - return fea, self.model_last(fea) - - -# Define the basic block -class ConvBlock(nn.Module): - def __init__(self, input_dim, output_dim, kernel_size, stride, - padding=0, norm='none', activation='relu', pad_type='zero'): - super(ConvBlock, self).__init__() - self.use_bias = True - # initialize padding - if pad_type == 'reflect': - self.pad = nn.ReflectionPad2d(padding) - elif pad_type == 'replicate': - self.pad = 
nn.ReplicationPad2d(padding) - elif pad_type == 'zero': - self.pad = nn.ZeroPad2d(padding) - else: - assert 0, "Unsupported padding type: {}".format(pad_type) - - # initialize normalization - norm_dim = output_dim - if norm == 'bn': - self.norm = nn.BatchNorm2d(norm_dim) - elif norm == 'in': - # self.norm = nn.InstanceNorm2d(norm_dim, track_running_stats=True) - self.norm = nn.InstanceNorm2d(norm_dim) - elif norm == 'ln': - self.norm = LayerNorm(norm_dim) - elif norm == 'adain': - self.norm = AdaptiveInstanceNorm2d(norm_dim) - elif norm == 'none' or norm == 'sn': - self.norm = None - else: - assert 0, "Unsupported normalization: {}".format(norm) - - # initialize activation - if activation == 'relu': - self.activation = nn.ReLU(inplace=True) - elif activation == 'lrelu': - self.activation = nn.LeakyReLU(0.2, inplace=True) - elif activation == 'prelu': - self.activation = nn.PReLU() - elif activation == 'selu': - self.activation = nn.SELU(inplace=True) - elif activation == 'tanh': - self.activation = nn.Tanh() - elif activation == 'none': - self.activation = None - else: - assert 0, "Unsupported activation: {}".format(activation) - - # initialize convolution - if norm == 'sn': - self.conv = SpectralNorm(nn.Conv2d(input_dim, output_dim, kernel_size, stride, bias=self.use_bias)) - else: - self.conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride, bias=self.use_bias) - - def forward(self, x): - x = self.conv(self.pad(x)) - if self.norm: - x = self.norm(x) - if self.activation: - x = self.activation(x) - return x - - -class LinearBlock(nn.Module): - def __init__(self, input_dim, output_dim, norm='none', activation='relu'): - super(LinearBlock, self).__init__() - use_bias = True - # initialize fully connected layer - if norm == 'sn': - self.fc = SpectralNorm(nn.Linear(input_dim, output_dim, bias=use_bias)) - else: - self.fc = nn.Linear(input_dim, output_dim, bias=use_bias) - - # initialize normalization - norm_dim = output_dim - if norm == 'bn': - self.norm = nn.BatchNorm1d(norm_dim) - elif norm == 'in': - self.norm = nn.InstanceNorm1d(norm_dim) - elif norm == 'ln': - self.norm = LayerNorm(norm_dim) - elif norm == 'none' or norm == 'sn': - self.norm = None - else: - assert 0, "Unsupported normalization: {}".format(norm) - - # initialize activation - if activation == 'relu': - self.activation = nn.ReLU(inplace=True) - elif activation == 'lrelu': - self.activation = nn.LeakyReLU(0.2, inplace=True) - elif activation == 'prelu': - self.activation = nn.PReLU() - elif activation == 'selu': - self.activation = nn.SELU(inplace=True) - elif activation == 'tanh': - self.activation = nn.Tanh() - elif activation == 'none': - self.activation = None - else: - assert 0, "Unsupported activation: {}".format(activation) - - def forward(self, x): - out = self.fc(x) - if self.norm: - out = self.norm(out) - if self.activation: - out = self.activation(out) - return out - - -# Define a resnet block -class ResnetBlock(nn.Module): - def __init__(self, dim, norm_type, padding_type, use_dropout=False): - super(ResnetBlock, self).__init__() - self.conv_block = self.build_conv_block(dim, norm_type, padding_type, use_dropout) - - def build_conv_block(self, dim, norm_type, padding_type, use_dropout): - conv_block = [] - conv_block += [ConvBlock(dim, dim, 3, 1, 1, norm=norm_type, activation='relu', pad_type=padding_type)] - conv_block += [ConvBlock(dim, dim, 3, 1, 1, norm=norm_type, activation='none', pad_type=padding_type)] - - return nn.Sequential(*conv_block) - - def forward(self, x): - out = x + self.conv_block(x) - 
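        # Residual connection: the two ConvBlocks learn a correction that is added to the input.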
return out - - -class SFTLayer(nn.Module): - def __init__(self): - super(SFTLayer, self).__init__() - self.SFT_scale_conv1 = nn.Conv2d(64, 64, 1) - self.SFT_scale_conv2 = nn.Conv2d(64, 64, 1) - self.SFT_shift_conv1 = nn.Conv2d(64, 64, 1) - self.SFT_shift_conv2 = nn.Conv2d(64, 64, 1) - - def forward(self, x): - scale = self.SFT_scale_conv2(F.leaky_relu(self.SFT_scale_conv1(x[1]), 0.1, inplace=True)) - shift = self.SFT_shift_conv2(F.leaky_relu(self.SFT_shift_conv1(x[1]), 0.1, inplace=True)) - return x[0] * scale + shift - - -class ConvBlock_SFT(nn.Module): - def __init__(self, dim, norm_type, padding_type, use_dropout=False): - super(ResnetBlock_SFT, self).__init__() - self.sft1 = SFTLayer() - self.conv1 = ConvBlock(dim, dim, 4, 2, 1, norm=norm_type, activation='none', pad_type=padding_type) - - def forward(self, x): - fea = self.sft1((x[0], x[1])) - fea = F.relu(self.conv1(fea), inplace=True) - return (x[0] + fea, x[1]) - - -class ConvBlock_SFT_last(nn.Module): - def __init__(self, dim, norm_type, padding_type, use_dropout=False): - super(ResnetBlock_SFT_last, self).__init__() - self.sft1 = SFTLayer() - self.conv1 = ConvBlock(dim, dim, 4, 2, 1, norm=norm_type, activation='none', pad_type=padding_type) - - def forward(self, x): - fea = self.sft1((x[0], x[1])) - fea = F.relu(self.conv1(fea), inplace=True) - return x[0] + fea - - -# Definition of normalization layer -class AdaptiveInstanceNorm2d(nn.Module): - def __init__(self, num_features, eps=1e-5, momentum=0.1): - super(AdaptiveInstanceNorm2d, self).__init__() - self.num_features = num_features - self.eps = eps - self.momentum = momentum - # weight and bias are dynamically assigned - self.weight = None - self.bias = None - # just dummy buffers, not used - self.register_buffer('running_mean', torch.zeros(num_features)) - self.register_buffer('running_var', torch.ones(num_features)) - - def forward(self, x): - assert self.weight is not None and self.bias is not None, "Please assign weight and bias before calling AdaIN!" - b, c = x.size(0), x.size(1) - running_mean = self.running_mean.repeat(b) - running_var = self.running_var.repeat(b) - - # Apply instance norm - x_reshaped = x.contiguous().view(1, b * c, *x.size()[2:]) - - out = F.batch_norm( - x_reshaped, running_mean, running_var, self.weight, self.bias, - True, self.momentum, self.eps) - - return out.view(b, c, *x.size()[2:]) - - def __repr__(self): - return self.__class__.__name__ + '(' + str(self.num_features) + ')' - - -class LayerNorm(nn.Module): - def __init__(self, num_features, eps=1e-5, affine=True): - super(LayerNorm, self).__init__() - self.num_features = num_features - self.affine = affine - self.eps = eps - - if self.affine: - self.gamma = nn.Parameter(torch.Tensor(num_features).uniform_()) - self.beta = nn.Parameter(torch.zeros(num_features)) - - def forward(self, x): - shape = [-1] + [1] * (x.dim() - 1) - # print(x.size()) - if x.size(0) == 1: - # These two lines run much faster in pytorch 0.4 than the two lines listed below. 
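            # single-sample fast path: statistics are taken over the whole flattened tensor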
- mean = x.view(-1).mean().view(*shape) - std = x.view(-1).std().view(*shape) - else: - mean = x.view(x.size(0), -1).mean(1).view(*shape) - std = x.view(x.size(0), -1).std(1).view(*shape) - - x = (x - mean) / (std + self.eps) - - if self.affine: - shape = [1, -1] + [1] * (x.dim() - 2) - x = x * self.gamma.view(*shape) + self.beta.view(*shape) - return x - - -def l2normalize(v, eps=1e-12): - return v / (v.norm() + eps) - - -class SpectralNorm(nn.Module): - """ - Based on the paper "Spectral Normalization for Generative Adversarial Networks" by Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida - and the Pytorch implementation https://github.com/christiancosgrove/pytorch-spectral-normalization-gan - """ - - def __init__(self, module, name='weight', power_iterations=1): - super(SpectralNorm, self).__init__() - self.module = module - self.name = name - self.power_iterations = power_iterations - if not self._made_params(): - self._make_params() - - def _update_u_v(self): - u = getattr(self.module, self.name + "_u") - v = getattr(self.module, self.name + "_v") - w = getattr(self.module, self.name + "_bar") - - height = w.data.shape[0] - for _ in range(self.power_iterations): - v.data = l2normalize(torch.mv(torch.t(w.view(height, -1).data), u.data)) - u.data = l2normalize(torch.mv(w.view(height, -1).data, v.data)) - - # sigma = torch.dot(u.data, torch.mv(w.view(height,-1).data, v.data)) - sigma = u.dot(w.view(height, -1).mv(v)) - setattr(self.module, self.name, w / sigma.expand_as(w)) - - def _made_params(self): - try: - u = getattr(self.module, self.name + "_u") - v = getattr(self.module, self.name + "_v") - w = getattr(self.module, self.name + "_bar") - return True - except AttributeError: - return False - - def _make_params(self): - w = getattr(self.module, self.name) - - height = w.data.shape[0] - width = w.view(height, -1).data.shape[1] - - u = nn.Parameter(w.data.new(height).normal_(0, 1), requires_grad=False) - v = nn.Parameter(w.data.new(width).normal_(0, 1), requires_grad=False) - u.data = l2normalize(u.data) - v.data = l2normalize(v.data) - w_bar = nn.Parameter(w.data) - - del self.module._parameters[self.name] - - self.module.register_parameter(self.name + "_u", u) - self.module.register_parameter(self.name + "_v", v) - self.module.register_parameter(self.name + "_bar", w_bar) - - def forward(self, *args): - self._update_u_v() - return self.module.forward(*args) - - -### STN TPS - -class CNN(nn.Module): - def __init__(self, num_output, input_nc=5, ngf=8, n_layers=5, norm_layer=nn.InstanceNorm2d, use_dropout=False): - super(CNN, self).__init__() - downconv = nn.Conv2d(5, ngf, kernel_size=4, stride=2, padding=1) - model = [downconv, nn.ReLU(True), norm_layer(ngf)] - for i in range(n_layers): - in_ngf = 2 ** i * ngf if 2 ** i * ngf < 1024 else 1024 - out_ngf = 2 ** (i + 1) * ngf if 2 ** i * ngf < 1024 else 1024 - downconv = nn.Conv2d(in_ngf, out_ngf, kernel_size=4, stride=2, padding=1) - model += [downconv, norm_layer(out_ngf), nn.ReLU(True)] - model += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), norm_layer(64), nn.ReLU(True)] - model += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), norm_layer(64), nn.ReLU(True)] - self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2) - self.model = nn.Sequential(*model) - self.fc1 = nn.Linear(512, 128) - self.fc2 = nn.Linear(128, num_output) - - def forward(self, x): - x = self.model(x) - x = self.maxpool(x) - x = x.view(x.shape[0], -1) - x = F.relu(self.fc1(x)) - x = F.dropout(x, training=self.training) - x = 
self.fc2(x) - - return x - - -class ClsNet(nn.Module): - - def __init__(self): - super(ClsNet, self).__init__() - self.cnn = CNN(10) - - def forward(self, x): - return F.log_softmax(self.cnn(x)) - - -class BoundedGridLocNet(nn.Module): - - def __init__(self, grid_height, grid_width, target_control_points): - super(BoundedGridLocNet, self).__init__() - self.cnn = CNN(grid_height * grid_width * 2) - - bias = torch.from_numpy(np.arctanh(target_control_points.numpy())) - bias = bias.view(-1) - self.cnn.fc2.bias.data.copy_(bias) - self.cnn.fc2.weight.data.zero_() - - def forward(self, x): - batch_size = x.size(0) - points = F.tanh(self.cnn(x)) - coor=points.view(batch_size, -1, 2) - # coor+=torch.randn(coor.shape).cuda()/10 - row=self.get_row(coor,5) - col=self.get_col(coor,5) - rx,ry,cx,cy=torch.tensor(0.08).cuda(),torch.tensor(0.08).cuda()\ - ,torch.tensor(0.08).cuda(),torch.tensor(0.08).cuda() - row_x,row_y=row[:,:,0],row[:,:,1] - col_x,col_y=col[:,:,0],col[:,:,1] - rx_loss=torch.max(rx,row_x).mean() - ry_loss=torch.max(ry,row_y).mean() - cx_loss=torch.max(cx,col_x).mean() - cy_loss=torch.max(cy,col_y).mean() - - - return coor,rx_loss,ry_loss,cx_loss,cy_loss - - def get_row(self,coor,num): - sec_dic=[] - for j in range(num): - sum=0 - buffer=0 - flag=False - max=-1 - for i in range(num-1): - differ=(coor[:,j*num+i+1,:]-coor[:,j*num+i,:])**2 - if not flag: - second_dif=0 - flag=True - else: - second_dif=torch.abs(differ-buffer) - sec_dic.append(second_dif) - - buffer=differ - sum+=second_dif - return torch.stack(sec_dic,dim=1) - - def get_col(self,coor,num): - sec_dic=[] - for i in range(num): - sum = 0 - buffer = 0 - flag = False - max = -1 - for j in range(num - 1): - differ = (coor[:, (j+1) * num + i , :] - coor[:, j * num + i, :]) ** 2 - if not flag: - second_dif = 0 - flag = True - else: - second_dif = torch.abs(differ-buffer) - sec_dic.append(second_dif) - buffer = differ - sum += second_dif - return torch.stack(sec_dic,dim=1) - -class UnBoundedGridLocNet(nn.Module): - - def __init__(self, grid_height, grid_width, target_control_points): - super(UnBoundedGridLocNet, self).__init__() - self.cnn = CNN(grid_height * grid_width * 2) - - bias = target_control_points.view(-1) - self.cnn.fc2.bias.data.copy_(bias) - self.cnn.fc2.weight.data.zero_() - - def forward(self, x): - batch_size = x.size(0) - points = self.cnn(x) - return points.view(batch_size, -1, 2) - - -class STNNet(nn.Module): - - def __init__(self): - super(STNNet, self).__init__() - range = 0.9 - r1 = range - r2 = range - grid_size_h = 5 - grid_size_w = 5 - - assert r1 < 1 and r2 < 1 # if >= 1, arctanh will cause error in BoundedGridLocNet - target_control_points = torch.Tensor(list(itertools.product( - np.arange(-r1, r1 + 0.00001, 2.0 * r1 / (grid_size_h - 1)), - np.arange(-r2, r2 + 0.00001, 2.0 * r2 / (grid_size_w - 1)), - ))) - Y, X = target_control_points.split(1, dim=1) - target_control_points = torch.cat([X, Y], dim=1) - self.target_control_points=target_control_points - # self.get_row(target_control_points,5) - GridLocNet = { - 'unbounded_stn': UnBoundedGridLocNet, - 'bounded_stn': BoundedGridLocNet, - }['bounded_stn'] - self.loc_net = GridLocNet(grid_size_h, grid_size_w, target_control_points) - - self.tps = TPSGridGen(256, 192, target_control_points) - - def get_row(self, coor, num): - for j in range(num): - sum = 0 - buffer = 0 - flag = False - max = -1 - for i in range(num - 1): - differ = (coor[j * num + i + 1, :] - coor[j * num + i, :]) ** 2 - if not flag: - second_dif = 0 - flag = True - else: - second_dif = 
torch.abs(differ - buffer) - - buffer = differ - sum += second_dif - print(sum / num) - def get_col(self,coor,num): - for i in range(num): - sum = 0 - buffer = 0 - flag = False - max = -1 - for j in range(num - 1): - differ = (coor[ (j + 1) * num + i, :] - coor[j * num + i, :]) ** 2 - if not flag: - second_dif = 0 - flag = True - else: - second_dif = torch.abs(differ-buffer) - - buffer = differ - sum += second_dif - print(sum) - def forward(self, x, reference, mask,grid_pic): - batch_size = x.size(0) - source_control_points,rx,ry,cx,cy = self.loc_net(reference) - source_control_points=(source_control_points) - # print('control points',source_control_points.shape) - source_coordinate = self.tps(source_control_points) - grid = source_coordinate.view(batch_size, 256, 192, 2) - # print('grid size',grid.shape) - transformed_x = grid_sample(x, grid, canvas=0) - warped_mask = grid_sample(mask, grid, canvas=0) - warped_gpic= grid_sample(grid_pic, grid, canvas=0) - return transformed_x, warped_mask,rx,ry,cx,cy,warped_gpic \ No newline at end of file diff --git a/spaces/heiyubili/bingo/src/components/voice.tsx b/spaces/heiyubili/bingo/src/components/voice.tsx deleted file mode 100644 index ab886394487445e4b0675770b76096bba0e61b0e..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/components/voice.tsx +++ /dev/null @@ -1,52 +0,0 @@ -import React, { useEffect } from 'react' -import { useSetAtom } from 'jotai' -import { useBing } from '@/lib/hooks/use-bing' -import Image from 'next/image' -import VoiceIcon from '@/assets/images/voice.svg' -import VoiceButton from './ui/voice' -import { SR } from '@/lib/bots/bing/sr' -import { voiceListenAtom } from '@/state' - -const sr = new SR(['发送', '清空', '退出']) - -const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => { - const setListen = useSetAtom(voiceListenAtom) - useEffect(() => { - if (sr.listening) return - sr.transcript = !isSpeaking - }, [isSpeaking]) - - useEffect(() => { - sr.onchange = (msg: string, command?: string) => { - switch (command) { - case '退出': - sr.stop() - break; - case '发送': - sendMessage(input) - case '清空': - setInput('') - break; - default: - setInput(input + msg) - } - } - }, [input, setInput, sendMessage]) - - const switchSR = (enable: boolean = false) => { - setListen(enable) - if (enable) { - sr.start() - } else { - sr.stop() - } - } - - return sr.listening ? 
( - switchSR(false)} /> - ) : ( - start voice switchSR(true)} /> - ) -}; - -export default Voice; diff --git a/spaces/hero-intelligent/MT3/README.md b/spaces/hero-intelligent/MT3/README.md deleted file mode 100644 index 560a9f7a04f8fe0de4354b179c6f9f179aa3f7c5..0000000000000000000000000000000000000000 --- a/spaces/hero-intelligent/MT3/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MT3-1 -emoji: 🐢 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.20.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hf-audio/open_asr_leaderboard/README.md b/spaces/hf-audio/open_asr_leaderboard/README.md deleted file mode 100644 index 85702068723f8b35181e6671706e5c2c26f6875a..0000000000000000000000000000000000000000 --- a/spaces/hf-audio/open_asr_leaderboard/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Open ASR Leaderboard -emoji: 🏆 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hisfog/SQLdepth/app.py b/spaces/hisfog/SQLdepth/app.py deleted file mode 100644 index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000 --- a/spaces/hisfog/SQLdepth/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/hlydecker/ImageBind_zeroshot_demo/models/transformer.py b/spaces/hlydecker/ImageBind_zeroshot_demo/models/transformer.py deleted file mode 100644 index 98902ac8f08868c486a7c74781e952bee444c2e6..0000000000000000000000000000000000000000 --- a/spaces/hlydecker/ImageBind_zeroshot_demo/models/transformer.py +++ /dev/null @@ -1,284 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# Code modified from -# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py ; -# https://github.com/facebookresearch/deit/blob/main/models.py -# and https://github.com/facebookresearch/vissl/blob/main/vissl/models/trunks/vision_transformer.py - - -import copy -import fnmatch -import logging -from functools import partial -from typing import Callable, List - -import torch -import torch.nn as nn -import torch.utils.checkpoint as checkpoint - -from timm.models.layers import DropPath, trunc_normal_ - - -class Attention(nn.Module): - def __init__( - self, - dim, - num_heads=8, - qkv_bias=False, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, - # can set manually to be compat with prev weights - self.scale = qk_scale or head_dim**-0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = ( - qkv[0], - qkv[1], - qkv[2], - ) # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Mlp(nn.Module): - def __init__( - self, - in_features, - hidden_features=None, - out_features=None, - act_layer=nn.GELU, - drop=0.0, - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class MultiheadAttention(nn.MultiheadAttention): - def forward(self, x: torch.Tensor, attn_mask: torch.Tensor): - return super().forward(x, x, x, need_weights=False, attn_mask=attn_mask)[0] - - -class ViTAttention(Attention): - def forward(self, x: torch.Tensor, attn_mask: torch.Tensor): - assert attn_mask is None - return super().forward(x) - - -class BlockWithMasking(nn.Module): - def __init__( - self, - dim: int, - attn_target: Callable, - mlp_ratio: int = 4, - act_layer: Callable = nn.GELU, - norm_layer: Callable = nn.LayerNorm, - ffn_dropout_rate: float = 0.0, - drop_path: float = 0.0, - layer_scale_type: str = None, - layer_scale_init_value: float = 1e-4, - ): - super().__init__() - - assert not isinstance( - attn_target, nn.Module - ), "attn_target should be a Callable. Otherwise attn_target is shared across blocks!" 
- self.attn = attn_target() - if drop_path > 0.0: - self.drop_path = DropPath(drop_path) - else: - self.drop_path = nn.Identity() - self.norm_1 = norm_layer(dim) - mlp_hidden_dim = int(mlp_ratio * dim) - self.mlp = Mlp( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_layer=act_layer, - drop=ffn_dropout_rate, - ) - self.norm_2 = norm_layer(dim) - self.layer_scale_type = layer_scale_type - if self.layer_scale_type is not None: - assert self.layer_scale_type in [ - "per_channel", - "scalar", - ], f"Found Layer scale type {self.layer_scale_type}" - if self.layer_scale_type == "per_channel": - # one gamma value per channel - gamma_shape = [1, 1, dim] - elif self.layer_scale_type == "scalar": - # single gamma value for all channels - gamma_shape = [1, 1, 1] - # two gammas: for each part of the fwd in the encoder - self.layer_scale_gamma1 = nn.Parameter( - torch.ones(size=gamma_shape) * layer_scale_init_value, - requires_grad=True, - ) - self.layer_scale_gamma2 = nn.Parameter( - torch.ones(size=gamma_shape) * layer_scale_init_value, - requires_grad=True, - ) - - def forward(self, x: torch.Tensor, attn_mask: torch.Tensor): - if self.layer_scale_type is None: - x = x + self.drop_path(self.attn(self.norm_1(x), attn_mask)) - x = x + self.drop_path(self.mlp(self.norm_2(x))) - else: - x = ( - x - + self.drop_path(self.attn(self.norm_1(x), attn_mask)) - * self.layer_scale_gamma1 - ) - x = x + self.drop_path(self.mlp(self.norm_2(x))) * self.layer_scale_gamma2 - return x - - -_LAYER_NORM = partial(nn.LayerNorm, eps=1e-6) - - -class SimpleTransformer(nn.Module): - def __init__( - self, - attn_target: Callable, - embed_dim: int, - num_blocks: int, - block: Callable = BlockWithMasking, - pre_transformer_layer: Callable = None, - post_transformer_layer: Callable = None, - drop_path_rate: float = 0.0, - drop_path_type: str = "progressive", - norm_layer: Callable = _LAYER_NORM, - mlp_ratio: int = 4, - ffn_dropout_rate: float = 0.0, - layer_scale_type: str = None, # from cait; possible values are None, "per_channel", "scalar" - layer_scale_init_value: float = 1e-4, # from cait; float - weight_init_style: str = "jax", # possible values jax or pytorch - ): - """ - Simple Transformer with the following features - 1. Supports masked attention - 2. Supports DropPath - 3. Supports LayerScale - 4. Supports Dropout in Attention and FFN - 5. 
Makes few assumptions about the input except that it is a Tensor - """ - super().__init__() - self.pre_transformer_layer = pre_transformer_layer - if drop_path_type == "progressive": - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, num_blocks)] - elif drop_path_type == "uniform": - dpr = [drop_path_rate for i in range(num_blocks)] - else: - raise ValueError(f"Unknown drop_path_type: {drop_path_type}") - - self.blocks = nn.Sequential( - *[ - block( - dim=embed_dim, - attn_target=attn_target, - mlp_ratio=mlp_ratio, - ffn_dropout_rate=ffn_dropout_rate, - drop_path=dpr[i], - norm_layer=norm_layer, - layer_scale_type=layer_scale_type, - layer_scale_init_value=layer_scale_init_value, - ) - for i in range(num_blocks) - ] - ) - self.post_transformer_layer = post_transformer_layer - self.weight_init_style = weight_init_style - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - if self.weight_init_style == "jax": - # Based on MAE and official Jax ViT implementation - torch.nn.init.xavier_uniform_(m.weight) - elif self.weight_init_style == "pytorch": - # PyTorch ViT uses trunc_normal_ - trunc_normal_(m.weight, std=0.02) - - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, (nn.LayerNorm)): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def forward( - self, - tokens: torch.Tensor, - attn_mask: torch.Tensor = None, - use_checkpoint: bool = False, - checkpoint_every_n: int = 1, - checkpoint_blk_ids: List[int] = None, - ): - """ - Inputs - - tokens: data of shape N x L x D (or L x N x D depending on the attention implementation) - - attn: mask of shape L x L - - Output - - x: data of shape N x L x D (or L x N x D depending on the attention implementation) - """ - if self.pre_transformer_layer: - tokens = self.pre_transformer_layer(tokens) - if use_checkpoint and checkpoint_blk_ids is None: - checkpoint_blk_ids = [ - blk_id - for blk_id in range(len(self.blocks)) - if blk_id % checkpoint_every_n == 0 - ] - if checkpoint_blk_ids: - checkpoint_blk_ids = set(checkpoint_blk_ids) - for blk_id, blk in enumerate(self.blocks): - if use_checkpoint and blk_id in checkpoint_blk_ids: - tokens = checkpoint.checkpoint( - blk, tokens, attn_mask, use_reentrant=False - ) - else: - tokens = blk(tokens, attn_mask=attn_mask) - if self.post_transformer_layer: - tokens = self.post_transformer_layer(tokens) - return tokens diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_Loss_Dice_lr1en3.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_Loss_Dice_lr1en3.py deleted file mode 100644 index ce37df3163793e687f3a0a78922009e1e15e6f55..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_Loss_Dice_lr1en3.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from nnunet.training.network_training.nnUNet_variants.loss_function.nnUNetTrainerV2_Loss_Dice import \ - nnUNetTrainerV2_Loss_Dice, nnUNetTrainerV2_Loss_DicewithBG - - -class nnUNetTrainerV2_Loss_Dice_LR1en3(nnUNetTrainerV2_Loss_Dice): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, fp16) - self.initial_lr = 1e-3 - - -class nnUNetTrainerV2_Loss_DicewithBG_LR1en3(nnUNetTrainerV2_Loss_DicewithBG): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, fp16) - self.initial_lr = 1e-3 - diff --git a/spaces/hohonu-vicml/DirectedDiffusion/DirectedDiffusion/AttnEditorUtils.py b/spaces/hohonu-vicml/DirectedDiffusion/DirectedDiffusion/AttnEditorUtils.py deleted file mode 100644 index 220bb61f2f7e419b0210a1f14aae2031b53da2fb..0000000000000000000000000000000000000000 --- a/spaces/hohonu-vicml/DirectedDiffusion/DirectedDiffusion/AttnEditorUtils.py +++ /dev/null @@ -1,162 +0,0 @@ -import torch -import os -import numpy as np -import torchvision -from PIL import Image -from transformers import CLIPModel, CLIPTextModel, CLIPTokenizer, CLIPProcessor -from diffusers import AutoencoderKL, UNet2DConditionModel - - -def get_embeds(prompt, clip, clip_tokenizer, device="cuda"): - tokens = clip_tokenizer( - prompt, - padding="max_length", - max_length=clip_tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - return_overflowing_tokens=True, - ) - embeds = clip(tokens.input_ids.to(device)).last_hidden_state - return embeds - - -@torch.no_grad() -def get_image_from_latent(vae, latent): - latent = latent / 0.18215 - image = vae.decode(latent.to(vae.dtype)).sample - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).detach().numpy() - image = (image[0] * 255).round().astype("uint8") - return Image.fromarray(image) - - -@torch.no_grad() -def get_latent_from_image(vae, image, device="cuda"): - generator = torch.cuda.manual_seed(798122) - # Resize and transpose for numpy b h w c -> torch b c h w - # image = image.resize((width, height), resample=Image.Resampling.LANCZOS) - image = np.array(image).astype(np.float16) / 255.0 * 2.0 - 1.0 - image = torch.from_numpy(image[np.newaxis, ...].transpose(0, 3, 1, 2)) - # If there is alpha channel, composite alpha for white, as the diffusion model does not support alpha channel - if image.shape[1] > 3: - image = image[:, :3] * image[:, 3:] + (1 - image[:, 3:]) - # Move image to GPU - image = image.to(device) - # Encode image - init_latent = vae.encode(image).latent_dist.sample(generator=generator) * 0.18215 - return init_latent - - -def load_all_models(model_path_diffusion): - - clip_tokenizer = CLIPTokenizer.from_pretrained( - model_path_diffusion, 
subfolder="tokenizer" - ) - clip_text_model = CLIPTextModel.from_pretrained( - model_path_diffusion, subfolder="text_encoder", torch_dtype=torch.float16 - ) - - # Init diffusion model - auth_token = True # Replace this with huggingface auth token as a string if model is not already downloaded - # model_path_diffusion = "assets/models/stable-diffusion-v1-4" - unet = UNet2DConditionModel.from_pretrained( - model_path_diffusion, - subfolder="unet", - revision="fp16", - torch_dtype=torch.float16, - ) - vae = AutoencoderKL.from_pretrained( - model_path_diffusion, - subfolder="vae", - revision="fp16", - torch_dtype=torch.float16, - ) - # Move to GPU - device = "cuda" - unet.to(device) - vae.to(device) - clip_text_model.to(device) - model_bundle = {} - model_bundle["unet"] = unet - model_bundle["vae"] = vae - model_bundle["clip_tokenizer"] = clip_tokenizer - model_bundle["clip_text_model"] = clip_text_model - return model_bundle - - -@torch.no_grad() -def check_clip_score(clip_model, clip_processor, prompts=[], images=[]): - if len(prompts) == 1: - dim = 0 - if len(images) == 1: - dim = 1 - inputs = clip_processor( - text=prompts, images=images, return_tensors="pt", padding=True - ) - inputs["pixel_values"] = torch.tensor( - inputs["pixel_values"], dtype=clip_model.dtype, device=clip_model.device - ) - inputs["input_ids"] = torch.tensor(inputs["input_ids"], device=clip_model.device) - inputs["attention_mask"] = torch.tensor( - inputs["attention_mask"], device=clip_model.device - ) - outputs = clip_model(**inputs) - a = clip_model.get_image_features(inputs["pixel_values"]) - b = clip_model.get_text_features(inputs["input_ids"]) - prob = torch.matmul(a, b.t()).softmax(dim=dim) - return prob - - -def get_attn(unet, use=True): - attn = [] - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn2" in name: - if module.attn.size() == torch.Size([8, 1024, 77]): - attn.append(module.attn) - attn = torch.cat(attn, dim=0) - attn = torch.sum(attn, dim=0) - resized = torch.zeros([64, 64, 77]) - f = torchvision.transforms.Resize(size=(64, 64)) - for i in range(77): - dim = int(np.sqrt(attn.shape[0])) - attn_slice = attn[..., i].view(1, dim, dim) - resized[..., i] = f(attn_slice)[0] - return resized.cpu().numpy() - - -def save_attn(unet): - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention" and "attn2" in name: - folder = "/tmp" - filepath = os.path.join(folder, name + ".pt") - torch.save(module.attn, filepath) - print(filepath) - - -def use_add_noise(unet, level, use=True): - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention": - module.use_add_noise = use - module.noise_level = level - - -def use_edited_attention(unet, use=True): - for name, module in unet.named_modules(): - module_name = type(module).__name__ - if module_name == "CrossAttention": - module.use_edited_attn = use - - -def prompt_token(prompt, index): - tokens = clip_tokenizer( - prompt, - padding="max_length", - max_length=clip_tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - return_overflowing_tokens=True, - ).input_ids[0] - return clip_tokenizer.decode(tokens[index : index + 1]) diff --git a/spaces/huggan/pix2pix-facades/app.py b/spaces/huggan/pix2pix-facades/app.py deleted file mode 100644 index 3c79a8408220f3d0c38137d7b818a5513a6be271..0000000000000000000000000000000000000000 --- a/spaces/huggan/pix2pix-facades/app.py 
+++ /dev/null @@ -1,46 +0,0 @@ -import tensorflow as tf -import pathlib -import gradio as gr -import matplotlib.pyplot as plt -from huggingface_hub import from_pretrained_keras -import numpy as np - -# Normalizing the images to [-1, 1] -def normalize_test(input_image): - input_image = tf.cast(input_image, tf.float32) - input_image = (input_image / 127.5) - 1 - return input_image - -def resize(input_image, height, width): - input_image = tf.image.resize(input_image, [height, width], - method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) - return input_image - -def load_image_infer(image_file): - input_image = resize(image_file, 256, 256) - input_image = normalize_test(input_image) - - return input_image - -def generate_images(test_input): - test_input = load_image_infer(test_input) - prediction = generator(np.expand_dims(test_input, axis=0), training=True) - fig = plt.figure(figsize=(128, 128)) - title = ['Predicted Image'] - - plt.title('Predicted Image') - # Getting the pixel values in the [0, 1] range to plot. - plt.imshow(prediction[0,:,:,:] * 0.5 + 0.5) - plt.axis('off') - return fig - - -generator = from_pretrained_keras("keras-io/pix2pix-generator") - - -img = gr.inputs.Image(shape=(256,256)) -plot = gr.outputs.Image(type="plot") - -description = "Conditional GAN model that translates image-to-image." -gr.Interface(generate_images, inputs = img, outputs = plot, -title = "Pix2Pix Facade Reconstructor", description = description, examples = [["./img.png"]]).launch() diff --git a/spaces/huggingface-projects/color-palette-generator-sd/frontend/src/routes/+layout.ts b/spaces/huggingface-projects/color-palette-generator-sd/frontend/src/routes/+layout.ts deleted file mode 100644 index 189f71e2e1b31d4e92a0493e33539bdd5128d987..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/color-palette-generator-sd/frontend/src/routes/+layout.ts +++ /dev/null @@ -1 +0,0 @@ -export const prerender = true; diff --git a/spaces/hussain-shk/IndiSent/model_configs/custom_transformer.py b/spaces/hussain-shk/IndiSent/model_configs/custom_transformer.py deleted file mode 100644 index b122e1bf5c81534aae35bb6235c1feaf45b7bada..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/model_configs/custom_transformer.py +++ /dev/null @@ -1,38 +0,0 @@ -from fairseq.models import register_model_architecture -from fairseq.models.transformer import base_architecture - - -@register_model_architecture("transformer", "transformer_2x") -def transformer_big(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_architecture(args) - - -@register_model_architecture("transformer", "transformer_4x") -def transformer_huge(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1536) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1536) - 
args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_architecture(args) - - -@register_model_architecture("transformer", "transformer_9x") -def transformer_xlarge(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 2048) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 8192) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 2048) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 8192) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_architecture(args) diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_r50.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_r50.py deleted file mode 100644 index 236721a526489b2cac7ba66a22bfc3d650e744cd..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_r50.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.5, 0.0) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/faces_emore" -config.num_classes = 85742 -config.num_image = 5822653 -config.num_epoch = 20 -config.warmup_epoch = 0 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/speed.py b/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/speed.py deleted file mode 100644 index 45e95237da65e44f35a172c25ac6dc4e313e4eae..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/speed.py +++ /dev/null @@ -1,23 +0,0 @@ -from easydict import EasyDict as edict - -# configs for test speed - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "synthetic" -config.num_classes = 100 * 10000 -config.num_epoch = 30 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = [] diff --git a/spaces/iamironman4279/SadTalker/src/face3d/util/load_mats.py b/spaces/iamironman4279/SadTalker/src/face3d/util/load_mats.py deleted file mode 100644 index f9a6fcc71de1d7dad8b0f81c67dc1c213764ff0b..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/util/load_mats.py +++ /dev/null @@ -1,120 +0,0 @@ -"""This script is to load 3D face model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -from PIL import Image -from scipy.io import loadmat, savemat -from array import array -import os.path as osp - -# load expression basis -def LoadExpBasis(bfm_folder='BFM'): - 
n_vertex = 53215 - Expbin = open(osp.join(bfm_folder, 'Exp_Pca.bin'), 'rb') - exp_dim = array('i') - exp_dim.fromfile(Expbin, 1) - expMU = array('f') - expPC = array('f') - expMU.fromfile(Expbin, 3*n_vertex) - expPC.fromfile(Expbin, 3*exp_dim[0]*n_vertex) - Expbin.close() - - expPC = np.array(expPC) - expPC = np.reshape(expPC, [exp_dim[0], -1]) - expPC = np.transpose(expPC) - - expEV = np.loadtxt(osp.join(bfm_folder, 'std_exp.txt')) - - return expPC, expEV - - -# transfer original BFM09 to our face model -def transferBFM09(bfm_folder='BFM'): - print('Transfer BFM09 to BFM_model_front......') - original_BFM = loadmat(osp.join(bfm_folder, '01_MorphableModel.mat')) - shapePC = original_BFM['shapePC'] # shape basis - shapeEV = original_BFM['shapeEV'] # corresponding eigen value - shapeMU = original_BFM['shapeMU'] # mean face - texPC = original_BFM['texPC'] # texture basis - texEV = original_BFM['texEV'] # eigen value - texMU = original_BFM['texMU'] # mean texture - - expPC, expEV = LoadExpBasis(bfm_folder) - - # transfer BFM09 to our face model - - idBase = shapePC*np.reshape(shapeEV, [-1, 199]) - idBase = idBase/1e5 # unify the scale to decimeter - idBase = idBase[:, :80] # use only first 80 basis - - exBase = expPC*np.reshape(expEV, [-1, 79]) - exBase = exBase/1e5 # unify the scale to decimeter - exBase = exBase[:, :64] # use only first 64 basis - - texBase = texPC*np.reshape(texEV, [-1, 199]) - texBase = texBase[:, :80] # use only first 80 basis - - # our face model is cropped along face landmarks and contains only 35709 vertex. - # original BFM09 contains 53490 vertex, and expression basis provided by Guo et al. contains 53215 vertex. - # thus we select corresponding vertex to get our face model. - - index_exp = loadmat(osp.join(bfm_folder, 'BFM_front_idx.mat')) - index_exp = index_exp['idx'].astype(np.int32) - 1 # starts from 0 (to 53215) - - index_shape = loadmat(osp.join(bfm_folder, 'BFM_exp_idx.mat')) - index_shape = index_shape['trimIndex'].astype( - np.int32) - 1 # starts from 0 (to 53490) - index_shape = index_shape[index_exp] - - idBase = np.reshape(idBase, [-1, 3, 80]) - idBase = idBase[index_shape, :, :] - idBase = np.reshape(idBase, [-1, 80]) - - texBase = np.reshape(texBase, [-1, 3, 80]) - texBase = texBase[index_shape, :, :] - texBase = np.reshape(texBase, [-1, 80]) - - exBase = np.reshape(exBase, [-1, 3, 64]) - exBase = exBase[index_exp, :, :] - exBase = np.reshape(exBase, [-1, 64]) - - meanshape = np.reshape(shapeMU, [-1, 3])/1e5 - meanshape = meanshape[index_shape, :] - meanshape = np.reshape(meanshape, [1, -1]) - - meantex = np.reshape(texMU, [-1, 3]) - meantex = meantex[index_shape, :] - meantex = np.reshape(meantex, [1, -1]) - - # other info contains triangles, region used for computing photometric loss, - # region used for skin texture regularization, and 68 landmarks index etc. 
- other_info = loadmat(osp.join(bfm_folder, 'facemodel_info.mat')) - frontmask2_idx = other_info['frontmask2_idx'] - skinmask = other_info['skinmask'] - keypoints = other_info['keypoints'] - point_buf = other_info['point_buf'] - tri = other_info['tri'] - tri_mask2 = other_info['tri_mask2'] - - # save our face model - savemat(osp.join(bfm_folder, 'BFM_model_front.mat'), {'meanshape': meanshape, 'meantex': meantex, 'idBase': idBase, 'exBase': exBase, 'texBase': texBase, - 'tri': tri, 'point_buf': point_buf, 'tri_mask2': tri_mask2, 'keypoints': keypoints, 'frontmask2_idx': frontmask2_idx, 'skinmask': skinmask}) - - -# load landmarks for standard face, which is used for image preprocessing -def load_lm3d(bfm_folder): - - Lm3D = loadmat(osp.join(bfm_folder, 'similarity_Lm3D_all.mat')) - Lm3D = Lm3D['lm'] - - # calculate 5 facial landmarks using 68 landmarks - lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1 - Lm3D = np.stack([Lm3D[lm_idx[0], :], np.mean(Lm3D[lm_idx[[1, 2]], :], 0), np.mean( - Lm3D[lm_idx[[3, 4]], :], 0), Lm3D[lm_idx[5], :], Lm3D[lm_idx[6], :]], axis=0) - Lm3D = Lm3D[[1, 2, 0, 3, 4], :] - - return Lm3D - - -if __name__ == '__main__': - transferBFM09() \ No newline at end of file diff --git a/spaces/iamtahiralvi/stabilityai-stable-diffusion-2-1/README.md b/spaces/iamtahiralvi/stabilityai-stable-diffusion-2-1/README.md deleted file mode 100644 index c4c96903bc8e5ef8e75d647f1a17465b06f11a59..0000000000000000000000000000000000000000 --- a/spaces/iamtahiralvi/stabilityai-stable-diffusion-2-1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stabilityai Stable Diffusion 2 1 -emoji: 💻 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: gpl ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/imageomics/dashboard-prototype/run.sh b/spaces/imageomics/dashboard-prototype/run.sh deleted file mode 100644 index 50af9e0f7b7db4b4a1ddc3f8aa3895c2d06e1848..0000000000000000000000000000000000000000 --- a/spaces/imageomics/dashboard-prototype/run.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/bin/bash -gunicorn -w 5 -b :7860 -t 360 dashboard:server diff --git a/spaces/imperialwool/funapi/routes/ytApi/get.py b/spaces/imperialwool/funapi/routes/ytApi/get.py deleted file mode 100644 index b5fc763045dc54bdbf99620052b850b8aec69333..0000000000000000000000000000000000000000 --- a/spaces/imperialwool/funapi/routes/ytApi/get.py +++ /dev/null @@ -1,37 +0,0 @@ -import os -import yt_dlp -from .. 
import helpers - -def get(request, check = "huh"): - url = helpers.getFromRequest(request, "url") - if not url: return {"status": "error", "details": { "error_code": 101, "error_details": "No link provided" }}, 400 - - bitrate = helpers.getFromRequest(request, "bitrate") - if not bitrate: bitrate = "64k" - - quality = helpers.getFromRequest(request, "quality") - if not quality or quality.lower() not in ['best', 'worst']: quality = 'worst' - else: quality = quality.lower() - - urlcode = url.partition('?v=')[2] - if not urlcode: urlcode = "NPRNRQh2fAo" - - config = helpers.configFile() - - if os.path.exists(f"{config['static-path']}/{check}/{urlcode}.ogg"): - return {"status": "pass", 'done-or-not': True, 'ytdlp-code': 0, 'urlcode': urlcode, "path": f"{config['static-path']}/{check}/{urlcode}.ogg", "quality": quality, "bitrate": bitrate} - - if os.path.exists(f"{config['temp-path']}/{urlcode}.ogg"): - return {"status": "pass", 'done-or-not': False, 'ytdlp-code': 0, 'urlcode': urlcode, "path": f"{config['temp-path']}/{urlcode}.ogg", "quality": quality, "bitrate": bitrate} - - ydl_opts = { - 'format': f'ogg/{quality}audio/{quality}', - 'outtmpl': f"{config['temp-path']}/{urlcode}.ogg", - 'progress_hooks': [helpers.thisIsHook], - } - - try: - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - error_code = ydl.download(url) - except Exception as e: return {"status": "error", "details": {"error_code": 102, "error_details": str(e)}}, 400 - return {"status": "pass", 'done-or-not': False, 'ytdlp-code': error_code, 'urlcode': urlcode, "path": f"{config['temp-path']}/{urlcode}.ogg", "quality": quality, "bitrate": bitrate} \ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/CRACK DzSoft PowerPoint Slide Show Converter 3.2.2.5 Create Self-Running Slide Shows from PowerPoint.md b/spaces/inamXcontru/PoeticTTS/CRACK DzSoft PowerPoint Slide Show Converter 3.2.2.5 Create Self-Running Slide Shows from PowerPoint.md deleted file mode 100644 index c6585322b48c161f1b851cec67b643549fc9dfa6..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/CRACK DzSoft PowerPoint Slide Show Converter 3.2.2.5 Create Self-Running Slide Shows from PowerPoint.md +++ /dev/null @@ -1,6 +0,0 @@ -

CRACK DzSoft PowerPoint Slide Show Converter 3.2.2.5


Download File: https://gohhs.com/2uz4jm



- - aaccfb2cb3
-
-
-

diff --git a/spaces/inamXcontru/PoeticTTS/Desi Beat Bodyguard Full Video Song Hd 720p Downloadl The Ultimate Party Song from Salman Khans Bodyguard.md b/spaces/inamXcontru/PoeticTTS/Desi Beat Bodyguard Full Video Song Hd 720p Downloadl The Ultimate Party Song from Salman Khans Bodyguard.md deleted file mode 100644 index 2874cde5aa883368386a9746ba9f5d5c9c248378..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Desi Beat Bodyguard Full Video Song Hd 720p Downloadl The Ultimate Party Song from Salman Khans Bodyguard.md +++ /dev/null @@ -1,6 +0,0 @@ -

Desi Beat Bodyguard Full Video Song Hd 720p Downloadl


DOWNLOAD ……… https://gohhs.com/2uz40t



- - aaccfb2cb3
-
-
-

diff --git a/spaces/indichealth/indic-health-demo/utils/__init__.py b/spaces/indichealth/indic-health-demo/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/innovatorved/whisper.api/app/tests/test_api/test_ping.py b/spaces/innovatorved/whisper.api/app/tests/test_api/test_ping.py deleted file mode 100644 index b48758c022b235c5511f8f4f0a828818a753ae30..0000000000000000000000000000000000000000 --- a/spaces/innovatorved/whisper.api/app/tests/test_api/test_ping.py +++ /dev/null @@ -1,12 +0,0 @@ -# File: whisper.api/app/tests/test_api/__init__.py - -from fastapi.testclient import TestClient -from app.main import app - -client = TestClient(app) - - -def test_ping_main(): - response = client.get("/ping") - assert response.status_code == 200 - assert response.json() == {"ping": "pong"} diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/2004 Quickbooks Crack [UPDATED] Key Generator.md b/spaces/inplisQlawa/anything-midjourney-v4-1/2004 Quickbooks Crack [UPDATED] Key Generator.md deleted file mode 100644 index de9026ee52ab8948d199d91bcbbcc105d72fc7a1..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/2004 Quickbooks Crack [UPDATED] Key Generator.md +++ /dev/null @@ -1,6 +0,0 @@ -

2004 Quickbooks Crack Key Generator


Download File: https://urlin.us/2uEvr6



-
- d5da3c52bf
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Citrix Xenserver 6.1 License Crack ((NEW)).md b/spaces/inplisQlawa/anything-midjourney-v4-1/Citrix Xenserver 6.1 License Crack ((NEW)).md deleted file mode 100644 index 88efe590b2fc337e003c3479db9846ca2c4b85a5..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Citrix Xenserver 6.1 License Crack ((NEW)).md +++ /dev/null @@ -1,10 +0,0 @@ - -

i am tired of this. i have talked to citrix and told them this is not right. i have tried to work with them and tell them i cannot afford this and i need to be able to manage this out of a budget. so now i need your help to get citrix to change the pricing. there needs to be more flexibility so that i can manage this out of a budget.

-

citrix xenserver 6.1 license crack


Download ⇒⇒⇒ https://urlin.us/2uEyHF



-

i have a xenserver 6.1 with 3 mpp servers in it. every week i get an automated email from citrix saying that i need to renew my license, and that it costs $300 for each server. i have a budget of $5,000 a year. i do not have that kind of budget.

-

we have been talking about this for a few years, and i have yet to get a straight answer from citrix as to why they can't just lower the cost to allow us to manage this out of a budget. i am contacting their support team again to make sure i get a straight answer.

-

i just had a quick look at the citrix system console, and it seems to me like that is not correct. in the system console, i have 6 entries for xenserver 6.1, but only 2 of them have a valid license. the other 4 entries do not have a valid license.

-

service pack, scom and updates to the management packs have been a constant theme in the xendesktop world for the past couple of years. last year, the xendesktop 7.0 refresh was introduced with the promise of a simpler, less cluttered console. in my opinion, the effort was not all that successful. icons for citrix products are simply too large and the user interface is fairly convoluted. i have already commented on that in an earlier blog post.

-

it seems that ms finally is taking their ignorance to the next level with scom. i have a very limited understanding of how citrix xenserver operates. i was able to have a xendesktop project configured, but for some reason the citrix run-as profiles were never created.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Corel Products Keygen Xforce Free Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Corel Products Keygen Xforce Free Download.md deleted file mode 100644 index c4ad77ed8dbeaa37666200976567ab825fd0871d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Corel Products Keygen Xforce Free Download.md +++ /dev/null @@ -1,102 +0,0 @@ -
-

Corel Products Keygen Xforce Free Download: A Review

-

If you are looking for a way to activate your Corel products, such as CorelDRAW, Corel Painter, Corel VideoStudio, Corel PaintShop Pro, etc., you might have come across Corel Products Keygen Xforce Free Download. This is a software that claims to generate serial numbers and activation codes for various Corel products on both Windows and Mac platforms. But is it really worth downloading and using? In this article, we will review Corel Products Keygen Xforce Free Download and see if it lives up to its promises.

-

Corel Products Keygen Xforce Free Download


Download Zip: https://urlin.us/2uEvzZ



- -

What is Corel Products Keygen Xforce Free Download?

-

Corel Products Keygen Xforce Free Download is a software that is made by X-Force Crack Team, a group of hackers and crackers who are known for creating keygens and cracks for various software products. The software is a universal keygen that supports dozens of Corel products, such as CorelDRAW Graphics Suite, Corel Painter, Corel VideoStudio, Corel PaintShop Pro, Corel WordPerfect Office series, etc. It is mainly used to generate serial numbers and activation codes for these products on both Windows 32-bit and 64-bit platforms.

-

However, Corel Products Keygen Xforce Free Download is not an official product of Corel Corporation. It is a modified version of the software that bypasses the activation process and allows users to use the products for free. It is usually distributed through various websites that offer free downloads of keygens, cracks, patches, etc.

- -

Why You Should Avoid Corel Products Keygen Xforce Free Download?

-

While it might be tempting to use Corel Products Keygen Xforce Free Download to save some money and use your Corel products, there are many reasons why you should avoid it at all costs. Here are some of them:

- - -

What is the Best Alternative to Corel Products Keygen Xforce Free Download?

-

The best alternative to Corel Products Keygen Xforce Free Download is to use the original and genuine version of your Corel products from the official website of Corel Corporation. This way, you can enjoy the following benefits:

- -

To use the original version of your Corel products, you need to purchase a license key from the official website of Corel Corporation. The license key will activate your products and allow you to use them for a lifetime on one computer. The price of the license key varies depending on the product and edition you choose:

- - - - - - - - - - - - - - - - - - -
| Product | Edition | Price |
| --- | --- | --- |
| CorelDRAW Graphics Suite | X5 | $499 |
| CorelDRAW Graphics Suite | X6 | $499 |
| CorelDRAW Graphics Suite | X7 | $499 |
| CorelDRAW Graphics Suite | X8 | $499 |
| Corel Painter | X3 | $429 |
| Corel Painter | 2015 | $429 |
| Corel Painter | 2016 | $429 |
| Corel VideoStudio | X7 Ultimate | $99.99 |
| Corel VideoStudio | X8 Ultimate | $99.99 |
| Corel VideoStudio | X9 Ultimate | $99.99 |
| Corel PaintShop Pro | X7 Ultimate | $99.99 |
| Corel PaintShop Pro | X8 Ultimate | $99.99 |
| Corel PaintShop Pro | X9 Ultimate | $99.99 |
| Corel WordPerfect Office | X5 Standard Edition | $249.99 |
| Corel WordPerfect Office | X6 Standard Edition | $249.99 |
| Corel WordPerfect Office | X7 Standard Edition | $249.99 |
- -

Conclusion

-

In conclusion, Corel Products Keygen Xforce Free Download is not a wise choice to activate your Corel products. It is illegal, unsafe, unreliable, and unsupported. Instead of risking your data and computer by using it, you should opt for the original and genuine version of your Corel products from the official website of Corel Corporation. This will ensure that you can use your products with ease and confidence.

-

-

How to Download and Use Corel Products Keygen Xforce Free Download?

-

If you still want to try Corel Products Keygen Xforce Free Download, despite the risks and drawbacks, you need to follow these steps:

-
    -
  1. Find a website that offers Corel Products Keygen Xforce Free Download. You can use any search engine to look for it, but be careful of the sites that might contain malware or viruses.
  2. -
  3. Download the zip file of Corel Products Keygen Xforce Free Download from the website. You might need to complete some surveys or offers to unlock the download link.
  4. -
  5. Extract the zip file to a folder on your computer. You might need a password to extract the file, which should be provided by the website.
  6. -
  7. Run the keygen.exe file as administrator. You will see a window with a list of Corel products and a generate button.
  8. -
  9. Select the Corel product that you want to activate from the list. Make sure it matches the version and edition of your installed product.
  10. -
  11. Click on generate button to generate a serial number and an activation code for your selected product.
  12. -
  13. Copy and paste the serial number and the activation code to your Corel product activation window. Click on activate button to complete the activation process.
  14. -
-

Congratulations! You have successfully activated your Corel product by using Corel Products Keygen Xforce Free Download.

- -

What are the Risks and Drawbacks of Using Corel Products Keygen Xforce Free Download?

-

As we have mentioned before, using Corel Products Keygen Xforce Free Download is not a wise choice. It comes with many risks and drawbacks that can outweigh the benefits. Here are some of them:

- - -

Conclusion

-

In conclusion, Corel Products Keygen Xforce Free Download is not a wise choice to activate your Corel products. It is illegal, unsafe, unreliable, and unsupported. Instead of risking your data and computer by using it, you should opt for the original and genuine version of your Corel products from the official website of Corel Corporation. This will ensure that you can use your products with ease and confidence.

-

How to Download and Install Corel Products from the Official Website?

-

If you want to use the original and genuine version of your Corel products, you need to download and install them from the official website of Corel Corporation. Here are the steps to do so:

-
    -
1. Go to the official website of Corel Corporation at https://www.corel.com/.
2. Select the product that you want to use from the menu bar or the product list. You can also use the search box to find your product.
3. Click on the Buy Now button to purchase a license key for your product. You can choose between a subscription or a perpetual license, depending on your preference and budget.
4. Enter your payment details and complete the checkout process. You will receive an email confirmation with your order number and license key.
5. Click on the Download button to download the installer file for your product. You can also access your download link from your Corel account.
6. Run the installer file and follow the instructions to install your product on your computer. You might need to enter your license key during the installation process.
7. Launch your product and enjoy its features and benefits.
-

Congratulations! You have successfully downloaded and installed your Corel product from the official website of Corel Corporation.

- -

How to Get Help and Support for Your Corel Products?

-

If you encounter any problems or issues while using your Corel products, you can get help and support from various sources. Here are some of them:

- - -

Conclusion

-

In conclusion, Corel Products Keygen Xforce Free Download is not a wise choice to activate your Corel products. It is illegal, unsafe, unreliable, and unsupported. Instead of risking your data and computer by using it, you should opt for the original and genuine version of your Corel products from the official website of Corel Corporation. This will ensure that you can use your products with ease and confidence.

-


3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Cleo 4.1 Gta San Andreas.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Download Cleo 4.1 Gta San Andreas.md deleted file mode 100644 index 64dae87788afd4f3283b5c29ca75bb5f7c8e478c..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Cleo 4.1 Gta San Andreas.md +++ /dev/null @@ -1,29 +0,0 @@ -
-

How to Download CLEO 4.1 for GTA San Andreas

-

CLEO 4.1 is a library of scripts and plugins that allow you to modify and enhance the gameplay of Grand Theft Auto: San Andreas, one of the most popular open-world action games of all time. With CLEO 4.1, you can add new features, missions, vehicles, weapons, cheats, and more to your game.

-

In this article, we will show you how to download and install CLEO 4.1 for GTA San Andreas on your PC. Follow these simple steps and enjoy the new possibilities of CLEO 4.1.

-

download cleo 4.1 gta san andreas


Download: https://urlin.us/2uExXf



-
    -
1. Go to the official website of CLEO 4.1 at http://cleo.li/ and click on the "Download" button.
2. Extract the downloaded ZIP file to a folder of your choice.
3. Copy the files "CLEO.asi", "cleo.ini", and "vorbisHooked.dll" from the extracted folder to your GTA San Andreas installation directory (usually "C:\Program Files\Rockstar Games\GTA San Andreas").
4. Copy the folder "CLEO" from the extracted folder to your GTA San Andreas installation directory as well.
5. Make a backup of the file "vorbisFile.dll" in your GTA San Andreas installation directory and replace it with the one from the extracted folder.
6. You have successfully installed CLEO 4.1 for GTA San Andreas. To run the game, use the file "gta_sa.exe" in your GTA San Andreas installation directory.
-

To use CLEO scripts and plugins, you need to download them from various websites and place them in the "CLEO" folder in your GTA San Andreas installation directory. You can find many CLEO scripts and plugins at https://www.gtagarage.com/mods/index.php?C=24.

-

CLEO 4.1 supports several versions of GTA San Andreas: 1.0, 1.01, and 3.0 (Steam), but individual scripts and plugins are not guaranteed to be compatible with every version[^1^]. CLEO requires an ASI Loader to be installed in order to run, and one is provided with the release[^1^]. Installing the ASI Loader overwrites one original game file, vorbisFile.dll, so be sure to make a backup of that file first[^1^]. No additional files are required to run CLEO scripts or plugins[^1^].
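The install steps above boil down to one backup plus a handful of file copies, so they are easy to script if you set CLEO up often. Below is a minimal Python sketch of the same operations; both folder paths are assumptions (the extraction folder is hypothetical, the game path is the typical one mentioned in this article), so adjust them to match your own setup.

```python
# Minimal sketch of the CLEO 4.1 install steps described above.
# Both paths are assumptions; change them to match your own setup.
import shutil
from pathlib import Path

extracted = Path(r"C:\Downloads\CLEO_4.1")  # hypothetical folder where the ZIP was extracted
game_dir = Path(r"C:\Program Files\Rockstar Games\GTA San Andreas")  # typical install path

# Back up the original vorbisFile.dll before it gets replaced.
original_dll = game_dir / "vorbisFile.dll"
if original_dll.exists():
    shutil.copy2(original_dll, game_dir / "vorbisFile.dll.bak")

# Copy the loader files and the replacement vorbisFile.dll into the game folder.
for name in ("CLEO.asi", "cleo.ini", "vorbisHooked.dll", "vorbisFile.dll"):
    shutil.copy2(extracted / name, game_dir / name)

# Copy the CLEO folder, which is where downloaded scripts and plugins go.
shutil.copytree(extracted / "CLEO", game_dir / "CLEO", dirs_exist_ok=True)

print("CLEO files copied. Launch gta_sa.exe to play with CLEO enabled.")
```

If the copies fail with a permission error, run the script from an elevated prompt, since the default install path sits under Program Files.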

-

Grand Theft Auto: San Andreas is a game developed by Rockstar Games and released in 2004. It is set in the fictional state of San Andreas, which is based on California and Nevada, and follows the story of Carl Johnson, a former gang member who returns to his hometown after five years of exile[^2^]. The game features a large and diverse open world, where the player can explore various cities, countryside, deserts, mountains, and more. The game also offers many activities, such as driving, shooting, fighting, gambling, racing, flying, swimming, and more.

-

We hope this article was helpful for you. If you have any questions or problems with downloading or installing CLEO 4.1 for GTA San Andreas, please leave a comment below. Have fun with CLEO 4.1!

In this section, we will introduce some of the most popular and useful CLEO scripts and plugins that you can download and use for GTA San Andreas. These are some of the examples of what CLEO 4.1 can do for your game.

-

- -

These are just some of the many CLEO scripts and plugins that you can find online. There are thousands of them for different purposes and preferences. You can search for them on websites like https://www.gtagarage.com/, https://www.gtainside.com/, or https://libertycity.net/. You can also create your own CLEO scripts and plugins using the tools and tutorials provided by the CLEO community.

-

CLEO 4.1 is a powerful and versatile tool that can transform your GTA San Andreas experience. It is easy to install and use, and it offers endless possibilities for fun and creativity. We hope you enjoy using CLEO 4.1 as much as we do.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Hotel Rwanda (2004) 720p BluRay X265 HEVC [Dual Audio] AC3 [Hindi 2.0 English 5.1] - MRDhila LINK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Hotel Rwanda (2004) 720p BluRay X265 HEVC [Dual Audio] AC3 [Hindi 2.0 English 5.1] - MRDhila LINK.md deleted file mode 100644 index bc3ff2602fe8b968c01a9ff52ce81c65b615deac..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Hotel Rwanda (2004) 720p BluRay X265 HEVC [Dual Audio] AC3 [Hindi 2.0 English 5.1] - MRDhila LINK.md +++ /dev/null @@ -1,9 +0,0 @@ -

Hotel Rwanda (2004) 720p BluRay x265 HEVC [Dual Audio] AC3 [Hindi 2.0 English 5.1] - MRDhila


Download Zip: https://urlin.us/2uExSq



- -Hotel Rwanda (2004) 720p BluRay X265 HEVC [Dual Audio] AC3 [Hindi 2.0 + English 5.1]: MRDhila. The download was added 6 years ago to Download Movies. 8a78ff9644
-
-
-

diff --git a/spaces/jackyliang42/code-as-policies/prompts/tabletop_ui.py b/spaces/jackyliang42/code-as-policies/prompts/tabletop_ui.py deleted file mode 100644 index ae654730a480720f0995a14c5fd891156db78a44..0000000000000000000000000000000000000000 --- a/spaces/jackyliang42/code-as-policies/prompts/tabletop_ui.py +++ /dev/null @@ -1,173 +0,0 @@ -# Python 2D robot control script -import numpy as np -from env_utils import put_first_on_second, get_obj_pos, get_obj_names, say, get_corner_name, get_side_name, is_obj_visible, stack_objects_in_order -from plan_utils import parse_obj_name, parse_position, parse_question, transform_shape_pts - -objects = ['yellow block', 'green block', 'yellow bowl', 'blue block', 'blue bowl', 'green bowl'] -# place the yellow block on the yellow bowl. -say('Ok - putting the yellow block on the yellow bowl') -put_first_on_second('yellow block', 'yellow bowl') -objects = ['yellow block', 'green block', 'yellow bowl', 'blue block', 'blue bowl', 'green bowl'] -# which block did you move. -say('I moved the yellow block') -objects = ['yellow block', 'green block', 'yellow bowl', 'blue block', 'blue bowl', 'green bowl'] -# move the green block to the top right corner. -say('Got it - putting the green block on the top right corner') -corner_pos = parse_position('top right corner') -put_first_on_second('green block', corner_pos) -objects = ['yellow block', 'green block', 'yellow bowl', 'blue block', 'blue bowl', 'green bowl'] -# stack the blue bowl on the yellow bowl on the green block. -order_bottom_to_top = ['green block', 'yellow block', 'blue bowl'] -say(f'Sure - stacking from top to bottom: {", ".join(order_bottom_to_top)}') -stack_objects_in_order(object_names=order_bottom_to_top) -objects = ['cyan block', 'white block', 'cyan bowl', 'blue block', 'blue bowl', 'white bowl'] -# move the cyan block into its corresponding bowl. -matches = {'cyan block': 'cyan bowl'} -say('Got it - placing the cyan block on the cyan bowl') -for first, second in matches.items(): - put_first_on_second(first, get_obj_pos(second)) -objects = ['cyan block', 'white block', 'cyan bowl', 'blue block', 'blue bowl', 'white bowl'] -# make a line of blocks on the right side. -say('No problem! Making a line of blocks on the right side') -block_names = parse_obj_name('the blocks', f'objects = {get_obj_names()}') -line_pts = parse_position(f'a 30cm vertical line on the right with {len(block_names)} points') -for block_name, pt in zip(block_names, line_pts): - put_first_on_second(block_name, pt) -objects = ['yellow block', 'red block', 'yellow bowl', 'gray block', 'gray bowl', 'red bowl'] -# put the small banana colored thing in between the blue bowl and green block. -say('Sure thing - putting the yellow block between the blue bowl and the green block') -target_pos = parse_position('a point in the middle betweeen the blue bowl and the green block') -put_first_on_second('yellow block', target_pos) -objects = ['yellow block', 'red block', 'yellow bowl', 'gray block', 'gray bowl', 'red bowl'] -# can you cut the bowls in half. -say('no, I can only move objects around') -objects = ['yellow block', 'green block', 'yellow bowl', 'gray block', 'gray bowl', 'green bowl'] -# stack the blocks on the right side with the gray one on the bottom. -say('Ok. 
stacking the blocks on the right side with the gray block on the bottom') -right_side = parse_position('the right side') -put_first_on_second('gray block', right_side) -order_bottom_to_top = ['gray block', 'green block', 'yellow block'] -stack_objects_in_order(object_names=order_bottom_to_top) -objects = ['yellow block', 'green block', 'yellow bowl', 'blue block', 'blue bowl', 'green bowl'] -# hide the blue bowl. -bowl_name = np.random.choice(['yellow bowl', 'green bowl']) -say(f'Sounds good! Hiding the blue bowl under the {bowl_name}') -put_first_on_second(bowl_name, 'blue bowl') -objects = ['pink block', 'green block', 'pink bowl', 'blue block', 'blue bowl', 'green bowl'] -# move the grass-colored bowl to the left. -say('Sure - moving the green bowl left by 10 centimeters') -left_pos = parse_position('a point 10cm left of the green bowl') -put_first_on_second('green bowl', left_pos) -objects = ['pink block', 'green block', 'pink bowl', 'blue block', 'blue bowl', 'green bowl'] -# why did you move the red bowl. -say(f'I did not move the red bowl') -objects = ['pink block', 'green block', 'pink bowl', 'blue block', 'blue bowl', 'green bowl'] -# undo that. -say('Sure - moving the green bowl right by 10 centimeters') -left_pos = parse_position('a point 10cm right of the green bowl') -put_first_on_second('green bowl', left_pos) -objects = ['brown bowl', 'green block', 'brown block', 'green bowl', 'blue bowl', 'blue block'] -# place the top most block to the corner closest to the bottom most block. -top_block_name = parse_obj_name('top most block', f'objects = {get_obj_names()}') -bottom_block_name = parse_obj_name('bottom most block', f'objects = {get_obj_names()}') -closest_corner_pos = parse_position(f'the corner closest to the {bottom_block_name}', f'objects = {get_obj_names()}') -say(f'Putting the {top_block_name} on the {get_corner_name(closest_corner_pos)}') -put_first_on_second(top_block_name, closest_corner_pos) -objects = ['brown bowl', 'green block', 'brown block', 'green bowl', 'blue bowl', 'blue block'] -# move the brown bowl to the side closest to the green block. -closest_side_position = parse_position('the side closest to the green block') -say(f'Got it - putting the brown bowl on the {get_side_name(closest_side_position)}') -put_first_on_second('brown bowl', closest_side_position) -objects = ['brown bowl', 'green block', 'brown block', 'green bowl', 'blue bowl', 'blue block'] -# place the green block to the right of the bowl that has the blue block. -bowl_name = parse_obj_name('the bowl that has the blue block', f'objects = {get_obj_names()}') -if bowl_name: - target_pos = parse_position(f'a point 10cm to the right of the {bowl_name}') - say(f'No problem - placing the green block to the right of the {bowl_name}') - put_first_on_second('green block', target_pos) -else: - say('There are no bowls that has the blue block') -objects = ['brown bowl', 'green block', 'brown block', 'green bowl', 'blue bowl', 'blue block'] -# move the other blocks to the bottom corners. -block_names = parse_obj_name('blocks other than the blue block', f'objects = {get_obj_names()}') -corners = parse_position('the bottom corners') -for block_name, pos in zip(block_names, corners): - put_first_on_second(block_name, pos) -objects = ['pink block', 'gray block', 'orange block'] -# move the pinkish colored block on the bottom side. 
-say('Ok - putting the pink block on the bottom side') -bottom_side_pos = parse_position('the bottom side') -put_first_on_second('pink block', bottom_side_pos) -objects = ['yellow bowl', 'blue block', 'yellow block', 'blue bowl'] -# is the blue block to the right of the yellow bowl? -if parse_question('is the blue block to the right of the yellow bowl?', f'objects = {get_obj_names()}'): - say('yes, there is a blue block to the right of the yellow bow') -else: - say('no, there is\'t a blue block to the right of the yellow bow') -objects = ['yellow bowl', 'blue block', 'yellow block', 'blue bowl'] -# how many yellow objects are there? -n_yellow_objs = parse_question('how many yellow objects are there', f'objects = {get_obj_names()}') -say(f'there are {n_yellow_objs} yellow object') -objects = ['pink block', 'green block', 'pink bowl', 'blue block', 'blue bowl', 'green bowl'] -# move the left most block to the green bowl. -left_block_name = parse_obj_name('left most block', f'objects = {get_obj_names()}') -say(f'Moving the {left_block_name} on the green bowl') -put_first_on_second(left_block_name, 'green bowl') -objects = ['pink block', 'green block', 'pink bowl', 'blue block', 'blue bowl', 'green bowl'] -# move the other blocks to different corners. -block_names = parse_obj_name(f'blocks other than the {left_block_name}', f'objects = {get_obj_names()}') -corners = parse_position('the corners') -say(f'Ok - moving the other {len(block_names)} blocks to different corners') -for block_name, pos in zip(block_names, corners): - put_first_on_second(block_name, pos) -objects = ['pink block', 'green block', 'pink bowl', 'blue block', 'blue bowl', 'green bowl'] -# is the pink block on the green bowl. -if parse_question('is the pink block on the green bowl', f'objects = {get_obj_names()}'): - say('Yes - the pink block is on the green bowl.') -else: - say('No - the pink block is not on the green bowl.') -objects = ['pink block', 'green block', 'pink bowl', 'blue block', 'blue bowl', 'green bowl'] -# what are the blocks left of the green bowl. -left_block_names = parse_question('what are the blocks left of the green bowl', f'objects = {get_obj_names()}') -if len(left_block_names) > 0: - say(f'These blocks are left of the green bowl: {", ".join(left_block_names)}') -else: - say('There are no blocks left of the green bowl') -objects = ['yellow block', 'green block', 'yellow bowl', 'blue block', 'blue bowl', 'green bowl'] -# imagine that the bowls are different biomes on earth and imagine that the blocks are parts of a building. -say('ok') -objects = ['yellow block', 'green block', 'yellow bowl', 'blue block', 'blue bowl', 'green bowl'] -# now build a tower in the grasslands. -order_bottom_to_top = ['green bowl', 'blue block', 'green block', 'yellow block'] -say('stacking the blocks on the green bowl') -stack_objects_in_order(object_names=order_bottom_to_top) -objects = ['yellow block', 'green block', 'yellow bowl', 'gray block', 'gray bowl', 'green bowl'] -# show me what happens when the desert gets flooded by the ocean. -say('putting the yellow bowl on the blue bowl') -put_first_on_second('yellow bowl', 'blue bowl') -objects = ['pink block', 'gray block', 'orange block'] -# move all blocks 5cm toward the top. 
-say('Ok - moving all blocks 5cm toward the top') -block_names = parse_obj_name('the blocks', f'objects = {get_obj_names()}') -for block_name in block_names: - target_pos = parse_position(f'a point 5cm above the {block_name}') - put_first_on_second(block_name, target_pos) -objects = ['cyan block', 'white block', 'purple bowl', 'blue block', 'blue bowl', 'white bowl'] -# make a triangle of blocks in the middle. -block_names = parse_obj_name('the blocks', f'objects = {get_obj_names()}') -triangle_pts = parse_position(f'a triangle with size 10cm around the middle with {len(block_names)} points') -say('Making a triangle of blocks around the middle of the workspace') -for block_name, pt in zip(block_names, triangle_pts): - put_first_on_second(block_name, pt) -objects = ['cyan block', 'white block', 'purple bowl', 'blue block', 'blue bowl', 'white bowl'] -# make the triangle smaller. -triangle_pts = transform_shape_pts('scale it by 0.5x', shape_pts=triangle_pts) -say('Making the triangle smaller') -block_names = parse_obj_name('the blocks', f'objects = {get_obj_names()}') -for block_name, pt in zip(block_names, triangle_pts): - put_first_on_second(block_name, pt) -objects = ['brown bowl', 'red block', 'brown block', 'red bowl', 'pink bowl', 'pink block'] -# put the red block on the farthest bowl. -farthest_bowl_name = parse_obj_name('the bowl farthest from the red block', f'objects = {get_obj_names()}') -say(f'Putting the red block on the {farthest_bowl_name}') -put_first_on_second('red block', farthest_bowl_name) \ No newline at end of file diff --git a/spaces/jbetker/tortoise/utils/__init__.py b/spaces/jbetker/tortoise/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/lib/generateSeed.ts b/spaces/jbilcke-hf/ai-clip-factory/src/lib/generateSeed.ts deleted file mode 100644 index 563e25ec894ab5af54c5025a15a9b7a5918325de..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/lib/generateSeed.ts +++ /dev/null @@ -1,3 +0,0 @@ -export function generateSeed() { - return Math.floor(Math.random() * Math.pow(2, 31)); -} \ No newline at end of file diff --git a/spaces/jbilcke-hf/webapp-factory-llama-node/public/js/tailwindcss@3.3.2.js b/spaces/jbilcke-hf/webapp-factory-llama-node/public/js/tailwindcss@3.3.2.js deleted file mode 100644 index 387ad38afef64b83b78e4962ab5deebf12984328..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/webapp-factory-llama-node/public/js/tailwindcss@3.3.2.js +++ /dev/null @@ -1,67 +0,0 @@ -(()=>{var zx=Object.create;var bn=Object.defineProperty;var $x=Object.getOwnPropertyDescriptor;var jx=Object.getOwnPropertyNames;var Ux=Object.getPrototypeOf,Vx=Object.prototype.hasOwnProperty;var Fc=t=>bn(t,"__esModule",{value:!0});var Nc=t=>{if(typeof require!="undefined")return require(t);throw new Error('Dynamic require of "'+t+'" is not supported')};var E=(t,e)=>()=>(t&&(e=t(t=0)),e);var b=(t,e)=>()=>(e||t((e={exports:{}}).exports,e),e.exports),Ve=(t,e)=>{Fc(t);for(var r in e)bn(t,r,{get:e[r],enumerable:!0})},Wx=(t,e,r)=>{if(e&&typeof e=="object"||typeof e=="function")for(let i of jx(e))!Vx.call(t,i)&&i!=="default"&&bn(t,i,{get:()=>e[i],enumerable:!(r=$x(e,i))||r.enumerable});return t},he=t=>Wx(Fc(bn(t!=null?zx(Ux(t)):{},"default",t&&t.__esModule&&"default"in t?{get:()=>t.default,enumerable:!0}:{value:t,enumerable:!0})),t);var g,u=E(()=>{g={platform:"",env:{},versions:{node:"14.17.6"}}});var 
Gx,we,ut=E(()=>{u();Gx=0,we={readFileSync:t=>self[t]||"",statSync:()=>({mtimeMs:Gx++})}});var _a=b((hL,$c)=>{u();"use strict";var zc=class{constructor(e={}){if(!(e.maxSize&&e.maxSize>0))throw new TypeError("`maxSize` must be a number greater than 0");if(typeof e.maxAge=="number"&&e.maxAge===0)throw new TypeError("`maxAge` must be a number greater than 0");this.maxSize=e.maxSize,this.maxAge=e.maxAge||1/0,this.onEviction=e.onEviction,this.cache=new Map,this.oldCache=new Map,this._size=0}_emitEvictions(e){if(typeof this.onEviction=="function")for(let[r,i]of e)this.onEviction(r,i.value)}_deleteIfExpired(e,r){return typeof r.expiry=="number"&&r.expiry<=Date.now()?(typeof this.onEviction=="function"&&this.onEviction(e,r.value),this.delete(e)):!1}_getOrDeleteIfExpired(e,r){if(this._deleteIfExpired(e,r)===!1)return r.value}_getItemValue(e,r){return r.expiry?this._getOrDeleteIfExpired(e,r):r.value}_peek(e,r){let i=r.get(e);return this._getItemValue(e,i)}_set(e,r){this.cache.set(e,r),this._size++,this._size>=this.maxSize&&(this._size=0,this._emitEvictions(this.oldCache),this.oldCache=this.cache,this.cache=new Map)}_moveToRecent(e,r){this.oldCache.delete(e),this._set(e,r)}*_entriesAscending(){for(let e of this.oldCache){let[r,i]=e;this.cache.has(r)||this._deleteIfExpired(r,i)===!1&&(yield e)}for(let e of this.cache){let[r,i]=e;this._deleteIfExpired(r,i)===!1&&(yield e)}}get(e){if(this.cache.has(e)){let r=this.cache.get(e);return this._getItemValue(e,r)}if(this.oldCache.has(e)){let r=this.oldCache.get(e);if(this._deleteIfExpired(e,r)===!1)return this._moveToRecent(e,r),r.value}}set(e,r,{maxAge:i=this.maxAge===1/0?void 0:Date.now()+this.maxAge}={}){this.cache.has(e)?this.cache.set(e,{value:r,maxAge:i}):this._set(e,{value:r,expiry:i})}has(e){return this.cache.has(e)?!this._deleteIfExpired(e,this.cache.get(e)):this.oldCache.has(e)?!this._deleteIfExpired(e,this.oldCache.get(e)):!1}peek(e){if(this.cache.has(e))return this._peek(e,this.cache);if(this.oldCache.has(e))return this._peek(e,this.oldCache)}delete(e){let r=this.cache.delete(e);return r&&this._size--,this.oldCache.delete(e)||r}clear(){this.cache.clear(),this.oldCache.clear(),this._size=0}resize(e){if(!(e&&e>0))throw new TypeError("`maxSize` must be a number greater than 0");let r=[...this._entriesAscending()],i=r.length-e;i<0?(this.cache=new Map(r),this.oldCache=new Map,this._size=r.length):(i>0&&this._emitEvictions(r.slice(0,i)),this.oldCache=new Map(r.slice(i)),this.cache=new Map,this._size=0),this.maxSize=e}*keys(){for(let[e]of this)yield e}*values(){for(let[,e]of this)yield e}*[Symbol.iterator](){for(let e of this.cache){let[r,i]=e;this._deleteIfExpired(r,i)===!1&&(yield[r,i.value])}for(let e of this.oldCache){let[r,i]=e;this.cache.has(r)||this._deleteIfExpired(r,i)===!1&&(yield[r,i.value])}}*entriesDescending(){let e=[...this.cache];for(let r=e.length-1;r>=0;--r){let i=e[r],[n,s]=i;this._deleteIfExpired(n,s)===!1&&(yield[n,s.value])}e=[...this.oldCache];for(let r=e.length-1;r>=0;--r){let i=e[r],[n,s]=i;this.cache.has(n)||this._deleteIfExpired(n,s)===!1&&(yield[n,s.value])}}*entriesAscending(){for(let[e,r]of this._entriesAscending())yield[e,r.value]}get size(){if(!this._size)return this.oldCache.size;let e=0;for(let r of this.oldCache.keys())this.cache.has(r)||e++;return Math.min(this._size+e,this.maxSize)}};$c.exports=zc});var jc,Uc=E(()=>{u();jc=t=>t&&t._hash});function xn(t){return jc(t,{ignoreUnknown:!0})}var Vc=E(()=>{u();Uc()});function _t(t){if(t=`${t}`,t==="0")return"0";if(/^[+-]?(\d+|\d*\.\d+)(e[+-]?\d+)?(%|\w+)?$/.test(t))return 
t.replace(/^[+-]?/,r=>r==="-"?"":"-");let e=["var","calc","min","max","clamp"];for(let r of e)if(t.includes(`${r}(`))return`calc(${t} * -1)`}var kn=E(()=>{u()});var Wc,Gc=E(()=>{u();Wc=["preflight","container","accessibility","pointerEvents","visibility","position","inset","isolation","zIndex","order","gridColumn","gridColumnStart","gridColumnEnd","gridRow","gridRowStart","gridRowEnd","float","clear","margin","boxSizing","lineClamp","display","aspectRatio","height","maxHeight","minHeight","width","minWidth","maxWidth","flex","flexShrink","flexGrow","flexBasis","tableLayout","captionSide","borderCollapse","borderSpacing","transformOrigin","translate","rotate","skew","scale","transform","animation","cursor","touchAction","userSelect","resize","scrollSnapType","scrollSnapAlign","scrollSnapStop","scrollMargin","scrollPadding","listStylePosition","listStyleType","listStyleImage","appearance","columns","breakBefore","breakInside","breakAfter","gridAutoColumns","gridAutoFlow","gridAutoRows","gridTemplateColumns","gridTemplateRows","flexDirection","flexWrap","placeContent","placeItems","alignContent","alignItems","justifyContent","justifyItems","gap","space","divideWidth","divideStyle","divideColor","divideOpacity","placeSelf","alignSelf","justifySelf","overflow","overscrollBehavior","scrollBehavior","textOverflow","hyphens","whitespace","wordBreak","borderRadius","borderWidth","borderStyle","borderColor","borderOpacity","backgroundColor","backgroundOpacity","backgroundImage","gradientColorStops","boxDecorationBreak","backgroundSize","backgroundAttachment","backgroundClip","backgroundPosition","backgroundRepeat","backgroundOrigin","fill","stroke","strokeWidth","objectFit","objectPosition","padding","textAlign","textIndent","verticalAlign","fontFamily","fontSize","fontWeight","textTransform","fontStyle","fontVariantNumeric","lineHeight","letterSpacing","textColor","textOpacity","textDecoration","textDecorationColor","textDecorationStyle","textDecorationThickness","textUnderlineOffset","fontSmoothing","placeholderColor","placeholderOpacity","caretColor","accentColor","opacity","backgroundBlendMode","mixBlendMode","boxShadow","boxShadowColor","outlineStyle","outlineWidth","outlineOffset","outlineColor","ringWidth","ringColor","ringOpacity","ringOffsetWidth","ringOffsetColor","blur","brightness","contrast","dropShadow","grayscale","hueRotate","invert","saturate","sepia","filter","backdropBlur","backdropBrightness","backdropContrast","backdropGrayscale","backdropHueRotate","backdropInvert","backdropOpacity","backdropSaturate","backdropSepia","backdropFilter","transitionProperty","transitionDelay","transitionDuration","transitionTimingFunction","willChange","content"]});function Hc(t,e){return t===void 0?e:Array.isArray(t)?t:[...new Set(e.filter(i=>t!==!1&&t[i]!==!1).concat(Object.keys(t).filter(i=>t[i]!==!1)))]}var Yc=E(()=>{u()});var Qc={};Ve(Qc,{default:()=>We});var We,Sn=E(()=>{u();We=new Proxy({},{get:()=>String})});function Ta(t,e,r){typeof g!="undefined"&&g.env.JEST_WORKER_ID||r&&Jc.has(r)||(r&&Jc.add(r),console.warn(""),e.forEach(i=>console.warn(t,"-",i)))}function Oa(t){return We.dim(t)}var Jc,V,Ge=E(()=>{u();Sn();Jc=new Set;V={info(t,e){Ta(We.bold(We.cyan("info")),...Array.isArray(t)?[t]:[e,t])},warn(t,e){Ta(We.bold(We.yellow("warn")),...Array.isArray(t)?[t]:[e,t])},risk(t,e){Ta(We.bold(We.magenta("risk")),...Array.isArray(t)?[t]:[e,t])}}});var _n={};Ve(_n,{default:()=>Ea});function Vr({version:t,from:e,to:r}){V.warn(`${e}-color-renamed`,[`As of Tailwind CSS ${t}, \`${e}\` has been renamed to 
\`${r}\`.`,"Update your configuration file to silence this warning."])}var Ea,Wr=E(()=>{u();Ge();Ea={inherit:"inherit",current:"currentColor",transparent:"transparent",black:"#000",white:"#fff",slate:{50:"#f8fafc",100:"#f1f5f9",200:"#e2e8f0",300:"#cbd5e1",400:"#94a3b8",500:"#64748b",600:"#475569",700:"#334155",800:"#1e293b",900:"#0f172a",950:"#020617"},gray:{50:"#f9fafb",100:"#f3f4f6",200:"#e5e7eb",300:"#d1d5db",400:"#9ca3af",500:"#6b7280",600:"#4b5563",700:"#374151",800:"#1f2937",900:"#111827",950:"#030712"},zinc:{50:"#fafafa",100:"#f4f4f5",200:"#e4e4e7",300:"#d4d4d8",400:"#a1a1aa",500:"#71717a",600:"#52525b",700:"#3f3f46",800:"#27272a",900:"#18181b",950:"#09090b"},neutral:{50:"#fafafa",100:"#f5f5f5",200:"#e5e5e5",300:"#d4d4d4",400:"#a3a3a3",500:"#737373",600:"#525252",700:"#404040",800:"#262626",900:"#171717",950:"#0a0a0a"},stone:{50:"#fafaf9",100:"#f5f5f4",200:"#e7e5e4",300:"#d6d3d1",400:"#a8a29e",500:"#78716c",600:"#57534e",700:"#44403c",800:"#292524",900:"#1c1917",950:"#0c0a09"},red:{50:"#fef2f2",100:"#fee2e2",200:"#fecaca",300:"#fca5a5",400:"#f87171",500:"#ef4444",600:"#dc2626",700:"#b91c1c",800:"#991b1b",900:"#7f1d1d",950:"#450a0a"},orange:{50:"#fff7ed",100:"#ffedd5",200:"#fed7aa",300:"#fdba74",400:"#fb923c",500:"#f97316",600:"#ea580c",700:"#c2410c",800:"#9a3412",900:"#7c2d12",950:"#431407"},amber:{50:"#fffbeb",100:"#fef3c7",200:"#fde68a",300:"#fcd34d",400:"#fbbf24",500:"#f59e0b",600:"#d97706",700:"#b45309",800:"#92400e",900:"#78350f",950:"#451a03"},yellow:{50:"#fefce8",100:"#fef9c3",200:"#fef08a",300:"#fde047",400:"#facc15",500:"#eab308",600:"#ca8a04",700:"#a16207",800:"#854d0e",900:"#713f12",950:"#422006"},lime:{50:"#f7fee7",100:"#ecfccb",200:"#d9f99d",300:"#bef264",400:"#a3e635",500:"#84cc16",600:"#65a30d",700:"#4d7c0f",800:"#3f6212",900:"#365314",950:"#1a2e05"},green:{50:"#f0fdf4",100:"#dcfce7",200:"#bbf7d0",300:"#86efac",400:"#4ade80",500:"#22c55e",600:"#16a34a",700:"#15803d",800:"#166534",900:"#14532d",950:"#052e16"},emerald:{50:"#ecfdf5",100:"#d1fae5",200:"#a7f3d0",300:"#6ee7b7",400:"#34d399",500:"#10b981",600:"#059669",700:"#047857",800:"#065f46",900:"#064e3b",950:"#022c22"},teal:{50:"#f0fdfa",100:"#ccfbf1",200:"#99f6e4",300:"#5eead4",400:"#2dd4bf",500:"#14b8a6",600:"#0d9488",700:"#0f766e",800:"#115e59",900:"#134e4a",950:"#042f2e"},cyan:{50:"#ecfeff",100:"#cffafe",200:"#a5f3fc",300:"#67e8f9",400:"#22d3ee",500:"#06b6d4",600:"#0891b2",700:"#0e7490",800:"#155e75",900:"#164e63",950:"#083344"},sky:{50:"#f0f9ff",100:"#e0f2fe",200:"#bae6fd",300:"#7dd3fc",400:"#38bdf8",500:"#0ea5e9",600:"#0284c7",700:"#0369a1",800:"#075985",900:"#0c4a6e",950:"#082f49"},blue:{50:"#eff6ff",100:"#dbeafe",200:"#bfdbfe",300:"#93c5fd",400:"#60a5fa",500:"#3b82f6",600:"#2563eb",700:"#1d4ed8",800:"#1e40af",900:"#1e3a8a",950:"#172554"},indigo:{50:"#eef2ff",100:"#e0e7ff",200:"#c7d2fe",300:"#a5b4fc",400:"#818cf8",500:"#6366f1",600:"#4f46e5",700:"#4338ca",800:"#3730a3",900:"#312e81",950:"#1e1b4b"},violet:{50:"#f5f3ff",100:"#ede9fe",200:"#ddd6fe",300:"#c4b5fd",400:"#a78bfa",500:"#8b5cf6",600:"#7c3aed",700:"#6d28d9",800:"#5b21b6",900:"#4c1d95",950:"#2e1065"},purple:{50:"#faf5ff",100:"#f3e8ff",200:"#e9d5ff",300:"#d8b4fe",400:"#c084fc",500:"#a855f7",600:"#9333ea",700:"#7e22ce",800:"#6b21a8",900:"#581c87",950:"#3b0764"},fuchsia:{50:"#fdf4ff",100:"#fae8ff",200:"#f5d0fe",300:"#f0abfc",400:"#e879f9",500:"#d946ef",600:"#c026d3",700:"#a21caf",800:"#86198f",900:"#701a75",950:"#4a044e"},pink:{50:"#fdf2f8",100:"#fce7f3",200:"#fbcfe8",300:"#f9a8d4",400:"#f472b6",500:"#ec4899",600:"#db2777",700:"#be185d",800:"#9d174d",900:"#831
843",950:"#500724"},rose:{50:"#fff1f2",100:"#ffe4e6",200:"#fecdd3",300:"#fda4af",400:"#fb7185",500:"#f43f5e",600:"#e11d48",700:"#be123c",800:"#9f1239",900:"#881337",950:"#4c0519"},get lightBlue(){return Vr({version:"v2.2",from:"lightBlue",to:"sky"}),this.sky},get warmGray(){return Vr({version:"v3.0",from:"warmGray",to:"stone"}),this.stone},get trueGray(){return Vr({version:"v3.0",from:"trueGray",to:"neutral"}),this.neutral},get coolGray(){return Vr({version:"v3.0",from:"coolGray",to:"gray"}),this.gray},get blueGray(){return Vr({version:"v3.0",from:"blueGray",to:"slate"}),this.slate}}});function Aa(t,...e){for(let r of e){for(let i in r)t?.hasOwnProperty?.(i)||(t[i]=r[i]);for(let i of Object.getOwnPropertySymbols(r))t?.hasOwnProperty?.(i)||(t[i]=r[i])}return t}var Xc=E(()=>{u()});function Tt(t){if(Array.isArray(t))return t;let e=t.split("[").length-1,r=t.split("]").length-1;if(e!==r)throw new Error(`Path is invalid. Has unbalanced brackets: ${t}`);return t.split(/\.(?![^\[]*\])|[\[\]]/g).filter(Boolean)}var Tn=E(()=>{u()});function de(t,e){return On.future.includes(e)?t.future==="all"||(t?.future?.[e]??Kc[e]??!1):On.experimental.includes(e)?t.experimental==="all"||(t?.experimental?.[e]??Kc[e]??!1):!1}function Zc(t){return t.experimental==="all"?On.experimental:Object.keys(t?.experimental??{}).filter(e=>On.experimental.includes(e)&&t.experimental[e])}function ep(t){if(g.env.JEST_WORKER_ID===void 0&&Zc(t).length>0){let e=Zc(t).map(r=>We.yellow(r)).join(", ");V.warn("experimental-flags-enabled",[`You have enabled experimental features: ${e}`,"Experimental features in Tailwind CSS are not covered by semver, may introduce breaking changes, and can change at any time."])}}var Kc,On,Xe=E(()=>{u();Sn();Ge();Kc={optimizeUniversalDefaults:!1,generalizedModifiers:!0,get disableColorOpacityUtilitiesByDefault(){return!1},get relativeContentPathsByDefault(){return!1}},On={future:["hoverOnlyWhenSupported","respectDefaultRingColorOpacity","disableColorOpacityUtilitiesByDefault","relativeContentPathsByDefault"],experimental:["optimizeUniversalDefaults","generalizedModifiers"]}});function tp(t){(()=>{if(t.purge||!t.content||!Array.isArray(t.content)&&!(typeof t.content=="object"&&t.content!==null))return!1;if(Array.isArray(t.content))return t.content.every(r=>typeof r=="string"?!0:!(typeof r?.raw!="string"||r?.extension&&typeof r?.extension!="string"));if(typeof t.content=="object"&&t.content!==null){if(Object.keys(t.content).some(r=>!["files","relative","extract","transform"].includes(r)))return!1;if(Array.isArray(t.content.files)){if(!t.content.files.every(r=>typeof r=="string"?!0:!(typeof r?.raw!="string"||r?.extension&&typeof r?.extension!="string")))return!1;if(typeof t.content.extract=="object"){for(let r of Object.values(t.content.extract))if(typeof r!="function")return!1}else if(!(t.content.extract===void 0||typeof t.content.extract=="function"))return!1;if(typeof t.content.transform=="object"){for(let r of Object.values(t.content.transform))if(typeof r!="function")return!1}else if(!(t.content.transform===void 0||typeof t.content.transform=="function"))return!1;if(typeof t.content.relative!="boolean"&&typeof t.content.relative!="undefined")return!1}return!0}return!1})()||V.warn("purge-deprecation",["The `purge`/`content` options have changed in Tailwind CSS v3.0.","Update your configuration file to eliminate this warning.","https://tailwindcss.com/docs/upgrade-guide#configure-content-sources"]),t.safelist=(()=>{let{content:r,purge:i,safelist:n}=t;return 
Array.isArray(n)?n:Array.isArray(r?.safelist)?r.safelist:Array.isArray(i?.safelist)?i.safelist:Array.isArray(i?.options?.safelist)?i.options.safelist:[]})(),t.blocklist=(()=>{let{blocklist:r}=t;if(Array.isArray(r)){if(r.every(i=>typeof i=="string"))return r;V.warn("blocklist-invalid",["The `blocklist` option must be an array of strings.","https://tailwindcss.com/docs/content-configuration#discarding-classes"])}return[]})(),typeof t.prefix=="function"?(V.warn("prefix-function",["As of Tailwind CSS v3.0, `prefix` cannot be a function.","Update `prefix` in your configuration to be a string to eliminate this warning.","https://tailwindcss.com/docs/upgrade-guide#prefix-cannot-be-a-function"]),t.prefix=""):t.prefix=t.prefix??"",t.content={relative:(()=>{let{content:r}=t;return r?.relative?r.relative:de(t,"relativeContentPathsByDefault")})(),files:(()=>{let{content:r,purge:i}=t;return Array.isArray(i)?i:Array.isArray(i?.content)?i.content:Array.isArray(r)?r:Array.isArray(r?.content)?r.content:Array.isArray(r?.files)?r.files:[]})(),extract:(()=>{let r=(()=>t.purge?.extract?t.purge.extract:t.content?.extract?t.content.extract:t.purge?.extract?.DEFAULT?t.purge.extract.DEFAULT:t.content?.extract?.DEFAULT?t.content.extract.DEFAULT:t.purge?.options?.extractors?t.purge.options.extractors:t.content?.options?.extractors?t.content.options.extractors:{})(),i={},n=(()=>{if(t.purge?.options?.defaultExtractor)return t.purge.options.defaultExtractor;if(t.content?.options?.defaultExtractor)return t.content.options.defaultExtractor})();if(n!==void 0&&(i.DEFAULT=n),typeof r=="function")i.DEFAULT=r;else if(Array.isArray(r))for(let{extensions:s,extractor:a}of r??[])for(let o of s)i[o]=a;else typeof r=="object"&&r!==null&&Object.assign(i,r);return i})(),transform:(()=>{let r=(()=>t.purge?.transform?t.purge.transform:t.content?.transform?t.content.transform:t.purge?.transform?.DEFAULT?t.purge.transform.DEFAULT:t.content?.transform?.DEFAULT?t.content.transform.DEFAULT:{})(),i={};return typeof r=="function"&&(i.DEFAULT=r),typeof r=="object"&&r!==null&&Object.assign(i,r),i})()};for(let r of t.content.files)if(typeof r=="string"&&/{([^,]*?)}/g.test(r)){V.warn("invalid-glob-braces",[`The glob pattern ${Oa(r)} in your Tailwind CSS configuration is invalid.`,`Update it to ${Oa(r.replace(/{([^,]*?)}/g,"$1"))} to silence this warning.`]);break}return t}var rp=E(()=>{u();Xe();Ge()});function ve(t){if(Object.prototype.toString.call(t)!=="[object Object]")return!1;let e=Object.getPrototypeOf(t);return e===null||e===Object.prototype}var er=E(()=>{u()});function Ot(t){return Array.isArray(t)?t.map(e=>Ot(e)):typeof t=="object"&&t!==null?Object.fromEntries(Object.entries(t).map(([e,r])=>[e,Ot(r)])):t}var En=E(()=>{u()});function $t(t){return t.replace(/\\,/g,"\\2c ")}var An=E(()=>{u()});var 
Ca,ip=E(()=>{u();Ca={aliceblue:[240,248,255],antiquewhite:[250,235,215],aqua:[0,255,255],aquamarine:[127,255,212],azure:[240,255,255],beige:[245,245,220],bisque:[255,228,196],black:[0,0,0],blanchedalmond:[255,235,205],blue:[0,0,255],blueviolet:[138,43,226],brown:[165,42,42],burlywood:[222,184,135],cadetblue:[95,158,160],chartreuse:[127,255,0],chocolate:[210,105,30],coral:[255,127,80],cornflowerblue:[100,149,237],cornsilk:[255,248,220],crimson:[220,20,60],cyan:[0,255,255],darkblue:[0,0,139],darkcyan:[0,139,139],darkgoldenrod:[184,134,11],darkgray:[169,169,169],darkgreen:[0,100,0],darkgrey:[169,169,169],darkkhaki:[189,183,107],darkmagenta:[139,0,139],darkolivegreen:[85,107,47],darkorange:[255,140,0],darkorchid:[153,50,204],darkred:[139,0,0],darksalmon:[233,150,122],darkseagreen:[143,188,143],darkslateblue:[72,61,139],darkslategray:[47,79,79],darkslategrey:[47,79,79],darkturquoise:[0,206,209],darkviolet:[148,0,211],deeppink:[255,20,147],deepskyblue:[0,191,255],dimgray:[105,105,105],dimgrey:[105,105,105],dodgerblue:[30,144,255],firebrick:[178,34,34],floralwhite:[255,250,240],forestgreen:[34,139,34],fuchsia:[255,0,255],gainsboro:[220,220,220],ghostwhite:[248,248,255],gold:[255,215,0],goldenrod:[218,165,32],gray:[128,128,128],green:[0,128,0],greenyellow:[173,255,47],grey:[128,128,128],honeydew:[240,255,240],hotpink:[255,105,180],indianred:[205,92,92],indigo:[75,0,130],ivory:[255,255,240],khaki:[240,230,140],lavender:[230,230,250],lavenderblush:[255,240,245],lawngreen:[124,252,0],lemonchiffon:[255,250,205],lightblue:[173,216,230],lightcoral:[240,128,128],lightcyan:[224,255,255],lightgoldenrodyellow:[250,250,210],lightgray:[211,211,211],lightgreen:[144,238,144],lightgrey:[211,211,211],lightpink:[255,182,193],lightsalmon:[255,160,122],lightseagreen:[32,178,170],lightskyblue:[135,206,250],lightslategray:[119,136,153],lightslategrey:[119,136,153],lightsteelblue:[176,196,222],lightyellow:[255,255,224],lime:[0,255,0],limegreen:[50,205,50],linen:[250,240,230],magenta:[255,0,255],maroon:[128,0,0],mediumaquamarine:[102,205,170],mediumblue:[0,0,205],mediumorchid:[186,85,211],mediumpurple:[147,112,219],mediumseagreen:[60,179,113],mediumslateblue:[123,104,238],mediumspringgreen:[0,250,154],mediumturquoise:[72,209,204],mediumvioletred:[199,21,133],midnightblue:[25,25,112],mintcream:[245,255,250],mistyrose:[255,228,225],moccasin:[255,228,181],navajowhite:[255,222,173],navy:[0,0,128],oldlace:[253,245,230],olive:[128,128,0],olivedrab:[107,142,35],orange:[255,165,0],orangered:[255,69,0],orchid:[218,112,214],palegoldenrod:[238,232,170],palegreen:[152,251,152],paleturquoise:[175,238,238],palevioletred:[219,112,147],papayawhip:[255,239,213],peachpuff:[255,218,185],peru:[205,133,63],pink:[255,192,203],plum:[221,160,221],powderblue:[176,224,230],purple:[128,0,128],rebeccapurple:[102,51,153],red:[255,0,0],rosybrown:[188,143,143],royalblue:[65,105,225],saddlebrown:[139,69,19],salmon:[250,128,114],sandybrown:[244,164,96],seagreen:[46,139,87],seashell:[255,245,238],sienna:[160,82,45],silver:[192,192,192],skyblue:[135,206,235],slateblue:[106,90,205],slategray:[112,128,144],slategrey:[112,128,144],snow:[255,250,250],springgreen:[0,255,127],steelblue:[70,130,180],tan:[210,180,140],teal:[0,128,128],thistle:[216,191,216],tomato:[255,99,71],turquoise:[64,224,208],violet:[238,130,238],wheat:[245,222,179],white:[255,255,255],whitesmoke:[245,245,245],yellow:[255,255,0],yellowgreen:[154,205,50]}});function Gr(t,{loose:e=!1}={}){if(typeof t!="string")return 
null;if(t=t.trim(),t==="transparent")return{mode:"rgb",color:["0","0","0"],alpha:"0"};if(t in Ca)return{mode:"rgb",color:Ca[t].map(s=>s.toString())};let r=t.replace(Yx,(s,a,o,l,f)=>["#",a,a,o,o,l,l,f?f+f:""].join("")).match(Hx);if(r!==null)return{mode:"rgb",color:[parseInt(r[1],16),parseInt(r[2],16),parseInt(r[3],16)].map(s=>s.toString()),alpha:r[4]?(parseInt(r[4],16)/255).toString():void 0};let i=t.match(Qx)??t.match(Jx);if(i===null)return null;let n=[i[2],i[3],i[4]].filter(Boolean).map(s=>s.toString());return n.length===2&&n[0].startsWith("var(")?{mode:i[1],color:[n[0]],alpha:n[1]}:!e&&n.length!==3||n.length<3&&!n.some(s=>/^var\(.*?\)$/.test(s))?null:{mode:i[1],color:n,alpha:i[5]?.toString?.()}}function Pa({mode:t,color:e,alpha:r}){let i=r!==void 0;return t==="rgba"||t==="hsla"?`${t}(${e.join(", ")}${i?`, ${r}`:""})`:`${t}(${e.join(" ")}${i?` / ${r}`:""})`}var Hx,Yx,Et,Cn,np,At,Qx,Jx,qa=E(()=>{u();ip();Hx=/^#([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})?$/i,Yx=/^#([a-f\d])([a-f\d])([a-f\d])([a-f\d])?$/i,Et=/(?:\d+|\d*\.\d+)%?/,Cn=/(?:\s*,\s*|\s+)/,np=/\s*[,/]\s*/,At=/var\(--(?:[^ )]*?)\)/,Qx=new RegExp(`^(rgba?)\\(\\s*(${Et.source}|${At.source})(?:${Cn.source}(${Et.source}|${At.source}))?(?:${Cn.source}(${Et.source}|${At.source}))?(?:${np.source}(${Et.source}|${At.source}))?\\s*\\)$`),Jx=new RegExp(`^(hsla?)\\(\\s*((?:${Et.source})(?:deg|rad|grad|turn)?|${At.source})(?:${Cn.source}(${Et.source}|${At.source}))?(?:${Cn.source}(${Et.source}|${At.source}))?(?:${np.source}(${Et.source}|${At.source}))?\\s*\\)$`)});function Ke(t,e,r){if(typeof t=="function")return t({opacityValue:e});let i=Gr(t,{loose:!0});return i===null?r:Pa({...i,alpha:e})}function ke({color:t,property:e,variable:r}){let i=[].concat(e);if(typeof t=="function")return{[r]:"1",...Object.fromEntries(i.map(s=>[s,t({opacityVariable:r,opacityValue:`var(${r})`})]))};let n=Gr(t);return n===null?Object.fromEntries(i.map(s=>[s,t])):n.alpha!==void 0?Object.fromEntries(i.map(s=>[s,t])):{[r]:"1",...Object.fromEntries(i.map(s=>[s,Pa({...n,alpha:`var(${r})`})]))}}var Hr=E(()=>{u();qa()});function Se(t,e){let r=[],i=[],n=0,s=!1;for(let a=0;a{u()});function Pn(t){return Se(t,",").map(r=>{let i=r.trim(),n={raw:i},s=i.split(Kx),a=new Set;for(let o of s)sp.lastIndex=0,!a.has("KEYWORD")&&Xx.has(o)?(n.keyword=o,a.add("KEYWORD")):sp.test(o)?a.has("X")?a.has("Y")?a.has("BLUR")?a.has("SPREAD")||(n.spread=o,a.add("SPREAD")):(n.blur=o,a.add("BLUR")):(n.y=o,a.add("Y")):(n.x=o,a.add("X")):n.color?(n.unknown||(n.unknown=[]),n.unknown.push(o)):n.color=o;return n.valid=n.x!==void 0&&n.y!==void 0,n})}function ap(t){return t.map(e=>e.valid?[e.keyword,e.x,e.y,e.blur,e.spread,e.color].filter(Boolean).join(" "):e.raw).join(", ")}var Xx,Kx,sp,Da=E(()=>{u();Yr();Xx=new Set(["inset","inherit","initial","revert","unset"]),Kx=/\ +(?![^(]*\))/g,sp=/^-?(\d+|\.\d+)(.*?)$/g});function Ia(t){return Zx.some(e=>new RegExp(`^${e}\\(.*\\)`).test(t))}function K(t,e=!0){return t.startsWith("--")?`var(${t})`:t.includes("url(")?t.split(/(url\(.*?\))/g).filter(Boolean).map(r=>/^url\(.*?\)$/.test(r)?r:K(r,!1)).join(""):(t=t.replace(/([^\\])_+/g,(r,i)=>i+" ".repeat(r.length-1)).replace(/^_/g," ").replace(/\\_/g,"_"),e&&(t=t.trim()),t=t.replace(/(calc|min|max|clamp)\(.+\)/g,r=>{let i=[];return r.replace(/var\((--.+?)[,)]/g,(n,s)=>(i.push(s),n.replace(s,op))).replace(/(-?\d*\.?\d(?!\b-\d.+[,)](?![^+\-/*])\D)(?:%|[a-z]+)?|\))([+\-/*])/g,"$1 $2 ").replace(ek,()=>i.shift())}),t)}function Ra(t){return t.startsWith("url(")}function La(t){return!isNaN(Number(t))||Ia(t)}function 
Qr(t){return t.endsWith("%")&&La(t.slice(0,-1))||Ia(t)}function Jr(t){return t==="0"||new RegExp(`^[+-]?[0-9]*.?[0-9]+(?:[eE][+-]?[0-9]+)?${rk}$`).test(t)||Ia(t)}function lp(t){return ik.has(t)}function up(t){let e=Pn(K(t));for(let r of e)if(!r.valid)return!1;return!0}function fp(t){let e=0;return Se(t,"_").every(i=>(i=K(i),i.startsWith("var(")?!0:Gr(i,{loose:!0})!==null?(e++,!0):!1))?e>0:!1}function cp(t){let e=0;return Se(t,",").every(i=>(i=K(i),i.startsWith("var(")?!0:Ra(i)||sk(i)||["element(","image(","cross-fade(","image-set("].some(n=>i.startsWith(n))?(e++,!0):!1))?e>0:!1}function sk(t){t=K(t);for(let e of nk)if(t.startsWith(`${e}(`))return!0;return!1}function pp(t){let e=0;return Se(t,"_").every(i=>(i=K(i),i.startsWith("var(")?!0:ak.has(i)||Jr(i)||Qr(i)?(e++,!0):!1))?e>0:!1}function dp(t){let e=0;return Se(t,",").every(i=>(i=K(i),i.startsWith("var(")?!0:i.includes(" ")&&!/(['"])([^"']+)\1/g.test(i)||/^\d/g.test(i)?!1:(e++,!0)))?e>0:!1}function hp(t){return ok.has(t)}function mp(t){return lk.has(t)}function gp(t){return uk.has(t)}var Zx,op,ek,tk,rk,ik,nk,ak,ok,lk,uk,Xr=E(()=>{u();qa();Da();Yr();Zx=["min","max","clamp","calc"];op="--tw-placeholder",ek=new RegExp(op,"g");tk=["cm","mm","Q","in","pc","pt","px","em","ex","ch","rem","lh","rlh","vw","vh","vmin","vmax","vb","vi","svw","svh","lvw","lvh","dvw","dvh","cqw","cqh","cqi","cqb","cqmin","cqmax"],rk=`(?:${tk.join("|")})`;ik=new Set(["thin","medium","thick"]);nk=new Set(["linear-gradient","radial-gradient","repeating-linear-gradient","repeating-radial-gradient","conic-gradient"]);ak=new Set(["center","top","right","bottom","left"]);ok=new Set(["serif","sans-serif","monospace","cursive","fantasy","system-ui","ui-serif","ui-sans-serif","ui-monospace","ui-rounded","math","emoji","fangsong"]);lk=new Set(["xx-small","x-small","small","medium","large","x-large","x-large","xxx-large"]);uk=new Set(["larger","smaller"])});function wp(t){let e=["cover","contain"];return Se(t,",").every(r=>{let i=Se(r,"_").filter(Boolean);return i.length===1&&e.includes(i[0])?!0:i.length!==1&&i.length!==2?!1:i.every(n=>Jr(n)||Qr(n)||n==="auto")})}var yp=E(()=>{u();Xr();Yr()});function vp(t,e){t.walkClasses(r=>{r.value=e(r.value),r.raws&&r.raws.value&&(r.raws.value=$t(r.raws.value))})}function bp(t,e){if(!Ct(t))return;let r=t.slice(1,-1);if(!!e(r))return K(r)}function fk(t,e={},r){let i=e[t];if(i!==void 0)return _t(i);if(Ct(t)){let n=bp(t,r);return n===void 0?void 0:_t(n)}}function qn(t,e={},{validate:r=()=>!0}={}){let i=e.values?.[t];return i!==void 0?i:e.supportsNegativeValues&&t.startsWith("-")?fk(t.slice(1),e.values,r):bp(t,r)}function Ct(t){return t.startsWith("[")&&t.endsWith("]")}function xp(t){let e=t.lastIndexOf("/");return e===-1||e===t.length-1?[t,void 0]:Ct(t)&&!t.includes("]/[")?[t,void 0]:[t.slice(0,e),t.slice(e+1)]}function tr(t){if(typeof t=="string"&&t.includes("")){let e=t;return({opacityValue:r=1})=>e.replace("",r)}return t}function kp(t){return K(t.slice(1,-1))}function ck(t,e={},{tailwindConfig:r={}}={}){if(e.values?.[t]!==void 0)return tr(e.values?.[t]);let[i,n]=xp(t);if(n!==void 0){let s=e.values?.[i]??(Ct(i)?i.slice(1,-1):void 0);return s===void 0?void 0:(s=tr(s),Ct(n)?Ke(s,kp(n)):r.theme?.opacity?.[n]===void 0?void 0:Ke(s,r.theme.opacity[n]))}return qn(t,e,{validate:fp})}function pk(t,e={}){return e.values?.[t]}function qe(t){return(e,r)=>qn(e,r,{validate:t})}function dk(t,e){let r=t.indexOf(e);return r===-1?[void 0,t]:[t.slice(0,r),t.slice(r+1)]}function Ba(t,e,r,i){if(r.values&&e in r.values)for(let{type:s}of t??[]){let 
a=Ma[s](e,r,{tailwindConfig:i});if(a!==void 0)return[a,s,null]}if(Ct(e)){let s=e.slice(1,-1),[a,o]=dk(s,":");if(!/^[\w-_]+$/g.test(a))o=s;else if(a!==void 0&&!Sp.includes(a))return[];if(o.length>0&&Sp.includes(a))return[qn(`[${o}]`,r),a,null]}let n=Fa(t,e,r,i);for(let s of n)return s;return[]}function*Fa(t,e,r,i){let n=de(i,"generalizedModifiers"),[s,a]=xp(e);if(n&&r.modifiers!=null&&(r.modifiers==="any"||typeof r.modifiers=="object"&&(a&&Ct(a)||a in r.modifiers))||(s=e,a=void 0),a!==void 0&&s===""&&(s="DEFAULT"),a!==void 0&&typeof r.modifiers=="object"){let l=r.modifiers?.[a]??null;l!==null?a=l:Ct(a)&&(a=kp(a))}for(let{type:l}of t??[]){let f=Ma[l](s,r,{tailwindConfig:i});f!==void 0&&(yield[f,l,a??null])}}var Ma,Sp,Kr=E(()=>{u();An();Hr();Xr();kn();yp();Xe();Ma={any:qn,color:ck,url:qe(Ra),image:qe(cp),length:qe(Jr),percentage:qe(Qr),position:qe(pp),lookup:pk,"generic-name":qe(hp),"family-name":qe(dp),number:qe(La),"line-width":qe(lp),"absolute-size":qe(mp),"relative-size":qe(gp),shadow:qe(up),size:qe(wp)},Sp=Object.keys(Ma)});function W(t){return typeof t=="function"?t({}):t}var Na=E(()=>{u()});function rr(t){return typeof t=="function"}function Zr(t,...e){let r=e.pop();for(let i of e)for(let n in i){let s=r(t[n],i[n]);s===void 0?ve(t[n])&&ve(i[n])?t[n]=Zr({},t[n],i[n],r):t[n]=i[n]:t[n]=s}return t}function hk(t,...e){return rr(t)?t(...e):t}function mk(t){return t.reduce((e,{extend:r})=>Zr(e,r,(i,n)=>i===void 0?[n]:Array.isArray(i)?[n,...i]:[n,i]),{})}function gk(t){return{...t.reduce((e,r)=>Aa(e,r),{}),extend:mk(t)}}function _p(t,e){if(Array.isArray(t)&&ve(t[0]))return t.concat(e);if(Array.isArray(e)&&ve(e[0])&&ve(t))return[t,...e];if(Array.isArray(e))return e}function wk({extend:t,...e}){return Zr(e,t,(r,i)=>!rr(r)&&!i.some(rr)?Zr({},r,...i,_p):(n,s)=>Zr({},...[r,...i].map(a=>hk(a,n,s)),_p))}function*yk(t){let e=Tt(t);if(e.length===0||(yield e,Array.isArray(t)))return;let r=/^(.*?)\s*\/\s*([^/]+)$/,i=t.match(r);if(i!==null){let[,n,s]=i,a=Tt(n);a.alpha=s,yield a}}function vk(t){let e=(r,i)=>{for(let n of yk(r)){let s=0,a=t;for(;a!=null&&s(r[i]=rr(t[i])?t[i](e,za):t[i],r),{})}function Tp(t){let e=[];return t.forEach(r=>{e=[...e,r];let i=r?.plugins??[];i.length!==0&&i.forEach(n=>{n.__isOptionsFunction&&(n=n()),e=[...e,...Tp([n?.config??{}])]})}),e}function bk(t){return[...t].reduceRight((r,i)=>rr(i)?i({corePlugins:r}):Hc(i,r),Wc)}function xk(t){return[...t].reduceRight((r,i)=>[...r,...i],[])}function $a(t){let e=[...Tp(t),{prefix:"",important:!1,separator:":"}];return tp(Aa({theme:vk(wk(gk(e.map(r=>r?.theme??{})))),corePlugins:bk(e.map(r=>r.corePlugins)),plugins:xk(t.map(r=>r?.plugins??[]))},...e))}var za,Op=E(()=>{u();kn();Gc();Yc();Wr();Xc();Tn();rp();er();En();Kr();Hr();Na();za={colors:Ea,negative(t){return Object.keys(t).filter(e=>t[e]!=="0").reduce((e,r)=>{let i=_t(t[r]);return i!==void 0&&(e[`-${r}`]=i),e},{})},breakpoints(t){return Object.keys(t).filter(e=>typeof t[e]=="string").reduce((e,r)=>({...e,[`screen-${r}`]:t[r]}),{})}}});var Dn=b((wM,Ep)=>{u();Ep.exports={content:[],presets:[],darkMode:"media",theme:{accentColor:({theme:t})=>({...t("colors"),auto:"auto"}),animation:{none:"none",spin:"spin 1s linear infinite",ping:"ping 1s cubic-bezier(0, 0, 0.2, 1) infinite",pulse:"pulse 2s cubic-bezier(0.4, 0, 0.6, 1) infinite",bounce:"bounce 1s 
infinite"},aria:{checked:'checked="true"',disabled:'disabled="true"',expanded:'expanded="true"',hidden:'hidden="true"',pressed:'pressed="true"',readonly:'readonly="true"',required:'required="true"',selected:'selected="true"'},aspectRatio:{auto:"auto",square:"1 / 1",video:"16 / 9"},backdropBlur:({theme:t})=>t("blur"),backdropBrightness:({theme:t})=>t("brightness"),backdropContrast:({theme:t})=>t("contrast"),backdropGrayscale:({theme:t})=>t("grayscale"),backdropHueRotate:({theme:t})=>t("hueRotate"),backdropInvert:({theme:t})=>t("invert"),backdropOpacity:({theme:t})=>t("opacity"),backdropSaturate:({theme:t})=>t("saturate"),backdropSepia:({theme:t})=>t("sepia"),backgroundColor:({theme:t})=>t("colors"),backgroundImage:{none:"none","gradient-to-t":"linear-gradient(to top, var(--tw-gradient-stops))","gradient-to-tr":"linear-gradient(to top right, var(--tw-gradient-stops))","gradient-to-r":"linear-gradient(to right, var(--tw-gradient-stops))","gradient-to-br":"linear-gradient(to bottom right, var(--tw-gradient-stops))","gradient-to-b":"linear-gradient(to bottom, var(--tw-gradient-stops))","gradient-to-bl":"linear-gradient(to bottom left, var(--tw-gradient-stops))","gradient-to-l":"linear-gradient(to left, var(--tw-gradient-stops))","gradient-to-tl":"linear-gradient(to top left, var(--tw-gradient-stops))"},backgroundOpacity:({theme:t})=>t("opacity"),backgroundPosition:{bottom:"bottom",center:"center",left:"left","left-bottom":"left bottom","left-top":"left top",right:"right","right-bottom":"right bottom","right-top":"right top",top:"top"},backgroundSize:{auto:"auto",cover:"cover",contain:"contain"},blur:{0:"0",none:"0",sm:"4px",DEFAULT:"8px",md:"12px",lg:"16px",xl:"24px","2xl":"40px","3xl":"64px"},borderColor:({theme:t})=>({...t("colors"),DEFAULT:t("colors.gray.200","currentColor")}),borderOpacity:({theme:t})=>t("opacity"),borderRadius:{none:"0px",sm:"0.125rem",DEFAULT:"0.25rem",md:"0.375rem",lg:"0.5rem",xl:"0.75rem","2xl":"1rem","3xl":"1.5rem",full:"9999px"},borderSpacing:({theme:t})=>({...t("spacing")}),borderWidth:{DEFAULT:"1px",0:"0px",2:"2px",4:"4px",8:"8px"},boxShadow:{sm:"0 1px 2px 0 rgb(0 0 0 / 0.05)",DEFAULT:"0 1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)",md:"0 4px 6px -1px rgb(0 0 0 / 0.1), 0 2px 4px -2px rgb(0 0 0 / 0.1)",lg:"0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1)",xl:"0 20px 25px -5px rgb(0 0 0 / 0.1), 0 8px 10px -6px rgb(0 0 0 / 0.1)","2xl":"0 25px 50px -12px rgb(0 0 0 / 0.25)",inner:"inset 0 2px 4px 0 rgb(0 0 0 / 
0.05)",none:"none"},boxShadowColor:({theme:t})=>t("colors"),brightness:{0:"0",50:".5",75:".75",90:".9",95:".95",100:"1",105:"1.05",110:"1.1",125:"1.25",150:"1.5",200:"2"},caretColor:({theme:t})=>t("colors"),colors:({colors:t})=>({inherit:t.inherit,current:t.current,transparent:t.transparent,black:t.black,white:t.white,slate:t.slate,gray:t.gray,zinc:t.zinc,neutral:t.neutral,stone:t.stone,red:t.red,orange:t.orange,amber:t.amber,yellow:t.yellow,lime:t.lime,green:t.green,emerald:t.emerald,teal:t.teal,cyan:t.cyan,sky:t.sky,blue:t.blue,indigo:t.indigo,violet:t.violet,purple:t.purple,fuchsia:t.fuchsia,pink:t.pink,rose:t.rose}),columns:{auto:"auto",1:"1",2:"2",3:"3",4:"4",5:"5",6:"6",7:"7",8:"8",9:"9",10:"10",11:"11",12:"12","3xs":"16rem","2xs":"18rem",xs:"20rem",sm:"24rem",md:"28rem",lg:"32rem",xl:"36rem","2xl":"42rem","3xl":"48rem","4xl":"56rem","5xl":"64rem","6xl":"72rem","7xl":"80rem"},container:{},content:{none:"none"},contrast:{0:"0",50:".5",75:".75",100:"1",125:"1.25",150:"1.5",200:"2"},cursor:{auto:"auto",default:"default",pointer:"pointer",wait:"wait",text:"text",move:"move",help:"help","not-allowed":"not-allowed",none:"none","context-menu":"context-menu",progress:"progress",cell:"cell",crosshair:"crosshair","vertical-text":"vertical-text",alias:"alias",copy:"copy","no-drop":"no-drop",grab:"grab",grabbing:"grabbing","all-scroll":"all-scroll","col-resize":"col-resize","row-resize":"row-resize","n-resize":"n-resize","e-resize":"e-resize","s-resize":"s-resize","w-resize":"w-resize","ne-resize":"ne-resize","nw-resize":"nw-resize","se-resize":"se-resize","sw-resize":"sw-resize","ew-resize":"ew-resize","ns-resize":"ns-resize","nesw-resize":"nesw-resize","nwse-resize":"nwse-resize","zoom-in":"zoom-in","zoom-out":"zoom-out"},divideColor:({theme:t})=>t("borderColor"),divideOpacity:({theme:t})=>t("borderOpacity"),divideWidth:({theme:t})=>t("borderWidth"),dropShadow:{sm:"0 1px 1px rgb(0 0 0 / 0.05)",DEFAULT:["0 1px 2px rgb(0 0 0 / 0.1)","0 1px 1px rgb(0 0 0 / 0.06)"],md:["0 4px 3px rgb(0 0 0 / 0.07)","0 2px 2px rgb(0 0 0 / 0.06)"],lg:["0 10px 8px rgb(0 0 0 / 0.04)","0 4px 3px rgb(0 0 0 / 0.1)"],xl:["0 20px 13px rgb(0 0 0 / 0.03)","0 8px 5px rgb(0 0 0 / 0.08)"],"2xl":"0 25px 25px rgb(0 0 0 / 0.15)",none:"0 0 #0000"},fill:({theme:t})=>({none:"none",...t("colors")}),flex:{1:"1 1 0%",auto:"1 1 auto",initial:"0 1 auto",none:"none"},flexBasis:({theme:t})=>({auto:"auto",...t("spacing"),"1/2":"50%","1/3":"33.333333%","2/3":"66.666667%","1/4":"25%","2/4":"50%","3/4":"75%","1/5":"20%","2/5":"40%","3/5":"60%","4/5":"80%","1/6":"16.666667%","2/6":"33.333333%","3/6":"50%","4/6":"66.666667%","5/6":"83.333333%","1/12":"8.333333%","2/12":"16.666667%","3/12":"25%","4/12":"33.333333%","5/12":"41.666667%","6/12":"50%","7/12":"58.333333%","8/12":"66.666667%","9/12":"75%","10/12":"83.333333%","11/12":"91.666667%",full:"100%"}),flexGrow:{0:"0",DEFAULT:"1"},flexShrink:{0:"0",DEFAULT:"1"},fontFamily:{sans:["ui-sans-serif","system-ui","-apple-system","BlinkMacSystemFont",'"Segoe UI"',"Roboto",'"Helvetica Neue"',"Arial",'"Noto Sans"',"sans-serif",'"Apple Color Emoji"','"Segoe UI Emoji"','"Segoe UI Symbol"','"Noto Color Emoji"'],serif:["ui-serif","Georgia","Cambria",'"Times New Roman"',"Times","serif"],mono:["ui-monospace","SFMono-Regular","Menlo","Monaco","Consolas",'"Liberation Mono"','"Courier 
New"',"monospace"]},fontSize:{xs:["0.75rem",{lineHeight:"1rem"}],sm:["0.875rem",{lineHeight:"1.25rem"}],base:["1rem",{lineHeight:"1.5rem"}],lg:["1.125rem",{lineHeight:"1.75rem"}],xl:["1.25rem",{lineHeight:"1.75rem"}],"2xl":["1.5rem",{lineHeight:"2rem"}],"3xl":["1.875rem",{lineHeight:"2.25rem"}],"4xl":["2.25rem",{lineHeight:"2.5rem"}],"5xl":["3rem",{lineHeight:"1"}],"6xl":["3.75rem",{lineHeight:"1"}],"7xl":["4.5rem",{lineHeight:"1"}],"8xl":["6rem",{lineHeight:"1"}],"9xl":["8rem",{lineHeight:"1"}]},fontWeight:{thin:"100",extralight:"200",light:"300",normal:"400",medium:"500",semibold:"600",bold:"700",extrabold:"800",black:"900"},gap:({theme:t})=>t("spacing"),gradientColorStops:({theme:t})=>t("colors"),gradientColorStopPositions:{"0%":"0%","5%":"5%","10%":"10%","15%":"15%","20%":"20%","25%":"25%","30%":"30%","35%":"35%","40%":"40%","45%":"45%","50%":"50%","55%":"55%","60%":"60%","65%":"65%","70%":"70%","75%":"75%","80%":"80%","85%":"85%","90%":"90%","95%":"95%","100%":"100%"},grayscale:{0:"0",DEFAULT:"100%"},gridAutoColumns:{auto:"auto",min:"min-content",max:"max-content",fr:"minmax(0, 1fr)"},gridAutoRows:{auto:"auto",min:"min-content",max:"max-content",fr:"minmax(0, 1fr)"},gridColumn:{auto:"auto","span-1":"span 1 / span 1","span-2":"span 2 / span 2","span-3":"span 3 / span 3","span-4":"span 4 / span 4","span-5":"span 5 / span 5","span-6":"span 6 / span 6","span-7":"span 7 / span 7","span-8":"span 8 / span 8","span-9":"span 9 / span 9","span-10":"span 10 / span 10","span-11":"span 11 / span 11","span-12":"span 12 / span 12","span-full":"1 / -1"},gridColumnEnd:{auto:"auto",1:"1",2:"2",3:"3",4:"4",5:"5",6:"6",7:"7",8:"8",9:"9",10:"10",11:"11",12:"12",13:"13"},gridColumnStart:{auto:"auto",1:"1",2:"2",3:"3",4:"4",5:"5",6:"6",7:"7",8:"8",9:"9",10:"10",11:"11",12:"12",13:"13"},gridRow:{auto:"auto","span-1":"span 1 / span 1","span-2":"span 2 / span 2","span-3":"span 3 / span 3","span-4":"span 4 / span 4","span-5":"span 5 / span 5","span-6":"span 6 / span 6","span-full":"1 / -1"},gridRowEnd:{auto:"auto",1:"1",2:"2",3:"3",4:"4",5:"5",6:"6",7:"7"},gridRowStart:{auto:"auto",1:"1",2:"2",3:"3",4:"4",5:"5",6:"6",7:"7"},gridTemplateColumns:{none:"none",1:"repeat(1, minmax(0, 1fr))",2:"repeat(2, minmax(0, 1fr))",3:"repeat(3, minmax(0, 1fr))",4:"repeat(4, minmax(0, 1fr))",5:"repeat(5, minmax(0, 1fr))",6:"repeat(6, minmax(0, 1fr))",7:"repeat(7, minmax(0, 1fr))",8:"repeat(8, minmax(0, 1fr))",9:"repeat(9, minmax(0, 1fr))",10:"repeat(10, minmax(0, 1fr))",11:"repeat(11, minmax(0, 1fr))",12:"repeat(12, minmax(0, 1fr))"},gridTemplateRows:{none:"none",1:"repeat(1, minmax(0, 1fr))",2:"repeat(2, minmax(0, 1fr))",3:"repeat(3, minmax(0, 1fr))",4:"repeat(4, minmax(0, 1fr))",5:"repeat(5, minmax(0, 1fr))",6:"repeat(6, minmax(0, 1fr))"},height:({theme:t})=>({auto:"auto",...t("spacing"),"1/2":"50%","1/3":"33.333333%","2/3":"66.666667%","1/4":"25%","2/4":"50%","3/4":"75%","1/5":"20%","2/5":"40%","3/5":"60%","4/5":"80%","1/6":"16.666667%","2/6":"33.333333%","3/6":"50%","4/6":"66.666667%","5/6":"83.333333%",full:"100%",screen:"100vh",min:"min-content",max:"max-content",fit:"fit-content"}),hueRotate:{0:"0deg",15:"15deg",30:"30deg",60:"60deg",90:"90deg",180:"180deg"},inset:({theme:t})=>({auto:"auto",...t("spacing"),"1/2":"50%","1/3":"33.333333%","2/3":"66.666667%","1/4":"25%","2/4":"50%","3/4":"75%",full:"100%"}),invert:{0:"0",DEFAULT:"100%"},keyframes:{spin:{to:{transform:"rotate(360deg)"}},ping:{"75%, 100%":{transform:"scale(2)",opacity:"0"}},pulse:{"50%":{opacity:".5"}},bounce:{"0%, 
100%":{transform:"translateY(-25%)",animationTimingFunction:"cubic-bezier(0.8,0,1,1)"},"50%":{transform:"none",animationTimingFunction:"cubic-bezier(0,0,0.2,1)"}}},letterSpacing:{tighter:"-0.05em",tight:"-0.025em",normal:"0em",wide:"0.025em",wider:"0.05em",widest:"0.1em"},lineHeight:{none:"1",tight:"1.25",snug:"1.375",normal:"1.5",relaxed:"1.625",loose:"2",3:".75rem",4:"1rem",5:"1.25rem",6:"1.5rem",7:"1.75rem",8:"2rem",9:"2.25rem",10:"2.5rem"},listStyleType:{none:"none",disc:"disc",decimal:"decimal"},listStyleImage:{none:"none"},margin:({theme:t})=>({auto:"auto",...t("spacing")}),lineClamp:{1:"1",2:"2",3:"3",4:"4",5:"5",6:"6"},maxHeight:({theme:t})=>({...t("spacing"),none:"none",full:"100%",screen:"100vh",min:"min-content",max:"max-content",fit:"fit-content"}),maxWidth:({theme:t,breakpoints:e})=>({none:"none",0:"0rem",xs:"20rem",sm:"24rem",md:"28rem",lg:"32rem",xl:"36rem","2xl":"42rem","3xl":"48rem","4xl":"56rem","5xl":"64rem","6xl":"72rem","7xl":"80rem",full:"100%",min:"min-content",max:"max-content",fit:"fit-content",prose:"65ch",...e(t("screens"))}),minHeight:{0:"0px",full:"100%",screen:"100vh",min:"min-content",max:"max-content",fit:"fit-content"},minWidth:{0:"0px",full:"100%",min:"min-content",max:"max-content",fit:"fit-content"},objectPosition:{bottom:"bottom",center:"center",left:"left","left-bottom":"left bottom","left-top":"left top",right:"right","right-bottom":"right bottom","right-top":"right top",top:"top"},opacity:{0:"0",5:"0.05",10:"0.1",20:"0.2",25:"0.25",30:"0.3",40:"0.4",50:"0.5",60:"0.6",70:"0.7",75:"0.75",80:"0.8",90:"0.9",95:"0.95",100:"1"},order:{first:"-9999",last:"9999",none:"0",1:"1",2:"2",3:"3",4:"4",5:"5",6:"6",7:"7",8:"8",9:"9",10:"10",11:"11",12:"12"},outlineColor:({theme:t})=>t("colors"),outlineOffset:{0:"0px",1:"1px",2:"2px",4:"4px",8:"8px"},outlineWidth:{0:"0px",1:"1px",2:"2px",4:"4px",8:"8px"},padding:({theme:t})=>t("spacing"),placeholderColor:({theme:t})=>t("colors"),placeholderOpacity:({theme:t})=>t("opacity"),ringColor:({theme:t})=>({DEFAULT:t("colors.blue.500","#3b82f6"),...t("colors")}),ringOffsetColor:({theme:t})=>t("colors"),ringOffsetWidth:{0:"0px",1:"1px",2:"2px",4:"4px",8:"8px"},ringOpacity:({theme:t})=>({DEFAULT:"0.5",...t("opacity")}),ringWidth:{DEFAULT:"3px",0:"0px",1:"1px",2:"2px",4:"4px",8:"8px"},rotate:{0:"0deg",1:"1deg",2:"2deg",3:"3deg",6:"6deg",12:"12deg",45:"45deg",90:"90deg",180:"180deg"},saturate:{0:"0",50:".5",100:"1",150:"1.5",200:"2"},scale:{0:"0",50:".5",75:".75",90:".9",95:".95",100:"1",105:"1.05",110:"1.1",125:"1.25",150:"1.5"},screens:{sm:"640px",md:"768px",lg:"1024px",xl:"1280px","2xl":"1536px"},scrollMargin:({theme:t})=>({...t("spacing")}),scrollPadding:({theme:t})=>t("spacing"),sepia:{0:"0",DEFAULT:"100%"},skew:{0:"0deg",1:"1deg",2:"2deg",3:"3deg",6:"6deg",12:"12deg"},space:({theme:t})=>({...t("spacing")}),spacing:{px:"1px",0:"0px",.5:"0.125rem",1:"0.25rem",1.5:"0.375rem",2:"0.5rem",2.5:"0.625rem",3:"0.75rem",3.5:"0.875rem",4:"1rem",5:"1.25rem",6:"1.5rem",7:"1.75rem",8:"2rem",9:"2.25rem",10:"2.5rem",11:"2.75rem",12:"3rem",14:"3.5rem",16:"4rem",20:"5rem",24:"6rem",28:"7rem",32:"8rem",36:"9rem",40:"10rem",44:"11rem",48:"12rem",52:"13rem",56:"14rem",60:"15rem",64:"16rem",72:"18rem",80:"20rem",96:"24rem"},stroke:({theme:t})=>({none:"none",...t("colors")}),strokeWidth:{0:"0",1:"1",2:"2"},supports:{},data:{},textColor:({theme:t})=>t("colors"),textDecorationColor:({theme:t})=>t("colors"),textDecorationThickness:{auto:"auto","from-font":"from-font",0:"0px",1:"1px",2:"2px",4:"4px",8:"8px"},textIndent:({theme:t})=>({...t("spacing")}),te
xtOpacity:({theme:t})=>t("opacity"),textUnderlineOffset:{auto:"auto",0:"0px",1:"1px",2:"2px",4:"4px",8:"8px"},transformOrigin:{center:"center",top:"top","top-right":"top right",right:"right","bottom-right":"bottom right",bottom:"bottom","bottom-left":"bottom left",left:"left","top-left":"top left"},transitionDelay:{0:"0s",75:"75ms",100:"100ms",150:"150ms",200:"200ms",300:"300ms",500:"500ms",700:"700ms",1e3:"1000ms"},transitionDuration:{DEFAULT:"150ms",0:"0s",75:"75ms",100:"100ms",150:"150ms",200:"200ms",300:"300ms",500:"500ms",700:"700ms",1e3:"1000ms"},transitionProperty:{none:"none",all:"all",DEFAULT:"color, background-color, border-color, text-decoration-color, fill, stroke, opacity, box-shadow, transform, filter, backdrop-filter",colors:"color, background-color, border-color, text-decoration-color, fill, stroke",opacity:"opacity",shadow:"box-shadow",transform:"transform"},transitionTimingFunction:{DEFAULT:"cubic-bezier(0.4, 0, 0.2, 1)",linear:"linear",in:"cubic-bezier(0.4, 0, 1, 1)",out:"cubic-bezier(0, 0, 0.2, 1)","in-out":"cubic-bezier(0.4, 0, 0.2, 1)"},translate:({theme:t})=>({...t("spacing"),"1/2":"50%","1/3":"33.333333%","2/3":"66.666667%","1/4":"25%","2/4":"50%","3/4":"75%",full:"100%"}),width:({theme:t})=>({auto:"auto",...t("spacing"),"1/2":"50%","1/3":"33.333333%","2/3":"66.666667%","1/4":"25%","2/4":"50%","3/4":"75%","1/5":"20%","2/5":"40%","3/5":"60%","4/5":"80%","1/6":"16.666667%","2/6":"33.333333%","3/6":"50%","4/6":"66.666667%","5/6":"83.333333%","1/12":"8.333333%","2/12":"16.666667%","3/12":"25%","4/12":"33.333333%","5/12":"41.666667%","6/12":"50%","7/12":"58.333333%","8/12":"66.666667%","9/12":"75%","10/12":"83.333333%","11/12":"91.666667%",full:"100%",screen:"100vw",min:"min-content",max:"max-content",fit:"fit-content"}),willChange:{auto:"auto",scroll:"scroll-position",contents:"contents",transform:"transform"},zIndex:{auto:"auto",0:"0",10:"10",20:"20",30:"30",40:"40",50:"50"}},plugins:[]}});function In(t){let e=(t?.presets??[Ap.default]).slice().reverse().flatMap(n=>In(n instanceof Function?n():n)),r={respectDefaultRingColorOpacity:{theme:{ringColor:({theme:n})=>({DEFAULT:"#3b82f67f",...n("colors")})}},disableColorOpacityUtilitiesByDefault:{corePlugins:{backgroundOpacity:!1,borderOpacity:!1,divideOpacity:!1,placeholderOpacity:!1,ringOpacity:!1,textOpacity:!1}}},i=Object.keys(r).filter(n=>de(t,n)).map(n=>r[n]);return[t,...i,...e]}var Ap,Cp=E(()=>{u();Ap=he(Dn());Xe()});var Pp={};Ve(Pp,{default:()=>ei});function ei(...t){let[,...e]=In(t[0]);return $a([...t,...e])}var ja=E(()=>{u();Op();Cp()});var qp={};Ve(qp,{default:()=>me});var me,jt=E(()=>{u();me={resolve:t=>t,extname:t=>"."+t.split(".").pop()}});function Rn(t){return typeof t=="object"&&t!==null}function Sk(t){return Object.keys(t).length===0}function Dp(t){return typeof t=="string"||t instanceof String}function Ua(t){return Rn(t)&&t.config===void 0&&!Sk(t)?null:Rn(t)&&t.config!==void 0&&Dp(t.config)?me.resolve(t.config):Rn(t)&&t.config!==void 0&&Rn(t.config)?null:Dp(t)?me.resolve(t):_k()}function _k(){for(let t of kk)try{let e=me.resolve(t);return we.accessSync(e),e}catch(e){}return null}var kk,Ip=E(()=>{u();ut();jt();kk=["./tailwind.config.js","./tailwind.config.cjs","./tailwind.config.mjs","./tailwind.config.ts"]});var Rp={};Ve(Rp,{default:()=>Va});var Va,Wa=E(()=>{u();Va={parse:t=>({href:t})}});var Ga=b(()=>{u()});var Ln=b((EM,Bp)=>{u();"use strict";var Lp=(Sn(),Qc),Mp=Ga(),ir=class extends 
Error{constructor(e,r,i,n,s,a){super(e);this.name="CssSyntaxError",this.reason=e,s&&(this.file=s),n&&(this.source=n),a&&(this.plugin=a),typeof r!="undefined"&&typeof i!="undefined"&&(typeof r=="number"?(this.line=r,this.column=i):(this.line=r.line,this.column=r.column,this.endLine=i.line,this.endColumn=i.column)),this.setMessage(),Error.captureStackTrace&&Error.captureStackTrace(this,ir)}setMessage(){this.message=this.plugin?this.plugin+": ":"",this.message+=this.file?this.file:"",typeof this.line!="undefined"&&(this.message+=":"+this.line+":"+this.column),this.message+=": "+this.reason}showSourceCode(e){if(!this.source)return"";let r=this.source;e==null&&(e=Lp.isColorSupported),Mp&&e&&(r=Mp(r));let i=r.split(/\r?\n/),n=Math.max(this.line-3,0),s=Math.min(this.line+2,i.length),a=String(s).length,o,l;if(e){let{bold:f,red:c,gray:p}=Lp.createColors(!0);o=m=>f(c(m)),l=m=>p(m)}else o=l=f=>f;return i.slice(n,s).map((f,c)=>{let p=n+1+c,m=" "+(" "+p).slice(-a)+" | ";if(p===this.line){let d=l(m.replace(/\d/g," "))+f.slice(0,this.column-1).replace(/[^\t]/g," ");return o(">")+l(m)+f+` - `+d+o("^")}return" "+l(m)+f}).join(` -`)}toString(){let e=this.showSourceCode();return e&&(e=` - -`+e+` -`),this.name+": "+this.message+e}};Bp.exports=ir;ir.default=ir});var Mn=b((AM,Ha)=>{u();"use strict";Ha.exports.isClean=Symbol("isClean");Ha.exports.my=Symbol("my")});var Ya=b((CM,Np)=>{u();"use strict";var Fp={colon:": ",indent:" ",beforeDecl:` -`,beforeRule:` -`,beforeOpen:" ",beforeClose:` -`,beforeComment:` -`,after:` -`,emptyBody:"",commentLeft:" ",commentRight:" ",semicolon:!1};function Tk(t){return t[0].toUpperCase()+t.slice(1)}var Bn=class{constructor(e){this.builder=e}stringify(e,r){if(!this[e.type])throw new Error("Unknown AST node type "+e.type+". Maybe you need to change PostCSS stringifier.");this[e.type](e,r)}document(e){this.body(e)}root(e){this.body(e),e.raws.after&&this.builder(e.raws.after)}comment(e){let r=this.raw(e,"left","commentLeft"),i=this.raw(e,"right","commentRight");this.builder("/*"+r+e.text+i+"*/",e)}decl(e,r){let i=this.raw(e,"between","colon"),n=e.prop+i+this.rawValue(e,"value");e.important&&(n+=e.raws.important||" !important"),r&&(n+=";"),this.builder(n,e)}rule(e){this.block(e,this.rawValue(e,"selector")),e.raws.ownSemicolon&&this.builder(e.raws.ownSemicolon,e,"end")}atrule(e,r){let i="@"+e.name,n=e.params?this.rawValue(e,"params"):"";if(typeof e.raws.afterName!="undefined"?i+=e.raws.afterName:n&&(i+=" "),e.nodes)this.block(e,i+n);else{let s=(e.raws.between||"")+(r?";":"");this.builder(i+n+s,e)}}body(e){let r=e.nodes.length-1;for(;r>0&&e.nodes[r].type==="comment";)r-=1;let i=this.raw(e,"semicolon");for(let n=0;n{if(n=l.raws[r],typeof n!="undefined")return!1})}return typeof n=="undefined"&&(n=Fp[i]),a.rawCache[i]=n,n}rawSemicolon(e){let r;return e.walk(i=>{if(i.nodes&&i.nodes.length&&i.last.type==="decl"&&(r=i.raws.semicolon,typeof r!="undefined"))return!1}),r}rawEmptyBody(e){let r;return e.walk(i=>{if(i.nodes&&i.nodes.length===0&&(r=i.raws.after,typeof r!="undefined"))return!1}),r}rawIndent(e){if(e.raws.indent)return e.raws.indent;let r;return e.walk(i=>{let n=i.parent;if(n&&n!==e&&n.parent&&n.parent===e&&typeof i.raws.before!="undefined"){let s=i.raws.before.split(` -`);return r=s[s.length-1],r=r.replace(/\S/g,""),!1}}),r}rawBeforeComment(e,r){let i;return e.walkComments(n=>{if(typeof n.raws.before!="undefined")return i=n.raws.before,i.includes(` -`)&&(i=i.replace(/[^\n]+$/,"")),!1}),typeof 
i=="undefined"?i=this.raw(r,null,"beforeDecl"):i&&(i=i.replace(/\S/g,"")),i}rawBeforeDecl(e,r){let i;return e.walkDecls(n=>{if(typeof n.raws.before!="undefined")return i=n.raws.before,i.includes(` -`)&&(i=i.replace(/[^\n]+$/,"")),!1}),typeof i=="undefined"?i=this.raw(r,null,"beforeRule"):i&&(i=i.replace(/\S/g,"")),i}rawBeforeRule(e){let r;return e.walk(i=>{if(i.nodes&&(i.parent!==e||e.first!==i)&&typeof i.raws.before!="undefined")return r=i.raws.before,r.includes(` -`)&&(r=r.replace(/[^\n]+$/,"")),!1}),r&&(r=r.replace(/\S/g,"")),r}rawBeforeClose(e){let r;return e.walk(i=>{if(i.nodes&&i.nodes.length>0&&typeof i.raws.after!="undefined")return r=i.raws.after,r.includes(` -`)&&(r=r.replace(/[^\n]+$/,"")),!1}),r&&(r=r.replace(/\S/g,"")),r}rawBeforeOpen(e){let r;return e.walk(i=>{if(i.type!=="decl"&&(r=i.raws.between,typeof r!="undefined"))return!1}),r}rawColon(e){let r;return e.walkDecls(i=>{if(typeof i.raws.between!="undefined")return r=i.raws.between.replace(/[^\s:]/g,""),!1}),r}beforeAfter(e,r){let i;e.type==="decl"?i=this.raw(e,null,"beforeDecl"):e.type==="comment"?i=this.raw(e,null,"beforeComment"):r==="before"?i=this.raw(e,null,"beforeRule"):i=this.raw(e,null,"beforeClose");let n=e.parent,s=0;for(;n&&n.type!=="root";)s+=1,n=n.parent;if(i.includes(` -`)){let a=this.raw(e,null,"indent");if(a.length)for(let o=0;o{u();"use strict";var Ok=Ya();function Qa(t,e){new Ok(e).stringify(t)}zp.exports=Qa;Qa.default=Qa});var ri=b((qM,$p)=>{u();"use strict";var{isClean:Fn,my:Ek}=Mn(),Ak=Ln(),Ck=Ya(),Pk=ti();function Ja(t,e){let r=new t.constructor;for(let i in t){if(!Object.prototype.hasOwnProperty.call(t,i)||i==="proxyCache")continue;let n=t[i],s=typeof n;i==="parent"&&s==="object"?e&&(r[i]=e):i==="source"?r[i]=n:Array.isArray(n)?r[i]=n.map(a=>Ja(a,r)):(s==="object"&&n!==null&&(n=Ja(n)),r[i]=n)}return r}var Nn=class{constructor(e={}){this.raws={},this[Fn]=!1,this[Ek]=!0;for(let r in e)if(r==="nodes"){this.nodes=[];for(let i of e[r])typeof i.clone=="function"?this.append(i.clone()):this.append(i)}else this[r]=e[r]}error(e,r={}){if(this.source){let{start:i,end:n}=this.rangeBy(r);return this.source.input.error(e,{line:i.line,column:i.column},{line:n.line,column:n.column},r)}return new Ak(e)}warn(e,r,i){let n={node:this};for(let s in i)n[s]=i[s];return e.warn(r,n)}remove(){return this.parent&&this.parent.removeChild(this),this.parent=void 0,this}toString(e=Pk){e.stringify&&(e=e.stringify);let r="";return e(this,i=>{r+=i}),r}assign(e={}){for(let r in e)this[r]=e[r];return this}clone(e={}){let r=Ja(this);for(let i in e)r[i]=e[i];return r}cloneBefore(e={}){let r=this.clone(e);return this.parent.insertBefore(this,r),r}cloneAfter(e={}){let r=this.clone(e);return this.parent.insertAfter(this,r),r}replaceWith(...e){if(this.parent){let r=this,i=!1;for(let n of e)n===this?i=!0:i?(this.parent.insertAfter(r,n),r=n):this.parent.insertBefore(r,n);i||this.remove()}return this}next(){if(!this.parent)return;let e=this.parent.index(this);return this.parent.nodes[e+1]}prev(){if(!this.parent)return;let e=this.parent.index(this);return this.parent.nodes[e-1]}before(e){return this.parent.insertBefore(this,e),this}after(e){return this.parent.insertAfter(this,e),this}root(){let e=this;for(;e.parent&&e.parent.type!=="document";)e=e.parent;return e}raw(e,r){return new Ck().raw(this,e,r)}cleanRaws(e){delete this.raws.before,delete this.raws.after,e||delete this.raws.between}toJSON(e,r){let i={},n=r==null;r=r||new Map;let s=0;for(let a in 
this){if(!Object.prototype.hasOwnProperty.call(this,a)||a==="parent"||a==="proxyCache")continue;let o=this[a];if(Array.isArray(o))i[a]=o.map(l=>typeof l=="object"&&l.toJSON?l.toJSON(null,r):l);else if(typeof o=="object"&&o.toJSON)i[a]=o.toJSON(null,r);else if(a==="source"){let l=r.get(o.input);l==null&&(l=s,r.set(o.input,s),s++),i[a]={inputId:l,start:o.start,end:o.end}}else i[a]=o}return n&&(i.inputs=[...r.keys()].map(a=>a.toJSON())),i}positionInside(e){let r=this.toString(),i=this.source.start.column,n=this.source.start.line;for(let s=0;se.root().toProxy():e[r]}}}toProxy(){return this.proxyCache||(this.proxyCache=new Proxy(this,this.getProxyProcessor())),this.proxyCache}addToError(e){if(e.postcssNode=this,e.stack&&this.source&&/\n\s{4}at /.test(e.stack)){let r=this.source;e.stack=e.stack.replace(/\n\s{4}at /,`$&${r.input.from}:${r.start.line}:${r.start.column}$&`)}return e}markDirty(){if(this[Fn]){this[Fn]=!1;let e=this;for(;e=e.parent;)e[Fn]=!1}}get proxyOf(){return this}};$p.exports=Nn;Nn.default=Nn});var ii=b((DM,jp)=>{u();"use strict";var qk=ri(),zn=class extends qk{constructor(e){e&&typeof e.value!="undefined"&&typeof e.value!="string"&&(e={...e,value:String(e.value)});super(e);this.type="decl"}get variable(){return this.prop.startsWith("--")||this.prop[0]==="$"}};jp.exports=zn;zn.default=zn});var Xa=b((IM,Up)=>{u();Up.exports=function(t,e){return{generate:()=>{let r="";return t(e,i=>{r+=i}),[r]}}}});var ni=b((RM,Vp)=>{u();"use strict";var Dk=ri(),$n=class extends Dk{constructor(e){super(e);this.type="comment"}};Vp.exports=$n;$n.default=$n});var Pt=b((LM,Zp)=>{u();"use strict";var{isClean:Wp,my:Gp}=Mn(),Hp=ii(),Yp=ni(),Ik=ri(),Qp,Ka,Za,Jp;function Xp(t){return t.map(e=>(e.nodes&&(e.nodes=Xp(e.nodes)),delete e.source,e))}function Kp(t){if(t[Wp]=!1,t.proxyOf.nodes)for(let e of t.proxyOf.nodes)Kp(e)}var Ie=class extends Ik{push(e){return e.parent=this,this.proxyOf.nodes.push(e),this}each(e){if(!this.proxyOf.nodes)return;let r=this.getIterator(),i,n;for(;this.indexes[r]{let n;try{n=e(r,i)}catch(s){throw r.addToError(s)}return n!==!1&&r.walk&&(n=r.walk(e)),n})}walkDecls(e,r){return r?e instanceof RegExp?this.walk((i,n)=>{if(i.type==="decl"&&e.test(i.prop))return r(i,n)}):this.walk((i,n)=>{if(i.type==="decl"&&i.prop===e)return r(i,n)}):(r=e,this.walk((i,n)=>{if(i.type==="decl")return r(i,n)}))}walkRules(e,r){return r?e instanceof RegExp?this.walk((i,n)=>{if(i.type==="rule"&&e.test(i.selector))return r(i,n)}):this.walk((i,n)=>{if(i.type==="rule"&&i.selector===e)return r(i,n)}):(r=e,this.walk((i,n)=>{if(i.type==="rule")return r(i,n)}))}walkAtRules(e,r){return r?e instanceof RegExp?this.walk((i,n)=>{if(i.type==="atrule"&&e.test(i.name))return r(i,n)}):this.walk((i,n)=>{if(i.type==="atrule"&&i.name===e)return r(i,n)}):(r=e,this.walk((i,n)=>{if(i.type==="atrule")return r(i,n)}))}walkComments(e){return this.walk((r,i)=>{if(r.type==="comment")return e(r,i)})}append(...e){for(let r of e){let i=this.normalize(r,this.last);for(let n of i)this.proxyOf.nodes.push(n)}return this.markDirty(),this}prepend(...e){e=e.reverse();for(let r of e){let i=this.normalize(r,this.first,"prepend").reverse();for(let n of i)this.proxyOf.nodes.unshift(n);for(let n in this.indexes)this.indexes[n]=this.indexes[n]+i.length}return this.markDirty(),this}cleanRaws(e){if(super.cleanRaws(e),this.nodes)for(let r of this.nodes)r.cleanRaws(e)}insertBefore(e,r){let i=this.index(e),n=i===0?"prepend":!1,s=this.normalize(r,this.proxyOf.nodes[i],n).reverse();i=this.index(e);for(let o of s)this.proxyOf.nodes.splice(i,0,o);let a;for(let 
o in this.indexes)a=this.indexes[o],i<=a&&(this.indexes[o]=a+s.length);return this.markDirty(),this}insertAfter(e,r){let i=this.index(e),n=this.normalize(r,this.proxyOf.nodes[i]).reverse();i=this.index(e);for(let a of n)this.proxyOf.nodes.splice(i+1,0,a);let s;for(let a in this.indexes)s=this.indexes[a],i=e&&(this.indexes[i]=r-1);return this.markDirty(),this}removeAll(){for(let e of this.proxyOf.nodes)e.parent=void 0;return this.proxyOf.nodes=[],this.markDirty(),this}replaceValues(e,r,i){return i||(i=r,r={}),this.walkDecls(n=>{r.props&&!r.props.includes(n.prop)||r.fast&&!n.value.includes(r.fast)||(n.value=n.value.replace(e,i))}),this.markDirty(),this}every(e){return this.nodes.every(e)}some(e){return this.nodes.some(e)}index(e){return typeof e=="number"?e:(e.proxyOf&&(e=e.proxyOf),this.proxyOf.nodes.indexOf(e))}get first(){if(!!this.proxyOf.nodes)return this.proxyOf.nodes[0]}get last(){if(!!this.proxyOf.nodes)return this.proxyOf.nodes[this.proxyOf.nodes.length-1]}normalize(e,r){if(typeof e=="string")e=Xp(Qp(e).nodes);else if(Array.isArray(e)){e=e.slice(0);for(let n of e)n.parent&&n.parent.removeChild(n,"ignore")}else if(e.type==="root"&&this.type!=="document"){e=e.nodes.slice(0);for(let n of e)n.parent&&n.parent.removeChild(n,"ignore")}else if(e.type)e=[e];else if(e.prop){if(typeof e.value=="undefined")throw new Error("Value field is missed in node creation");typeof e.value!="string"&&(e.value=String(e.value)),e=[new Hp(e)]}else if(e.selector)e=[new Ka(e)];else if(e.name)e=[new Za(e)];else if(e.text)e=[new Yp(e)];else throw new Error("Unknown node type in node creation");return e.map(n=>(n[Gp]||Ie.rebuild(n),n=n.proxyOf,n.parent&&n.parent.removeChild(n),n[Wp]&&Kp(n),typeof n.raws.before=="undefined"&&r&&typeof r.raws.before!="undefined"&&(n.raws.before=r.raws.before.replace(/\S/g,"")),n.parent=this.proxyOf,n))}getProxyProcessor(){return{set(e,r,i){return e[r]===i||(e[r]=i,(r==="name"||r==="params"||r==="selector")&&e.markDirty()),!0},get(e,r){return r==="proxyOf"?e:e[r]?r==="each"||typeof r=="string"&&r.startsWith("walk")?(...i)=>e[r](...i.map(n=>typeof n=="function"?(s,a)=>n(s.toProxy(),a):n)):r==="every"||r==="some"?i=>e[r]((n,...s)=>i(n.toProxy(),...s)):r==="root"?()=>e.root().toProxy():r==="nodes"?e.nodes.map(i=>i.toProxy()):r==="first"||r==="last"?e[r].toProxy():e[r]:e[r]}}}getIterator(){this.lastEach||(this.lastEach=0),this.indexes||(this.indexes={}),this.lastEach+=1;let e=this.lastEach;return this.indexes[e]=0,e}};Ie.registerParse=t=>{Qp=t};Ie.registerRule=t=>{Ka=t};Ie.registerAtRule=t=>{Za=t};Ie.registerRoot=t=>{Jp=t};Zp.exports=Ie;Ie.default=Ie;Ie.rebuild=t=>{t.type==="atrule"?Object.setPrototypeOf(t,Za.prototype):t.type==="rule"?Object.setPrototypeOf(t,Ka.prototype):t.type==="decl"?Object.setPrototypeOf(t,Hp.prototype):t.type==="comment"?Object.setPrototypeOf(t,Yp.prototype):t.type==="root"&&Object.setPrototypeOf(t,Jp.prototype),t[Gp]=!0,t.nodes&&t.nodes.forEach(e=>{Ie.rebuild(e)})}});var jn=b((MM,rd)=>{u();"use strict";var Rk=Pt(),ed,td,nr=class extends Rk{constructor(e){super({type:"document",...e});this.nodes||(this.nodes=[])}toResult(e={}){return new ed(new td,this,e).stringify()}};nr.registerLazyResult=t=>{ed=t};nr.registerProcessor=t=>{td=t};rd.exports=nr;nr.default=nr});var eo=b((BM,nd)=>{u();"use strict";var id={};nd.exports=function(e){id[e]||(id[e]=!0,typeof console!="undefined"&&console.warn&&console.warn(e))}});var to=b((FM,sd)=>{u();"use strict";var Un=class{constructor(e,r={}){if(this.type="warning",this.text=e,r.node&&r.node.source){let 
i=r.node.rangeBy(r);this.line=i.start.line,this.column=i.start.column,this.endLine=i.end.line,this.endColumn=i.end.column}for(let i in r)this[i]=r[i]}toString(){return this.node?this.node.error(this.text,{plugin:this.plugin,index:this.index,word:this.word}).message:this.plugin?this.plugin+": "+this.text:this.text}};sd.exports=Un;Un.default=Un});var Wn=b((NM,ad)=>{u();"use strict";var Lk=to(),Vn=class{constructor(e,r,i){this.processor=e,this.messages=[],this.root=r,this.opts=i,this.css=void 0,this.map=void 0}toString(){return this.css}warn(e,r={}){r.plugin||this.lastPlugin&&this.lastPlugin.postcssPlugin&&(r.plugin=this.lastPlugin.postcssPlugin);let i=new Lk(e,r);return this.messages.push(i),i}warnings(){return this.messages.filter(e=>e.type==="warning")}get content(){return this.css}};ad.exports=Vn;Vn.default=Vn});var cd=b((zM,fd)=>{u();"use strict";var ro="'".charCodeAt(0),od='"'.charCodeAt(0),Gn="\\".charCodeAt(0),ld="/".charCodeAt(0),Hn=` -`.charCodeAt(0),si=" ".charCodeAt(0),Yn="\f".charCodeAt(0),Qn=" ".charCodeAt(0),Jn="\r".charCodeAt(0),Mk="[".charCodeAt(0),Bk="]".charCodeAt(0),Fk="(".charCodeAt(0),Nk=")".charCodeAt(0),zk="{".charCodeAt(0),$k="}".charCodeAt(0),jk=";".charCodeAt(0),Uk="*".charCodeAt(0),Vk=":".charCodeAt(0),Wk="@".charCodeAt(0),Xn=/[\t\n\f\r "#'()/;[\\\]{}]/g,Kn=/[\t\n\f\r !"#'():;@[\\\]{}]|\/(?=\*)/g,Gk=/.[\n"'(/\\]/,ud=/[\da-f]/i;fd.exports=function(e,r={}){let i=e.css.valueOf(),n=r.ignoreErrors,s,a,o,l,f,c,p,m,d,v,_=i.length,x=0,y=[],S=[];function T(){return x}function O(F){throw e.error("Unclosed "+F,x)}function P(){return S.length===0&&x>=_}function N(F){if(S.length)return S.pop();if(x>=_)return;let fe=F?F.ignoreUnclosed:!1;switch(s=i.charCodeAt(x),s){case Hn:case si:case Qn:case Jn:case Yn:{a=x;do a+=1,s=i.charCodeAt(a);while(s===si||s===Hn||s===Qn||s===Jn||s===Yn);v=["space",i.slice(x,a)],x=a-1;break}case Mk:case Bk:case zk:case $k:case Vk:case jk:case Nk:{let Te=String.fromCharCode(s);v=[Te,Te,x];break}case Fk:{if(m=y.length?y.pop()[1]:"",d=i.charCodeAt(x+1),m==="url"&&d!==ro&&d!==od&&d!==si&&d!==Hn&&d!==Qn&&d!==Yn&&d!==Jn){a=x;do{if(c=!1,a=i.indexOf(")",a+1),a===-1)if(n||fe){a=x;break}else O("bracket");for(p=a;i.charCodeAt(p-1)===Gn;)p-=1,c=!c}while(c);v=["brackets",i.slice(x,a+1),x,a],x=a}else a=i.indexOf(")",x+1),l=i.slice(x,a+1),a===-1||Gk.test(l)?v=["(","(",x]:(v=["brackets",l,x,a],x=a);break}case ro:case od:{o=s===ro?"'":'"',a=x;do{if(c=!1,a=i.indexOf(o,a+1),a===-1)if(n||fe){a=x+1;break}else O("string");for(p=a;i.charCodeAt(p-1)===Gn;)p-=1,c=!c}while(c);v=["string",i.slice(x,a+1),x,a],x=a;break}case Wk:{Xn.lastIndex=x+1,Xn.test(i),Xn.lastIndex===0?a=i.length-1:a=Xn.lastIndex-2,v=["at-word",i.slice(x,a+1),x,a],x=a;break}case Gn:{for(a=x,f=!0;i.charCodeAt(a+1)===Gn;)a+=1,f=!f;if(s=i.charCodeAt(a+1),f&&s!==ld&&s!==si&&s!==Hn&&s!==Qn&&s!==Jn&&s!==Yn&&(a+=1,ud.test(i.charAt(a)))){for(;ud.test(i.charAt(a+1));)a+=1;i.charCodeAt(a+1)===si&&(a+=1)}v=["word",i.slice(x,a+1),x,a],x=a;break}default:{s===ld&&i.charCodeAt(x+1)===Uk?(a=i.indexOf("*/",x+2)+1,a===0&&(n||fe?a=i.length:O("comment")),v=["comment",i.slice(x,a+1),x,a],x=a):(Kn.lastIndex=x+1,Kn.test(i),Kn.lastIndex===0?a=i.length-1:a=Kn.lastIndex-2,v=["word",i.slice(x,a+1),x,a],y.push(v),x=a);break}}return x++,v}function z(F){S.push(F)}return{back:z,nextToken:N,endOfFile:P,position:T}}});var Zn=b(($M,dd)=>{u();"use strict";var pd=Pt(),ai=class extends pd{constructor(e){super(e);this.type="atrule"}append(...e){return this.proxyOf.nodes||(this.nodes=[]),super.append(...e)}prepend(...e){return 
this.proxyOf.nodes||(this.nodes=[]),super.prepend(...e)}};dd.exports=ai;ai.default=ai;pd.registerAtRule(ai)});var sr=b((jM,wd)=>{u();"use strict";var hd=Pt(),md,gd,Ut=class extends hd{constructor(e){super(e);this.type="root",this.nodes||(this.nodes=[])}removeChild(e,r){let i=this.index(e);return!r&&i===0&&this.nodes.length>1&&(this.nodes[1].raws.before=this.nodes[i].raws.before),super.removeChild(e)}normalize(e,r,i){let n=super.normalize(e);if(r){if(i==="prepend")this.nodes.length>1?r.raws.before=this.nodes[1].raws.before:delete r.raws.before;else if(this.first!==r)for(let s of n)s.raws.before=r.raws.before}return n}toResult(e={}){return new md(new gd,this,e).stringify()}};Ut.registerLazyResult=t=>{md=t};Ut.registerProcessor=t=>{gd=t};wd.exports=Ut;Ut.default=Ut;hd.registerRoot(Ut)});var io=b((UM,yd)=>{u();"use strict";var oi={split(t,e,r){let i=[],n="",s=!1,a=0,o=!1,l="",f=!1;for(let c of t)f?f=!1:c==="\\"?f=!0:o?c===l&&(o=!1):c==='"'||c==="'"?(o=!0,l=c):c==="("?a+=1:c===")"?a>0&&(a-=1):a===0&&e.includes(c)&&(s=!0),s?(n!==""&&i.push(n.trim()),n="",s=!1):n+=c;return(r||n!=="")&&i.push(n.trim()),i},space(t){let e=[" ",` -`," "];return oi.split(t,e)},comma(t){return oi.split(t,[","],!0)}};yd.exports=oi;oi.default=oi});var es=b((VM,bd)=>{u();"use strict";var vd=Pt(),Hk=io(),li=class extends vd{constructor(e){super(e);this.type="rule",this.nodes||(this.nodes=[])}get selectors(){return Hk.comma(this.selector)}set selectors(e){let r=this.selector?this.selector.match(/,\s*/):null,i=r?r[0]:","+this.raw("between","beforeOpen");this.selector=e.join(i)}};bd.exports=li;li.default=li;vd.registerRule(li)});var Td=b((WM,_d)=>{u();"use strict";var Yk=ii(),Qk=cd(),Jk=ni(),Xk=Zn(),Kk=sr(),xd=es(),kd={empty:!0,space:!0};function Zk(t){for(let e=t.length-1;e>=0;e--){let r=t[e],i=r[3]||r[2];if(i)return i}}var Sd=class{constructor(e){this.input=e,this.root=new Kk,this.current=this.root,this.spaces="",this.semicolon=!1,this.customProperty=!1,this.createTokenizer(),this.root.source={input:e,start:{offset:0,line:1,column:1}}}createTokenizer(){this.tokenizer=Qk(this.input)}parse(){let e;for(;!this.tokenizer.endOfFile();)switch(e=this.tokenizer.nextToken(),e[0]){case"space":this.spaces+=e[1];break;case";":this.freeSemicolon(e);break;case"}":this.end(e);break;case"comment":this.comment(e);break;case"at-word":this.atrule(e);break;case"{":this.emptyRule(e);break;default:this.other(e);break}this.endFile()}comment(e){let r=new Jk;this.init(r,e[2]),r.source.end=this.getPosition(e[3]||e[2]);let i=e[1].slice(2,-2);if(/^\s*$/.test(i))r.text="",r.raws.left=i,r.raws.right="";else{let n=i.match(/^(\s*)([^]*\S)(\s*)$/);r.text=n[2],r.raws.left=n[1],r.raws.right=n[3]}}emptyRule(e){let r=new xd;this.init(r,e[2]),r.selector="",r.raws.between="",this.current=r}other(e){let r=!1,i=null,n=!1,s=null,a=[],o=e[1].startsWith("--"),l=[],f=e;for(;f;){if(i=f[0],l.push(f),i==="("||i==="[")s||(s=f),a.push(i==="("?")":"]");else if(o&&n&&i==="{")s||(s=f),a.push("}");else if(a.length===0)if(i===";")if(n){this.decl(l,o);return}else break;else if(i==="{"){this.rule(l);return}else if(i==="}"){this.tokenizer.back(l.pop()),r=!0;break}else i===":"&&(n=!0);else i===a[a.length-1]&&(a.pop(),a.length===0&&(s=null));f=this.tokenizer.nextToken()}if(this.tokenizer.endOfFile()&&(r=!0),a.length>0&&this.unclosedBracket(s),r&&n){if(!o)for(;l.length&&(f=l[l.length-1][0],!(f!=="space"&&f!=="comment"));)this.tokenizer.back(l.pop());this.decl(l,o)}else this.unknownWord(l)}rule(e){e.pop();let r=new 
xd;this.init(r,e[0][2]),r.raws.between=this.spacesAndCommentsFromEnd(e),this.raw(r,"selector",e),this.current=r}decl(e,r){let i=new Yk;this.init(i,e[0][2]);let n=e[e.length-1];for(n[0]===";"&&(this.semicolon=!0,e.pop()),i.source.end=this.getPosition(n[3]||n[2]||Zk(e));e[0][0]!=="word";)e.length===1&&this.unknownWord(e),i.raws.before+=e.shift()[1];for(i.source.start=this.getPosition(e[0][2]),i.prop="";e.length;){let f=e[0][0];if(f===":"||f==="space"||f==="comment")break;i.prop+=e.shift()[1]}i.raws.between="";let s;for(;e.length;)if(s=e.shift(),s[0]===":"){i.raws.between+=s[1];break}else s[0]==="word"&&/\w/.test(s[1])&&this.unknownWord([s]),i.raws.between+=s[1];(i.prop[0]==="_"||i.prop[0]==="*")&&(i.raws.before+=i.prop[0],i.prop=i.prop.slice(1));let a=[],o;for(;e.length&&(o=e[0][0],!(o!=="space"&&o!=="comment"));)a.push(e.shift());this.precheckMissedSemicolon(e);for(let f=e.length-1;f>=0;f--){if(s=e[f],s[1].toLowerCase()==="!important"){i.important=!0;let c=this.stringFrom(e,f);c=this.spacesFromEnd(e)+c,c!==" !important"&&(i.raws.important=c);break}else if(s[1].toLowerCase()==="important"){let c=e.slice(0),p="";for(let m=f;m>0;m--){let d=c[m][0];if(p.trim().indexOf("!")===0&&d!=="space")break;p=c.pop()[1]+p}p.trim().indexOf("!")===0&&(i.important=!0,i.raws.important=p,e=c)}if(s[0]!=="space"&&s[0]!=="comment")break}e.some(f=>f[0]!=="space"&&f[0]!=="comment")&&(i.raws.between+=a.map(f=>f[1]).join(""),a=[]),this.raw(i,"value",a.concat(e),r),i.value.includes(":")&&!r&&this.checkMissedSemicolon(e)}atrule(e){let r=new Xk;r.name=e[1].slice(1),r.name===""&&this.unnamedAtrule(r,e),this.init(r,e[2]);let i,n,s,a=!1,o=!1,l=[],f=[];for(;!this.tokenizer.endOfFile();){if(e=this.tokenizer.nextToken(),i=e[0],i==="("||i==="["?f.push(i==="("?")":"]"):i==="{"&&f.length>0?f.push("}"):i===f[f.length-1]&&f.pop(),f.length===0)if(i===";"){r.source.end=this.getPosition(e[2]),this.semicolon=!0;break}else if(i==="{"){o=!0;break}else if(i==="}"){if(l.length>0){for(s=l.length-1,n=l[s];n&&n[0]==="space";)n=l[--s];n&&(r.source.end=this.getPosition(n[3]||n[2]))}this.end(e);break}else l.push(e);else l.push(e);if(this.tokenizer.endOfFile()){a=!0;break}}r.raws.between=this.spacesAndCommentsFromEnd(l),l.length?(r.raws.afterName=this.spacesAndCommentsFromStart(l),this.raw(r,"params",l),a&&(e=l[l.length-1],r.source.end=this.getPosition(e[3]||e[2]),this.spaces=r.raws.between,r.raws.between="")):(r.raws.afterName="",r.params=""),o&&(r.nodes=[],this.current=r)}end(e){this.current.nodes&&this.current.nodes.length&&(this.current.raws.semicolon=this.semicolon),this.semicolon=!1,this.current.raws.after=(this.current.raws.after||"")+this.spaces,this.spaces="",this.current.parent?(this.current.source.end=this.getPosition(e[2]),this.current=this.current.parent):this.unexpectedClose(e)}endFile(){this.current.parent&&this.unclosedBlock(),this.current.nodes&&this.current.nodes.length&&(this.current.raws.semicolon=this.semicolon),this.current.raws.after=(this.current.raws.after||"")+this.spaces}freeSemicolon(e){if(this.spaces+=e[1],this.current.nodes){let r=this.current.nodes[this.current.nodes.length-1];r&&r.type==="rule"&&!r.raws.ownSemicolon&&(r.raws.ownSemicolon=this.spaces,this.spaces="")}}getPosition(e){let r=this.input.fromOffset(e);return{offset:e,line:r.line,column:r.col}}init(e,r){this.current.push(e),e.source={start:this.getPosition(r),input:this.input},e.raws.before=this.spaces,this.spaces="",e.type!=="comment"&&(this.semicolon=!1)}raw(e,r,i,n){let s,a,o=i.length,l="",f=!0,c,p;for(let 
m=0;md+v[1],"");e.raws[r]={value:l,raw:m}}e[r]=l}spacesAndCommentsFromEnd(e){let r,i="";for(;e.length&&(r=e[e.length-1][0],!(r!=="space"&&r!=="comment"));)i=e.pop()[1]+i;return i}spacesAndCommentsFromStart(e){let r,i="";for(;e.length&&(r=e[0][0],!(r!=="space"&&r!=="comment"));)i+=e.shift()[1];return i}spacesFromEnd(e){let r,i="";for(;e.length&&(r=e[e.length-1][0],r==="space");)i=e.pop()[1]+i;return i}stringFrom(e,r){let i="";for(let n=r;n=0&&(n=e[s],!(n[0]!=="space"&&(i+=1,i===2)));s--);throw this.input.error("Missed semicolon",n[0]==="word"?n[3]+1:n[2])}};_d.exports=Sd});var Od=b(()=>{u()});var Ad=b((YM,Ed)=>{u();var eS="useandom-26T198340PX75pxJACKVERYMINDBUSHWOLF_GQZbfghjklqvwyzrict",tS=(t,e=21)=>(r=e)=>{let i="",n=r;for(;n--;)i+=t[Math.random()*t.length|0];return i},rS=(t=21)=>{let e="",r=t;for(;r--;)e+=eS[Math.random()*64|0];return e};Ed.exports={nanoid:rS,customAlphabet:tS}});var no=b((QM,Cd)=>{u();Cd.exports={}});var rs=b((JM,Id)=>{u();"use strict";var{SourceMapConsumer:iS,SourceMapGenerator:nS}=Od(),{fileURLToPath:Pd,pathToFileURL:ts}=(Wa(),Rp),{resolve:so,isAbsolute:ao}=(jt(),qp),{nanoid:sS}=Ad(),oo=Ga(),qd=Ln(),aS=no(),lo=Symbol("fromOffsetCache"),oS=Boolean(iS&&nS),Dd=Boolean(so&&ao),ui=class{constructor(e,r={}){if(e===null||typeof e=="undefined"||typeof e=="object"&&!e.toString)throw new Error(`PostCSS received ${e} instead of CSS string`);if(this.css=e.toString(),this.css[0]==="\uFEFF"||this.css[0]==="\uFFFE"?(this.hasBOM=!0,this.css=this.css.slice(1)):this.hasBOM=!1,r.from&&(!Dd||/^\w+:\/\//.test(r.from)||ao(r.from)?this.file=r.from:this.file=so(r.from)),Dd&&oS){let i=new aS(this.css,r);if(i.text){this.map=i;let n=i.consumer().file;!this.file&&n&&(this.file=this.mapResolve(n))}}this.file||(this.id=""),this.map&&(this.map.file=this.from)}fromOffset(e){let r,i;if(this[lo])i=this[lo];else{let s=this.css.split(` -`);i=new Array(s.length);let a=0;for(let o=0,l=s.length;o=r)n=i.length-1;else{let s=i.length-2,a;for(;n>1),e=i[a+1])n=a+1;else{n=a;break}}return{line:n+1,col:e-i[n]+1}}error(e,r,i,n={}){let s,a,o;if(r&&typeof r=="object"){let f=r,c=i;if(typeof f.offset=="number"){let p=this.fromOffset(f.offset);r=p.line,i=p.col}else r=f.line,i=f.column;if(typeof c.offset=="number"){let p=this.fromOffset(c.offset);a=p.line,o=p.col}else a=c.line,o=c.column}else if(!i){let f=this.fromOffset(r);r=f.line,i=f.col}let l=this.origin(r,i,a,o);return l?s=new qd(e,l.endLine===void 0?l.line:{line:l.line,column:l.column},l.endLine===void 0?l.column:{line:l.endLine,column:l.endColumn},l.source,l.file,n.plugin):s=new qd(e,a===void 0?r:{line:r,column:i},a===void 0?i:{line:a,column:o},this.css,this.file,n.plugin),s.input={line:r,column:i,endLine:a,endColumn:o,source:this.css},this.file&&(ts&&(s.input.url=ts(this.file).toString()),s.input.file=this.file),s}origin(e,r,i,n){if(!this.map)return!1;let s=this.map.consumer(),a=s.originalPositionFor({line:e,column:r});if(!a.source)return!1;let o;typeof i=="number"&&(o=s.originalPositionFor({line:i,column:n}));let l;ao(a.source)?l=ts(a.source):l=new URL(a.source,this.map.consumer().sourceRoot||ts(this.map.mapFile));let f={url:l.toString(),line:a.line,column:a.column,endLine:o&&o.line,endColumn:o&&o.column};if(l.protocol==="file:")if(Pd)f.file=Pd(l);else throw new Error("file: protocol is not available in this PostCSS build");let c=s.sourceContentFor(a.source);return c&&(f.source=c),f}mapResolve(e){return/^\w+:\/\//.test(e)?e:so(this.map.consumer().sourceRoot||this.map.root||".",e)}get from(){return this.file||this.id}toJSON(){let e={};for(let r 
of["hasBOM","css","file","id"])this[r]!=null&&(e[r]=this[r]);return this.map&&(e.map={...this.map},e.map.consumerCache&&(e.map.consumerCache=void 0)),e}};Id.exports=ui;ui.default=ui;oo&&oo.registerInput&&oo.registerInput(ui)});var ns=b((XM,Rd)=>{u();"use strict";var lS=Pt(),uS=Td(),fS=rs();function is(t,e){let r=new fS(t,e),i=new uS(r);try{i.parse()}catch(n){throw n}return i.root}Rd.exports=is;is.default=is;lS.registerParse(is)});var co=b((ZM,Fd)=>{u();"use strict";var{isClean:Ze,my:cS}=Mn(),pS=Xa(),dS=ti(),hS=Pt(),mS=jn(),KM=eo(),Ld=Wn(),gS=ns(),wS=sr(),yS={document:"Document",root:"Root",atrule:"AtRule",rule:"Rule",decl:"Declaration",comment:"Comment"},vS={postcssPlugin:!0,prepare:!0,Once:!0,Document:!0,Root:!0,Declaration:!0,Rule:!0,AtRule:!0,Comment:!0,DeclarationExit:!0,RuleExit:!0,AtRuleExit:!0,CommentExit:!0,RootExit:!0,DocumentExit:!0,OnceExit:!0},bS={postcssPlugin:!0,prepare:!0,Once:!0},ar=0;function fi(t){return typeof t=="object"&&typeof t.then=="function"}function Md(t){let e=!1,r=yS[t.type];return t.type==="decl"?e=t.prop.toLowerCase():t.type==="atrule"&&(e=t.name.toLowerCase()),e&&t.append?[r,r+"-"+e,ar,r+"Exit",r+"Exit-"+e]:e?[r,r+"-"+e,r+"Exit",r+"Exit-"+e]:t.append?[r,ar,r+"Exit"]:[r,r+"Exit"]}function Bd(t){let e;return t.type==="document"?e=["Document",ar,"DocumentExit"]:t.type==="root"?e=["Root",ar,"RootExit"]:e=Md(t),{node:t,events:e,eventIndex:0,visitors:[],visitorIndex:0,iterator:0}}function uo(t){return t[Ze]=!1,t.nodes&&t.nodes.forEach(e=>uo(e)),t}var fo={},ft=class{constructor(e,r,i){this.stringified=!1,this.processed=!1;let n;if(typeof r=="object"&&r!==null&&(r.type==="root"||r.type==="document"))n=uo(r);else if(r instanceof ft||r instanceof Ld)n=uo(r.root),r.map&&(typeof i.map=="undefined"&&(i.map={}),i.map.inline||(i.map.inline=!1),i.map.prev=r.map);else{let s=gS;i.syntax&&(s=i.syntax.parse),i.parser&&(s=i.parser),s.parse&&(s=s.parse);try{n=s(r,i)}catch(a){this.processed=!0,this.error=a}n&&!n[cS]&&hS.rebuild(n)}this.result=new Ld(e,n,i),this.helpers={...fo,result:this.result,postcss:fo},this.plugins=this.processor.plugins.map(s=>typeof s=="object"&&s.prepare?{...s,...s.prepare(this.result)}:s)}get[Symbol.toStringTag](){return"LazyResult"}get processor(){return this.result.processor}get opts(){return this.result.opts}get css(){return this.stringify().css}get content(){return this.stringify().content}get map(){return this.stringify().map}get root(){return this.sync().root}get messages(){return this.sync().messages}warnings(){return this.sync().warnings()}toString(){return this.css}then(e,r){return this.async().then(e,r)}catch(e){return this.async().catch(e)}finally(e){return this.async().then(e,e)}async(){return this.error?Promise.reject(this.error):this.processed?Promise.resolve(this.result):(this.processing||(this.processing=this.runAsync()),this.processing)}sync(){if(this.error)throw this.error;if(this.processed)return this.result;if(this.processed=!0,this.processing)throw this.getAsyncError();for(let e of this.plugins){let r=this.runOnRoot(e);if(fi(r))throw this.getAsyncError()}if(this.prepareVisitors(),this.hasListener){let e=this.result.root;for(;!e[Ze];)e[Ze]=!0,this.walkSync(e);if(this.listeners.OnceExit)if(e.type==="document")for(let r of e.nodes)this.visitSync(this.listeners.OnceExit,r);else this.visitSync(this.listeners.OnceExit,e)}return this.result}stringify(){if(this.error)throw this.error;if(this.stringified)return this.result;this.stringified=!0,this.sync();let 
e=this.result.opts,r=dS;e.syntax&&(r=e.syntax.stringify),e.stringifier&&(r=e.stringifier),r.stringify&&(r=r.stringify);let n=new pS(r,this.result.root,this.result.opts).generate();return this.result.css=n[0],this.result.map=n[1],this.result}walkSync(e){e[Ze]=!0;let r=Md(e);for(let i of r)if(i===ar)e.nodes&&e.each(n=>{n[Ze]||this.walkSync(n)});else{let n=this.listeners[i];if(n&&this.visitSync(n,e.toProxy()))return}}visitSync(e,r){for(let[i,n]of e){this.result.lastPlugin=i;let s;try{s=n(r,this.helpers)}catch(a){throw this.handleError(a,r.proxyOf)}if(r.type!=="root"&&r.type!=="document"&&!r.parent)return!0;if(fi(s))throw this.getAsyncError()}}runOnRoot(e){this.result.lastPlugin=e;try{if(typeof e=="object"&&e.Once){if(this.result.root.type==="document"){let r=this.result.root.nodes.map(i=>e.Once(i,this.helpers));return fi(r[0])?Promise.all(r):r}return e.Once(this.result.root,this.helpers)}else if(typeof e=="function")return e(this.result.root,this.result)}catch(r){throw this.handleError(r)}}getAsyncError(){throw new Error("Use process(css).then(cb) to work with async plugins")}handleError(e,r){let i=this.result.lastPlugin;try{r&&r.addToError(e),this.error=e,e.name==="CssSyntaxError"&&!e.plugin?(e.plugin=i.postcssPlugin,e.setMessage()):i.postcssVersion}catch(n){console&&console.error&&console.error(n)}return e}async runAsync(){this.plugin=0;for(let e=0;e0;){let i=this.visitTick(r);if(fi(i))try{await i}catch(n){let s=r[r.length-1].node;throw this.handleError(n,s)}}}if(this.listeners.OnceExit)for(let[r,i]of this.listeners.OnceExit){this.result.lastPlugin=r;try{if(e.type==="document"){let n=e.nodes.map(s=>i(s,this.helpers));await Promise.all(n)}else await i(e,this.helpers)}catch(n){throw this.handleError(n)}}}return this.processed=!0,this.stringify()}prepareVisitors(){this.listeners={};let e=(r,i,n)=>{this.listeners[i]||(this.listeners[i]=[]),this.listeners[i].push([r,n])};for(let r of this.plugins)if(typeof r=="object")for(let i in r){if(!vS[i]&&/^[A-Z]/.test(i))throw new Error(`Unknown event ${i} in ${r.postcssPlugin}. 
Try to update PostCSS (${this.processor.version} now).`);if(!bS[i])if(typeof r[i]=="object")for(let n in r[i])n==="*"?e(r,i,r[i][n]):e(r,i+"-"+n.toLowerCase(),r[i][n]);else typeof r[i]=="function"&&e(r,i,r[i])}this.hasListener=Object.keys(this.listeners).length>0}visitTick(e){let r=e[e.length-1],{node:i,visitors:n}=r;if(i.type!=="root"&&i.type!=="document"&&!i.parent){e.pop();return}if(n.length>0&&r.visitorIndex{fo=t};Fd.exports=ft;ft.default=ft;wS.registerLazyResult(ft);mS.registerLazyResult(ft)});var zd=b((tB,Nd)=>{u();"use strict";var xS=Xa(),kS=ti(),eB=eo(),SS=ns(),_S=Wn(),ss=class{constructor(e,r,i){r=r.toString(),this.stringified=!1,this._processor=e,this._css=r,this._opts=i,this._map=void 0;let n,s=kS;this.result=new _S(this._processor,n,this._opts),this.result.css=r;let a=this;Object.defineProperty(this.result,"root",{get(){return a.root}});let o=new xS(s,n,this._opts,r);if(o.isMap()){let[l,f]=o.generate();l&&(this.result.css=l),f&&(this.result.map=f)}}get[Symbol.toStringTag](){return"NoWorkResult"}get processor(){return this.result.processor}get opts(){return this.result.opts}get css(){return this.result.css}get content(){return this.result.css}get map(){return this.result.map}get root(){if(this._root)return this._root;let e,r=SS;try{e=r(this._css,this._opts)}catch(i){this.error=i}if(this.error)throw this.error;return this._root=e,e}get messages(){return[]}warnings(){return[]}toString(){return this._css}then(e,r){return this.async().then(e,r)}catch(e){return this.async().catch(e)}finally(e){return this.async().then(e,e)}async(){return this.error?Promise.reject(this.error):Promise.resolve(this.result)}sync(){if(this.error)throw this.error;return this.result}};Nd.exports=ss;ss.default=ss});var jd=b((rB,$d)=>{u();"use strict";var TS=zd(),OS=co(),ES=jn(),AS=sr(),or=class{constructor(e=[]){this.version="8.4.24",this.plugins=this.normalize(e)}use(e){return this.plugins=this.plugins.concat(this.normalize([e])),this}process(e,r={}){return this.plugins.length===0&&typeof r.parser=="undefined"&&typeof r.stringifier=="undefined"&&typeof r.syntax=="undefined"?new TS(this,e,r):new OS(this,e,r)}normalize(e){let r=[];for(let i of e)if(i.postcss===!0?i=i():i.postcss&&(i=i.postcss),typeof i=="object"&&Array.isArray(i.plugins))r=r.concat(i.plugins);else if(typeof i=="object"&&i.postcssPlugin)r.push(i);else if(typeof i=="function")r.push(i);else if(!(typeof i=="object"&&(i.parse||i.stringify)))throw new Error(i+" is not a PostCSS plugin");return r}};$d.exports=or;or.default=or;AS.registerProcessor(or);ES.registerProcessor(or)});var Vd=b((iB,Ud)=>{u();"use strict";var CS=ii(),PS=no(),qS=ni(),DS=Zn(),IS=rs(),RS=sr(),LS=es();function ci(t,e){if(Array.isArray(t))return t.map(n=>ci(n));let{inputs:r,...i}=t;if(r){e=[];for(let n of r){let s={...n,__proto__:IS.prototype};s.map&&(s.map={...s.map,__proto__:PS.prototype}),e.push(s)}}if(i.nodes&&(i.nodes=t.nodes.map(n=>ci(n,e))),i.source){let{inputId:n,...s}=i.source;i.source=s,n!=null&&(i.source.input=e[n])}if(i.type==="root")return new RS(i);if(i.type==="decl")return new CS(i);if(i.type==="rule")return new LS(i);if(i.type==="comment")return new qS(i);if(i.type==="atrule")return new DS(i);throw new Error("Unknown node type: "+t.type)}Ud.exports=ci;ci.default=ci});var De=b((nB,Xd)=>{u();"use strict";var MS=Ln(),Wd=ii(),BS=co(),FS=Pt(),po=jd(),NS=ti(),zS=Vd(),Gd=jn(),$S=to(),Hd=ni(),Yd=Zn(),jS=Wn(),US=rs(),VS=ns(),WS=io(),Qd=es(),Jd=sr(),GS=ri();function Y(...t){return t.length===1&&Array.isArray(t[0])&&(t=t[0]),new po(t)}Y.plugin=function(e,r){let i=!1;function 
n(...a){console&&console.warn&&!i&&(i=!0,console.warn(e+`: postcss.plugin was deprecated. Migration guide: -https://evilmartians.com/chronicles/postcss-8-plugin-migration`),g.env.LANG&&g.env.LANG.startsWith("cn")&&console.warn(e+`: \u91CC\u9762 postcss.plugin \u88AB\u5F03\u7528. \u8FC1\u79FB\u6307\u5357: -https://www.w3ctech.com/topic/2226`));let o=r(...a);return o.postcssPlugin=e,o.postcssVersion=new po().version,o}let s;return Object.defineProperty(n,"postcss",{get(){return s||(s=n()),s}}),n.process=function(a,o,l){return Y([n(l)]).process(a,o)},n};Y.stringify=NS;Y.parse=VS;Y.fromJSON=zS;Y.list=WS;Y.comment=t=>new Hd(t);Y.atRule=t=>new Yd(t);Y.decl=t=>new Wd(t);Y.rule=t=>new Qd(t);Y.root=t=>new Jd(t);Y.document=t=>new Gd(t);Y.CssSyntaxError=MS;Y.Declaration=Wd;Y.Container=FS;Y.Processor=po;Y.Document=Gd;Y.Comment=Hd;Y.Warning=$S;Y.AtRule=Yd;Y.Result=jS;Y.Input=US;Y.Rule=Qd;Y.Root=Jd;Y.Node=GS;BS.registerPostcss(Y);Xd.exports=Y;Y.default=Y});var Z,Q,sB,aB,oB,lB,uB,fB,cB,pB,dB,hB,mB,gB,wB,yB,vB,bB,xB,kB,SB,_B,TB,OB,EB,AB,qt=E(()=>{u();Z=he(De()),Q=Z.default,sB=Z.default.stringify,aB=Z.default.fromJSON,oB=Z.default.plugin,lB=Z.default.parse,uB=Z.default.list,fB=Z.default.document,cB=Z.default.comment,pB=Z.default.atRule,dB=Z.default.rule,hB=Z.default.decl,mB=Z.default.root,gB=Z.default.CssSyntaxError,wB=Z.default.Declaration,yB=Z.default.Container,vB=Z.default.Processor,bB=Z.default.Document,xB=Z.default.Comment,kB=Z.default.Warning,SB=Z.default.AtRule,_B=Z.default.Result,TB=Z.default.Input,OB=Z.default.Rule,EB=Z.default.Root,AB=Z.default.Node});var ho=b((PB,Kd)=>{u();Kd.exports=function(t,e,r,i,n){for(e=e.split?e.split("."):e,i=0;i{u();"use strict";as.__esModule=!0;as.default=QS;function HS(t){for(var e=t.toLowerCase(),r="",i=!1,n=0;n<6&&e[n]!==void 0;n++){var s=e.charCodeAt(n),a=s>=97&&s<=102||s>=48&&s<=57;if(i=s===32,!a)break;r+=e[n]}if(r.length!==0){var o=parseInt(r,16),l=o>=55296&&o<=57343;return l||o===0||o>1114111?["\uFFFD",r.length+(i?1:0)]:[String.fromCodePoint(o),r.length+(i?1:0)]}}var YS=/\\/;function QS(t){var e=YS.test(t);if(!e)return t;for(var r="",i=0;i{u();"use strict";ls.__esModule=!0;ls.default=JS;function JS(t){for(var e=arguments.length,r=new Array(e>1?e-1:0),i=1;i0;){var n=r.shift();if(!t[n])return;t=t[n]}return t}eh.exports=ls.default});var ih=b((us,rh)=>{u();"use strict";us.__esModule=!0;us.default=XS;function XS(t){for(var e=arguments.length,r=new Array(e>1?e-1:0),i=1;i0;){var n=r.shift();t[n]||(t[n]={}),t=t[n]}}rh.exports=us.default});var sh=b((fs,nh)=>{u();"use strict";fs.__esModule=!0;fs.default=KS;function KS(t){for(var e="",r=t.indexOf("/*"),i=0;r>=0;){e=e+t.slice(i,r);var n=t.indexOf("*/",r+2);if(n<0)return e;i=n+2,r=t.indexOf("/*",i)}return e=e+t.slice(i),e}nh.exports=fs.default});var pi=b(et=>{u();"use strict";et.__esModule=!0;et.unesc=et.stripComments=et.getProp=et.ensureObject=void 0;var ZS=cs(os());et.unesc=ZS.default;var e_=cs(th());et.getProp=e_.default;var t_=cs(ih());et.ensureObject=t_.default;var r_=cs(sh());et.stripComments=r_.default;function cs(t){return t&&t.__esModule?t:{default:t}}});var ct=b((di,lh)=>{u();"use strict";di.__esModule=!0;di.default=void 0;var ah=pi();function oh(t,e){for(var r=0;ri||this.source.end.linen||this.source.end.line===i&&this.source.end.column{u();"use strict";ee.__esModule=!0;ee.UNIVERSAL=ee.TAG=ee.STRING=ee.SELECTOR=ee.ROOT=ee.PSEUDO=ee.NESTING=ee.ID=ee.COMMENT=ee.COMBINATOR=ee.CLASS=ee.ATTRIBUTE=void 0;var a_="tag";ee.TAG=a_;var o_="string";ee.STRING=o_;var l_="selector";ee.SELECTOR=l_;var u_="root";ee.ROOT=u_;var 
f_="pseudo";ee.PSEUDO=f_;var c_="nesting";ee.NESTING=c_;var p_="id";ee.ID=p_;var d_="comment";ee.COMMENT=d_;var h_="combinator";ee.COMBINATOR=h_;var m_="class";ee.CLASS=m_;var g_="attribute";ee.ATTRIBUTE=g_;var w_="universal";ee.UNIVERSAL=w_});var ps=b((hi,ph)=>{u();"use strict";hi.__esModule=!0;hi.default=void 0;var y_=b_(ct()),pt=v_(be());function uh(t){if(typeof WeakMap!="function")return null;var e=new WeakMap,r=new WeakMap;return(uh=function(n){return n?r:e})(t)}function v_(t,e){if(!e&&t&&t.__esModule)return t;if(t===null||typeof t!="object"&&typeof t!="function")return{default:t};var r=uh(e);if(r&&r.has(t))return r.get(t);var i={},n=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var s in t)if(s!=="default"&&Object.prototype.hasOwnProperty.call(t,s)){var a=n?Object.getOwnPropertyDescriptor(t,s):null;a&&(a.get||a.set)?Object.defineProperty(i,s,a):i[s]=t[s]}return i.default=t,r&&r.set(t,i),i}function b_(t){return t&&t.__esModule?t:{default:t}}function x_(t,e){var r=typeof Symbol!="undefined"&&t[Symbol.iterator]||t["@@iterator"];if(r)return(r=r.call(t)).next.bind(r);if(Array.isArray(t)||(r=k_(t))||e&&t&&typeof t.length=="number"){r&&(t=r);var i=0;return function(){return i>=t.length?{done:!0}:{done:!1,value:t[i++]}}}throw new TypeError(`Invalid attempt to iterate non-iterable instance. -In order to be iterable, non-array objects must have a [Symbol.iterator]() method.`)}function k_(t,e){if(!!t){if(typeof t=="string")return fh(t,e);var r=Object.prototype.toString.call(t).slice(8,-1);if(r==="Object"&&t.constructor&&(r=t.constructor.name),r==="Map"||r==="Set")return Array.from(t);if(r==="Arguments"||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r))return fh(t,e)}}function fh(t,e){(e==null||e>t.length)&&(e=t.length);for(var r=0,i=new Array(e);r=n&&(this.indexes[a]=s-1);return this},r.removeAll=function(){for(var n=x_(this.nodes),s;!(s=n()).done;){var a=s.value;a.parent=void 0}return this.nodes=[],this},r.empty=function(){return this.removeAll()},r.insertAfter=function(n,s){s.parent=this;var a=this.index(n);this.nodes.splice(a+1,0,s),s.parent=this;var o;for(var l in this.indexes)o=this.indexes[l],a<=o&&(this.indexes[l]=o+1);return this},r.insertBefore=function(n,s){s.parent=this;var a=this.index(n);this.nodes.splice(a,0,s),s.parent=this;var o;for(var l in this.indexes)o=this.indexes[l],o<=a&&(this.indexes[l]=o+1);return this},r._findChildAtPosition=function(n,s){var a=void 0;return this.each(function(o){if(o.atPosition){var l=o.atPosition(n,s);if(l)return a=l,!1}else if(o.isAtPosition(n,s))return a=o,!1}),a},r.atPosition=function(n,s){if(this.isAtPosition(n,s))return this._findChildAtPosition(n,s)||this},r._inferEndPosition=function(){this.last&&this.last.source&&this.last.source.end&&(this.source=this.source||{},this.source.end=this.source.end||{},Object.assign(this.source.end,this.last.source.end))},r.each=function(n){this.lastEach||(this.lastEach=0),this.indexes||(this.indexes={}),this.lastEach++;var s=this.lastEach;if(this.indexes[s]=0,!!this.length){for(var a,o;this.indexes[s]{u();"use strict";mi.__esModule=!0;mi.default=void 0;var O_=A_(ps()),E_=be();function A_(t){return t&&t.__esModule?t:{default:t}}function dh(t,e){for(var r=0;r{u();"use strict";gi.__esModule=!0;gi.default=void 0;var D_=R_(ps()),I_=be();function R_(t){return t&&t.__esModule?t:{default:t}}function L_(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,yo(t,e)}function yo(t,e){return yo=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return i.__proto__=n,i},yo(t,e)}var 
M_=function(t){L_(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=I_.SELECTOR,i}return e}(D_.default);gi.default=M_;mh.exports=gi.default});var Vt=b((IB,gh)=>{u();"use strict";var B_={},F_=B_.hasOwnProperty,N_=function(e,r){if(!e)return r;var i={};for(var n in r)i[n]=F_.call(e,n)?e[n]:r[n];return i},z_=/[ -,\.\/:-@\[-\^`\{-~]/,$_=/[ -,\.\/:-@\[\]\^`\{-~]/,j_=/(^|\\+)?(\\[A-F0-9]{1,6})\x20(?![a-fA-F0-9\x20])/g,bo=function t(e,r){r=N_(r,t.options),r.quotes!="single"&&r.quotes!="double"&&(r.quotes="single");for(var i=r.quotes=="double"?'"':"'",n=r.isIdentifier,s=e.charAt(0),a="",o=0,l=e.length;o126){if(c>=55296&&c<=56319&&o{u();"use strict";wi.__esModule=!0;wi.default=void 0;var U_=wh(Vt()),V_=pi(),W_=wh(ct()),G_=be();function wh(t){return t&&t.__esModule?t:{default:t}}function yh(t,e){for(var r=0;r{u();"use strict";yi.__esModule=!0;yi.default=void 0;var J_=K_(ct()),X_=be();function K_(t){return t&&t.__esModule?t:{default:t}}function Z_(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,So(t,e)}function So(t,e){return So=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return i.__proto__=n,i},So(t,e)}var e2=function(t){Z_(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=X_.COMMENT,i}return e}(J_.default);yi.default=e2;bh.exports=yi.default});var Oo=b((vi,xh)=>{u();"use strict";vi.__esModule=!0;vi.default=void 0;var t2=i2(ct()),r2=be();function i2(t){return t&&t.__esModule?t:{default:t}}function n2(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,To(t,e)}function To(t,e){return To=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return i.__proto__=n,i},To(t,e)}var s2=function(t){n2(e,t);function e(i){var n;return n=t.call(this,i)||this,n.type=r2.ID,n}var r=e.prototype;return r.valueToString=function(){return"#"+t.prototype.valueToString.call(this)},e}(t2.default);vi.default=s2;xh.exports=vi.default});var ds=b((bi,_h)=>{u();"use strict";bi.__esModule=!0;bi.default=void 0;var a2=kh(Vt()),o2=pi(),l2=kh(ct());function kh(t){return t&&t.__esModule?t:{default:t}}function Sh(t,e){for(var r=0;r{u();"use strict";xi.__esModule=!0;xi.default=void 0;var p2=h2(ds()),d2=be();function h2(t){return t&&t.__esModule?t:{default:t}}function m2(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Ao(t,e)}function Ao(t,e){return Ao=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return i.__proto__=n,i},Ao(t,e)}var g2=function(t){m2(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=d2.TAG,i}return e}(p2.default);xi.default=g2;Th.exports=xi.default});var qo=b((ki,Oh)=>{u();"use strict";ki.__esModule=!0;ki.default=void 0;var w2=v2(ct()),y2=be();function v2(t){return t&&t.__esModule?t:{default:t}}function b2(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Po(t,e)}function Po(t,e){return Po=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return i.__proto__=n,i},Po(t,e)}var x2=function(t){b2(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=y2.STRING,i}return e}(w2.default);ki.default=x2;Oh.exports=ki.default});var Io=b((Si,Eh)=>{u();"use strict";Si.__esModule=!0;Si.default=void 0;var k2=_2(ps()),S2=be();function _2(t){return t&&t.__esModule?t:{default:t}}function T2(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Do(t,e)}function Do(t,e){return Do=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return i.__proto__=n,i},Do(t,e)}var O2=function(t){T2(e,t);function e(i){var n;return 
n=t.call(this,i)||this,n.type=S2.PSEUDO,n}var r=e.prototype;return r.toString=function(){var n=this.length?"("+this.map(String).join(",")+")":"";return[this.rawSpaceBefore,this.stringifyProperty("value"),n,this.rawSpaceAfter].join("")},e}(k2.default);Si.default=O2;Eh.exports=Si.default});var Ah={};Ve(Ah,{deprecate:()=>E2});function E2(t){return t}var Ch=E(()=>{u()});var Ro=b((RB,Ph)=>{u();Ph.exports=(Ch(),Ah).deprecate});var zo=b(Oi=>{u();"use strict";Oi.__esModule=!0;Oi.default=void 0;Oi.unescapeValue=Fo;var _i=Mo(Vt()),A2=Mo(os()),C2=Mo(ds()),P2=be(),Lo;function Mo(t){return t&&t.__esModule?t:{default:t}}function qh(t,e){for(var r=0;r0&&!n.quoted&&o.before.length===0&&!(n.spaces.value&&n.spaces.value.after)&&(o.before=" "),Dh(a,o)}))),s.push("]"),s.push(this.rawSpaceAfter),s.join("")},q2(e,[{key:"quoted",get:function(){var n=this.quoteMark;return n==="'"||n==='"'},set:function(n){L2()}},{key:"quoteMark",get:function(){return this._quoteMark},set:function(n){if(!this._constructed){this._quoteMark=n;return}this._quoteMark!==n&&(this._quoteMark=n,this._syncRawValue())}},{key:"qualifiedAttribute",get:function(){return this.qualifiedName(this.raws.attribute||this.attribute)}},{key:"insensitiveFlag",get:function(){return this.insensitive?"i":""}},{key:"value",get:function(){return this._value},set:function(n){if(this._constructed){var s=Fo(n),a=s.deprecatedUsage,o=s.unescaped,l=s.quoteMark;if(a&&R2(),o===this._value&&l===this._quoteMark)return;this._value=o,this._quoteMark=l,this._syncRawValue()}else this._value=n}},{key:"insensitive",get:function(){return this._insensitive},set:function(n){n||(this._insensitive=!1,this.raws&&(this.raws.insensitiveFlag==="I"||this.raws.insensitiveFlag==="i")&&(this.raws.insensitiveFlag=void 0)),this._insensitive=n}},{key:"attribute",get:function(){return this._attribute},set:function(n){this._handleEscapes("attribute",n),this._attribute=n}}]),e}(C2.default);Oi.default=hs;hs.NO_QUOTE=null;hs.SINGLE_QUOTE="'";hs.DOUBLE_QUOTE='"';var No=(Lo={"'":{quotes:"single",wrap:!0},'"':{quotes:"double",wrap:!0}},Lo[null]={isIdentifier:!0},Lo);function Dh(t,e){return""+e.before+t+e.after}});var jo=b((Ei,Ih)=>{u();"use strict";Ei.__esModule=!0;Ei.default=void 0;var F2=z2(ds()),N2=be();function z2(t){return t&&t.__esModule?t:{default:t}}function $2(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,$o(t,e)}function $o(t,e){return $o=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return i.__proto__=n,i},$o(t,e)}var j2=function(t){$2(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=N2.UNIVERSAL,i.value="*",i}return e}(F2.default);Ei.default=j2;Ih.exports=Ei.default});var Vo=b((Ai,Rh)=>{u();"use strict";Ai.__esModule=!0;Ai.default=void 0;var U2=W2(ct()),V2=be();function W2(t){return t&&t.__esModule?t:{default:t}}function G2(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Uo(t,e)}function Uo(t,e){return Uo=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return i.__proto__=n,i},Uo(t,e)}var H2=function(t){G2(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=V2.COMBINATOR,i}return e}(U2.default);Ai.default=H2;Rh.exports=Ai.default});var Go=b((Ci,Lh)=>{u();"use strict";Ci.__esModule=!0;Ci.default=void 0;var Y2=J2(ct()),Q2=be();function J2(t){return t&&t.__esModule?t:{default:t}}function X2(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Wo(t,e)}function Wo(t,e){return Wo=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(i,n){return 
i.__proto__=n,i},Wo(t,e)}var K2=function(t){X2(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=Q2.NESTING,i.value="&",i}return e}(Y2.default);Ci.default=K2;Lh.exports=Ci.default});var Bh=b((ms,Mh)=>{u();"use strict";ms.__esModule=!0;ms.default=Z2;function Z2(t){return t.sort(function(e,r){return e-r})}Mh.exports=ms.default});var Ho=b(M=>{u();"use strict";M.__esModule=!0;M.word=M.tilde=M.tab=M.str=M.space=M.slash=M.singleQuote=M.semicolon=M.plus=M.pipe=M.openSquare=M.openParenthesis=M.newline=M.greaterThan=M.feed=M.equals=M.doubleQuote=M.dollar=M.cr=M.comment=M.comma=M.combinator=M.colon=M.closeSquare=M.closeParenthesis=M.caret=M.bang=M.backslash=M.at=M.asterisk=M.ampersand=void 0;var eT=38;M.ampersand=eT;var tT=42;M.asterisk=tT;var rT=64;M.at=rT;var iT=44;M.comma=iT;var nT=58;M.colon=nT;var sT=59;M.semicolon=sT;var aT=40;M.openParenthesis=aT;var oT=41;M.closeParenthesis=oT;var lT=91;M.openSquare=lT;var uT=93;M.closeSquare=uT;var fT=36;M.dollar=fT;var cT=126;M.tilde=cT;var pT=94;M.caret=pT;var dT=43;M.plus=dT;var hT=61;M.equals=hT;var mT=124;M.pipe=mT;var gT=62;M.greaterThan=gT;var wT=32;M.space=wT;var Fh=39;M.singleQuote=Fh;var yT=34;M.doubleQuote=yT;var vT=47;M.slash=vT;var bT=33;M.bang=bT;var xT=92;M.backslash=xT;var kT=13;M.cr=kT;var ST=12;M.feed=ST;var _T=10;M.newline=_T;var TT=9;M.tab=TT;var OT=Fh;M.str=OT;var ET=-1;M.comment=ET;var AT=-2;M.word=AT;var CT=-3;M.combinator=CT});var $h=b(Pi=>{u();"use strict";Pi.__esModule=!0;Pi.FIELDS=void 0;Pi.default=MT;var q=PT(Ho()),lr,J;function Nh(t){if(typeof WeakMap!="function")return null;var e=new WeakMap,r=new WeakMap;return(Nh=function(n){return n?r:e})(t)}function PT(t,e){if(!e&&t&&t.__esModule)return t;if(t===null||typeof t!="object"&&typeof t!="function")return{default:t};var r=Nh(e);if(r&&r.has(t))return r.get(t);var i={},n=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var s in t)if(s!=="default"&&Object.prototype.hasOwnProperty.call(t,s)){var a=n?Object.getOwnPropertyDescriptor(t,s):null;a&&(a.get||a.set)?Object.defineProperty(i,s,a):i[s]=t[s]}return i.default=t,r&&r.set(t,i),i}var qT=(lr={},lr[q.tab]=!0,lr[q.newline]=!0,lr[q.cr]=!0,lr[q.feed]=!0,lr),DT=(J={},J[q.space]=!0,J[q.tab]=!0,J[q.newline]=!0,J[q.cr]=!0,J[q.feed]=!0,J[q.ampersand]=!0,J[q.asterisk]=!0,J[q.bang]=!0,J[q.comma]=!0,J[q.colon]=!0,J[q.semicolon]=!0,J[q.openParenthesis]=!0,J[q.closeParenthesis]=!0,J[q.openSquare]=!0,J[q.closeSquare]=!0,J[q.singleQuote]=!0,J[q.doubleQuote]=!0,J[q.plus]=!0,J[q.pipe]=!0,J[q.tilde]=!0,J[q.greaterThan]=!0,J[q.equals]=!0,J[q.dollar]=!0,J[q.caret]=!0,J[q.slash]=!0,J),Yo={},zh="0123456789abcdefABCDEF";for(gs=0;gs0?(S=a+_,T=y-x[_].length):(S=a,T=s),P=q.comment,a=S,m=S,p=y-T):f===q.slash?(y=o,P=f,m=a,p=o-s,l=y+1):(y=IT(r,o),P=q.word,m=a,p=y-s),l=y+1;break}e.push([P,a,o-s,m,p,o,l]),T&&(s=T,T=null),o=l}return e}});var Qh=b((qi,Yh)=>{u();"use strict";qi.__esModule=!0;qi.default=void 0;var BT=Re(wo()),Qo=Re(vo()),FT=Re(ko()),jh=Re(_o()),NT=Re(Oo()),zT=Re(Co()),Jo=Re(qo()),$T=Re(Io()),Uh=ws(zo()),jT=Re(jo()),Xo=Re(Vo()),UT=Re(Go()),VT=Re(Bh()),A=ws($h()),I=ws(Ho()),WT=ws(be()),ae=pi(),Wt,Ko;function Vh(t){if(typeof WeakMap!="function")return null;var e=new WeakMap,r=new WeakMap;return(Vh=function(n){return n?r:e})(t)}function ws(t,e){if(!e&&t&&t.__esModule)return t;if(t===null||typeof t!="object"&&typeof t!="function")return{default:t};var r=Vh(e);if(r&&r.has(t))return r.get(t);var i={},n=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var s in t)if(s!=="default"&&Object.prototype.hasOwnProperty.call(t,s)){var 
a=n?Object.getOwnPropertyDescriptor(t,s):null;a&&(a.get||a.set)?Object.defineProperty(i,s,a):i[s]=t[s]}return i.default=t,r&&r.set(t,i),i}function Re(t){return t&&t.__esModule?t:{default:t}}function Wh(t,e){for(var r=0;r0){var a=this.current.last;if(a){var o=this.convertWhitespaceNodesToSpace(s),l=o.space,f=o.rawSpace;f!==void 0&&(a.rawSpaceAfter+=f),a.spaces.after+=l}else s.forEach(function(P){return i.newNode(P)})}return}var c=this.currToken,p=void 0;n>this.position&&(p=this.parseWhitespaceEquivalentTokens(n));var m;if(this.isNamedCombinator()?m=this.namedCombinator():this.currToken[A.FIELDS.TYPE]===I.combinator?(m=new Xo.default({value:this.content(),source:ur(this.currToken),sourceIndex:this.currToken[A.FIELDS.START_POS]}),this.position++):Zo[this.currToken[A.FIELDS.TYPE]]||p||this.unexpected(),m){if(p){var d=this.convertWhitespaceNodesToSpace(p),v=d.space,_=d.rawSpace;m.spaces.before=v,m.rawSpaceBefore=_}}else{var x=this.convertWhitespaceNodesToSpace(p,!0),y=x.space,S=x.rawSpace;S||(S=y);var T={},O={spaces:{}};y.endsWith(" ")&&S.endsWith(" ")?(T.before=y.slice(0,y.length-1),O.spaces.before=S.slice(0,S.length-1)):y.startsWith(" ")&&S.startsWith(" ")?(T.after=y.slice(1),O.spaces.after=S.slice(1)):O.value=S,m=new Xo.default({value:" ",source:el(c,this.tokens[this.position-1]),sourceIndex:c[A.FIELDS.START_POS],spaces:T,raws:O})}return this.currToken&&this.currToken[A.FIELDS.TYPE]===I.space&&(m.spaces.after=this.optionalSpace(this.content()),this.position++),this.newNode(m)},e.comma=function(){if(this.position===this.tokens.length-1){this.root.trailingComma=!0,this.position++;return}this.current._inferEndPosition();var i=new Qo.default({source:{start:Gh(this.tokens[this.position+1])}});this.current.parent.append(i),this.current=i,this.position++},e.comment=function(){var i=this.currToken;this.newNode(new jh.default({value:this.content(),source:ur(i),sourceIndex:i[A.FIELDS.START_POS]})),this.position++},e.error=function(i,n){throw this.root.error(i,n)},e.missingBackslash=function(){return this.error("Expected a backslash preceding the semicolon.",{index:this.currToken[A.FIELDS.START_POS]})},e.missingParenthesis=function(){return this.expected("opening parenthesis",this.currToken[A.FIELDS.START_POS])},e.missingSquareBracket=function(){return this.expected("opening square bracket",this.currToken[A.FIELDS.START_POS])},e.unexpected=function(){return this.error("Unexpected '"+this.content()+"'. 
Escaping special characters with \\ may help.",this.currToken[A.FIELDS.START_POS])},e.unexpectedPipe=function(){return this.error("Unexpected '|'.",this.currToken[A.FIELDS.START_POS])},e.namespace=function(){var i=this.prevToken&&this.content(this.prevToken)||!0;if(this.nextToken[A.FIELDS.TYPE]===I.word)return this.position++,this.word(i);if(this.nextToken[A.FIELDS.TYPE]===I.asterisk)return this.position++,this.universal(i);this.unexpectedPipe()},e.nesting=function(){if(this.nextToken){var i=this.content(this.nextToken);if(i==="|"){this.position++;return}}var n=this.currToken;this.newNode(new UT.default({value:this.content(),source:ur(n),sourceIndex:n[A.FIELDS.START_POS]})),this.position++},e.parentheses=function(){var i=this.current.last,n=1;if(this.position++,i&&i.type===WT.PSEUDO){var s=new Qo.default({source:{start:Gh(this.tokens[this.position-1])}}),a=this.current;for(i.append(s),this.current=s;this.position1&&i.nextToken&&i.nextToken[A.FIELDS.TYPE]===I.openParenthesis&&i.error("Misplaced parenthesis.",{index:i.nextToken[A.FIELDS.START_POS]})});else return this.expected(["pseudo-class","pseudo-element"],this.currToken[A.FIELDS.START_POS])},e.space=function(){var i=this.content();this.position===0||this.prevToken[A.FIELDS.TYPE]===I.comma||this.prevToken[A.FIELDS.TYPE]===I.openParenthesis||this.current.nodes.every(function(n){return n.type==="comment"})?(this.spaces=this.optionalSpace(i),this.position++):this.position===this.tokens.length-1||this.nextToken[A.FIELDS.TYPE]===I.comma||this.nextToken[A.FIELDS.TYPE]===I.closeParenthesis?(this.current.last.spaces.after=this.optionalSpace(i),this.position++):this.combinator()},e.string=function(){var i=this.currToken;this.newNode(new Jo.default({value:this.content(),source:ur(i),sourceIndex:i[A.FIELDS.START_POS]})),this.position++},e.universal=function(i){var n=this.nextToken;if(n&&this.content(n)==="|")return this.position++,this.namespace();var s=this.currToken;this.newNode(new jT.default({value:this.content(),source:ur(s),sourceIndex:s[A.FIELDS.START_POS]}),i),this.position++},e.splitWord=function(i,n){for(var s=this,a=this.nextToken,o=this.content();a&&~[I.dollar,I.caret,I.equals,I.word].indexOf(a[A.FIELDS.TYPE]);){this.position++;var l=this.content();if(o+=l,l.lastIndexOf("\\")===l.length-1){var f=this.nextToken;f&&f[A.FIELDS.TYPE]===I.space&&(o+=this.requiredSpace(this.content(f)),this.position++)}a=this.nextToken}var c=tl(o,".").filter(function(v){var _=o[v-1]==="\\",x=/^\d+\.\d+%$/.test(o);return!_&&!x}),p=tl(o,"#").filter(function(v){return o[v-1]!=="\\"}),m=tl(o,"#{");m.length&&(p=p.filter(function(v){return!~m.indexOf(v)}));var d=(0,VT.default)(YT([0].concat(c,p)));d.forEach(function(v,_){var x=d[_+1]||o.length,y=o.slice(v,x);if(_===0&&n)return n.call(s,y,d.length);var S,T=s.currToken,O=T[A.FIELDS.START_POS]+d[_],P=Gt(T[1],T[2]+v,T[3],T[2]+(x-1));if(~c.indexOf(v)){var N={value:y.slice(1),source:P,sourceIndex:O};S=new FT.default(fr(N,"value"))}else if(~p.indexOf(v)){var z={value:y.slice(1),source:P,sourceIndex:O};S=new NT.default(fr(z,"value"))}else{var F={value:y,source:P,sourceIndex:O};fr(F,"value"),S=new zT.default(F)}s.newNode(S,i),i=null}),this.position++},e.word=function(i){var n=this.nextToken;return n&&this.content(n)==="|"?(this.position++,this.namespace()):this.splitWord(i)},e.loop=function(){for(;this.position{u();"use strict";Di.__esModule=!0;Di.default=void 0;var JT=XT(Qh());function XT(t){return t&&t.__esModule?t:{default:t}}var KT=function(){function t(r,i){this.func=r||function(){},this.funcRes=null,this.options=i}var 
e=t.prototype;return e._shouldUpdateSelector=function(i,n){n===void 0&&(n={});var s=Object.assign({},this.options,n);return s.updateSelector===!1?!1:typeof i!="string"},e._isLossy=function(i){i===void 0&&(i={});var n=Object.assign({},this.options,i);return n.lossless===!1},e._root=function(i,n){n===void 0&&(n={});var s=new JT.default(i,this._parseOptions(n));return s.root},e._parseOptions=function(i){return{lossy:this._isLossy(i)}},e._run=function(i,n){var s=this;return n===void 0&&(n={}),new Promise(function(a,o){try{var l=s._root(i,n);Promise.resolve(s.func(l)).then(function(f){var c=void 0;return s._shouldUpdateSelector(i,n)&&(c=l.toString(),i.selector=c),{transform:f,root:l,string:c}}).then(a,o)}catch(f){o(f);return}})},e._runSync=function(i,n){n===void 0&&(n={});var s=this._root(i,n),a=this.func(s);if(a&&typeof a.then=="function")throw new Error("Selector processor returned a promise to a synchronous call.");var o=void 0;return n.updateSelector&&typeof i!="string"&&(o=s.toString(),i.selector=o),{transform:a,root:s,string:o}},e.ast=function(i,n){return this._run(i,n).then(function(s){return s.root})},e.astSync=function(i,n){return this._runSync(i,n).root},e.transform=function(i,n){return this._run(i,n).then(function(s){return s.transform})},e.transformSync=function(i,n){return this._runSync(i,n).transform},e.process=function(i,n){return this._run(i,n).then(function(s){return s.string||s.root.toString()})},e.processSync=function(i,n){var s=this._runSync(i,n);return s.string||s.root.toString()},t}();Di.default=KT;Jh.exports=Di.default});var Kh=b(te=>{u();"use strict";te.__esModule=!0;te.universal=te.tag=te.string=te.selector=te.root=te.pseudo=te.nesting=te.id=te.comment=te.combinator=te.className=te.attribute=void 0;var ZT=Le(zo()),eO=Le(ko()),tO=Le(Vo()),rO=Le(_o()),iO=Le(Oo()),nO=Le(Go()),sO=Le(Io()),aO=Le(wo()),oO=Le(vo()),lO=Le(qo()),uO=Le(Co()),fO=Le(jo());function Le(t){return t&&t.__esModule?t:{default:t}}var cO=function(e){return new ZT.default(e)};te.attribute=cO;var pO=function(e){return new eO.default(e)};te.className=pO;var dO=function(e){return new tO.default(e)};te.combinator=dO;var hO=function(e){return new rO.default(e)};te.comment=hO;var mO=function(e){return new iO.default(e)};te.id=mO;var gO=function(e){return new nO.default(e)};te.nesting=gO;var wO=function(e){return new sO.default(e)};te.pseudo=wO;var yO=function(e){return new aO.default(e)};te.root=yO;var vO=function(e){return new oO.default(e)};te.selector=vO;var bO=function(e){return new lO.default(e)};te.string=bO;var xO=function(e){return new uO.default(e)};te.tag=xO;var kO=function(e){return new fO.default(e)};te.universal=kO});var rm=b(G=>{u();"use strict";G.__esModule=!0;G.isComment=G.isCombinator=G.isClassName=G.isAttribute=void 0;G.isContainer=RO;G.isIdentifier=void 0;G.isNamespace=LO;G.isNesting=void 0;G.isNode=rl;G.isPseudo=void 0;G.isPseudoClass=IO;G.isPseudoElement=tm;G.isUniversal=G.isTag=G.isString=G.isSelector=G.isRoot=void 0;var oe=be(),Oe,SO=(Oe={},Oe[oe.ATTRIBUTE]=!0,Oe[oe.CLASS]=!0,Oe[oe.COMBINATOR]=!0,Oe[oe.COMMENT]=!0,Oe[oe.ID]=!0,Oe[oe.NESTING]=!0,Oe[oe.PSEUDO]=!0,Oe[oe.ROOT]=!0,Oe[oe.SELECTOR]=!0,Oe[oe.STRING]=!0,Oe[oe.TAG]=!0,Oe[oe.UNIVERSAL]=!0,Oe);function rl(t){return typeof t=="object"&&SO[t.type]}function Me(t,e){return rl(e)&&e.type===t}var Zh=Me.bind(null,oe.ATTRIBUTE);G.isAttribute=Zh;var _O=Me.bind(null,oe.CLASS);G.isClassName=_O;var TO=Me.bind(null,oe.COMBINATOR);G.isCombinator=TO;var OO=Me.bind(null,oe.COMMENT);G.isComment=OO;var EO=Me.bind(null,oe.ID);G.isIdentifier=EO;var 
AO=Me.bind(null,oe.NESTING);G.isNesting=AO;var il=Me.bind(null,oe.PSEUDO);G.isPseudo=il;var CO=Me.bind(null,oe.ROOT);G.isRoot=CO;var PO=Me.bind(null,oe.SELECTOR);G.isSelector=PO;var qO=Me.bind(null,oe.STRING);G.isString=qO;var em=Me.bind(null,oe.TAG);G.isTag=em;var DO=Me.bind(null,oe.UNIVERSAL);G.isUniversal=DO;function tm(t){return il(t)&&t.value&&(t.value.startsWith("::")||t.value.toLowerCase()===":before"||t.value.toLowerCase()===":after"||t.value.toLowerCase()===":first-letter"||t.value.toLowerCase()===":first-line")}function IO(t){return il(t)&&!tm(t)}function RO(t){return!!(rl(t)&&t.walk)}function LO(t){return Zh(t)||em(t)}});var im=b(He=>{u();"use strict";He.__esModule=!0;var nl=be();Object.keys(nl).forEach(function(t){t==="default"||t==="__esModule"||t in He&&He[t]===nl[t]||(He[t]=nl[t])});var sl=Kh();Object.keys(sl).forEach(function(t){t==="default"||t==="__esModule"||t in He&&He[t]===sl[t]||(He[t]=sl[t])});var al=rm();Object.keys(al).forEach(function(t){t==="default"||t==="__esModule"||t in He&&He[t]===al[t]||(He[t]=al[t])})});var tt=b((Ii,sm)=>{u();"use strict";Ii.__esModule=!0;Ii.default=void 0;var MO=NO(Xh()),BO=FO(im());function nm(t){if(typeof WeakMap!="function")return null;var e=new WeakMap,r=new WeakMap;return(nm=function(n){return n?r:e})(t)}function FO(t,e){if(!e&&t&&t.__esModule)return t;if(t===null||typeof t!="object"&&typeof t!="function")return{default:t};var r=nm(e);if(r&&r.has(t))return r.get(t);var i={},n=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var s in t)if(s!=="default"&&Object.prototype.hasOwnProperty.call(t,s)){var a=n?Object.getOwnPropertyDescriptor(t,s):null;a&&(a.get||a.set)?Object.defineProperty(i,s,a):i[s]=t[s]}return i.default=t,r&&r.set(t,i),i}function NO(t){return t&&t.__esModule?t:{default:t}}var ol=function(e){return new MO.default(e)};Object.assign(ol,BO);delete ol.__esModule;var zO=ol;Ii.default=zO;sm.exports=Ii.default});function dt(t){return["fontSize","outline"].includes(t)?e=>(typeof e=="function"&&(e=e({})),Array.isArray(e)&&(e=e[0]),e):t==="fontFamily"?e=>{typeof e=="function"&&(e=e({}));let r=Array.isArray(e)&&ve(e[1])?e[0]:e;return Array.isArray(r)?r.join(", "):r}:["boxShadow","transitionProperty","transitionDuration","transitionDelay","transitionTimingFunction","backgroundImage","backgroundSize","backgroundColor","cursor","animation"].includes(t)?e=>(typeof e=="function"&&(e=e({})),Array.isArray(e)&&(e=e.join(", ")),e):["gridTemplateColumns","gridTemplateRows","objectPosition"].includes(t)?e=>(typeof e=="function"&&(e=e({})),typeof e=="string"&&(e=Q.list.comma(e).join(" ")),e):(e,r={})=>(typeof e=="function"&&(e=e(r)),e)}var Ri=E(()=>{u();qt();er()});var pm=b((VB,pl)=>{u();var{Rule:am,AtRule:$O}=De(),om=tt();function ll(t,e){let r;try{om(i=>{r=i}).processSync(t)}catch(i){throw t.includes(":")?e?e.error("Missed semicolon"):i:e?e.error(i.message):i}return r.at(0)}function lm(t,e){let r=!1;return t.each(i=>{if(i.type==="nesting"){let n=e.clone({});i.value!=="&"?i.replaceWith(ll(i.value.replace("&",n.toString()))):i.replaceWith(n),r=!0}else"nodes"in i&&i.nodes&&lm(i,e)&&(r=!0)}),r}function um(t,e){let r=[];return t.selectors.forEach(i=>{let n=ll(i,t);e.selectors.forEach(s=>{if(!s)return;let a=ll(s,e);lm(a,n)||(a.prepend(om.combinator({value:" "})),a.prepend(n.clone({}))),r.push(a.toString())})}),r}function ys(t,e){let r=t.prev();for(e.after(t);r&&r.type==="comment";){let i=r.prev();e.after(r),r=i}return t}function jO(t){return function e(r,i,n,s=n){let 
a=[];if(i.each(o=>{o.type==="rule"&&n?s&&(o.selectors=um(r,o)):o.type==="atrule"&&o.nodes?t[o.name]?e(r,o,s):i[fl]!==!1&&a.push(o):a.push(o)}),n&&a.length){let o=r.clone({nodes:[]});for(let l of a)o.append(l);i.prepend(o)}}}function ul(t,e,r){let i=new am({selector:t,nodes:[]});return i.append(e),r.after(i),i}function fm(t,e){let r={};for(let i of t)r[i]=!0;if(e)for(let i of e)r[i.replace(/^@/,"")]=!0;return r}function UO(t){t=t.trim();let e=t.match(/^\((.*)\)$/);if(!e)return{type:"basic",selector:t};let r=e[1].match(/^(with(?:out)?):(.+)$/);if(r){let i=r[1]==="with",n=Object.fromEntries(r[2].trim().split(/\s+/).map(a=>[a,!0]));if(i&&n.all)return{type:"noop"};let s=a=>!!n[a];return n.all?s=()=>!0:i&&(s=a=>a==="all"?!1:!n[a]),{type:"withrules",escapes:s}}return{type:"unknown"}}function VO(t){let e=[],r=t.parent;for(;r&&r instanceof $O;)e.push(r),r=r.parent;return e}function WO(t){let e=t[cm];if(!e)t.after(t.nodes);else{let r=t.nodes,i,n=-1,s,a,o,l=VO(t);if(l.forEach((f,c)=>{if(e(f.name))i=f,n=c,a=o;else{let p=o;o=f.clone({nodes:[]}),p&&o.append(p),s=s||o}}),i?a?(s.append(r),i.after(a)):i.after(r):t.after(r),t.next()&&i){let f;l.slice(0,n+1).forEach((c,p,m)=>{let d=f;f=c.clone({nodes:[]}),d&&f.append(d);let v=[],x=(m[p-1]||t).next();for(;x;)v.push(x),x=x.next();f.append(v)}),f&&(a||r[r.length-1]).after(f)}}t.remove()}var fl=Symbol("rootRuleMergeSel"),cm=Symbol("rootRuleEscapes");function GO(t){let{params:e}=t,{type:r,selector:i,escapes:n}=UO(e);if(r==="unknown")throw t.error(`Unknown @${t.name} parameter ${JSON.stringify(e)}`);if(r==="basic"&&i){let s=new am({selector:i,nodes:t.nodes});t.removeAll(),t.append(s)}t[cm]=n,t[fl]=n?!n("all"):r==="noop"}var cl=Symbol("hasRootRule");pl.exports=(t={})=>{let e=fm(["media","supports","layer","container"],t.bubble),r=jO(e),i=fm(["document","font-face","keyframes","-webkit-keyframes","-moz-keyframes"],t.unwrap),n=(t.rootRuleName||"at-root").replace(/^@/,""),s=t.preserveEmpty;return{postcssPlugin:"postcss-nested",Once(a){a.walkAtRules(n,o=>{GO(o),a[cl]=!0})},Rule(a){let o=!1,l=a,f=!1,c=[];a.each(p=>{p.type==="rule"?(c.length&&(l=ul(a.selector,c,l),c=[]),f=!0,o=!0,p.selectors=um(a,p),l=ys(p,l)):p.type==="atrule"?(c.length&&(l=ul(a.selector,c,l),c=[]),p.name===n?(o=!0,r(a,p,!0,p[fl]),l=ys(p,l)):e[p.name]?(f=!0,o=!0,r(a,p,!0),l=ys(p,l)):i[p.name]?(f=!0,o=!0,r(a,p,!1),l=ys(p,l)):f&&c.push(p)):p.type==="decl"&&f&&c.push(p)}),c.length&&(l=ul(a.selector,c,l)),o&&s!==!0&&(a.raws.semicolon=!0,a.nodes.length===0&&a.remove())},RootExit(a){a[cl]&&(a.walkAtRules(n,WO),a[cl]=!1)}}};pl.exports.postcss=!0});var gm=b((WB,mm)=>{u();"use strict";var dm=/-(\w|$)/g,hm=(t,e)=>e.toUpperCase(),HO=t=>(t=t.toLowerCase(),t==="float"?"cssFloat":t.startsWith("-ms-")?t.substr(1).replace(dm,hm):t.replace(dm,hm));mm.exports=HO});var ml=b((GB,wm)=>{u();var YO=gm(),QO={boxFlex:!0,boxFlexGroup:!0,columnCount:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,strokeDashoffset:!0,strokeOpacity:!0,strokeWidth:!0};function dl(t){return typeof t.nodes=="undefined"?!0:hl(t)}function hl(t){let e,r={};return t.each(i=>{if(i.type==="atrule")e="@"+i.name,i.params&&(e+=" "+i.params),typeof r[e]=="undefined"?r[e]=dl(i):Array.isArray(r[e])?r[e].push(dl(i)):r[e]=[r[e],dl(i)];else if(i.type==="rule"){let n=hl(i);if(r[i.selector])for(let s in n)r[i.selector][s]=n[s];else r[i.selector]=n}else 
if(i.type==="decl"){i.prop[0]==="-"&&i.prop[1]==="-"||i.parent&&i.parent.selector===":export"?e=i.prop:e=YO(i.prop);let n=i.value;!isNaN(i.value)&&QO[e]&&(n=parseFloat(i.value)),i.important&&(n+=" !important"),typeof r[e]=="undefined"?r[e]=n:Array.isArray(r[e])?r[e].push(n):r[e]=[r[e],n]}}),r}wm.exports=hl});var vs=b((HB,xm)=>{u();var Li=De(),ym=/\s*!important\s*$/i,JO={"box-flex":!0,"box-flex-group":!0,"column-count":!0,flex:!0,"flex-grow":!0,"flex-positive":!0,"flex-shrink":!0,"flex-negative":!0,"font-weight":!0,"line-clamp":!0,"line-height":!0,opacity:!0,order:!0,orphans:!0,"tab-size":!0,widows:!0,"z-index":!0,zoom:!0,"fill-opacity":!0,"stroke-dashoffset":!0,"stroke-opacity":!0,"stroke-width":!0};function XO(t){return t.replace(/([A-Z])/g,"-$1").replace(/^ms-/,"-ms-").toLowerCase()}function vm(t,e,r){r===!1||r===null||(e.startsWith("--")||(e=XO(e)),typeof r=="number"&&(r===0||JO[e]?r=r.toString():r+="px"),e==="css-float"&&(e="float"),ym.test(r)?(r=r.replace(ym,""),t.push(Li.decl({prop:e,value:r,important:!0}))):t.push(Li.decl({prop:e,value:r})))}function bm(t,e,r){let i=Li.atRule({name:e[1],params:e[3]||""});typeof r=="object"&&(i.nodes=[],gl(r,i)),t.push(i)}function gl(t,e){let r,i,n;for(r in t)if(i=t[r],!(i===null||typeof i=="undefined"))if(r[0]==="@"){let s=r.match(/@(\S+)(\s+([\W\w]*)\s*)?/);if(Array.isArray(i))for(let a of i)bm(e,s,a);else bm(e,s,i)}else if(Array.isArray(i))for(let s of i)vm(e,r,s);else typeof i=="object"?(n=Li.rule({selector:r}),gl(i,n),e.push(n)):vm(e,r,i)}xm.exports=function(t){let e=Li.root();return gl(t,e),e}});var wl=b((YB,km)=>{u();var KO=ml();km.exports=function(e){return console&&console.warn&&e.warnings().forEach(r=>{let i=r.plugin||"PostCSS";console.warn(i+": "+r.text)}),KO(e.root)}});var _m=b((QB,Sm)=>{u();var ZO=De(),eE=wl(),tE=vs();Sm.exports=function(e){let r=ZO(e);return async i=>{let n=await r.process(i,{parser:tE,from:void 0});return eE(n)}}});var Om=b((JB,Tm)=>{u();var rE=De(),iE=wl(),nE=vs();Tm.exports=function(t){let e=rE(t);return r=>{let i=e.process(r,{parser:nE,from:void 0});return iE(i)}}});var Am=b((XB,Em)=>{u();var sE=ml(),aE=vs(),oE=_m(),lE=Om();Em.exports={objectify:sE,parse:aE,async:oE,sync:lE}});var cr,Cm,KB,ZB,eF,tF,Pm=E(()=>{u();cr=he(Am()),Cm=cr.default,KB=cr.default.objectify,ZB=cr.default.parse,eF=cr.default.async,tF=cr.default.sync});function pr(t){return Array.isArray(t)?t.flatMap(e=>Q([(0,qm.default)({bubble:["screen"]})]).process(e,{parser:Cm}).root.nodes):pr([t])}var qm,yl=E(()=>{u();qt();qm=he(pm());Pm()});function dr(t,e,r=!1){if(t==="")return e;let i=typeof e=="string"?(0,Dm.default)().astSync(e):e;return i.walkClasses(n=>{let s=n.value,a=r&&s.startsWith("-");n.value=a?`-${t}${s.slice(1)}`:`${t}${s}`}),typeof e=="string"?i.toString():i}var Dm,bs=E(()=>{u();Dm=he(tt())});function Ee(t){let e=Im.default.className();return e.value=t,$t(e?.raws?.value??e.value)}var Im,hr=E(()=>{u();Im=he(tt());An()});function vl(t){return $t(`.${Ee(t)}`)}function xs(t,e){return vl(Mi(t,e))}function Mi(t,e){return e==="DEFAULT"?t:e==="-"||e==="-DEFAULT"?`-${t}`:e.startsWith("-")?`-${t}${e}`:e.startsWith("/")?`${t}${e}`:`${t}-${e}`}var bl=E(()=>{u();hr();An()});function L(t,e=[[t,[t]]],{filterDefault:r=!1,...i}={}){let n=dt(t);return function({matchUtilities:s,theme:a}){for(let o of e){let 
l=Array.isArray(o[0])?o:[o];s(l.reduce((f,[c,p])=>Object.assign(f,{[c]:m=>p.reduce((d,v)=>Array.isArray(v)?Object.assign(d,{[v[0]]:v[1]}):Object.assign(d,{[v]:n(m)}),{})}),{}),{...i,values:r?Object.fromEntries(Object.entries(a(t)??{}).filter(([f])=>f!=="DEFAULT")):a(t)})}}}var Rm=E(()=>{u();Ri()});function Dt(t){return t=Array.isArray(t)?t:[t],t.map(e=>{let r=e.values.map(i=>i.raw!==void 0?i.raw:[i.min&&`(min-width: ${i.min})`,i.max&&`(max-width: ${i.max})`].filter(Boolean).join(" and "));return e.not?`not all and ${r}`:r}).join(", ")}var ks=E(()=>{u()});function xl(t){return t.split(mE).map(r=>{let i=r.trim(),n={value:i},s=i.split(gE),a=new Set;for(let o of s)!a.has("DIRECTIONS")&&uE.has(o)?(n.direction=o,a.add("DIRECTIONS")):!a.has("PLAY_STATES")&&fE.has(o)?(n.playState=o,a.add("PLAY_STATES")):!a.has("FILL_MODES")&&cE.has(o)?(n.fillMode=o,a.add("FILL_MODES")):!a.has("ITERATION_COUNTS")&&(pE.has(o)||wE.test(o))?(n.iterationCount=o,a.add("ITERATION_COUNTS")):!a.has("TIMING_FUNCTION")&&dE.has(o)||!a.has("TIMING_FUNCTION")&&hE.some(l=>o.startsWith(`${l}(`))?(n.timingFunction=o,a.add("TIMING_FUNCTION")):!a.has("DURATION")&&Lm.test(o)?(n.duration=o,a.add("DURATION")):!a.has("DELAY")&&Lm.test(o)?(n.delay=o,a.add("DELAY")):a.has("NAME")?(n.unknown||(n.unknown=[]),n.unknown.push(o)):(n.name=o,a.add("NAME"));return n})}var uE,fE,cE,pE,dE,hE,mE,gE,Lm,wE,Mm=E(()=>{u();uE=new Set(["normal","reverse","alternate","alternate-reverse"]),fE=new Set(["running","paused"]),cE=new Set(["none","forwards","backwards","both"]),pE=new Set(["infinite"]),dE=new Set(["linear","ease","ease-in","ease-out","ease-in-out","step-start","step-end"]),hE=["cubic-bezier","steps"],mE=/\,(?![^(]*\))/g,gE=/\ +(?![^(]*\))/g,Lm=/^(-?[\d.]+m?s)$/,wE=/^(\d+)$/});var Bm,ye,Fm=E(()=>{u();Bm=t=>Object.assign({},...Object.entries(t??{}).flatMap(([e,r])=>typeof r=="object"?Object.entries(Bm(r)).map(([i,n])=>({[e+(i==="DEFAULT"?"":`-${i}`)]:n})):[{[`${e}`]:r}])),ye=Bm});var yE,Sl,vE,bE,xE,kE,SE,_E,TE,OE,EE,AE,CE,PE,qE,DE,IE,RE,_l,kl=E(()=>{yE="tailwindcss",Sl="3.3.2",vE="A utility-first CSS framework for rapidly building custom user interfaces.",bE="MIT",xE="lib/index.js",kE="types/index.d.ts",SE="https://github.com/tailwindlabs/tailwindcss.git",_E="https://github.com/tailwindlabs/tailwindcss/issues",TE="https://tailwindcss.com",OE={tailwind:"lib/cli.js",tailwindcss:"lib/cli.js"},EE={engine:"stable"},AE={prebuild:"npm run generate && rimraf lib",build:`swc src --out-dir lib --copy-files --config jsc.transform.optimizer.globals.vars.__OXIDE__='"false"'`,postbuild:"esbuild lib/cli-peer-dependencies.js --bundle --platform=node --outfile=peers/index.js --define:process.env.CSS_TRANSFORMER_WASM=false","rebuild-fixtures":"npm run build && node -r @swc/register scripts/rebuildFixtures.js",style:"eslint .",pretest:"npm run generate",test:"jest","test:integrations":"npm run test --prefix ./integrations","install:integrations":"node scripts/install-integrations.js","generate:plugin-list":"node -r @swc/register scripts/create-plugin-list.js","generate:types":"node -r @swc/register scripts/generate-types.js",generate:"npm run generate:plugin-list && npm run generate:types","release-channel":"node ./scripts/release-channel.js","release-notes":"node ./scripts/release-notes.js",prepublishOnly:"npm install --force && npm run 
build"},CE=["src/*","cli/*","lib/*","peers/*","scripts/*.js","stubs/*","nesting/*","types/**/*","*.d.ts","*.css","*.js"],PE={"@swc/cli":"^0.1.62","@swc/core":"^1.3.55","@swc/jest":"^0.2.26","@swc/register":"^0.1.10",autoprefixer:"^10.4.14",browserslist:"^4.21.5",concurrently:"^8.0.1",cssnano:"^6.0.0",esbuild:"^0.17.18",eslint:"^8.39.0","eslint-config-prettier":"^8.8.0","eslint-plugin-prettier":"^4.2.1",jest:"^29.5.0","jest-diff":"^29.5.0",lightningcss:"1.18.0",prettier:"^2.8.8",rimraf:"^5.0.0","source-map-js":"^1.0.2",turbo:"^1.9.3"},qE={"@alloc/quick-lru":"^5.2.0",arg:"^5.0.2",chokidar:"^3.5.3",didyoumean:"^1.2.2",dlv:"^1.1.3","fast-glob":"^3.2.12","glob-parent":"^6.0.2","is-glob":"^4.0.3",jiti:"^1.18.2",lilconfig:"^2.1.0",micromatch:"^4.0.5","normalize-path":"^3.0.0","object-hash":"^3.0.0",picocolors:"^1.0.0",postcss:"^8.4.23","postcss-import":"^15.1.0","postcss-js":"^4.0.1","postcss-load-config":"^4.0.1","postcss-nested":"^6.0.1","postcss-selector-parser":"^6.0.11","postcss-value-parser":"^4.2.0",resolve:"^1.22.2",sucrase:"^3.32.0"},DE=["> 1%","not edge <= 18","not ie 11","not op_mini all"],IE={testTimeout:3e4,setupFilesAfterEnv:["/jest/customMatchers.js"],testPathIgnorePatterns:["/node_modules/","/integrations/","/standalone-cli/","\\.test\\.skip\\.js$"],transformIgnorePatterns:["node_modules/(?!lightningcss)"],transform:{"\\.js$":"@swc/jest","\\.ts$":"@swc/jest"}},RE={node:">=14.0.0"},_l={name:yE,version:Sl,description:vE,license:bE,main:xE,types:kE,repository:SE,bugs:_E,homepage:TE,bin:OE,tailwindcss:EE,scripts:AE,files:CE,devDependencies:PE,dependencies:qE,browserslist:DE,jest:IE,engines:RE}});function It(t,e=!0){return Array.isArray(t)?t.map(r=>{if(e&&Array.isArray(r))throw new Error("The tuple syntax is not supported for `screens`.");if(typeof r=="string")return{name:r.toString(),not:!1,values:[{min:r,max:void 0}]};let[i,n]=r;return i=i.toString(),typeof n=="string"?{name:i,not:!1,values:[{min:n,max:void 0}]}:Array.isArray(n)?{name:i,not:!1,values:n.map(s=>zm(s))}:{name:i,not:!1,values:[zm(n)]}}):It(Object.entries(t??{}),!1)}function Ss(t){return t.values.length!==1?{result:!1,reason:"multiple-values"}:t.values[0].raw!==void 0?{result:!1,reason:"raw-values"}:t.values[0].min!==void 0&&t.values[0].max!==void 0?{result:!1,reason:"min-and-max"}:{result:!0,reason:null}}function Nm(t,e,r){let i=_s(e,t),n=_s(r,t),s=Ss(i),a=Ss(n);if(s.reason==="multiple-values"||a.reason==="multiple-values")throw new Error("Attempted to sort a screen with multiple values. This should never happen. Please open a bug report.");if(s.reason==="raw-values"||a.reason==="raw-values")throw new Error("Attempted to sort a screen with raw values. This should never happen. Please open a bug report.");if(s.reason==="min-and-max"||a.reason==="min-and-max")throw new Error("Attempted to sort a screen with both min and max values. This should never happen. 
Please open a bug report.");let{min:o,max:l}=i.values[0],{min:f,max:c}=n.values[0];e.not&&([o,l]=[l,o]),r.not&&([f,c]=[c,f]),o=o===void 0?o:parseFloat(o),l=l===void 0?l:parseFloat(l),f=f===void 0?f:parseFloat(f),c=c===void 0?c:parseFloat(c);let[p,m]=t==="min"?[o,f]:[c,l];return p-m}function _s(t,e){return typeof t=="object"?t:{name:"arbitrary-screen",values:[{[e]:t}]}}function zm({"min-width":t,min:e=t,max:r,raw:i}={}){return{min:e,max:r,raw:i}}var Ts=E(()=>{u()});function Os(t,e){t.walkDecls(r=>{if(e.includes(r.prop)){r.remove();return}for(let i of e)r.value.includes(`/ var(${i})`)&&(r.value=r.value.replace(`/ var(${i})`,""))})}var $m=E(()=>{u()});var Ae,Ye,rt,it,jm,Um=E(()=>{u();ut();jt();qt();Rm();ks();hr();Mm();Fm();Hr();Na();er();Ri();kl();Ge();Ts();Da();$m();Xe();Xr();Ae={pseudoElementVariants:({addVariant:t})=>{t("first-letter","&::first-letter"),t("first-line","&::first-line"),t("marker",[({container:e})=>(Os(e,["--tw-text-opacity"]),"& *::marker"),({container:e})=>(Os(e,["--tw-text-opacity"]),"&::marker")]),t("selection",["& *::selection","&::selection"]),t("file","&::file-selector-button"),t("placeholder","&::placeholder"),t("backdrop","&::backdrop"),t("before",({container:e})=>(e.walkRules(r=>{let i=!1;r.walkDecls("content",()=>{i=!0}),i||r.prepend(Q.decl({prop:"content",value:"var(--tw-content)"}))}),"&::before")),t("after",({container:e})=>(e.walkRules(r=>{let i=!1;r.walkDecls("content",()=>{i=!0}),i||r.prepend(Q.decl({prop:"content",value:"var(--tw-content)"}))}),"&::after"))},pseudoClassVariants:({addVariant:t,matchVariant:e,config:r})=>{let i=[["first","&:first-child"],["last","&:last-child"],["only","&:only-child"],["odd","&:nth-child(odd)"],["even","&:nth-child(even)"],"first-of-type","last-of-type","only-of-type",["visited",({container:s})=>(Os(s,["--tw-text-opacity","--tw-border-opacity","--tw-bg-opacity"]),"&:visited")],"target",["open","&[open]"],"default","checked","indeterminate","placeholder-shown","autofill","optional","required","valid","invalid","in-range","out-of-range","read-only","empty","focus-within",["hover",de(r(),"hoverOnlyWhenSupported")?"@media (hover: hover) and (pointer: fine) { &:hover }":"&:hover"],"focus","focus-visible","active","enabled","disabled"].map(s=>Array.isArray(s)?s:[s,`&:${s}`]);for(let[s,a]of i)t(s,o=>typeof a=="function"?a(o):a);let n={group:(s,{modifier:a})=>a?[`:merge(.group\\/${Ee(a)})`," &"]:[":merge(.group)"," &"],peer:(s,{modifier:a})=>a?[`:merge(.peer\\/${Ee(a)})`," ~ &"]:[":merge(.peer)"," ~ &"]};for(let[s,a]of Object.entries(n))e(s,(o="",l)=>{let f=K(typeof o=="function"?o(l):o);f.includes("&")||(f="&"+f);let[c,p]=a("",l),m=null,d=null,v=0;for(let _=0;_{t("ltr",':is([dir="ltr"] &)'),t("rtl",':is([dir="rtl"] &)')},reducedMotionVariants:({addVariant:t})=>{t("motion-safe","@media (prefers-reduced-motion: no-preference)"),t("motion-reduce","@media (prefers-reduced-motion: reduce)")},darkVariants:({config:t,addVariant:e})=>{let[r,i=".dark"]=[].concat(t("darkMode","media"));r===!1&&(r="media",V.warn("darkmode-false",["The `darkMode` option in your Tailwind CSS configuration is set to `false`, which now behaves the same as `media`.","Change `darkMode` to `media` or remove it entirely.","https://tailwindcss.com/docs/upgrade-guide#remove-dark-mode-configuration"])),r==="class"?e("dark",`:is(${i} &)`):r==="media"&&e("dark","@media (prefers-color-scheme: dark)")},printVariant:({addVariant:t})=>{t("print","@media print")},screenVariants:({theme:t,addVariant:e,matchVariant:r})=>{let i=t("screens")??{},n=Object.values(i).every(y=>typeof 
y=="string"),s=It(t("screens")),a=new Set([]);function o(y){return y.match(/(\D+)$/)?.[1]??"(none)"}function l(y){y!==void 0&&a.add(o(y))}function f(y){return l(y),a.size===1}for(let y of s)for(let S of y.values)l(S.min),l(S.max);let c=a.size<=1;function p(y){return Object.fromEntries(s.filter(S=>Ss(S).result).map(S=>{let{min:T,max:O}=S.values[0];if(y==="min"&&T!==void 0)return S;if(y==="min"&&O!==void 0)return{...S,not:!S.not};if(y==="max"&&O!==void 0)return S;if(y==="max"&&T!==void 0)return{...S,not:!S.not}}).map(S=>[S.name,S]))}function m(y){return(S,T)=>Nm(y,S.value,T.value)}let d=m("max"),v=m("min");function _(y){return S=>{if(n)if(c){if(typeof S=="string"&&!f(S))return V.warn("minmax-have-mixed-units",["The `min-*` and `max-*` variants are not supported with a `screens` configuration containing mixed units."]),[]}else return V.warn("mixed-screen-units",["The `min-*` and `max-*` variants are not supported with a `screens` configuration containing mixed units."]),[];else return V.warn("complex-screen-config",["The `min-*` and `max-*` variants are not supported with a `screens` configuration containing objects."]),[];return[`@media ${Dt(_s(S,y))}`]}}r("max",_("max"),{sort:d,values:n?p("max"):{}});let x="min-screens";for(let y of s)e(y.name,`@media ${Dt(y)}`,{id:x,sort:n&&c?v:void 0,value:y});r("min",_("min"),{id:x,sort:v})},supportsVariants:({matchVariant:t,theme:e})=>{t("supports",(r="")=>{let i=K(r),n=/^\w*\s*\(/.test(i);return i=n?i.replace(/\b(and|or|not)\b/g," $1 "):i,n?`@supports ${i}`:(i.includes(":")||(i=`${i}: var(--tw)`),i.startsWith("(")&&i.endsWith(")")||(i=`(${i})`),`@supports ${i}`)},{values:e("supports")??{}})},ariaVariants:({matchVariant:t,theme:e})=>{t("aria",r=>`&[aria-${K(r)}]`,{values:e("aria")??{}}),t("group-aria",(r,{modifier:i})=>i?`:merge(.group\\/${i})[aria-${K(r)}] &`:`:merge(.group)[aria-${K(r)}] &`,{values:e("aria")??{}}),t("peer-aria",(r,{modifier:i})=>i?`:merge(.peer\\/${i})[aria-${K(r)}] ~ &`:`:merge(.peer)[aria-${K(r)}] ~ &`,{values:e("aria")??{}})},dataVariants:({matchVariant:t,theme:e})=>{t("data",r=>`&[data-${K(r)}]`,{values:e("data")??{}}),t("group-data",(r,{modifier:i})=>i?`:merge(.group\\/${i})[data-${K(r)}] &`:`:merge(.group)[data-${K(r)}] &`,{values:e("data")??{}}),t("peer-data",(r,{modifier:i})=>i?`:merge(.peer\\/${i})[data-${K(r)}] ~ &`:`:merge(.peer)[data-${K(r)}] ~ &`,{values:e("data")??{}})},orientationVariants:({addVariant:t})=>{t("portrait","@media (orientation: portrait)"),t("landscape","@media (orientation: landscape)")},prefersContrastVariants:({addVariant:t})=>{t("contrast-more","@media (prefers-contrast: more)"),t("contrast-less","@media (prefers-contrast: less)")}},Ye=["translate(var(--tw-translate-x), var(--tw-translate-y))","rotate(var(--tw-rotate))","skewX(var(--tw-skew-x))","skewY(var(--tw-skew-y))","scaleX(var(--tw-scale-x))","scaleY(var(--tw-scale-y))"].join(" "),rt=["var(--tw-blur)","var(--tw-brightness)","var(--tw-contrast)","var(--tw-grayscale)","var(--tw-hue-rotate)","var(--tw-invert)","var(--tw-saturate)","var(--tw-sepia)","var(--tw-drop-shadow)"].join(" "),it=["var(--tw-backdrop-blur)","var(--tw-backdrop-brightness)","var(--tw-backdrop-contrast)","var(--tw-backdrop-grayscale)","var(--tw-backdrop-hue-rotate)","var(--tw-backdrop-invert)","var(--tw-backdrop-opacity)","var(--tw-backdrop-saturate)","var(--tw-backdrop-sepia)"].join(" "),jm={preflight:({addBase:t})=>{let e=Q.parse(`*,::after,::before{box-sizing:border-box;border-width:0;border-style:solid;border-color:theme('borderColor.DEFAULT', 
currentColor)}::after,::before{--tw-content:''}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;tab-size:4;font-family:theme('fontFamily.sans', ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji");font-feature-settings:theme('fontFamily.sans[1].fontFeatureSettings', normal);font-variation-settings:theme('fontFamily.sans[1].fontVariationSettings', normal)}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,pre,samp{font-family:theme('fontFamily.mono', ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace);font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dd,dl,figure,h1,h2,h3,h4,h5,h6,hr,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}menu,ol,ul{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::placeholder,textarea::placeholder{opacity:1;color:theme('colors.gray.4', #9ca3af)}[role=button],button{cursor:pointer}:disabled{cursor:default}audio,canvas,embed,iframe,img,object,svg,video{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}`);t([Q.comment({text:`! 
tailwindcss v${Sl} | MIT License | https://tailwindcss.com`}),...e.nodes])},container:(()=>{function t(r=[]){return r.flatMap(i=>i.values.map(n=>n.min)).filter(i=>i!==void 0)}function e(r,i,n){if(typeof n=="undefined")return[];if(!(typeof n=="object"&&n!==null))return[{screen:"DEFAULT",minWidth:0,padding:n}];let s=[];n.DEFAULT&&s.push({screen:"DEFAULT",minWidth:0,padding:n.DEFAULT});for(let a of r)for(let o of i)for(let{min:l}of o.values)l===a&&s.push({minWidth:a,padding:n[o.name]});return s}return function({addComponents:r,theme:i}){let n=It(i("container.screens",i("screens"))),s=t(n),a=e(s,n,i("container.padding")),o=f=>{let c=a.find(p=>p.minWidth===f);return c?{paddingRight:c.padding,paddingLeft:c.padding}:{}},l=Array.from(new Set(s.slice().sort((f,c)=>parseInt(f)-parseInt(c)))).map(f=>({[`@media (min-width: ${f})`]:{".container":{"max-width":f,...o(f)}}}));r([{".container":Object.assign({width:"100%"},i("container.center",!1)?{marginRight:"auto",marginLeft:"auto"}:{},o(0))},...l])}})(),accessibility:({addUtilities:t})=>{t({".sr-only":{position:"absolute",width:"1px",height:"1px",padding:"0",margin:"-1px",overflow:"hidden",clip:"rect(0, 0, 0, 0)",whiteSpace:"nowrap",borderWidth:"0"},".not-sr-only":{position:"static",width:"auto",height:"auto",padding:"0",margin:"0",overflow:"visible",clip:"auto",whiteSpace:"normal"}})},pointerEvents:({addUtilities:t})=>{t({".pointer-events-none":{"pointer-events":"none"},".pointer-events-auto":{"pointer-events":"auto"}})},visibility:({addUtilities:t})=>{t({".visible":{visibility:"visible"},".invisible":{visibility:"hidden"},".collapse":{visibility:"collapse"}})},position:({addUtilities:t})=>{t({".static":{position:"static"},".fixed":{position:"fixed"},".absolute":{position:"absolute"},".relative":{position:"relative"},".sticky":{position:"sticky"}})},inset:L("inset",[["inset",["inset"]],[["inset-x",["left","right"]],["inset-y",["top","bottom"]]],[["start",["inset-inline-start"]],["end",["inset-inline-end"]],["top",["top"]],["right",["right"]],["bottom",["bottom"]],["left",["left"]]]],{supportsNegativeValues:!0}),isolation:({addUtilities:t})=>{t({".isolate":{isolation:"isolate"},".isolation-auto":{isolation:"auto"}})},zIndex:L("zIndex",[["z",["zIndex"]]],{supportsNegativeValues:!0}),order:L("order",void 
0,{supportsNegativeValues:!0}),gridColumn:L("gridColumn",[["col",["gridColumn"]]]),gridColumnStart:L("gridColumnStart",[["col-start",["gridColumnStart"]]]),gridColumnEnd:L("gridColumnEnd",[["col-end",["gridColumnEnd"]]]),gridRow:L("gridRow",[["row",["gridRow"]]]),gridRowStart:L("gridRowStart",[["row-start",["gridRowStart"]]]),gridRowEnd:L("gridRowEnd",[["row-end",["gridRowEnd"]]]),float:({addUtilities:t})=>{t({".float-right":{float:"right"},".float-left":{float:"left"},".float-none":{float:"none"}})},clear:({addUtilities:t})=>{t({".clear-left":{clear:"left"},".clear-right":{clear:"right"},".clear-both":{clear:"both"},".clear-none":{clear:"none"}})},margin:L("margin",[["m",["margin"]],[["mx",["margin-left","margin-right"]],["my",["margin-top","margin-bottom"]]],[["ms",["margin-inline-start"]],["me",["margin-inline-end"]],["mt",["margin-top"]],["mr",["margin-right"]],["mb",["margin-bottom"]],["ml",["margin-left"]]]],{supportsNegativeValues:!0}),boxSizing:({addUtilities:t})=>{t({".box-border":{"box-sizing":"border-box"},".box-content":{"box-sizing":"content-box"}})},lineClamp:({matchUtilities:t,addUtilities:e,theme:r})=>{t({"line-clamp":i=>({overflow:"hidden",display:"-webkit-box","-webkit-box-orient":"vertical","-webkit-line-clamp":`${i}`})},{values:r("lineClamp")}),e({".line-clamp-none":{overflow:"visible",display:"block","-webkit-box-orient":"horizontal","-webkit-line-clamp":"none"}})},display:({addUtilities:t})=>{t({".block":{display:"block"},".inline-block":{display:"inline-block"},".inline":{display:"inline"},".flex":{display:"flex"},".inline-flex":{display:"inline-flex"},".table":{display:"table"},".inline-table":{display:"inline-table"},".table-caption":{display:"table-caption"},".table-cell":{display:"table-cell"},".table-column":{display:"table-column"},".table-column-group":{display:"table-column-group"},".table-footer-group":{display:"table-footer-group"},".table-header-group":{display:"table-header-group"},".table-row-group":{display:"table-row-group"},".table-row":{display:"table-row"},".flow-root":{display:"flow-root"},".grid":{display:"grid"},".inline-grid":{display:"inline-grid"},".contents":{display:"contents"},".list-item":{display:"list-item"},".hidden":{display:"none"}})},aspectRatio:L("aspectRatio",[["aspect",["aspect-ratio"]]]),height:L("height",[["h",["height"]]]),maxHeight:L("maxHeight",[["max-h",["maxHeight"]]]),minHeight:L("minHeight",[["min-h",["minHeight"]]]),width:L("width",[["w",["width"]]]),minWidth:L("minWidth",[["min-w",["minWidth"]]]),maxWidth:L("maxWidth",[["max-w",["maxWidth"]]]),flex:L("flex"),flexShrink:L("flexShrink",[["flex-shrink",["flex-shrink"]],["shrink",["flex-shrink"]]]),flexGrow:L("flexGrow",[["flex-grow",["flex-grow"]],["grow",["flex-grow"]]]),flexBasis:L("flexBasis",[["basis",["flex-basis"]]]),tableLayout:({addUtilities:t})=>{t({".table-auto":{"table-layout":"auto"},".table-fixed":{"table-layout":"fixed"}})},captionSide:({addUtilities:t})=>{t({".caption-top":{"caption-side":"top"},".caption-bottom":{"caption-side":"bottom"}})},borderCollapse:({addUtilities:t})=>{t({".border-collapse":{"border-collapse":"collapse"},".border-separate":{"border-collapse":"separate"}})},borderSpacing:({addDefaults:t,matchUtilities:e,theme:r})=>{t("border-spacing",{"--tw-border-spacing-x":0,"--tw-border-spacing-y":0}),e({"border-spacing":i=>({"--tw-border-spacing-x":i,"--tw-border-spacing-y":i,"@defaults border-spacing":{},"border-spacing":"var(--tw-border-spacing-x) var(--tw-border-spacing-y)"}),"border-spacing-x":i=>({"--tw-border-spacing-x":i,"@defaults 
border-spacing":{},"border-spacing":"var(--tw-border-spacing-x) var(--tw-border-spacing-y)"}),"border-spacing-y":i=>({"--tw-border-spacing-y":i,"@defaults border-spacing":{},"border-spacing":"var(--tw-border-spacing-x) var(--tw-border-spacing-y)"})},{values:r("borderSpacing")})},transformOrigin:L("transformOrigin",[["origin",["transformOrigin"]]]),translate:L("translate",[[["translate-x",[["@defaults transform",{}],"--tw-translate-x",["transform",Ye]]],["translate-y",[["@defaults transform",{}],"--tw-translate-y",["transform",Ye]]]]],{supportsNegativeValues:!0}),rotate:L("rotate",[["rotate",[["@defaults transform",{}],"--tw-rotate",["transform",Ye]]]],{supportsNegativeValues:!0}),skew:L("skew",[[["skew-x",[["@defaults transform",{}],"--tw-skew-x",["transform",Ye]]],["skew-y",[["@defaults transform",{}],"--tw-skew-y",["transform",Ye]]]]],{supportsNegativeValues:!0}),scale:L("scale",[["scale",[["@defaults transform",{}],"--tw-scale-x","--tw-scale-y",["transform",Ye]]],[["scale-x",[["@defaults transform",{}],"--tw-scale-x",["transform",Ye]]],["scale-y",[["@defaults transform",{}],"--tw-scale-y",["transform",Ye]]]]],{supportsNegativeValues:!0}),transform:({addDefaults:t,addUtilities:e})=>{t("transform",{"--tw-translate-x":"0","--tw-translate-y":"0","--tw-rotate":"0","--tw-skew-x":"0","--tw-skew-y":"0","--tw-scale-x":"1","--tw-scale-y":"1"}),e({".transform":{"@defaults transform":{},transform:Ye},".transform-cpu":{transform:Ye},".transform-gpu":{transform:Ye.replace("translate(var(--tw-translate-x), var(--tw-translate-y))","translate3d(var(--tw-translate-x), var(--tw-translate-y), 0)")},".transform-none":{transform:"none"}})},animation:({matchUtilities:t,theme:e,config:r})=>{let i=s=>`${r("prefix")}${Ee(s)}`,n=Object.fromEntries(Object.entries(e("keyframes")??{}).map(([s,a])=>[s,{[`@keyframes ${i(s)}`]:a}]));t({animate:s=>{let a=xl(s);return[...a.flatMap(o=>n[o.name]),{animation:a.map(({name:o,value:l})=>o===void 0||n[o]===void 0?l:l.replace(o,i(o))).join(", ")}]}},{values:e("animation")})},cursor:L("cursor"),touchAction:({addDefaults:t,addUtilities:e})=>{t("touch-action",{"--tw-pan-x":" ","--tw-pan-y":" ","--tw-pinch-zoom":" "});let r="var(--tw-pan-x) var(--tw-pan-y) var(--tw-pinch-zoom)";e({".touch-auto":{"touch-action":"auto"},".touch-none":{"touch-action":"none"},".touch-pan-x":{"@defaults touch-action":{},"--tw-pan-x":"pan-x","touch-action":r},".touch-pan-left":{"@defaults touch-action":{},"--tw-pan-x":"pan-left","touch-action":r},".touch-pan-right":{"@defaults touch-action":{},"--tw-pan-x":"pan-right","touch-action":r},".touch-pan-y":{"@defaults touch-action":{},"--tw-pan-y":"pan-y","touch-action":r},".touch-pan-up":{"@defaults touch-action":{},"--tw-pan-y":"pan-up","touch-action":r},".touch-pan-down":{"@defaults touch-action":{},"--tw-pan-y":"pan-down","touch-action":r},".touch-pinch-zoom":{"@defaults touch-action":{},"--tw-pinch-zoom":"pinch-zoom","touch-action":r},".touch-manipulation":{"touch-action":"manipulation"}})},userSelect:({addUtilities:t})=>{t({".select-none":{"user-select":"none"},".select-text":{"user-select":"text"},".select-all":{"user-select":"all"},".select-auto":{"user-select":"auto"}})},resize:({addUtilities:t})=>{t({".resize-none":{resize:"none"},".resize-y":{resize:"vertical"},".resize-x":{resize:"horizontal"},".resize":{resize:"both"}})},scrollSnapType:({addDefaults:t,addUtilities:e})=>{t("scroll-snap-type",{"--tw-scroll-snap-strictness":"proximity"}),e({".snap-none":{"scroll-snap-type":"none"},".snap-x":{"@defaults scroll-snap-type":{},"scroll-snap-type":"x 
var(--tw-scroll-snap-strictness)"},".snap-y":{"@defaults scroll-snap-type":{},"scroll-snap-type":"y var(--tw-scroll-snap-strictness)"},".snap-both":{"@defaults scroll-snap-type":{},"scroll-snap-type":"both var(--tw-scroll-snap-strictness)"},".snap-mandatory":{"--tw-scroll-snap-strictness":"mandatory"},".snap-proximity":{"--tw-scroll-snap-strictness":"proximity"}})},scrollSnapAlign:({addUtilities:t})=>{t({".snap-start":{"scroll-snap-align":"start"},".snap-end":{"scroll-snap-align":"end"},".snap-center":{"scroll-snap-align":"center"},".snap-align-none":{"scroll-snap-align":"none"}})},scrollSnapStop:({addUtilities:t})=>{t({".snap-normal":{"scroll-snap-stop":"normal"},".snap-always":{"scroll-snap-stop":"always"}})},scrollMargin:L("scrollMargin",[["scroll-m",["scroll-margin"]],[["scroll-mx",["scroll-margin-left","scroll-margin-right"]],["scroll-my",["scroll-margin-top","scroll-margin-bottom"]]],[["scroll-ms",["scroll-margin-inline-start"]],["scroll-me",["scroll-margin-inline-end"]],["scroll-mt",["scroll-margin-top"]],["scroll-mr",["scroll-margin-right"]],["scroll-mb",["scroll-margin-bottom"]],["scroll-ml",["scroll-margin-left"]]]],{supportsNegativeValues:!0}),scrollPadding:L("scrollPadding",[["scroll-p",["scroll-padding"]],[["scroll-px",["scroll-padding-left","scroll-padding-right"]],["scroll-py",["scroll-padding-top","scroll-padding-bottom"]]],[["scroll-ps",["scroll-padding-inline-start"]],["scroll-pe",["scroll-padding-inline-end"]],["scroll-pt",["scroll-padding-top"]],["scroll-pr",["scroll-padding-right"]],["scroll-pb",["scroll-padding-bottom"]],["scroll-pl",["scroll-padding-left"]]]]),listStylePosition:({addUtilities:t})=>{t({".list-inside":{"list-style-position":"inside"},".list-outside":{"list-style-position":"outside"}})},listStyleType:L("listStyleType",[["list",["listStyleType"]]]),listStyleImage:L("listStyleImage",[["list-image",["listStyleImage"]]]),appearance:({addUtilities:t})=>{t({".appearance-none":{appearance:"none"}})},columns:L("columns",[["columns",["columns"]]]),breakBefore:({addUtilities:t})=>{t({".break-before-auto":{"break-before":"auto"},".break-before-avoid":{"break-before":"avoid"},".break-before-all":{"break-before":"all"},".break-before-avoid-page":{"break-before":"avoid-page"},".break-before-page":{"break-before":"page"},".break-before-left":{"break-before":"left"},".break-before-right":{"break-before":"right"},".break-before-column":{"break-before":"column"}})},breakInside:({addUtilities:t})=>{t({".break-inside-auto":{"break-inside":"auto"},".break-inside-avoid":{"break-inside":"avoid"},".break-inside-avoid-page":{"break-inside":"avoid-page"},".break-inside-avoid-column":{"break-inside":"avoid-column"}})},breakAfter:({addUtilities:t})=>{t({".break-after-auto":{"break-after":"auto"},".break-after-avoid":{"break-after":"avoid"},".break-after-all":{"break-after":"all"},".break-after-avoid-page":{"break-after":"avoid-page"},".break-after-page":{"break-after":"page"},".break-after-left":{"break-after":"left"},".break-after-right":{"break-after":"right"},".break-after-column":{"break-after":"column"}})},gridAutoColumns:L("gridAutoColumns",[["auto-cols",["gridAutoColumns"]]]),gridAutoFlow:({addUtilities:t})=>{t({".grid-flow-row":{gridAutoFlow:"row"},".grid-flow-col":{gridAutoFlow:"column"},".grid-flow-dense":{gridAutoFlow:"dense"},".grid-flow-row-dense":{gridAutoFlow:"row dense"},".grid-flow-col-dense":{gridAutoFlow:"column 
dense"}})},gridAutoRows:L("gridAutoRows",[["auto-rows",["gridAutoRows"]]]),gridTemplateColumns:L("gridTemplateColumns",[["grid-cols",["gridTemplateColumns"]]]),gridTemplateRows:L("gridTemplateRows",[["grid-rows",["gridTemplateRows"]]]),flexDirection:({addUtilities:t})=>{t({".flex-row":{"flex-direction":"row"},".flex-row-reverse":{"flex-direction":"row-reverse"},".flex-col":{"flex-direction":"column"},".flex-col-reverse":{"flex-direction":"column-reverse"}})},flexWrap:({addUtilities:t})=>{t({".flex-wrap":{"flex-wrap":"wrap"},".flex-wrap-reverse":{"flex-wrap":"wrap-reverse"},".flex-nowrap":{"flex-wrap":"nowrap"}})},placeContent:({addUtilities:t})=>{t({".place-content-center":{"place-content":"center"},".place-content-start":{"place-content":"start"},".place-content-end":{"place-content":"end"},".place-content-between":{"place-content":"space-between"},".place-content-around":{"place-content":"space-around"},".place-content-evenly":{"place-content":"space-evenly"},".place-content-baseline":{"place-content":"baseline"},".place-content-stretch":{"place-content":"stretch"}})},placeItems:({addUtilities:t})=>{t({".place-items-start":{"place-items":"start"},".place-items-end":{"place-items":"end"},".place-items-center":{"place-items":"center"},".place-items-baseline":{"place-items":"baseline"},".place-items-stretch":{"place-items":"stretch"}})},alignContent:({addUtilities:t})=>{t({".content-normal":{"align-content":"normal"},".content-center":{"align-content":"center"},".content-start":{"align-content":"flex-start"},".content-end":{"align-content":"flex-end"},".content-between":{"align-content":"space-between"},".content-around":{"align-content":"space-around"},".content-evenly":{"align-content":"space-evenly"},".content-baseline":{"align-content":"baseline"},".content-stretch":{"align-content":"stretch"}})},alignItems:({addUtilities:t})=>{t({".items-start":{"align-items":"flex-start"},".items-end":{"align-items":"flex-end"},".items-center":{"align-items":"center"},".items-baseline":{"align-items":"baseline"},".items-stretch":{"align-items":"stretch"}})},justifyContent:({addUtilities:t})=>{t({".justify-normal":{"justify-content":"normal"},".justify-start":{"justify-content":"flex-start"},".justify-end":{"justify-content":"flex-end"},".justify-center":{"justify-content":"center"},".justify-between":{"justify-content":"space-between"},".justify-around":{"justify-content":"space-around"},".justify-evenly":{"justify-content":"space-evenly"},".justify-stretch":{"justify-content":"stretch"}})},justifyItems:({addUtilities:t})=>{t({".justify-items-start":{"justify-items":"start"},".justify-items-end":{"justify-items":"end"},".justify-items-center":{"justify-items":"center"},".justify-items-stretch":{"justify-items":"stretch"}})},gap:L("gap",[["gap",["gap"]],[["gap-x",["columnGap"]],["gap-y",["rowGap"]]]]),space:({matchUtilities:t,addUtilities:e,theme:r})=>{t({"space-x":i=>(i=i==="0"?"0px":i,{"& > :not([hidden]) ~ :not([hidden])":{"--tw-space-x-reverse":"0","margin-right":`calc(${i} * var(--tw-space-x-reverse))`,"margin-left":`calc(${i} * calc(1 - var(--tw-space-x-reverse)))`}}),"space-y":i=>(i=i==="0"?"0px":i,{"& > :not([hidden]) ~ :not([hidden])":{"--tw-space-y-reverse":"0","margin-top":`calc(${i} * calc(1 - var(--tw-space-y-reverse)))`,"margin-bottom":`calc(${i} * var(--tw-space-y-reverse))`}})},{values:r("space"),supportsNegativeValues:!0}),e({".space-y-reverse > :not([hidden]) ~ :not([hidden])":{"--tw-space-y-reverse":"1"},".space-x-reverse > :not([hidden]) ~ 
:not([hidden])":{"--tw-space-x-reverse":"1"}})},divideWidth:({matchUtilities:t,addUtilities:e,theme:r})=>{t({"divide-x":i=>(i=i==="0"?"0px":i,{"& > :not([hidden]) ~ :not([hidden])":{"@defaults border-width":{},"--tw-divide-x-reverse":"0","border-right-width":`calc(${i} * var(--tw-divide-x-reverse))`,"border-left-width":`calc(${i} * calc(1 - var(--tw-divide-x-reverse)))`}}),"divide-y":i=>(i=i==="0"?"0px":i,{"& > :not([hidden]) ~ :not([hidden])":{"@defaults border-width":{},"--tw-divide-y-reverse":"0","border-top-width":`calc(${i} * calc(1 - var(--tw-divide-y-reverse)))`,"border-bottom-width":`calc(${i} * var(--tw-divide-y-reverse))`}})},{values:r("divideWidth"),type:["line-width","length","any"]}),e({".divide-y-reverse > :not([hidden]) ~ :not([hidden])":{"@defaults border-width":{},"--tw-divide-y-reverse":"1"},".divide-x-reverse > :not([hidden]) ~ :not([hidden])":{"@defaults border-width":{},"--tw-divide-x-reverse":"1"}})},divideStyle:({addUtilities:t})=>{t({".divide-solid > :not([hidden]) ~ :not([hidden])":{"border-style":"solid"},".divide-dashed > :not([hidden]) ~ :not([hidden])":{"border-style":"dashed"},".divide-dotted > :not([hidden]) ~ :not([hidden])":{"border-style":"dotted"},".divide-double > :not([hidden]) ~ :not([hidden])":{"border-style":"double"},".divide-none > :not([hidden]) ~ :not([hidden])":{"border-style":"none"}})},divideColor:({matchUtilities:t,theme:e,corePlugins:r})=>{t({divide:i=>r("divideOpacity")?{["& > :not([hidden]) ~ :not([hidden])"]:ke({color:i,property:"border-color",variable:"--tw-divide-opacity"})}:{["& > :not([hidden]) ~ :not([hidden])"]:{"border-color":W(i)}}},{values:(({DEFAULT:i,...n})=>n)(ye(e("divideColor"))),type:["color","any"]})},divideOpacity:({matchUtilities:t,theme:e})=>{t({"divide-opacity":r=>({["& > :not([hidden]) ~ 
:not([hidden])"]:{"--tw-divide-opacity":r}})},{values:e("divideOpacity")})},placeSelf:({addUtilities:t})=>{t({".place-self-auto":{"place-self":"auto"},".place-self-start":{"place-self":"start"},".place-self-end":{"place-self":"end"},".place-self-center":{"place-self":"center"},".place-self-stretch":{"place-self":"stretch"}})},alignSelf:({addUtilities:t})=>{t({".self-auto":{"align-self":"auto"},".self-start":{"align-self":"flex-start"},".self-end":{"align-self":"flex-end"},".self-center":{"align-self":"center"},".self-stretch":{"align-self":"stretch"},".self-baseline":{"align-self":"baseline"}})},justifySelf:({addUtilities:t})=>{t({".justify-self-auto":{"justify-self":"auto"},".justify-self-start":{"justify-self":"start"},".justify-self-end":{"justify-self":"end"},".justify-self-center":{"justify-self":"center"},".justify-self-stretch":{"justify-self":"stretch"}})},overflow:({addUtilities:t})=>{t({".overflow-auto":{overflow:"auto"},".overflow-hidden":{overflow:"hidden"},".overflow-clip":{overflow:"clip"},".overflow-visible":{overflow:"visible"},".overflow-scroll":{overflow:"scroll"},".overflow-x-auto":{"overflow-x":"auto"},".overflow-y-auto":{"overflow-y":"auto"},".overflow-x-hidden":{"overflow-x":"hidden"},".overflow-y-hidden":{"overflow-y":"hidden"},".overflow-x-clip":{"overflow-x":"clip"},".overflow-y-clip":{"overflow-y":"clip"},".overflow-x-visible":{"overflow-x":"visible"},".overflow-y-visible":{"overflow-y":"visible"},".overflow-x-scroll":{"overflow-x":"scroll"},".overflow-y-scroll":{"overflow-y":"scroll"}})},overscrollBehavior:({addUtilities:t})=>{t({".overscroll-auto":{"overscroll-behavior":"auto"},".overscroll-contain":{"overscroll-behavior":"contain"},".overscroll-none":{"overscroll-behavior":"none"},".overscroll-y-auto":{"overscroll-behavior-y":"auto"},".overscroll-y-contain":{"overscroll-behavior-y":"contain"},".overscroll-y-none":{"overscroll-behavior-y":"none"},".overscroll-x-auto":{"overscroll-behavior-x":"auto"},".overscroll-x-contain":{"overscroll-behavior-x":"contain"},".overscroll-x-none":{"overscroll-behavior-x":"none"}})},scrollBehavior:({addUtilities:t})=>{t({".scroll-auto":{"scroll-behavior":"auto"},".scroll-smooth":{"scroll-behavior":"smooth"}})},textOverflow:({addUtilities:t})=>{t({".truncate":{overflow:"hidden","text-overflow":"ellipsis","white-space":"nowrap"},".overflow-ellipsis":{"text-overflow":"ellipsis"},".text-ellipsis":{"text-overflow":"ellipsis"},".text-clip":{"text-overflow":"clip"}})},hyphens:({addUtilities:t})=>{t({".hyphens-none":{hyphens:"none"},".hyphens-manual":{hyphens:"manual"},".hyphens-auto":{hyphens:"auto"}})},whitespace:({addUtilities:t})=>{t({".whitespace-normal":{"white-space":"normal"},".whitespace-nowrap":{"white-space":"nowrap"},".whitespace-pre":{"white-space":"pre"},".whitespace-pre-line":{"white-space":"pre-line"},".whitespace-pre-wrap":{"white-space":"pre-wrap"},".whitespace-break-spaces":{"white-space":"break-spaces"}})},wordBreak:({addUtilities:t})=>{t({".break-normal":{"overflow-wrap":"normal","word-break":"normal"},".break-words":{"overflow-wrap":"break-word"},".break-all":{"word-break":"break-all"},".break-keep":{"word-break":"keep-all"}})},borderRadius:L("borderRadius",[["rounded",["border-radius"]],[["rounded-s",["border-start-start-radius","border-end-start-radius"]],["rounded-e",["border-start-end-radius","border-end-end-radius"]],["rounded-t",["border-top-left-radius","border-top-right-radius"]],["rounded-r",["border-top-right-radius","border-bottom-right-radius"]],["rounded-b",["border-bottom-right-radius","border-bottom-lef
t-radius"]],["rounded-l",["border-top-left-radius","border-bottom-left-radius"]]],[["rounded-ss",["border-start-start-radius"]],["rounded-se",["border-start-end-radius"]],["rounded-ee",["border-end-end-radius"]],["rounded-es",["border-end-start-radius"]],["rounded-tl",["border-top-left-radius"]],["rounded-tr",["border-top-right-radius"]],["rounded-br",["border-bottom-right-radius"]],["rounded-bl",["border-bottom-left-radius"]]]]),borderWidth:L("borderWidth",[["border",[["@defaults border-width",{}],"border-width"]],[["border-x",[["@defaults border-width",{}],"border-left-width","border-right-width"]],["border-y",[["@defaults border-width",{}],"border-top-width","border-bottom-width"]]],[["border-s",[["@defaults border-width",{}],"border-inline-start-width"]],["border-e",[["@defaults border-width",{}],"border-inline-end-width"]],["border-t",[["@defaults border-width",{}],"border-top-width"]],["border-r",[["@defaults border-width",{}],"border-right-width"]],["border-b",[["@defaults border-width",{}],"border-bottom-width"]],["border-l",[["@defaults border-width",{}],"border-left-width"]]]],{type:["line-width","length"]}),borderStyle:({addUtilities:t})=>{t({".border-solid":{"border-style":"solid"},".border-dashed":{"border-style":"dashed"},".border-dotted":{"border-style":"dotted"},".border-double":{"border-style":"double"},".border-hidden":{"border-style":"hidden"},".border-none":{"border-style":"none"}})},borderColor:({matchUtilities:t,theme:e,corePlugins:r})=>{t({border:i=>r("borderOpacity")?ke({color:i,property:"border-color",variable:"--tw-border-opacity"}):{"border-color":W(i)}},{values:(({DEFAULT:i,...n})=>n)(ye(e("borderColor"))),type:["color","any"]}),t({"border-x":i=>r("borderOpacity")?ke({color:i,property:["border-left-color","border-right-color"],variable:"--tw-border-opacity"}):{"border-left-color":W(i),"border-right-color":W(i)},"border-y":i=>r("borderOpacity")?ke({color:i,property:["border-top-color","border-bottom-color"],variable:"--tw-border-opacity"}):{"border-top-color":W(i),"border-bottom-color":W(i)}},{values:(({DEFAULT:i,...n})=>n)(ye(e("borderColor"))),type:["color","any"]}),t({"border-s":i=>r("borderOpacity")?ke({color:i,property:"border-inline-start-color",variable:"--tw-border-opacity"}):{"border-inline-start-color":W(i)},"border-e":i=>r("borderOpacity")?ke({color:i,property:"border-inline-end-color",variable:"--tw-border-opacity"}):{"border-inline-end-color":W(i)},"border-t":i=>r("borderOpacity")?ke({color:i,property:"border-top-color",variable:"--tw-border-opacity"}):{"border-top-color":W(i)},"border-r":i=>r("borderOpacity")?ke({color:i,property:"border-right-color",variable:"--tw-border-opacity"}):{"border-right-color":W(i)},"border-b":i=>r("borderOpacity")?ke({color:i,property:"border-bottom-color",variable:"--tw-border-opacity"}):{"border-bottom-color":W(i)},"border-l":i=>r("borderOpacity")?ke({color:i,property:"border-left-color",variable:"--tw-border-opacity"}):{"border-left-color":W(i)}},{values:(({DEFAULT:i,...n})=>n)(ye(e("borderColor"))),type:["color","any"]})},borderOpacity:L("borderOpacity",[["border-opacity",["--tw-border-opacity"]]]),backgroundColor:({matchUtilities:t,theme:e,corePlugins:r})=>{t({bg:i=>r("backgroundOpacity")?ke({color:i,property:"background-color",variable:"--tw-bg-opacity"}):{"background-color":W(i)}},{values:ye(e("backgroundColor")),type:["color","any"]})},backgroundOpacity:L("backgroundOpacity",[["bg-opacity",["--tw-bg-opacity"]]]),backgroundImage:L("backgroundImage",[["bg",["background-image"]]],{type:["lookup","image","url"]}),grad
ientColorStops:(()=>{function t(e){return Ke(e,0,"rgb(255 255 255 / 0)")}return function({matchUtilities:e,theme:r,addDefaults:i}){i("gradient-color-stops",{"--tw-gradient-from-position":" ","--tw-gradient-via-position":" ","--tw-gradient-to-position":" "});let n={values:ye(r("gradientColorStops")),type:["color","any"]},s={values:r("gradientColorStopPositions"),type:["length","percentage"]};e({from:a=>{let o=t(a);return{"@defaults gradient-color-stops":{},"--tw-gradient-from":`${W(a)} var(--tw-gradient-from-position)`,"--tw-gradient-to":`${o} var(--tw-gradient-to-position)`,"--tw-gradient-stops":"var(--tw-gradient-from), var(--tw-gradient-to)"}}},n),e({from:a=>({"--tw-gradient-from-position":a})},s),e({via:a=>{let o=t(a);return{"@defaults gradient-color-stops":{},"--tw-gradient-to":`${o} var(--tw-gradient-to-position)`,"--tw-gradient-stops":`var(--tw-gradient-from), ${W(a)} var(--tw-gradient-via-position), var(--tw-gradient-to)`}}},n),e({via:a=>({"--tw-gradient-via-position":a})},s),e({to:a=>({"@defaults gradient-color-stops":{},"--tw-gradient-to":`${W(a)} var(--tw-gradient-to-position)`})},n),e({to:a=>({"--tw-gradient-to-position":a})},s)}})(),boxDecorationBreak:({addUtilities:t})=>{t({".decoration-slice":{"box-decoration-break":"slice"},".decoration-clone":{"box-decoration-break":"clone"},".box-decoration-slice":{"box-decoration-break":"slice"},".box-decoration-clone":{"box-decoration-break":"clone"}})},backgroundSize:L("backgroundSize",[["bg",["background-size"]]],{type:["lookup","length","percentage","size"]}),backgroundAttachment:({addUtilities:t})=>{t({".bg-fixed":{"background-attachment":"fixed"},".bg-local":{"background-attachment":"local"},".bg-scroll":{"background-attachment":"scroll"}})},backgroundClip:({addUtilities:t})=>{t({".bg-clip-border":{"background-clip":"border-box"},".bg-clip-padding":{"background-clip":"padding-box"},".bg-clip-content":{"background-clip":"content-box"},".bg-clip-text":{"background-clip":"text"}})},backgroundPosition:L("backgroundPosition",[["bg",["background-position"]]],{type:["lookup",["position",{preferOnConflict:!0}]]}),backgroundRepeat:({addUtilities:t})=>{t({".bg-repeat":{"background-repeat":"repeat"},".bg-no-repeat":{"background-repeat":"no-repeat"},".bg-repeat-x":{"background-repeat":"repeat-x"},".bg-repeat-y":{"background-repeat":"repeat-y"},".bg-repeat-round":{"background-repeat":"round"},".bg-repeat-space":{"background-repeat":"space"}})},backgroundOrigin:({addUtilities:t})=>{t({".bg-origin-border":{"background-origin":"border-box"},".bg-origin-padding":{"background-origin":"padding-box"},".bg-origin-content":{"background-origin":"content-box"}})},fill:({matchUtilities:t,theme:e})=>{t({fill:r=>({fill:W(r)})},{values:ye(e("fill")),type:["color","any"]})},stroke:({matchUtilities:t,theme:e})=>{t({stroke:r=>({stroke:W(r)})},{values:ye(e("stroke")),type:["color","url","any"]})},strokeWidth:L("strokeWidth",[["stroke",["stroke-width"]]],{type:["length","number","percentage"]}),objectFit:({addUtilities:t})=>{t({".object-contain":{"object-fit":"contain"},".object-cover":{"object-fit":"cover"},".object-fill":{"object-fit":"fill"},".object-none":{"object-fit":"none"},".object-scale-down":{"object-fit":"scale-down"}})},objectPosition:L("objectPosition",[["object",["object-position"]]]),padding:L("padding",[["p",["padding"]],[["px",["padding-left","padding-right"]],["py",["padding-top","padding-bottom"]]],[["ps",["padding-inline-start"]],["pe",["padding-inline-end"]],["pt",["padding-top"]],["pr",["padding-right"]],["pb",["padding-bottom"]],["pl",["paddin
g-left"]]]]),textAlign:({addUtilities:t})=>{t({".text-left":{"text-align":"left"},".text-center":{"text-align":"center"},".text-right":{"text-align":"right"},".text-justify":{"text-align":"justify"},".text-start":{"text-align":"start"},".text-end":{"text-align":"end"}})},textIndent:L("textIndent",[["indent",["text-indent"]]],{supportsNegativeValues:!0}),verticalAlign:({addUtilities:t,matchUtilities:e})=>{t({".align-baseline":{"vertical-align":"baseline"},".align-top":{"vertical-align":"top"},".align-middle":{"vertical-align":"middle"},".align-bottom":{"vertical-align":"bottom"},".align-text-top":{"vertical-align":"text-top"},".align-text-bottom":{"vertical-align":"text-bottom"},".align-sub":{"vertical-align":"sub"},".align-super":{"vertical-align":"super"}}),e({align:r=>({"vertical-align":r})})},fontFamily:({matchUtilities:t,theme:e})=>{t({font:r=>{let[i,n={}]=Array.isArray(r)&&ve(r[1])?r:[r],{fontFeatureSettings:s,fontVariationSettings:a}=n;return{"font-family":Array.isArray(i)?i.join(", "):i,...s===void 0?{}:{"font-feature-settings":s},...a===void 0?{}:{"font-variation-settings":a}}}},{values:e("fontFamily"),type:["lookup","generic-name","family-name"]})},fontSize:({matchUtilities:t,theme:e})=>{t({text:(r,{modifier:i})=>{let[n,s]=Array.isArray(r)?r:[r];if(i)return{"font-size":n,"line-height":i};let{lineHeight:a,letterSpacing:o,fontWeight:l}=ve(s)?s:{lineHeight:s};return{"font-size":n,...a===void 0?{}:{"line-height":a},...o===void 0?{}:{"letter-spacing":o},...l===void 0?{}:{"font-weight":l}}}},{values:e("fontSize"),modifiers:e("lineHeight"),type:["absolute-size","relative-size","length","percentage"]})},fontWeight:L("fontWeight",[["font",["fontWeight"]]],{type:["lookup","number","any"]}),textTransform:({addUtilities:t})=>{t({".uppercase":{"text-transform":"uppercase"},".lowercase":{"text-transform":"lowercase"},".capitalize":{"text-transform":"capitalize"},".normal-case":{"text-transform":"none"}})},fontStyle:({addUtilities:t})=>{t({".italic":{"font-style":"italic"},".not-italic":{"font-style":"normal"}})},fontVariantNumeric:({addDefaults:t,addUtilities:e})=>{let r="var(--tw-ordinal) var(--tw-slashed-zero) var(--tw-numeric-figure) var(--tw-numeric-spacing) var(--tw-numeric-fraction)";t("font-variant-numeric",{"--tw-ordinal":" ","--tw-slashed-zero":" ","--tw-numeric-figure":" ","--tw-numeric-spacing":" ","--tw-numeric-fraction":" "}),e({".normal-nums":{"font-variant-numeric":"normal"},".ordinal":{"@defaults font-variant-numeric":{},"--tw-ordinal":"ordinal","font-variant-numeric":r},".slashed-zero":{"@defaults font-variant-numeric":{},"--tw-slashed-zero":"slashed-zero","font-variant-numeric":r},".lining-nums":{"@defaults font-variant-numeric":{},"--tw-numeric-figure":"lining-nums","font-variant-numeric":r},".oldstyle-nums":{"@defaults font-variant-numeric":{},"--tw-numeric-figure":"oldstyle-nums","font-variant-numeric":r},".proportional-nums":{"@defaults font-variant-numeric":{},"--tw-numeric-spacing":"proportional-nums","font-variant-numeric":r},".tabular-nums":{"@defaults font-variant-numeric":{},"--tw-numeric-spacing":"tabular-nums","font-variant-numeric":r},".diagonal-fractions":{"@defaults font-variant-numeric":{},"--tw-numeric-fraction":"diagonal-fractions","font-variant-numeric":r},".stacked-fractions":{"@defaults 
font-variant-numeric":{},"--tw-numeric-fraction":"stacked-fractions","font-variant-numeric":r}})},lineHeight:L("lineHeight",[["leading",["lineHeight"]]]),letterSpacing:L("letterSpacing",[["tracking",["letterSpacing"]]],{supportsNegativeValues:!0}),textColor:({matchUtilities:t,theme:e,corePlugins:r})=>{t({text:i=>r("textOpacity")?ke({color:i,property:"color",variable:"--tw-text-opacity"}):{color:W(i)}},{values:ye(e("textColor")),type:["color","any"]})},textOpacity:L("textOpacity",[["text-opacity",["--tw-text-opacity"]]]),textDecoration:({addUtilities:t})=>{t({".underline":{"text-decoration-line":"underline"},".overline":{"text-decoration-line":"overline"},".line-through":{"text-decoration-line":"line-through"},".no-underline":{"text-decoration-line":"none"}})},textDecorationColor:({matchUtilities:t,theme:e})=>{t({decoration:r=>({"text-decoration-color":W(r)})},{values:ye(e("textDecorationColor")),type:["color","any"]})},textDecorationStyle:({addUtilities:t})=>{t({".decoration-solid":{"text-decoration-style":"solid"},".decoration-double":{"text-decoration-style":"double"},".decoration-dotted":{"text-decoration-style":"dotted"},".decoration-dashed":{"text-decoration-style":"dashed"},".decoration-wavy":{"text-decoration-style":"wavy"}})},textDecorationThickness:L("textDecorationThickness",[["decoration",["text-decoration-thickness"]]],{type:["length","percentage"]}),textUnderlineOffset:L("textUnderlineOffset",[["underline-offset",["text-underline-offset"]]],{type:["length","percentage","any"]}),fontSmoothing:({addUtilities:t})=>{t({".antialiased":{"-webkit-font-smoothing":"antialiased","-moz-osx-font-smoothing":"grayscale"},".subpixel-antialiased":{"-webkit-font-smoothing":"auto","-moz-osx-font-smoothing":"auto"}})},placeholderColor:({matchUtilities:t,theme:e,corePlugins:r})=>{t({placeholder:i=>r("placeholderOpacity")?{"&::placeholder":ke({color:i,property:"color",variable:"--tw-placeholder-opacity"})}:{"&::placeholder":{color:W(i)}}},{values:ye(e("placeholderColor")),type:["color","any"]})},placeholderOpacity:({matchUtilities:t,theme:e})=>{t({"placeholder-opacity":r=>({["&::placeholder"]:{"--tw-placeholder-opacity":r}})},{values:e("placeholderOpacity")})},caretColor:({matchUtilities:t,theme:e})=>{t({caret:r=>({"caret-color":W(r)})},{values:ye(e("caretColor")),type:["color","any"]})},accentColor:({matchUtilities:t,theme:e})=>{t({accent:r=>({"accent-color":W(r)})},{values:ye(e("accentColor")),type:["color","any"]})},opacity:L("opacity",[["opacity",["opacity"]]]),backgroundBlendMode:({addUtilities:t})=>{t({".bg-blend-normal":{"background-blend-mode":"normal"},".bg-blend-multiply":{"background-blend-mode":"multiply"},".bg-blend-screen":{"background-blend-mode":"screen"},".bg-blend-overlay":{"background-blend-mode":"overlay"},".bg-blend-darken":{"background-blend-mode":"darken"},".bg-blend-lighten":{"background-blend-mode":"lighten"},".bg-blend-color-dodge":{"background-blend-mode":"color-dodge"},".bg-blend-color-burn":{"background-blend-mode":"color-burn"},".bg-blend-hard-light":{"background-blend-mode":"hard-light"},".bg-blend-soft-light":{"background-blend-mode":"soft-light"},".bg-blend-difference":{"background-blend-mode":"difference"},".bg-blend-exclusion":{"background-blend-mode":"exclusion"},".bg-blend-hue":{"background-blend-mode":"hue"},".bg-blend-saturation":{"background-blend-mode":"saturation"},".bg-blend-color":{"background-blend-mode":"color"},".bg-blend-luminosity":{"background-blend-mode":"luminosity"}})},mixBlendMode:({addUtilities:t})=>{t({".mix-blend-normal":{"mix-blend-mode":"no
rmal"},".mix-blend-multiply":{"mix-blend-mode":"multiply"},".mix-blend-screen":{"mix-blend-mode":"screen"},".mix-blend-overlay":{"mix-blend-mode":"overlay"},".mix-blend-darken":{"mix-blend-mode":"darken"},".mix-blend-lighten":{"mix-blend-mode":"lighten"},".mix-blend-color-dodge":{"mix-blend-mode":"color-dodge"},".mix-blend-color-burn":{"mix-blend-mode":"color-burn"},".mix-blend-hard-light":{"mix-blend-mode":"hard-light"},".mix-blend-soft-light":{"mix-blend-mode":"soft-light"},".mix-blend-difference":{"mix-blend-mode":"difference"},".mix-blend-exclusion":{"mix-blend-mode":"exclusion"},".mix-blend-hue":{"mix-blend-mode":"hue"},".mix-blend-saturation":{"mix-blend-mode":"saturation"},".mix-blend-color":{"mix-blend-mode":"color"},".mix-blend-luminosity":{"mix-blend-mode":"luminosity"},".mix-blend-plus-lighter":{"mix-blend-mode":"plus-lighter"}})},boxShadow:(()=>{let t=dt("boxShadow"),e=["var(--tw-ring-offset-shadow, 0 0 #0000)","var(--tw-ring-shadow, 0 0 #0000)","var(--tw-shadow)"].join(", ");return function({matchUtilities:r,addDefaults:i,theme:n}){i(" box-shadow",{"--tw-ring-offset-shadow":"0 0 #0000","--tw-ring-shadow":"0 0 #0000","--tw-shadow":"0 0 #0000","--tw-shadow-colored":"0 0 #0000"}),r({shadow:s=>{s=t(s);let a=Pn(s);for(let o of a)!o.valid||(o.color="var(--tw-shadow-color)");return{"@defaults box-shadow":{},"--tw-shadow":s==="none"?"0 0 #0000":s,"--tw-shadow-colored":s==="none"?"0 0 #0000":ap(a),"box-shadow":e}}},{values:n("boxShadow"),type:["shadow"]})}})(),boxShadowColor:({matchUtilities:t,theme:e})=>{t({shadow:r=>({"--tw-shadow-color":W(r),"--tw-shadow":"var(--tw-shadow-colored)"})},{values:ye(e("boxShadowColor")),type:["color","any"]})},outlineStyle:({addUtilities:t})=>{t({".outline-none":{outline:"2px solid transparent","outline-offset":"2px"},".outline":{"outline-style":"solid"},".outline-dashed":{"outline-style":"dashed"},".outline-dotted":{"outline-style":"dotted"},".outline-double":{"outline-style":"double"}})},outlineWidth:L("outlineWidth",[["outline",["outline-width"]]],{type:["length","number","percentage"]}),outlineOffset:L("outlineOffset",[["outline-offset",["outline-offset"]]],{type:["length","number","percentage","any"],supportsNegativeValues:!0}),outlineColor:({matchUtilities:t,theme:e})=>{t({outline:r=>({"outline-color":W(r)})},{values:ye(e("outlineColor")),type:["color","any"]})},ringWidth:({matchUtilities:t,addDefaults:e,addUtilities:r,theme:i,config:n})=>{let s=(()=>{if(de(n(),"respectDefaultRingColorOpacity"))return i("ringColor.DEFAULT");let a=i("ringOpacity.DEFAULT","0.5");return i("ringColor")?.DEFAULT?Ke(i("ringColor")?.DEFAULT,a,`rgb(147 197 253 / ${a})`):`rgb(147 197 253 / ${a})`})();e("ring-width",{"--tw-ring-inset":" ","--tw-ring-offset-width":i("ringOffsetWidth.DEFAULT","0px"),"--tw-ring-offset-color":i("ringOffsetColor.DEFAULT","#fff"),"--tw-ring-color":s,"--tw-ring-offset-shadow":"0 0 #0000","--tw-ring-shadow":"0 0 #0000","--tw-shadow":"0 0 #0000","--tw-shadow-colored":"0 0 #0000"}),t({ring:a=>({"@defaults ring-width":{},"--tw-ring-offset-shadow":"var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color)","--tw-ring-shadow":`var(--tw-ring-inset) 0 0 0 calc(${a} + var(--tw-ring-offset-width)) var(--tw-ring-color)`,"box-shadow":["var(--tw-ring-offset-shadow)","var(--tw-ring-shadow)","var(--tw-shadow, 0 0 #0000)"].join(", ")})},{values:i("ringWidth"),type:"length"}),r({".ring-inset":{"@defaults 
ring-width":{},"--tw-ring-inset":"inset"}})},ringColor:({matchUtilities:t,theme:e,corePlugins:r})=>{t({ring:i=>r("ringOpacity")?ke({color:i,property:"--tw-ring-color",variable:"--tw-ring-opacity"}):{"--tw-ring-color":W(i)}},{values:Object.fromEntries(Object.entries(ye(e("ringColor"))).filter(([i])=>i!=="DEFAULT")),type:["color","any"]})},ringOpacity:t=>{let{config:e}=t;return L("ringOpacity",[["ring-opacity",["--tw-ring-opacity"]]],{filterDefault:!de(e(),"respectDefaultRingColorOpacity")})(t)},ringOffsetWidth:L("ringOffsetWidth",[["ring-offset",["--tw-ring-offset-width"]]],{type:"length"}),ringOffsetColor:({matchUtilities:t,theme:e})=>{t({"ring-offset":r=>({"--tw-ring-offset-color":W(r)})},{values:ye(e("ringOffsetColor")),type:["color","any"]})},blur:({matchUtilities:t,theme:e})=>{t({blur:r=>({"--tw-blur":`blur(${r})`,"@defaults filter":{},filter:rt})},{values:e("blur")})},brightness:({matchUtilities:t,theme:e})=>{t({brightness:r=>({"--tw-brightness":`brightness(${r})`,"@defaults filter":{},filter:rt})},{values:e("brightness")})},contrast:({matchUtilities:t,theme:e})=>{t({contrast:r=>({"--tw-contrast":`contrast(${r})`,"@defaults filter":{},filter:rt})},{values:e("contrast")})},dropShadow:({matchUtilities:t,theme:e})=>{t({"drop-shadow":r=>({"--tw-drop-shadow":Array.isArray(r)?r.map(i=>`drop-shadow(${i})`).join(" "):`drop-shadow(${r})`,"@defaults filter":{},filter:rt})},{values:e("dropShadow")})},grayscale:({matchUtilities:t,theme:e})=>{t({grayscale:r=>({"--tw-grayscale":`grayscale(${r})`,"@defaults filter":{},filter:rt})},{values:e("grayscale")})},hueRotate:({matchUtilities:t,theme:e})=>{t({"hue-rotate":r=>({"--tw-hue-rotate":`hue-rotate(${r})`,"@defaults filter":{},filter:rt})},{values:e("hueRotate"),supportsNegativeValues:!0})},invert:({matchUtilities:t,theme:e})=>{t({invert:r=>({"--tw-invert":`invert(${r})`,"@defaults filter":{},filter:rt})},{values:e("invert")})},saturate:({matchUtilities:t,theme:e})=>{t({saturate:r=>({"--tw-saturate":`saturate(${r})`,"@defaults filter":{},filter:rt})},{values:e("saturate")})},sepia:({matchUtilities:t,theme:e})=>{t({sepia:r=>({"--tw-sepia":`sepia(${r})`,"@defaults filter":{},filter:rt})},{values:e("sepia")})},filter:({addDefaults:t,addUtilities:e})=>{t("filter",{"--tw-blur":" ","--tw-brightness":" ","--tw-contrast":" ","--tw-grayscale":" ","--tw-hue-rotate":" ","--tw-invert":" ","--tw-saturate":" ","--tw-sepia":" ","--tw-drop-shadow":" "}),e({".filter":{"@defaults filter":{},filter:rt},".filter-none":{filter:"none"}})},backdropBlur:({matchUtilities:t,theme:e})=>{t({"backdrop-blur":r=>({"--tw-backdrop-blur":`blur(${r})`,"@defaults backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropBlur")})},backdropBrightness:({matchUtilities:t,theme:e})=>{t({"backdrop-brightness":r=>({"--tw-backdrop-brightness":`brightness(${r})`,"@defaults backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropBrightness")})},backdropContrast:({matchUtilities:t,theme:e})=>{t({"backdrop-contrast":r=>({"--tw-backdrop-contrast":`contrast(${r})`,"@defaults backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropContrast")})},backdropGrayscale:({matchUtilities:t,theme:e})=>{t({"backdrop-grayscale":r=>({"--tw-backdrop-grayscale":`grayscale(${r})`,"@defaults backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropGrayscale")})},backdropHueRotate:({matchUtilities:t,theme:e})=>{t({"backdrop-hue-rotate":r=>({"--tw-backdrop-hue-rotate":`hue-rotate(${r})`,"@defaults 
backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropHueRotate"),supportsNegativeValues:!0})},backdropInvert:({matchUtilities:t,theme:e})=>{t({"backdrop-invert":r=>({"--tw-backdrop-invert":`invert(${r})`,"@defaults backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropInvert")})},backdropOpacity:({matchUtilities:t,theme:e})=>{t({"backdrop-opacity":r=>({"--tw-backdrop-opacity":`opacity(${r})`,"@defaults backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropOpacity")})},backdropSaturate:({matchUtilities:t,theme:e})=>{t({"backdrop-saturate":r=>({"--tw-backdrop-saturate":`saturate(${r})`,"@defaults backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropSaturate")})},backdropSepia:({matchUtilities:t,theme:e})=>{t({"backdrop-sepia":r=>({"--tw-backdrop-sepia":`sepia(${r})`,"@defaults backdrop-filter":{},"backdrop-filter":it})},{values:e("backdropSepia")})},backdropFilter:({addDefaults:t,addUtilities:e})=>{t("backdrop-filter",{"--tw-backdrop-blur":" ","--tw-backdrop-brightness":" ","--tw-backdrop-contrast":" ","--tw-backdrop-grayscale":" ","--tw-backdrop-hue-rotate":" ","--tw-backdrop-invert":" ","--tw-backdrop-opacity":" ","--tw-backdrop-saturate":" ","--tw-backdrop-sepia":" "}),e({".backdrop-filter":{"@defaults backdrop-filter":{},"backdrop-filter":it},".backdrop-filter-none":{"backdrop-filter":"none"}})},transitionProperty:({matchUtilities:t,theme:e})=>{let r=e("transitionTimingFunction.DEFAULT"),i=e("transitionDuration.DEFAULT");t({transition:n=>({"transition-property":n,...n==="none"?{}:{"transition-timing-function":r,"transition-duration":i}})},{values:e("transitionProperty")})},transitionDelay:L("transitionDelay",[["delay",["transitionDelay"]]]),transitionDuration:L("transitionDuration",[["duration",["transitionDuration"]]],{filterDefault:!0}),transitionTimingFunction:L("transitionTimingFunction",[["ease",["transitionTimingFunction"]]],{filterDefault:!0}),willChange:L("willChange",[["will-change",["will-change"]]]),content:L("content",[["content",["--tw-content",["content","var(--tw-content)"]]]])}});function LE(t){if(t===void 0)return!1;if(t==="true"||t==="1")return!0;if(t==="false"||t==="0")return!1;if(t==="*")return!0;let e=t.split(",").map(r=>r.split(":")[0]);return e.includes("-tailwindcss")?!1:!!e.includes("tailwindcss")}var Qe,Vm,Wm,Es,Tl,ht,Bi,Rt=E(()=>{u();kl();Qe=typeof g!="undefined"?{NODE_ENV:"production",DEBUG:LE(g.env.DEBUG),ENGINE:_l.tailwindcss.engine}:{NODE_ENV:"production",DEBUG:!1,ENGINE:_l.tailwindcss.engine},Vm=new Map,Wm=new Map,Es=new Map,Tl=new Map,ht=new String("*"),Bi=Symbol("__NONE__")});function mr(t){let e=[],r=!1;for(let i=0;i0)}var Gm,Hm,ME,Ol=E(()=>{u();Gm=new Map([["{","}"],["[","]"],["(",")"]]),Hm=new Map(Array.from(Gm.entries()).map(([t,e])=>[e,t])),ME=new Set(['"',"'","`"])});function gr(t){let[e]=Ym(t);return e.forEach(([r,i])=>r.removeChild(i)),t.nodes.push(...e.map(([,r])=>r)),t}function Ym(t){let e=[],r=null;for(let i of t.nodes)if(i.type==="combinator")e=e.filter(([,n])=>Al(n).includes("jumpable")),r=null;else if(i.type==="pseudo"){BE(i)?(r=i,e.push([t,i,null])):r&&FE(i,r)?e.push([t,i,r]):r=null;for(let n of i.nodes??[]){let[s,a]=Ym(n);r=a||r,e.push(...s)}}return[e,r]}function Qm(t){return t.value.startsWith("::")||El[t.value]!==void 0}function BE(t){return Qm(t)&&Al(t).includes("terminal")}function FE(t,e){return t.type!=="pseudo"||Qm(t)?!1:Al(e).includes("actionable")}function Al(t){return El[t.value]??El.__default__}var 
El,As=E(()=>{u();El={"::after":["terminal","jumpable"],"::backdrop":["terminal"],"::before":["terminal","jumpable"],"::cue":["terminal"],"::cue-region":["terminal"],"::first-letter":["terminal","jumpable"],"::first-line":["terminal","jumpable"],"::grammar-error":["terminal"],"::marker":["terminal"],"::part":["terminal","actionable"],"::placeholder":["terminal"],"::selection":["terminal"],"::slotted":["terminal"],"::spelling-error":["terminal"],"::target-text":["terminal"],"::file-selector-button":["terminal","actionable"],"::-webkit-progress-bar":["terminal","actionable"],"::-webkit-scrollbar":["terminal","actionable"],"::-webkit-scrollbar-button":["terminal","actionable"],"::-webkit-scrollbar-thumb":["terminal","actionable"],"::-webkit-scrollbar-track":["terminal","actionable"],"::-webkit-scrollbar-track-piece":["terminal","actionable"],"::-webkit-scrollbar-corner":["terminal","actionable"],"::-webkit-resizer":["terminal","actionable"],":after":["terminal","jumpable"],":before":["terminal","jumpable"],":first-letter":["terminal","jumpable"],":first-line":["terminal","jumpable"],__default__:["actionable"]}});function wr(t,{context:e,candidate:r}){let i=e?.tailwindConfig.prefix??"",n=t.map(a=>{let o=(0,nt.default)().astSync(a.format);return{...a,ast:a.isArbitraryVariant?o:dr(i,o)}}),s=nt.default.root({nodes:[nt.default.selector({nodes:[nt.default.className({value:Ee(r)})]})]});for(let{ast:a}of n)[s,a]=zE(s,a),a.walkNesting(o=>o.replaceWith(...s.nodes[0].nodes)),s=a;return s}function Xm(t){let e=[];for(;t.prev()&&t.prev().type!=="combinator";)t=t.prev();for(;t&&t.type!=="combinator";)e.push(t),t=t.next();return e}function NE(t){return t.sort((e,r)=>e.type==="tag"&&r.type==="class"?-1:e.type==="class"&&r.type==="tag"?1:e.type==="class"&&r.type==="pseudo"&&r.value.startsWith("::")?-1:e.type==="pseudo"&&e.value.startsWith("::")&&r.type==="class"?1:t.index(e)-t.index(r)),t}function Pl(t,e){let r=!1;t.walk(i=>{if(i.type==="class"&&i.value===e)return r=!0,!1}),r||t.remove()}function Cs(t,e,{context:r,candidate:i,base:n}){let s=r?.tailwindConfig?.separator??":";n=n??i.split(new RegExp(`\\${s}(?![^[]*\\])`)).pop();let a=(0,nt.default)().astSync(t);a.walkClasses(c=>{c.raws&&c.value.includes(n)&&(c.raws.value=Ee((0,Jm.default)(c.raws.value)))}),a.each(c=>Pl(c,n));let o=Array.isArray(e)?wr(e,{context:r,candidate:i}):e;if(o===null)return a.toString();let l=nt.default.comment({value:"/*__simple__*/"}),f=nt.default.comment({value:"/*__simple__*/"});return a.walkClasses(c=>{if(c.value!==n)return;let p=c.parent,m=o.nodes[0].nodes;if(p.nodes.length===1){c.replaceWith(...m);return}let d=Xm(c);p.insertBefore(d[0],l),p.insertAfter(d[d.length-1],f);for(let _ of m)p.insertBefore(d[0],_.clone());c.remove(),d=Xm(l);let v=p.index(l);p.nodes.splice(v,d.length,...NE(nt.default.selector({nodes:d})).nodes),l.remove(),f.remove()}),a.walkPseudos(c=>{c.value===Cl&&c.replaceWith(c.nodes)}),a.each(c=>gr(c)),a.toString()}function zE(t,e){let r=[];return t.walkPseudos(i=>{i.value===Cl&&r.push({pseudo:i,value:i.nodes[0].toString()})}),e.walkPseudos(i=>{if(i.value!==Cl)return;let n=i.nodes[0].toString(),s=r.find(f=>f.value===n);if(!s)return;let a=[],o=i.next();for(;o&&o.type!=="combinator";)a.push(o),o=o.next();let l=o;s.pseudo.parent.insertAfter(s.pseudo,nt.default.selector({nodes:a.map(f=>f.clone())})),i.remove(),a.forEach(f=>f.remove()),l&&l.type==="combinator"&&l.remove()}),[t,e]}var nt,Jm,Cl,ql=E(()=>{u();nt=he(tt()),Jm=he(os());hr();bs();As();Cl=":merge"});function Ps(t,e){let r=(0,Dl.default)().astSync(t);return 
r.each(i=>{i.nodes[0].type==="pseudo"&&i.nodes[0].value===":is"&&i.nodes.every(s=>s.type!=="combinator")||(i.nodes=[Dl.default.pseudo({value:":is",nodes:[i.clone()]})]),gr(i)}),`${e} ${r.toString()}`}var Dl,Il=E(()=>{u();Dl=he(tt());As()});function Rl(t){return $E.transformSync(t)}function*jE(t){let e=1/0;for(;e>=0;){let r,i=!1;if(e===1/0&&t.endsWith("]")){let a=t.indexOf("[");t[a-1]==="-"?r=a-1:t[a-1]==="/"?(r=a-1,i=!0):r=-1}else e===1/0&&t.includes("/")?(r=t.lastIndexOf("/"),i=!0):r=t.lastIndexOf("-",e);if(r<0)break;let n=t.slice(0,r),s=t.slice(i?r:r+1);e=r-1,!(n===""||s==="/")&&(yield[n,s])}}function UE(t,e){if(t.length===0||e.tailwindConfig.prefix==="")return t;for(let r of t){let[i]=r;if(i.options.respectPrefix){let n=Q.root({nodes:[r[1].clone()]}),s=r[1].raws.tailwind.classCandidate;n.walkRules(a=>{let o=s.startsWith("-");a.selector=dr(e.tailwindConfig.prefix,a.selector,o)}),r[1]=n.nodes[0]}}return t}function VE(t,e){if(t.length===0)return t;let r=[];for(let[i,n]of t){let s=Q.root({nodes:[n.clone()]});s.walkRules(a=>{let o=(0,qs.default)().astSync(a.selector);o.each(l=>Pl(l,e)),vp(o,l=>l===e?`!${l}`:l),a.selector=o.toString(),a.walkDecls(l=>l.important=!0)}),r.push([{...i,important:!0},s.nodes[0]])}return r}function WE(t,e,r){if(e.length===0)return e;let i={modifier:null,value:Bi};{let[n,...s]=Se(t,"/");if(s.length>1&&(n=n+"/"+s.slice(0,-1).join("/"),s=s.slice(-1)),s.length&&!r.variantMap.has(t)&&(t=n,i.modifier=s[0],!de(r.tailwindConfig,"generalizedModifiers")))return[]}if(t.endsWith("]")&&!t.startsWith("[")){let n=/(.)(-?)\[(.*)\]/g.exec(t);if(n){let[,s,a,o]=n;if(s==="@"&&a==="-")return[];if(s!=="@"&&a==="")return[];t=t.replace(`${a}[${o}]`,""),i.value=o}}if(Ml(t)&&!r.variantMap.has(t)){let n=r.offsets.recordVariant(t),s=K(t.slice(1,-1)),a=Se(s,",");if(a.length>1)return[];if(!a.every(Ms))return[];let o=a.map((l,f)=>[r.offsets.applyParallelOffset(n,f),Fi(l.trim())]);r.variantMap.set(t,o)}if(r.variantMap.has(t)){let n=Ml(t),s=r.variantMap.get(t).slice(),a=[];for(let[o,l]of e){if(o.layer==="user")continue;let f=Q.root({nodes:[l.clone()]});for(let[c,p,m]of s){let _=function(){d.raws.neededBackup||(d.raws.neededBackup=!0,d.walkRules(T=>T.raws.originalSelector=T.selector))},x=function(T){return _(),d.each(O=>{O.type==="rule"&&(O.selectors=O.selectors.map(P=>T({get className(){return Rl(P)},selector:P})))}),d},d=(m??f).clone(),v=[],y=p({get container(){return _(),d},separator:r.tailwindConfig.separator,modifySelectors:x,wrap(T){let O=d.nodes;d.removeAll(),T.append(O),d.append(T)},format(T){v.push({format:T,isArbitraryVariant:n})},args:i});if(Array.isArray(y)){for(let[T,O]of y.entries())s.push([r.offsets.applyParallelOffset(c,T),O,d.clone()]);continue}if(typeof y=="string"&&v.push({format:y,isArbitraryVariant:n}),y===null)continue;d.raws.neededBackup&&(delete d.raws.neededBackup,d.walkRules(T=>{let O=T.raws.originalSelector;if(!O||(delete T.raws.originalSelector,O===T.selector))return;let P=T.selector,N=(0,qs.default)(z=>{z.walkClasses(F=>{F.value=`${t}${r.tailwindConfig.separator}${F.value}`})}).processSync(O);v.push({format:P.replace(N,"&"),isArbitraryVariant:n}),T.selector=O})),d.nodes[0].raws.tailwind={...d.nodes[0].raws.tailwind,parentLayer:o.layer};let S=[{...o,sort:r.offsets.applyVariantOffset(o.sort,c,Object.assign(i,r.variantOptions.get(t))),collectedFormats:(o.collectedFormats??[]).concat(v)},d.nodes[0]];a.push(S)}}return a}return[]}function Ll(t,e,r={}){return!ve(t)&&!Array.isArray(t)?[[t],r]:Array.isArray(t)?Ll(t[0],e,t[1]):(e.has(t)||e.set(t,pr(t)),[e.get(t),r])}function 
HE(t){return GE.test(t)}function YE(t){if(!t.includes("://"))return!1;try{let e=new URL(t);return e.scheme!==""&&e.host!==""}catch(e){return!1}}function Km(t){let e=!0;return t.walkDecls(r=>{if(!Zm(r.prop,r.value))return e=!1,!1}),e}function Zm(t,e){if(YE(`${t}:${e}`))return!1;try{return Q.parse(`a{${t}:${e}}`).toResult(),!0}catch(r){return!1}}function QE(t,e){let[,r,i]=t.match(/^\[([a-zA-Z0-9-_]+):(\S+)\]$/)??[];if(i===void 0||!HE(r)||!mr(i))return null;let n=K(i);return Zm(r,n)?[[{sort:e.offsets.arbitraryProperty(),layer:"utilities"},()=>({[vl(t)]:{[r]:n}})]]:null}function*JE(t,e){e.candidateRuleMap.has(t)&&(yield[e.candidateRuleMap.get(t),"DEFAULT"]),yield*function*(o){o!==null&&(yield[o,"DEFAULT"])}(QE(t,e));let r=t,i=!1,n=e.tailwindConfig.prefix,s=n.length,a=r.startsWith(n)||r.startsWith(`-${n}`);r[s]==="-"&&a&&(i=!0,r=n+r.slice(s+1)),i&&e.candidateRuleMap.has(r)&&(yield[e.candidateRuleMap.get(r),"-DEFAULT"]);for(let[o,l]of jE(r))e.candidateRuleMap.has(o)&&(yield[e.candidateRuleMap.get(o),i?`-${l}`:l])}function XE(t,e){return t===ht?[ht]:Se(t,e)}function*KE(t,e){for(let r of t)r[1].raws.tailwind={...r[1].raws.tailwind,classCandidate:e,preserveSource:r[0].options?.preserveSource??!1},yield r}function*Ds(t,e,r=t){let i=e.tailwindConfig.separator,[n,...s]=XE(t,i).reverse(),a=!1;if(n.startsWith("!")&&(a=!0,n=n.slice(1)),de(e.tailwindConfig,"variantGrouping")&&n.startsWith("(")&&n.endsWith(")")){let o=s.slice().reverse().join(i);for(let l of Se(n.slice(1,-1),","))yield*Ds(o+i+l,e,r)}for(let o of JE(n,e)){let l=[],f=new Map,[c,p]=o,m=c.length===1;for(let[d,v]of c){let _=[];if(typeof v=="function")for(let x of[].concat(v(p,{isOnlyPlugin:m}))){let[y,S]=Ll(x,e.postCssNodeCache);for(let T of y)_.push([{...d,options:{...d.options,...S}},T])}else if(p==="DEFAULT"||p==="-DEFAULT"){let x=v,[y,S]=Ll(x,e.postCssNodeCache);for(let T of y)_.push([{...d,options:{...d.options,...S}},T])}if(_.length>0){let x=Array.from(Fa(d.options?.types??[],p,d.options??{},e.tailwindConfig)).map(([y,S])=>S);x.length>0&&f.set(_,x),l.push(_)}}if(Ml(p)){if(l.length>1){let _=function(y){return y.length===1?y[0]:y.find(S=>{let T=f.get(S);return S.some(([{options:O},P])=>Km(P)?O.types.some(({type:N,preferOnConflict:z})=>T.includes(N)&&z):!1)})},[d,v]=l.reduce((y,S)=>(S.some(([{options:O}])=>O.types.some(({type:P})=>P==="any"))?y[0].push(S):y[1].push(S),y),[[],[]]),x=_(v)??_(d);if(x)l=[x];else{let y=l.map(T=>new Set([...f.get(T)??[]]));for(let T of y)for(let O of T){let P=!1;for(let N of y)T!==N&&N.has(O)&&(N.delete(O),P=!0);P&&T.delete(O)}let S=[];for(let[T,O]of y.entries())for(let P of O){let N=l[T].map(([,z])=>z).flat().map(z=>z.toString().split(` -`).slice(1,-1).map(F=>F.trim()).map(F=>` ${F}`).join(` -`)).join(` - -`);S.push(` Use \`${t.replace("[",`[${P}:`)}\` for \`${N.trim()}\``);break}V.warn([`The class \`${t}\` is ambiguous and matches multiple utilities.`,...S,`If this is content and not a class, replace it with \`${t.replace("[","[").replace("]","]")}\` to silence this warning.`]);continue}}l=l.map(d=>d.filter(v=>Km(v[1])))}l=l.flat(),l=Array.from(KE(l,n)),l=UE(l,e),a&&(l=VE(l,n));for(let d of s)l=WE(d,l,e);for(let d of l)d[1].raws.tailwind={...d[1].raws.tailwind,candidate:t},d=ZE(d,{context:e,candidate:t,original:r}),d!==null&&(yield d)}}function ZE(t,{context:e,candidate:r,original:i}){if(!t[0].collectedFormats)return t;let n=!0,s;try{s=wr(t[0].collectedFormats,{context:e,candidate:r})}catch{return null}let a=Q.root({nodes:[t[1].clone()]});return 
a.walkRules(o=>{if(!Is(o))try{o.selector=Cs(o.selector,s,{candidate:i,context:e})}catch{return n=!1,!1}}),n?(t[1]=a.nodes[0],t):null}function Is(t){return t.parent&&t.parent.type==="atrule"&&t.parent.name==="keyframes"}function eA(t){if(t===!0)return e=>{Is(e)||e.walkDecls(r=>{r.parent.type==="rule"&&!Is(r.parent)&&(r.important=!0)})};if(typeof t=="string")return e=>{Is(e)||(e.selectors=e.selectors.map(r=>Ps(r,t)))}}function Rs(t,e){let r=[],i=eA(e.tailwindConfig.important);for(let n of t){if(e.notClassCache.has(n))continue;if(e.candidateRuleCache.has(n)){r=r.concat(Array.from(e.candidateRuleCache.get(n)));continue}let s=Array.from(Ds(n,e));if(s.length===0){e.notClassCache.add(n);continue}e.classCache.set(n,s);let a=e.candidateRuleCache.get(n)??new Set;e.candidateRuleCache.set(n,a);for(let o of s){let[{sort:l,options:f},c]=o;if(f.respectImportant&&i){let m=Q.root({nodes:[c.clone()]});m.walkRules(i),c=m.nodes[0]}let p=[l,c];a.add(p),e.ruleCache.add(p),r.push(p)}}return r}function Ml(t){return t.startsWith("[")&&t.endsWith("]")}var qs,$E,GE,Ls=E(()=>{u();qt();qs=he(tt());yl();er();bs();Kr();Ge();Rt();ql();bl();Xr();Bs();Ol();Yr();Xe();Il();$E=(0,qs.default)(t=>t.first.filter(({type:e})=>e==="class").pop().value);GE=/^[a-z_-]/});var eg,tg=E(()=>{u();eg={}});function tA(t){try{return eg.createHash("md5").update(t,"utf-8").digest("binary")}catch(e){return""}}function rg(t,e){let r=e.toString();if(!r.includes("@tailwind"))return!1;let i=Tl.get(t),n=tA(r),s=i!==n;return Tl.set(t,n),s}var ig=E(()=>{u();tg();Rt()});function Fs(t){return(t>0n)-(t<0n)}var ng=E(()=>{u()});function sg(t,e){let r=0n,i=0n;for(let[n,s]of e)t&n&&(r=r|n,i=i|s);return t&~r|i}var ag=E(()=>{u()});function og(t){let e=null;for(let r of t)e=e??r,e=e>r?e:r;return e}function rA(t,e){let r=t.length,i=e.length,n=r{u();ng();ag();Bl=class{constructor(){this.offsets={defaults:0n,base:0n,components:0n,utilities:0n,variants:0n,user:0n},this.layerPositions={defaults:0n,base:1n,components:2n,utilities:3n,user:4n,variants:5n},this.reservedVariantBits=0n,this.variantOffsets=new Map}create(e){return{layer:e,parentLayer:e,arbitrary:0n,variants:0n,parallelIndex:0n,index:this.offsets[e]++,options:[]}}arbitraryProperty(){return{...this.create("utilities"),arbitrary:1n}}forVariant(e,r=0){let i=this.variantOffsets.get(e);if(i===void 0)throw new Error(`Cannot find offset for unknown variant ${e}`);return{...this.create("variants"),variants:i<n.startsWith("[")).sort(([n],[s])=>rA(n,s)),r=e.map(([,n])=>n).sort((n,s)=>Fs(n-s));return e.map(([,n],s)=>[n,r[s]]).filter(([n,s])=>n!==s)}remapArbitraryVariantOffsets(e){let r=this.recalculateVariantOffsets();return r.length===0?e:e.map(i=>{let[n,s]=i;return n={...n,variants:sg(n.variants,r)},[n,s]})}sort(e){return e=this.remapArbitraryVariantOffsets(e),e.sort(([r],[i])=>Fs(this.compare(r,i)))}}});function $l(t,e){let r=t.tailwindConfig.prefix;return typeof r=="function"?r(e):r+e}function fg({type:t="any",...e}){let r=[].concat(t);return{...e,types:r.map(i=>Array.isArray(i)?{type:i[0],...i[1]}:{type:i,preferOnConflict:!1})}}function iA(t){let e=[],r="",i=0;for(let n=0;n0&&e.push(r.trim()),e=e.filter(n=>n!==""),e}function nA(t,e,{before:r=[]}={}){if(r=[].concat(r),r.length<=0){t.push(e);return}let i=t.length-1;for(let n of r){let s=t.indexOf(n);s!==-1&&(i=Math.min(i,s))}t.splice(i,0,e)}function cg(t){return Array.isArray(t)?t.flatMap(e=>!Array.isArray(e)&&!ve(e)?e:pr(e)):cg([t])}function pg(t,e){return(0,Fl.default)(i=>{let n=[];return e&&e(i),i.walkClasses(s=>{n.push(s.value)}),n}).transformSync(t)}function 
sA(t,e={containsNonOnDemandable:!1},r=0){let i=[];if(t.type==="rule"){let n=function(s){s.walkPseudos(a=>{a.value===":not"&&a.remove()})};for(let s of t.selectors){let a=pg(s,n);a.length===0&&(e.containsNonOnDemandable=!0);for(let o of a)i.push(o)}}else t.type==="atrule"&&t.walkRules(n=>{for(let s of n.selectors.flatMap(a=>pg(a)))i.push(s)});return r===0?[e.containsNonOnDemandable||i.length===0,i]:i}function Ns(t){return cg(t).flatMap(e=>{let r=new Map,[i,n]=sA(e);return i&&n.unshift(ht),n.map(s=>(r.has(e)||r.set(e,e),[s,r.get(e)]))})}function Ms(t){return t.startsWith("@")||t.includes("&")}function Fi(t){t=t.replace(/\n+/g,"").replace(/\s{1,}/g," ").trim();let e=iA(t).map(r=>{if(!r.startsWith("@"))return({format:s})=>s(r);let[,i,n]=/@(.*?)( .+|[({].*)/g.exec(r);return({wrap:s})=>s(Q.atRule({name:i,params:n.trim()}))}).reverse();return r=>{for(let i of e)i(r)}}function aA(t,e,{variantList:r,variantMap:i,offsets:n,classList:s}){function a(m,d){return m?(0,ug.default)(t,m,d):t}function o(m){return dr(t.prefix,m)}function l(m,d){return m===ht?ht:d.respectPrefix?e.tailwindConfig.prefix+m:m}function f(m,d,v={}){let _=Tt(m),x=a(["theme",..._],d);return dt(_[0])(x,v)}let c=0,p={postcss:Q,prefix:o,e:Ee,config:a,theme:f,corePlugins:m=>Array.isArray(t.corePlugins)?t.corePlugins.includes(m):a(["corePlugins",m],!0),variants:()=>[],addBase(m){for(let[d,v]of Ns(m)){let _=l(d,{}),x=n.create("base");e.candidateRuleMap.has(_)||e.candidateRuleMap.set(_,[]),e.candidateRuleMap.get(_).push([{sort:x,layer:"base"},v])}},addDefaults(m,d){let v={[`@defaults ${m}`]:d};for(let[_,x]of Ns(v)){let y=l(_,{});e.candidateRuleMap.has(y)||e.candidateRuleMap.set(y,[]),e.candidateRuleMap.get(y).push([{sort:n.create("defaults"),layer:"defaults"},x])}},addComponents(m,d){d=Object.assign({},{preserveSource:!1,respectPrefix:!0,respectImportant:!1},Array.isArray(d)?{}:d);for(let[_,x]of Ns(m)){let y=l(_,d);s.add(y),e.candidateRuleMap.has(y)||e.candidateRuleMap.set(y,[]),e.candidateRuleMap.get(y).push([{sort:n.create("components"),layer:"components",options:d},x])}},addUtilities(m,d){d=Object.assign({},{preserveSource:!1,respectPrefix:!0,respectImportant:!0},Array.isArray(d)?{}:d);for(let[_,x]of Ns(m)){let y=l(_,d);s.add(y),e.candidateRuleMap.has(y)||e.candidateRuleMap.set(y,[]),e.candidateRuleMap.get(y).push([{sort:n.create("utilities"),layer:"utilities",options:d},x])}},matchUtilities:function(m,d){d=fg({...{respectPrefix:!0,respectImportant:!0,modifiers:!1},...d});let _=n.create("utilities");for(let x in m){let T=function(P,{isOnlyPlugin:N}){let[z,F,fe]=Ba(d.types,P,d,t);if(z===void 0)return[];if(!d.types.some(({type:pe})=>pe===F))if(N)V.warn([`Unnecessary typehint \`${F}\` in \`${x}-${P}\`.`,`You can safely update it to \`${x}-${P.replace(F+":","")}\`.`]);else return[];if(!mr(z))return[];let Te={get modifier(){return d.modifiers||V.warn(`modifier-used-without-options-for-${x}`,["Your plugin must set `modifiers: true` in its options to support modifiers."]),fe}},se=de(t,"generalizedModifiers");return[].concat(se?S(z,Te):S(z)).filter(Boolean).map(pe=>({[xs(x,P)]:pe}))},y=l(x,d),S=m[x];s.add([y,d]);let O=[{sort:_,layer:"utilities",options:d},T];e.candidateRuleMap.has(y)||e.candidateRuleMap.set(y,[]),e.candidateRuleMap.get(y).push(O)}},matchComponents:function(m,d){d=fg({...{respectPrefix:!0,respectImportant:!1,modifiers:!1},...d});let _=n.create("components");for(let x in m){let T=function(P,{isOnlyPlugin:N}){let[z,F,fe]=Ba(d.types,P,d,t);if(z===void 0)return[];if(!d.types.some(({type:pe})=>pe===F))if(N)V.warn([`Unnecessary 
typehint \`${F}\` in \`${x}-${P}\`.`,`You can safely update it to \`${x}-${P.replace(F+":","")}\`.`]);else return[];if(!mr(z))return[];let Te={get modifier(){return d.modifiers||V.warn(`modifier-used-without-options-for-${x}`,["Your plugin must set `modifiers: true` in its options to support modifiers."]),fe}},se=de(t,"generalizedModifiers");return[].concat(se?S(z,Te):S(z)).filter(Boolean).map(pe=>({[xs(x,P)]:pe}))},y=l(x,d),S=m[x];s.add([y,d]);let O=[{sort:_,layer:"components",options:d},T];e.candidateRuleMap.has(y)||e.candidateRuleMap.set(y,[]),e.candidateRuleMap.get(y).push(O)}},addVariant(m,d,v={}){d=[].concat(d).map(_=>{if(typeof _!="string")return(x={})=>{let{args:y,modifySelectors:S,container:T,separator:O,wrap:P,format:N}=x,z=_(Object.assign({modifySelectors:S,container:T,separator:O},v.type===Nl.MatchVariant&&{args:y,wrap:P,format:N}));if(typeof z=="string"&&!Ms(z))throw new Error(`Your custom variant \`${m}\` has an invalid format string. Make sure it's an at-rule or contains a \`&\` placeholder.`);return Array.isArray(z)?z.filter(F=>typeof F=="string").map(F=>Fi(F)):z&&typeof z=="string"&&Fi(z)(x)};if(!Ms(_))throw new Error(`Your custom variant \`${m}\` has an invalid format string. Make sure it's an at-rule or contains a \`&\` placeholder.`);return Fi(_)}),nA(r,m,v),i.set(m,d),e.variantOptions.set(m,v)},matchVariant(m,d,v){let _=v?.id??++c,x=m==="@",y=de(t,"generalizedModifiers");for(let[T,O]of Object.entries(v?.values??{}))T!=="DEFAULT"&&p.addVariant(x?`${m}${T}`:`${m}-${T}`,({args:P,container:N})=>d(O,y?{modifier:P?.modifier,container:N}:{container:N}),{...v,value:O,id:_,type:Nl.MatchVariant,variantInfo:zl.Base});let S="DEFAULT"in(v?.values??{});p.addVariant(m,({args:T,container:O})=>T?.value===Bi&&!S?null:d(T?.value===Bi?v.values.DEFAULT:T?.value??(typeof T=="string"?T:""),y?{modifier:T?.modifier,container:O}:{container:O}),{...v,id:_,type:Nl.MatchVariant,variantInfo:zl.Dynamic})}};return p}function zs(t){return jl.has(t)||jl.set(t,new Map),jl.get(t)}function dg(t,e){let r=!1,i=new Map;for(let n of t){if(!n)continue;let s=Va.parse(n),a=s.hash?s.href.replace(s.hash,""):s.href;a=s.search?a.replace(s.search,""):a;let o=we.statSync(decodeURIComponent(a),{throwIfNoEntry:!1})?.mtimeMs;!o||((!e.has(n)||o>e.get(n))&&(r=!0),i.set(n,o))}return[r,i]}function hg(t){t.walkAtRules(e=>{["responsive","variants"].includes(e.name)&&(hg(e),e.before(e.nodes),e.remove())})}function oA(t){let e=[];return t.each(r=>{r.type==="atrule"&&["responsive","variants"].includes(r.name)&&(r.name="layer",r.params="utilities")}),t.walkAtRules("layer",r=>{if(hg(r),r.params==="base"){for(let i of r.nodes)e.push(function({addBase:n}){n(i,{respectPrefix:!1})});r.remove()}else if(r.params==="components"){for(let i of r.nodes)e.push(function({addComponents:n}){n(i,{respectPrefix:!1,preserveSource:!0})});r.remove()}else if(r.params==="utilities"){for(let i of r.nodes)e.push(function({addUtilities:n}){n(i,{respectPrefix:!1,preserveSource:!0})});r.remove()}}),e}function lA(t,e){let r=Object.entries({...Ae,...jm}).map(([o,l])=>t.tailwindConfig.corePlugins.includes(o)?l:null).filter(Boolean),i=t.tailwindConfig.plugins.map(o=>(o.__isOptionsFunction&&(o=o()),typeof o=="function"?o:o.handler)),n=oA(e),s=[Ae.pseudoElementVariants,Ae.pseudoClassVariants,Ae.ariaVariants,Ae.dataVariants],a=[Ae.supportsVariants,Ae.directionVariants,Ae.reducedMotionVariants,Ae.prefersContrastVariants,Ae.darkVariants,Ae.printVariant,Ae.screenVariants,Ae.orientationVariants];return[...r,...s,...i,...a,...n]}function uA(t,e){let r=[],i=new 
Map;e.variantMap=i;let n=new Bl;e.offsets=n;let s=new Set,a=aA(e.tailwindConfig,e,{variantList:r,variantMap:i,offsets:n,classList:s});for(let c of t)if(Array.isArray(c))for(let p of c)p(a);else c?.(a);n.recordVariants(r,c=>i.get(c).length);for(let[c,p]of i.entries())e.variantMap.set(c,p.map((m,d)=>[n.forVariant(c,d),m]));let o=(e.tailwindConfig.safelist??[]).filter(Boolean);if(o.length>0){let c=[];for(let p of o){if(typeof p=="string"){e.changedContent.push({content:p,extension:"html"});continue}if(p instanceof RegExp){V.warn("root-regex",["Regular expressions in `safelist` work differently in Tailwind CSS v3.0.","Update your `safelist` configuration to eliminate this warning.","https://tailwindcss.com/docs/content-configuration#safelisting-classes"]);continue}c.push(p)}if(c.length>0){let p=new Map,m=e.tailwindConfig.prefix.length,d=c.some(v=>v.pattern.source.includes("!"));for(let v of s){let _=Array.isArray(v)?(()=>{let[x,y]=v,T=Object.keys(y?.values??{}).map(O=>Mi(x,O));return y?.supportsNegativeValues&&(T=[...T,...T.map(O=>"-"+O)],T=[...T,...T.map(O=>O.slice(0,m)+"-"+O.slice(m))]),y.types.some(({type:O})=>O==="color")&&(T=[...T,...T.flatMap(O=>Object.keys(e.tailwindConfig.theme.opacity).map(P=>`${O}/${P}`))]),d&&y?.respectImportant&&(T=[...T,...T.map(O=>"!"+O)]),T})():[v];for(let x of _)for(let{pattern:y,variants:S=[]}of c)if(y.lastIndex=0,p.has(y)||p.set(y,0),!!y.test(x)){p.set(y,p.get(y)+1),e.changedContent.push({content:x,extension:"html"});for(let T of S)e.changedContent.push({content:T+e.tailwindConfig.separator+x,extension:"html"})}}for(let[v,_]of p.entries())_===0&&V.warn([`The safelist pattern \`${v}\` doesn't match any Tailwind CSS classes.`,"Fix this pattern or remove it from your `safelist` configuration.","https://tailwindcss.com/docs/content-configuration#safelisting-classes"])}}let l=[].concat(e.tailwindConfig.darkMode??"media")[1]??"dark",f=[$l(e,l),$l(e,"group"),$l(e,"peer")];e.getClassOrder=function(p){let m=[...p].sort((x,y)=>x===y?0:x[x,null])),v=Rs(new Set(m),e);v=e.offsets.sort(v);let _=BigInt(f.length);for(let[,x]of v)d.set(x.raws.tailwind.candidate,_++);return p.map(x=>{let y=d.get(x)??null,S=f.indexOf(x);return y===null&&S!==-1&&(y=BigInt(S)),[x,y]})},e.getClassList=function(p={}){let m=[];for(let d of s)if(Array.isArray(d)){let[v,_]=d,x=[],y=Object.keys(_?.modifiers??{});_?.types?.some(({type:O})=>O==="color")&&y.push(...Object.keys(e.tailwindConfig.theme.opacity??{}));let S={modifiers:y},T=p.includeMetadata&&y.length>0;for(let[O,P]of Object.entries(_?.values??{})){if(P==null)continue;let N=Mi(v,O);if(m.push(T?[N,S]:N),_?.supportsNegativeValues&&_t(P)){let z=Mi(v,`-${O}`);x.push(T?[z,S]:z)}}m.push(...x)}else m.push(d);return m},e.getVariants=function(){let p=[];for(let[m,d]of e.variantOptions.entries())d.variantInfo!==zl.Base&&p.push({name:m,isArbitrary:d.type===Symbol.for("MATCH_VARIANT"),values:Object.keys(d.values??{}),hasDash:m!=="@",selectors({modifier:v,value:_}={}){let x="__TAILWIND_PLACEHOLDER__",y=Q.rule({selector:`.${x}`}),S=Q.root({nodes:[y.clone()]}),T=S.toString(),O=(e.variantMap.get(m)??[]).flatMap(([se,ce])=>ce),P=[];for(let se of O){let ce=[],pe={args:{modifier:v,value:d.values?.[_]??_},separator:e.tailwindConfig.separator,modifySelectors(Ue){return S.each(Sa=>{Sa.type==="rule"&&(Sa.selectors=Sa.selectors.map(Bc=>Ue({get className(){return Rl(Bc)},selector:Bc})))}),S},format(Ue){ce.push(Ue)},wrap(Ue){ce.push(`@${Ue.name} ${Ue.params} { & }`)},container:S},St=se(pe);if(ce.length>0&&P.push(ce),Array.isArray(St))for(let Ue of 
St)ce=[],Ue(pe),P.push(ce)}let N=[],z=S.toString();T!==z&&(S.walkRules(se=>{let ce=se.selector,pe=(0,Fl.default)(St=>{St.walkClasses(Ue=>{Ue.value=`${m}${e.tailwindConfig.separator}${Ue.value}`})}).processSync(ce);N.push(ce.replace(pe,"&").replace(x,"&"))}),S.walkAtRules(se=>{N.push(`@${se.name} (${se.params}) { & }`)}));let F=!(_ in(d.values??{}));P=P.map(se=>se.map(ce=>({format:ce,isArbitraryVariant:F}))),N=N.map(se=>({format:se,isArbitraryVariant:F}));let fe={candidate:x,context:e},Te=P.map(se=>Cs(`.${x}`,wr(se,fe),fe).replace(`.${x}`,"&").replace("{ & }","").trim());return N.length>0&&Te.push(wr(N,fe).toString().replace(`.${x}`,"&")),Te}});return p}}function mg(t,e){!t.classCache.has(e)||(t.notClassCache.add(e),t.classCache.delete(e),t.applyClassCache.delete(e),t.candidateRuleMap.delete(e),t.candidateRuleCache.delete(e),t.stylesheetCache=null)}function fA(t,e){let r=e.raws.tailwind.candidate;if(!!r){for(let i of t.ruleCache)i[1].raws.tailwind.candidate===r&&t.ruleCache.delete(i);mg(t,r)}}function Ul(t,e=[],r=Q.root()){let i={disposables:[],ruleCache:new Set,candidateRuleCache:new Map,classCache:new Map,applyClassCache:new Map,notClassCache:new Set(t.blocklist??[]),postCssNodeCache:new Map,candidateRuleMap:new Map,tailwindConfig:t,changedContent:e,variantMap:new Map,stylesheetCache:null,variantOptions:new Map,markInvalidUtilityCandidate:s=>mg(i,s),markInvalidUtilityNode:s=>fA(i,s)},n=lA(i,r);return uA(n,i),i}function gg(t,e,r,i,n,s){let a=e.opts.from,o=i!==null;Qe.DEBUG&&console.log("Source path:",a);let l;if(o&&yr.has(a))l=yr.get(a);else if(Ni.has(n)){let m=Ni.get(n);Lt.get(m).add(a),yr.set(a,m),l=m}let f=rg(a,t);if(l){let[m,d]=dg([...s],zs(l));if(!m&&!f)return[l,!1,d]}if(yr.has(a)){let m=yr.get(a);if(Lt.has(m)&&(Lt.get(m).delete(a),Lt.get(m).size===0)){Lt.delete(m);for(let[d,v]of Ni)v===m&&Ni.delete(d);for(let d of m.disposables.splice(0))d(m)}}Qe.DEBUG&&console.log("Setting up new context...");let c=Ul(r,[],t);Object.assign(c,{userConfigPath:i});let[,p]=dg([...s],zs(c));return Ni.set(n,c),yr.set(a,c),Lt.has(c)||Lt.set(c,new Set),Lt.get(c).add(a),[c,!0,p]}var ug,Fl,Nl,zl,jl,yr,Ni,Lt,Bs=E(()=>{u();ut();Wa();qt();ug=he(ho()),Fl=he(tt());Ri();yl();bs();er();hr();bl();Kr();Um();Rt();Rt();Tn();Ge();kn();Ol();Ls();ig();lg();Xe();ql();Nl={AddVariant:Symbol.for("ADD_VARIANT"),MatchVariant:Symbol.for("MATCH_VARIANT")},zl={Base:1<<0,Dynamic:1<<1};jl=new WeakMap;yr=Vm,Ni=Wm,Lt=Es});function Vl(t){return t.ignore?[]:t.glob?g.env.ROLLUP_WATCH==="true"?[{type:"dependency",file:t.base}]:[{type:"dir-dependency",dir:t.base,glob:t.glob}]:[{type:"dependency",file:t.base}]}var wg=E(()=>{u()});function yg(t,e){return{handler:t,config:e}}var vg,bg=E(()=>{u();yg.withOptions=function(t,e=()=>({})){let r=function(i){return{__options:i,handler:t(i),config:e(i)}};return r.__isOptionsFunction=!0,r.__pluginFunction=t,r.__configFunction=e,r};vg=yg});var vr={};Ve(vr,{default:()=>cA});var cA,br=E(()=>{u();bg();cA=vg});var Wl=b((YN,xg)=>{u();var pA=(br(),vr).default,dA={overflow:"hidden",display:"-webkit-box","-webkit-box-orient":"vertical"},hA=pA(function({matchUtilities:t,addUtilities:e,theme:r,variants:i}){let n=r("lineClamp");t({"line-clamp":s=>({...dA,"-webkit-line-clamp":`${s}`})},{values:n}),e([{".line-clamp-none":{"-webkit-line-clamp":"unset"}}],i("lineClamp"))},{theme:{lineClamp:{1:"1",2:"2",3:"3",4:"4",5:"5",6:"6"}},variants:{lineClamp:["responsive"]}});xg.exports=hA});function Gl(t){t.content.files.length===0&&V.warn("content-problems",["The `content` option in your Tailwind CSS configuration is missing or 
empty.","Configure your content sources or your generated CSS will be missing styles.","https://tailwindcss.com/docs/content-configuration"]);try{let e=Wl();t.plugins.includes(e)&&(V.warn("line-clamp-in-core",["As of Tailwind CSS v3.3, the `@tailwindcss/line-clamp` plugin is now included by default.","Remove it from the `plugins` array in your configuration to eliminate this warning."]),t.plugins=t.plugins.filter(r=>r!==e))}catch{}return t}var kg=E(()=>{u();Ge()});var Sg,_g=E(()=>{u();Sg=()=>!1});var $s,Tg=E(()=>{u();$s={sync:t=>[].concat(t),generateTasks:t=>[{dynamic:!1,base:".",negative:[],positive:[].concat(t),patterns:[].concat(t)}],escapePath:t=>t}});var Hl,Og=E(()=>{u();Hl=t=>t});var Eg,Ag=E(()=>{u();Eg=()=>""});function Cg(t){let e=t,r=Eg(t);return r!=="."&&(e=t.substr(r.length),e.charAt(0)==="/"&&(e=e.substr(1))),e.substr(0,2)==="./"&&(e=e.substr(2)),e.charAt(0)==="/"&&(e=e.substr(1)),{base:r,glob:e}}var Pg=E(()=>{u();Ag()});function qg(t,e){let r=e.content.files;r=r.filter(o=>typeof o=="string"),r=r.map(Hl);let i=$s.generateTasks(r),n=[],s=[];for(let o of i)n.push(...o.positive.map(l=>Dg(l,!1))),s.push(...o.negative.map(l=>Dg(l,!0)));let a=[...n,...s];return a=gA(t,a),a=a.flatMap(wA),a=a.map(mA),a}function Dg(t,e){let r={original:t,base:t,ignore:e,pattern:t,glob:null};return Sg(t)&&Object.assign(r,Cg(t)),r}function mA(t){let e=Hl(t.base);return e=$s.escapePath(e),t.pattern=t.glob?`${e}/${t.glob}`:e,t.pattern=t.ignore?`!${t.pattern}`:t.pattern,t}function gA(t,e){let r=[];return t.userConfigPath&&t.tailwindConfig.content.relative&&(r=[me.dirname(t.userConfigPath)]),e.map(i=>(i.base=me.resolve(...r,i.base),i))}function wA(t){let e=[t];try{let r=we.realpathSync(t.base);r!==t.base&&e.push({...t,base:r})}catch{}return e}function Ig(t,e,r){let i=t.tailwindConfig.content.files.filter(a=>typeof a.raw=="string").map(({raw:a,extension:o="html"})=>({content:a,extension:o})),[n,s]=yA(e,r);for(let a of n){let o=me.extname(a).slice(1);i.push({file:a,extension:o})}return[i,s]}function yA(t,e){let r=t.map(a=>a.pattern),i=new Map,n=new Set;Qe.DEBUG&&console.time("Finding changed files");let s=$s.sync(r,{absolute:!0});for(let a of s){let o=e.get(a)||-1/0,l=we.statSync(a).mtimeMs;l>o&&(n.add(a),i.set(a,l))}return Qe.DEBUG&&console.timeEnd("Finding changed files"),[n,i]}var Rg=E(()=>{u();ut();jt();_g();Tg();Og();Pg();Rt()});function Lg(){}var Mg=E(()=>{u()});function kA(t,e){for(let r of e){let i=`${t}${r}`;if(we.existsSync(i)&&we.statSync(i).isFile())return i}for(let r of e){let i=`${t}/index${r}`;if(we.existsSync(i))return i}return null}function*Bg(t,e,r,i=me.extname(t)){let n=kA(me.resolve(e,t),vA.includes(i)?bA:xA);if(n===null||r.has(n))return;r.add(n),yield n,e=me.dirname(n),i=me.extname(n);let s=we.readFileSync(n,"utf-8");for(let a of[...s.matchAll(/import[\s\S]*?['"](.{3,}?)['"]/gi),...s.matchAll(/import[\s\S]*from[\s\S]*?['"](.{3,}?)['"]/gi),...s.matchAll(/require\(['"`](.+)['"`]\)/gi)])!a[1].startsWith(".")||(yield*Bg(a[1],e,r,i))}function Yl(t){return t===null?new Set:new Set(Bg(t,me.dirname(t),new Set))}var vA,bA,xA,Fg=E(()=>{u();ut();jt();vA=[".js",".cjs",".mjs"],bA=["",".js",".cjs",".mjs",".ts",".cts",".mts",".jsx",".tsx"],xA=["",".ts",".cts",".mts",".tsx",".js",".cjs",".mjs",".jsx"]});function SA(t,e){if(Ql.has(t))return Ql.get(t);let r=qg(t,e);return Ql.set(t,r).get(t)}function _A(t){let e=Ua(t);if(e!==null){let[i,n,s,a]=zg.get(e)||[],o=Yl(e),l=!1,f=new Map;for(let m of o){let d=we.statSync(m).mtimeMs;f.set(m,d),(!a||!a.has(m)||d>a.get(m))&&(l=!0)}if(!l)return[i,e,n,s];for(let m of 
o)delete Nc.cache[m];let c=Gl(ei(Lg(e))),p=xn(c);return zg.set(e,[c,p,o,f]),[c,e,p,o]}let r=ei(t.config===void 0?t:t.config);return r=Gl(r),[r,null,xn(r),[]]}function Jl(t){return({tailwindDirectives:e,registerDependency:r})=>(i,n)=>{let[s,a,o,l]=_A(t),f=new Set(l);if(e.size>0){f.add(n.opts.from);for(let v of n.messages)v.type==="dependency"&&f.add(v.file)}let[c,,p]=gg(i,n,s,a,o,f),m=zs(c),d=SA(c,s);if(e.size>0){for(let x of d)for(let y of Vl(x))r(y);let[v,_]=Ig(c,d,m);for(let x of v)c.changedContent.push(x);for(let[x,y]of _.entries())p.set(x,y)}for(let v of l)r({type:"dependency",file:v});for(let[v,_]of p.entries())m.set(v,_);return c}}var Ng,zg,Ql,$g=E(()=>{u();ut();Ng=he(_a());Vc();ja();Ip();Bs();wg();kg();Rg();Mg();Fg();zg=new Ng.default({maxSize:100}),Ql=new WeakMap});function Xl(t){let e=new Set,r=new Set,i=new Set;if(t.walkAtRules(n=>{n.name==="apply"&&i.add(n),n.name==="import"&&(n.params==='"tailwindcss/base"'||n.params==="'tailwindcss/base'"?(n.name="tailwind",n.params="base"):n.params==='"tailwindcss/components"'||n.params==="'tailwindcss/components'"?(n.name="tailwind",n.params="components"):n.params==='"tailwindcss/utilities"'||n.params==="'tailwindcss/utilities'"?(n.name="tailwind",n.params="utilities"):(n.params==='"tailwindcss/screens"'||n.params==="'tailwindcss/screens'"||n.params==='"tailwindcss/variants"'||n.params==="'tailwindcss/variants'")&&(n.name="tailwind",n.params="variants")),n.name==="tailwind"&&(n.params==="screens"&&(n.params="variants"),e.add(n.params)),["layer","responsive","variants"].includes(n.name)&&(["responsive","variants"].includes(n.name)&&V.warn(`${n.name}-at-rule-deprecated`,[`The \`@${n.name}\` directive has been deprecated in Tailwind CSS v3.0.`,"Use `@layer utilities` or `@layer components` instead.","https://tailwindcss.com/docs/upgrade-guide#replace-variants-with-layer"]),r.add(n))}),!e.has("base")||!e.has("components")||!e.has("utilities")){for(let n of r)if(n.name==="layer"&&["base","components","utilities"].includes(n.params)){if(!e.has(n.params))throw n.error(`\`@layer ${n.params}\` is used but no matching \`@tailwind ${n.params}\` directive is present.`)}else if(n.name==="responsive"){if(!e.has("utilities"))throw n.error("`@responsive` is used but `@tailwind utilities` is missing.")}else if(n.name==="variants"&&!e.has("utilities"))throw n.error("`@variants` is used but `@tailwind utilities` is missing.")}return{tailwindDirectives:e,applyDirectives:i}}var jg=E(()=>{u();Ge()});function Ht(t,e=void 0,r=void 0){return t.map(i=>{let n=i.clone(),s=i.raws.tailwind?.preserveSource!==!0||!n.source;return e!==void 0&&s&&(n.source=e,"walk"in n&&n.walk(a=>{a.source=e})),r!==void 0&&(n.raws.tailwind={...n.raws.tailwind,...r}),n})}var Ug=E(()=>{u()});function js(t){return t=Array.isArray(t)?t:[t],t=t.map(e=>e instanceof RegExp?e.source:e),t.join("")}function Be(t){return new RegExp(js(t),"g")}function xr(t){return`(?:${t.map(js).join("|")})`}function Kl(t){return`(?:${js(t)})?`}function Wg(t){return`(?:${js(t)})*`}function Gg(t){return t&&TA.test(t)?t.replace(Vg,"\\$&"):t||""}var Vg,TA,Hg=E(()=>{u();Vg=/[\\^$.*+?()[\]{}|]/g,TA=RegExp(Vg.source)});function Yg(t){let e=Array.from(OA(t));return r=>{let i=[];for(let n of e)i=[...i,...r.match(n)??[]];return i.filter(n=>n!==void 0).map(CA)}}function*OA(t){let 
e=t.tailwindConfig.separator,r=de(t.tailwindConfig,"variantGrouping"),i=t.tailwindConfig.prefix!==""?Kl(Be([/-?/,Gg(t.tailwindConfig.prefix)])):"",n=xr([/\[[^\s:'"`]+:[^\s\[\]]+\]/,/\[[^\s:'"`]+:[^\s]+?\[[^\s]+\][^\s]+?\]/,Be([/-?(?:\w+)/,Kl(xr([Be([/-(?:\w+-)*\[[^\s:]+\]/,/(?![{([]])/,/(?:\/[^\s'"`\\><$]*)?/]),Be([/-(?:\w+-)*\[[^\s]+\]/,/(?![{([]])/,/(?:\/[^\s'"`\\$]*)?/]),/[-\/][^\s'"`\\$={><]*/]))])]),s=[xr([Be([/@\[[^\s"'`]+\](\/[^\s"'`]+)?/,e]),Be([/([^\s"'`\[\\]+-)?\[[^\s"'`]+\]/,e]),Be([/[^\s"'`\[\\]+/,e])]),xr([Be([/([^\s"'`\[\\]+-)?\[[^\s`]+\]/,e]),Be([/[^\s`\[\\]+/,e])])];for(let a of s)yield Be(["((?=((",a,")+))\\2)?",/!?/,i,r?xr([Be([/\(/,n,Wg([/,/,n]),/\)/]),n]):n]);yield/[^<>"'`\s.(){}[\]#=%$]*[^<>"'`\s.(){}[\]#=%:$]/g}function CA(t){if(!t.includes("-["))return t;let e=0,r=[],i=t.matchAll(EA);i=Array.from(i).flatMap(n=>{let[,...s]=n;return s.map((a,o)=>Object.assign([],n,{index:n.index+o,0:a}))});for(let n of i){let s=n[0],a=r[r.length-1];if(s===a?r.pop():(s==="'"||s==='"'||s==="`")&&r.push(s),!a){if(s==="["){e++;continue}else if(s==="]"){e--;continue}if(e<0)return t.substring(0,n.index-1);if(e===0&&!AA.test(s))return t.substring(0,n.index)}}return t}var EA,AA,Qg=E(()=>{u();Xe();Hg();EA=/([\[\]'"`])([^\[\]'"`])?/g,AA=/[^"'`\s<>\]]+/});function PA(t,e){let r=t.tailwindConfig.content.extract;return r[e]||r.DEFAULT||Xg[e]||Xg.DEFAULT(t)}function qA(t,e){let r=t.content.transform;return r[e]||r.DEFAULT||Kg[e]||Kg.DEFAULT}function DA(t,e,r,i){zi.has(e)||zi.set(e,new Jg.default({maxSize:25e3}));for(let n of t.split(` -`))if(n=n.trim(),!i.has(n))if(i.add(n),zi.get(e).has(n))for(let s of zi.get(e).get(n))r.add(s);else{let s=e(n).filter(o=>o!=="!*"),a=new Set(s);for(let o of a)r.add(o);zi.get(e).set(n,a)}}function IA(t,e){let r=e.offsets.sort(t),i={base:new Set,defaults:new Set,components:new Set,utilities:new Set,variants:new Set};for(let[n,s]of r)i[n.layer].add(s);return i}function Zl(t){return e=>{let r={base:null,components:null,utilities:null,variants:null};if(e.walkAtRules(v=>{v.name==="tailwind"&&Object.keys(r).includes(v.params)&&(r[v.params]=v)}),Object.values(r).every(v=>v===null))return e;let i=new Set([...t.candidates??[],ht]),n=new Set;mt.DEBUG&&console.time("Reading changed files");for(let{file:v,content:_,extension:x}of t.changedContent){let y=qA(t.tailwindConfig,x),S=PA(t,x);_=v?we.readFileSync(v,"utf8"):_,DA(y(_),S,i,n)}mt.DEBUG&&console.timeEnd("Reading changed files");let s=t.classCache.size;mt.DEBUG&&console.time("Generate rules"),mt.DEBUG&&console.time("Sorting candidates");let a=new Set([...i].sort((v,_)=>v===_?0:v<_?-1:1));mt.DEBUG&&console.timeEnd("Sorting candidates"),Rs(a,t),mt.DEBUG&&console.timeEnd("Generate rules"),mt.DEBUG&&console.time("Build stylesheet"),(t.stylesheetCache===null||t.classCache.size!==s)&&(t.stylesheetCache=IA([...t.ruleCache],t)),mt.DEBUG&&console.timeEnd("Build stylesheet");let{defaults:o,base:l,components:f,utilities:c,variants:p}=t.stylesheetCache;r.base&&(r.base.before(Ht([...l,...o],r.base.source,{layer:"base"})),r.base.remove()),r.components&&(r.components.before(Ht([...f],r.components.source,{layer:"components"})),r.components.remove()),r.utilities&&(r.utilities.before(Ht([...c],r.utilities.source,{layer:"utilities"})),r.utilities.remove());let m=Array.from(p).filter(v=>{let _=v.raws.tailwind?.parentLayer;return 
_==="components"?r.components!==null:_==="utilities"?r.utilities!==null:!0});r.variants?(r.variants.before(Ht(m,r.variants.source,{layer:"variants"})),r.variants.remove()):m.length>0&&e.append(Ht(m,e.source,{layer:"variants"}));let d=m.some(v=>v.raws.tailwind?.parentLayer==="utilities");r.utilities&&c.size===0&&!d&&V.warn("content-problems",["No utility classes were detected in your source files. If this is unexpected, double-check the `content` option in your Tailwind CSS configuration.","https://tailwindcss.com/docs/content-configuration"]),mt.DEBUG&&(console.log("Potential classes: ",i.size),console.log("Active contexts: ",Es.size)),t.changedContent=[],e.walkAtRules("layer",v=>{Object.keys(r).includes(v.params)&&v.remove()})}}var Jg,mt,Xg,Kg,zi,Zg=E(()=>{u();ut();Jg=he(_a());Rt();Ls();Ge();Ug();Qg();mt=Qe,Xg={DEFAULT:Yg},Kg={DEFAULT:t=>t,svelte:t=>t.replace(/(?:^|\s)class:/g," ")};zi=new WeakMap});function Vs(t){let e=new Map;Q.root({nodes:[t.clone()]}).walkRules(s=>{(0,Us.default)(a=>{a.walkClasses(o=>{let l=o.parent.toString(),f=e.get(l);f||e.set(l,f=new Set),f.add(o.value)})}).processSync(s.selector)});let i=Array.from(e.values(),s=>Array.from(s)),n=i.flat();return Object.assign(n,{groups:i})}function eu(t){return RA.astSync(t)}function e0(t,e){let r=new Set;for(let i of t)r.add(i.split(e).pop());return Array.from(r)}function t0(t,e){let r=t.tailwindConfig.prefix;return typeof r=="function"?r(e):r+e}function*r0(t){for(yield t;t.parent;)yield t.parent,t=t.parent}function LA(t,e={}){let r=t.nodes;t.nodes=[];let i=t.clone(e);return t.nodes=r,i}function MA(t){for(let e of r0(t))if(t!==e){if(e.type==="root")break;t=LA(e,{nodes:[t]})}return t}function BA(t,e){let r=new Map;return t.walkRules(i=>{for(let a of r0(i))if(a.raws.tailwind?.layer!==void 0)return;let n=MA(i),s=e.offsets.create("user");for(let a of Vs(i)){let o=r.get(a)||[];r.set(a,o),o.push([{layer:"user",sort:s,important:!1},n])}}),r}function FA(t,e){for(let r of t){if(e.notClassCache.has(r)||e.applyClassCache.has(r))continue;if(e.classCache.has(r)){e.applyClassCache.set(r,e.classCache.get(r).map(([n,s])=>[n,s.clone()]));continue}let i=Array.from(Ds(r,e));if(i.length===0){e.notClassCache.add(r);continue}e.applyClassCache.set(r,i)}return e.applyClassCache}function NA(t){let e=null;return{get:r=>(e=e||t(),e.get(r)),has:r=>(e=e||t(),e.has(r))}}function zA(t){return{get:e=>t.flatMap(r=>r.get(e)||[]),has:e=>t.some(r=>r.has(e))}}function i0(t){let e=t.split(/[\s\t\n]+/g);return e[e.length-1]==="!important"?[e.slice(0,-1),!0]:[e,!1]}function n0(t,e,r){let i=new Set,n=[];if(t.walkAtRules("apply",l=>{let[f]=i0(l.params);for(let c of f)i.add(c);n.push(l)}),n.length===0)return;let s=zA([r,FA(i,e)]);function a(l,f,c){let p=eu(l),m=eu(f),v=eu(`.${Ee(c)}`).nodes[0].nodes[0];return p.each(_=>{let x=new Set;m.each(y=>{let S=!1;y=y.clone(),y.walkClasses(T=>{T.value===v.value&&(S||(T.replaceWith(..._.nodes.map(O=>O.clone())),x.add(y),S=!0))})});for(let y of x){let S=[[]];for(let T of y.nodes)T.type==="combinator"?(S.push(T),S.push([])):S[S.length-1].push(T);y.nodes=[];for(let T of S)Array.isArray(T)&&T.sort((O,P)=>O.type==="tag"&&P.type==="class"?-1:O.type==="class"&&P.type==="tag"?1:O.type==="class"&&P.type==="pseudo"&&P.value.startsWith("::")?-1:O.type==="pseudo"&&O.value.startsWith("::")&&P.type==="class"?1:0),y.nodes=y.nodes.concat(T)}_.replaceWith(...x)}),p.toString()}let o=new Map;for(let l of 
n){let[f]=o.get(l.parent)||[[],l.source];o.set(l.parent,[f,l.source]);let[c,p]=i0(l.params);if(l.parent.type==="atrule"){if(l.parent.name==="screen"){let m=l.parent.params;throw l.error(`@apply is not supported within nested at-rules like @screen. We suggest you write this as @apply ${c.map(d=>`${m}:${d}`).join(" ")} instead.`)}throw l.error(`@apply is not supported within nested at-rules like @${l.parent.name}. You can fix this by un-nesting @${l.parent.name}.`)}for(let m of c){if([t0(e,"group"),t0(e,"peer")].includes(m))throw l.error(`@apply should not be used with the '${m}' utility`);if(!s.has(m))throw l.error(`The \`${m}\` class does not exist. If \`${m}\` is a custom class, make sure it is defined within a \`@layer\` directive.`);let d=s.get(m);f.push([m,p,d])}}for(let[l,[f,c]]of o){let p=[];for(let[d,v,_]of f){let x=[d,...e0([d],e.tailwindConfig.separator)];for(let[y,S]of _){let T=Vs(l),O=Vs(S);if(O=O.groups.filter(F=>F.some(fe=>x.includes(fe))).flat(),O=O.concat(e0(O,e.tailwindConfig.separator)),T.some(F=>O.includes(F)))throw S.error(`You cannot \`@apply\` the \`${d}\` utility here because it creates a circular dependency.`);let N=Q.root({nodes:[S.clone()]});N.walk(F=>{F.source=c}),(S.type!=="atrule"||S.type==="atrule"&&S.name!=="keyframes")&&N.walkRules(F=>{if(!Vs(F).some(pe=>pe===d)){F.remove();return}let fe=typeof e.tailwindConfig.important=="string"?e.tailwindConfig.important:null,se=l.raws.tailwind!==void 0&&fe&&l.selector.indexOf(fe)===0?l.selector.slice(fe.length):l.selector;F.selector=a(se,F.selector,d),fe&&se!==l.selector&&(F.selector=Ps(F.selector,fe)),F.walkDecls(pe=>{pe.important=y.important||v});let ce=(0,Us.default)().astSync(F.selector);ce.each(pe=>gr(pe)),F.selector=ce.toString()}),!!N.nodes[0]&&p.push([y.sort,N.nodes[0]])}}let m=e.offsets.sort(p).map(d=>d[1]);l.after(m)}for(let l of n)l.parent.nodes.length>1?l.remove():l.parent.remove();n0(t,e,r)}function tu(t){return e=>{let r=NA(()=>BA(e,t));n0(e,t,r)}}var Us,RA,s0=E(()=>{u();qt();Us=he(tt());Ls();hr();Il();As();RA=(0,Us.default)()});var a0=b((G8,Ws)=>{u();(function(){"use strict";function t(i,n,s){if(!i)return null;t.caseSensitive||(i=i.toLowerCase());var a=t.threshold===null?null:t.threshold*i.length,o=t.thresholdAbsolute,l;a!==null&&o!==null?l=Math.min(a,o):a!==null?l=a:o!==null?l=o:l=null;var f,c,p,m,d,v=n.length;for(d=0;ds)return s+1;var l=[],f,c,p,m,d;for(f=0;f<=o;f++)l[f]=[f];for(c=0;c<=a;c++)l[0][c]=c;for(f=1;f<=o;f++){for(p=e,m=1,f>s&&(m=f-s),d=o+1,d>s+f&&(d=s+f),c=1;c<=a;c++)cd?l[f][c]=s+1:n.charAt(f-1)===i.charAt(c-1)?l[f][c]=l[f-1][c-1]:l[f][c]=Math.min(l[f-1][c-1]+1,Math.min(l[f][c-1]+1,l[f-1][c]+1)),l[f][c]s)return s+1}return l[o][a]}})()});var l0=b((H8,o0)=>{u();var ru="(".charCodeAt(0),iu=")".charCodeAt(0),Gs="'".charCodeAt(0),nu='"'.charCodeAt(0),su="\\".charCodeAt(0),kr="/".charCodeAt(0),au=",".charCodeAt(0),ou=":".charCodeAt(0),Hs="*".charCodeAt(0),$A="u".charCodeAt(0),jA="U".charCodeAt(0),UA="+".charCodeAt(0),VA=/^[a-f0-9?-]+$/i;o0.exports=function(t){for(var e=[],r=t,i,n,s,a,o,l,f,c,p=0,m=r.charCodeAt(p),d=r.length,v=[{nodes:e}],_=0,x,y="",S="",T="";p{u();u0.exports=function t(e,r,i){var n,s,a,o;for(n=0,s=e.length;n{u();function c0(t,e){var r=t.type,i=t.value,n,s;return e&&(s=e(t))!==void 0?s:r==="word"||r==="space"?i:r==="string"?(n=t.quote||"",n+i+(t.unclosed?"":n)):r==="comment"?"/*"+i+(t.unclosed?"":"*/"):r==="div"?(t.before||"")+i+(t.after||""):Array.isArray(t.nodes)?(n=p0(t.nodes,e),r!=="function"?n:i+"("+(t.before||"")+n+(t.after||"")+(t.unclosed?"":")")):i}function p0(t,e){var 
r,i;if(Array.isArray(t)){for(r="",i=t.length-1;~i;i-=1)r=c0(t[i],e)+r;return r}return c0(t,e)}d0.exports=p0});var g0=b((J8,m0)=>{u();var Ys="-".charCodeAt(0),Qs="+".charCodeAt(0),lu=".".charCodeAt(0),WA="e".charCodeAt(0),GA="E".charCodeAt(0);function HA(t){var e=t.charCodeAt(0),r;if(e===Qs||e===Ys){if(r=t.charCodeAt(1),r>=48&&r<=57)return!0;var i=t.charCodeAt(2);return r===lu&&i>=48&&i<=57}return e===lu?(r=t.charCodeAt(1),r>=48&&r<=57):e>=48&&e<=57}m0.exports=function(t){var e=0,r=t.length,i,n,s;if(r===0||!HA(t))return!1;for(i=t.charCodeAt(e),(i===Qs||i===Ys)&&e++;e57));)e+=1;if(i=t.charCodeAt(e),n=t.charCodeAt(e+1),i===lu&&n>=48&&n<=57)for(e+=2;e57));)e+=1;if(i=t.charCodeAt(e),n=t.charCodeAt(e+1),s=t.charCodeAt(e+2),(i===WA||i===GA)&&(n>=48&&n<=57||(n===Qs||n===Ys)&&s>=48&&s<=57))for(e+=n===Qs||n===Ys?3:2;e57));)e+=1;return{number:t.slice(0,e),unit:t.slice(e)}}});var $i=b((X8,v0)=>{u();var YA=l0(),w0=f0(),y0=h0();function Mt(t){return this instanceof Mt?(this.nodes=YA(t),this):new Mt(t)}Mt.prototype.toString=function(){return Array.isArray(this.nodes)?y0(this.nodes):""};Mt.prototype.walk=function(t,e){return w0(this.nodes,t,e),this};Mt.unit=g0();Mt.walk=w0;Mt.stringify=y0;v0.exports=Mt});function fu(t){return typeof t=="object"&&t!==null}function QA(t,e){let r=Tt(e);do if(r.pop(),(0,ji.default)(t,r)!==void 0)break;while(r.length);return r.length?r:void 0}function Sr(t){return typeof t=="string"?t:t.reduce((e,r,i)=>r.includes(".")?`${e}[${r}]`:i===0?r:`${e}.${r}`,"")}function x0(t){return t.map(e=>`'${e}'`).join(", ")}function k0(t){return x0(Object.keys(t))}function cu(t,e,r,i={}){let n=Array.isArray(e)?Sr(e):e.replace(/^['"]+|['"]+$/g,""),s=Array.isArray(e)?e:Tt(n),a=(0,ji.default)(t.theme,s,r);if(a===void 0){let l=`'${n}' does not exist in your theme config.`,f=s.slice(0,-1),c=(0,ji.default)(t.theme,f);if(fu(c)){let p=Object.keys(c).filter(d=>cu(t,[...f,d]).isValid),m=(0,b0.default)(s[s.length-1],p);m?l+=` Did you mean '${Sr([...f,m])}'?`:p.length>0&&(l+=` '${Sr(f)}' has the following valid keys: ${x0(p)}`)}else{let p=QA(t.theme,n);if(p){let m=(0,ji.default)(t.theme,p);fu(m)?l+=` '${Sr(p)}' has the following keys: ${k0(m)}`:l+=` '${Sr(p)}' is not an object.`}else l+=` Your theme has the following top-level keys: ${k0(t.theme)}`}return{isValid:!1,error:l}}if(!(typeof a=="string"||typeof a=="number"||typeof a=="function"||a instanceof String||a instanceof Number||Array.isArray(a))){let l=`'${n}' was found but does not resolve to a string.`;if(fu(a)){let f=Object.keys(a).filter(c=>cu(t,[...s,c]).isValid);f.length&&(l+=` Did you mean something like '${Sr([...s,f[0]])}'?`)}return{isValid:!1,error:l}}let[o]=s;return{isValid:!0,value:dt(o)(a,i)}}function JA(t,e,r){e=e.map(n=>S0(t,n,r));let i=[""];for(let n of e)n.type==="div"&&n.value===","?i.push(""):i[i.length-1]+=uu.default.stringify(n);return i}function S0(t,e,r){if(e.type==="function"&&r[e.value]!==void 0){let i=JA(t,e.nodes,r);e.type="word",e.value=r[e.value](t,...i)}return e}function XA(t,e,r){return(0,uu.default)(e).walk(i=>{S0(t,i,r)}).toString()}function*ZA(t){t=t.replace(/^['"]+|['"]+$/g,"");let e=t.match(/^([^\s]+)(?![^\[]*\])(?:\s*\/\s*([^\/\s]+))$/),r;yield[t,void 0],e&&(t=e[1],r=e[2],yield[t,r])}function eC(t,e,r){let i=Array.from(ZA(e)).map(([n,s])=>Object.assign(cu(t,n,r,{opacityValue:s}),{resolvedPath:n,alpha:s}));return i.find(n=>n.isValid)??i[0]}function _0(t){let e=t.tailwindConfig,r={theme:(i,n,...s)=>{let{isValid:a,value:o,error:l,alpha:f}=eC(e,n,s.length?s:void 0);if(!a){let 
m=i.parent,d=m?.raws.tailwind?.candidate;if(m&&d!==void 0){t.markInvalidUtilityNode(m),m.remove(),V.warn("invalid-theme-key-in-class",[`The utility \`${d}\` contains an invalid theme value and was not generated.`]);return}throw i.error(l)}let c=tr(o),p=c!==void 0&&typeof c=="function";return(f!==void 0||p)&&(f===void 0&&(f=1),o=Ke(c,f,c)),o},screen:(i,n)=>{n=n.replace(/^['"]+/g,"").replace(/['"]+$/g,"");let a=It(e.theme.screens).find(({name:o})=>o===n);if(!a)throw i.error(`The '${n}' screen does not exist in your theme.`);return Dt(a)}};return i=>{i.walk(n=>{let s=KA[n.type];s!==void 0&&(n[s]=XA(n,n[s],r))})}}var ji,b0,uu,KA,T0=E(()=>{u();ji=he(ho()),b0=he(a0());Ri();uu=he($i());Ts();ks();Tn();Hr();Kr();Ge();KA={atrule:"params",decl:"value"}});function O0({tailwindConfig:{theme:t}}){return function(e){e.walkAtRules("screen",r=>{let i=r.params,s=It(t.screens).find(({name:a})=>a===i);if(!s)throw r.error(`No \`${i}\` screen found.`);r.name="media",r.params=Dt(s)})}}var E0=E(()=>{u();Ts();ks()});function tC(t){let e=t.filter(o=>o.type!=="pseudo"||o.nodes.length>0?!0:o.value.startsWith("::")||[":before",":after",":first-line",":first-letter"].includes(o.value)).reverse(),r=new Set(["tag","class","id","attribute"]),i=e.findIndex(o=>r.has(o.type));if(i===-1)return e.reverse().join("").trim();let n=e[i],s=A0[n.type]?A0[n.type](n):n;e=e.slice(0,i);let a=e.findIndex(o=>o.type==="combinator"&&o.value===">");return a!==-1&&(e.splice(0,a),e.unshift(Js.default.universal())),[s,...e.reverse()].join("").trim()}function iC(t){return pu.has(t)||pu.set(t,rC.transformSync(t)),pu.get(t)}function du({tailwindConfig:t}){return e=>{let r=new Map,i=new Set;if(e.walkAtRules("defaults",n=>{if(n.nodes&&n.nodes.length>0){i.add(n);return}let s=n.params;r.has(s)||r.set(s,new Set),r.get(s).add(n.parent),n.remove()}),de(t,"optimizeUniversalDefaults"))for(let n of i){let s=new Map,a=r.get(n.params)??[];for(let o of a)for(let l of iC(o.selector)){let f=l.includes(":-")||l.includes("::-")?l:"__DEFAULT__",c=s.get(f)??new Set;s.set(f,c),c.add(l)}if(de(t,"optimizeUniversalDefaults")){if(s.size===0){n.remove();continue}for(let[,o]of s){let l=Q.rule({source:n.source});l.selectors=[...o],l.append(n.nodes.map(f=>f.clone())),n.before(l)}}n.remove()}else if(i.size){let n=Q.rule({selectors:["*","::before","::after"]});for(let a of i)n.append(a.nodes),n.parent||a.before(n),n.source||(n.source=a.source),a.remove();let s=n.clone({selectors:["::backdrop"]});n.after(s)}}}var Js,A0,rC,pu,C0=E(()=>{u();qt();Js=he(tt());Xe();A0={id(t){return Js.default.attribute({attribute:"id",operator:"=",value:t.value,quoteMark:'"'})}};rC=(0,Js.default)(t=>t.map(e=>{let r=e.split(i=>i.type==="combinator"&&i.value===" ").pop();return tC(r)})),pu=new Map});function hu(){function t(e){let r=null;e.each(i=>{if(!nC.has(i.type)){r=null;return}if(r===null){r=i;return}let n=P0[i.type];i.type==="atrule"&&i.name==="font-face"?r=i:n.every(s=>(i[s]??"").replace(/\s+/g," ")===(r[s]??"").replace(/\s+/g," "))?(i.nodes&&r.append(i.nodes),i.remove()):r=i}),e.each(i=>{i.type==="atrule"&&t(i)})}return e=>{t(e)}}var P0,nC,q0=E(()=>{u();P0={atrule:["name","params"],rule:["selector"]},nC=new Set(Object.keys(P0))});function mu(){return t=>{t.walkRules(e=>{let r=new Map,i=new Set([]),n=new Map;e.walkDecls(s=>{if(s.parent===e){if(r.has(s.prop)){if(r.get(s.prop).value===s.value){i.add(r.get(s.prop)),r.set(s.prop,s);return}n.has(s.prop)||n.set(s.prop,new Set),n.get(s.prop).add(r.get(s.prop)),n.get(s.prop).add(s)}r.set(s.prop,s)}});for(let s of i)s.remove();for(let s of 
n.values()){let a=new Map;for(let o of s){let l=aC(o.value);l!==null&&(a.has(l)||a.set(l,new Set),a.get(l).add(o))}for(let o of a.values()){let l=Array.from(o).slice(0,-1);for(let f of l)f.remove()}}})}}function aC(t){let e=/^-?\d*.?\d+([\w%]+)?$/g.exec(t);return e?e[1]??sC:null}var sC,D0=E(()=>{u();sC=Symbol("unitless-number")});function oC(t){if(!t.walkAtRules)return;let e=new Set;if(t.walkAtRules("apply",r=>{e.add(r.parent)}),e.size!==0)for(let r of e){let i=[],n=[];for(let s of r.nodes)s.type==="atrule"&&s.name==="apply"?(n.length>0&&(i.push(n),n=[]),i.push([s])):n.push(s);if(n.length>0&&i.push(n),i.length!==1){for(let s of[...i].reverse()){let a=r.clone({nodes:[]});a.append(s),r.after(a)}r.remove()}}}function Xs(){return t=>{oC(t)}}var I0=E(()=>{u()});function lC(t){return t.type==="root"}function uC(t){return t.type==="atrule"&&t.name==="layer"}function R0(t){return(e,r)=>{let i=!1;e.walkAtRules("tailwind",n=>{if(i)return!1;if(n.parent&&!(lC(n.parent)||uC(n.parent)))return i=!0,n.warn(r,["Nested @tailwind rules were detected, but are not supported.","Consider using a prefix to scope Tailwind's classes: https://tailwindcss.com/docs/configuration#prefix","Alternatively, use the important selector strategy: https://tailwindcss.com/docs/configuration#selector-strategy"].join(` -`)),!1}),e.walkRules(n=>{if(i)return!1;n.walkRules(s=>(i=!0,s.warn(r,["Nested CSS was detected, but CSS nesting has not been configured correctly.","Please enable a CSS nesting plugin *before* Tailwind in your configuration.","See how here: https://tailwindcss.com/docs/using-with-preprocessors#nesting"].join(` -`)),!1))})}}var L0=E(()=>{u()});function Ks(t){return function(e,r){let{tailwindDirectives:i,applyDirectives:n}=Xl(e);R0()(e,r),Xs()(e,r);let s=t({tailwindDirectives:i,applyDirectives:n,registerDependency(a){r.messages.push({plugin:"tailwindcss",parent:r.opts.from,...a})},createContext(a,o){return Ul(a,o,e)}})(e,r);if(s.tailwindConfig.separator==="-")throw new Error("The '-' character cannot be used as a custom separator in JIT mode due to parsing ambiguity. Please use another character like '_' instead.");ep(s.tailwindConfig),Zl(s)(e,r),Xs()(e,r),tu(s)(e,r),_0(s)(e,r),O0(s)(e,r),du(s)(e,r),hu(s)(e,r),mu(s)(e,r)}}var M0=E(()=>{u();jg();Zg();s0();T0();E0();C0();q0();D0();I0();L0();Bs();Xe()});function B0(t,e){let r=null,i=null;return t.walkAtRules("config",n=>{if(i=n.source?.input.file??e.opts.from??null,i===null)throw n.error("The `@config` directive cannot be used without setting `from` in your PostCSS config.");if(r)throw n.error("Only one `@config` directive is allowed per file.");let s=n.params.match(/(['"])(.*?)\1/);if(!s)throw n.error("A path is required when using the `@config` directive.");let a=s[2];if(me.isAbsolute(a))throw n.error("The `@config` directive cannot be used with an absolute path.");if(r=me.resolve(me.dirname(i),a),!we.existsSync(r))throw n.error(`The config file at "${a}" does not exist. 
Make sure the path is correct and the file exists.`);n.remove()}),r||null}var F0=E(()=>{u();ut();jt()});var N0=b((M9,gu)=>{u();$g();M0();Rt();F0();gu.exports=function(e){return{postcssPlugin:"tailwindcss",plugins:[Qe.DEBUG&&function(r){return console.log(` -`),console.time("JIT TOTAL"),r},function(r,i){e=B0(r,i)??e;let n=Jl(e);if(r.type==="document"){let s=r.nodes.filter(a=>a.type==="root");for(let a of s)a.type==="root"&&Ks(n)(a,i);return}Ks(n)(r,i)},!1,Qe.DEBUG&&function(r){return console.timeEnd("JIT TOTAL"),console.log(` -`),r}].filter(Boolean)}};gu.exports.postcss=!0});var $0=b((B9,z0)=>{u();z0.exports=N0()});var wu=b((F9,j0)=>{u();j0.exports=()=>["and_chr 92","and_uc 12.12","chrome 92","chrome 91","edge 91","firefox 89","ios_saf 14.5-14.7","ios_saf 14.0-14.4","safari 14.1","samsung 14.0"]});var Zs={};Ve(Zs,{agents:()=>fC,feature:()=>cC});function cC(){return{status:"cr",title:"CSS Feature Queries",stats:{ie:{"6":"n","7":"n","8":"n","9":"n","10":"n","11":"n","5.5":"n"},edge:{"12":"y","13":"y","14":"y","15":"y","16":"y","17":"y","18":"y","79":"y","80":"y","81":"y","83":"y","84":"y","85":"y","86":"y","87":"y","88":"y","89":"y","90":"y","91":"y","92":"y"},firefox:{"2":"n","3":"n","4":"n","5":"n","6":"n","7":"n","8":"n","9":"n","10":"n","11":"n","12":"n","13":"n","14":"n","15":"n","16":"n","17":"n","18":"n","19":"n","20":"n","21":"n","22":"y","23":"y","24":"y","25":"y","26":"y","27":"y","28":"y","29":"y","30":"y","31":"y","32":"y","33":"y","34":"y","35":"y","36":"y","37":"y","38":"y","39":"y","40":"y","41":"y","42":"y","43":"y","44":"y","45":"y","46":"y","47":"y","48":"y","49":"y","50":"y","51":"y","52":"y","53":"y","54":"y","55":"y","56":"y","57":"y","58":"y","59":"y","60":"y","61":"y","62":"y","63":"y","64":"y","65":"y","66":"y","67":"y","68":"y","69":"y","70":"y","71":"y","72":"y","73":"y","74":"y","75":"y","76":"y","77":"y","78":"y","79":"y","80":"y","81":"y","82":"y","83":"y","84":"y","85":"y","86":"y","87":"y","88":"y","89":"y","90":"y","91":"y","92":"y","93":"y","3.5":"n","3.6":"n"},chrome:{"4":"n","5":"n","6":"n","7":"n","8":"n","9":"n","10":"n","11":"n","12":"n","13":"n","14":"n","15":"n","16":"n","17":"n","18":"n","19":"n","20":"n","21":"n","22":"n","23":"n","24":"n","25":"n","26":"n","27":"n","28":"y","29":"y","30":"y","31":"y","32":"y","33":"y","34":"y","35":"y","36":"y","37":"y","38":"y","39":"y","40":"y","41":"y","42":"y","43":"y","44":"y","45":"y","46":"y","47":"y","48":"y","49":"y","50":"y","51":"y","52":"y","53":"y","54":"y","55":"y","56":"y","57":"y","58":"y","59":"y","60":"y","61":"y","62":"y","63":"y","64":"y","65":"y","66":"y","67":"y","68":"y","69":"y","70":"y","71":"y","72":"y","73":"y","74":"y","75":"y","76":"y","77":"y","78":"y","79":"y","80":"y","81":"y","83":"y","84":"y","85":"y","86":"y","87":"y","88":"y","89":"y","90":"y","91":"y","92":"y","93":"y","94":"y","95":"y"},safari:{"4":"n","5":"n","6":"n","7":"n","8":"n","9":"y","10":"y","11":"y","12":"y","13":"y","14":"y","15":"y","9.1":"y","10.1":"y","11.1":"y","12.1":"y","13.1":"y","14.1":"y",TP:"y","3.1":"n","3.2":"n","5.1":"n","6.1":"n","7.1":"n"},opera:{"9":"n","11":"n","12":"n","15":"y","16":"y","17":"y","18":"y","19":"y","20":"y","21":"y","22":"y","23":"y","24":"y","25":"y","26":"y","27":"y","28":"y","29":"y","30":"y","31":"y","32":"y","33":"y","34":"y","35":"y","36":"y","37":"y","38":"y","39":"y","40":"y","41":"y","42":"y","43":"y","44":"y","45":"y","46":"y","47":"y","48":"y","49":"y","50":"y","51":"y","52":"y","53":"y","54":"y","55":"y","56":"y","57":"y","58":"y","60":"y","62":"y","63":"y","64":"y","65":"y",
"66":"y","67":"y","68":"y","69":"y","70":"y","71":"y","72":"y","73":"y","74":"y","75":"y","76":"y","77":"y","78":"y","12.1":"y","9.5-9.6":"n","10.0-10.1":"n","10.5":"n","10.6":"n","11.1":"n","11.5":"n","11.6":"n"},ios_saf:{"8":"n","9.0-9.2":"y","9.3":"y","10.0-10.2":"y","10.3":"y","11.0-11.2":"y","11.3-11.4":"y","12.0-12.1":"y","12.2-12.4":"y","13.0-13.1":"y","13.2":"y","13.3":"y","13.4-13.7":"y","14.0-14.4":"y","14.5-14.7":"y","3.2":"n","4.0-4.1":"n","4.2-4.3":"n","5.0-5.1":"n","6.0-6.1":"n","7.0-7.1":"n","8.1-8.4":"n"},op_mini:{all:"y"},android:{"3":"n","4":"n","92":"y","4.4":"y","4.4.3-4.4.4":"y","2.1":"n","2.2":"n","2.3":"n","4.1":"n","4.2-4.3":"n"},bb:{"7":"n","10":"n"},op_mob:{"10":"n","11":"n","12":"n","64":"y","11.1":"n","11.5":"n","12.1":"n"},and_chr:{"92":"y"},and_ff:{"90":"y"},ie_mob:{"10":"n","11":"n"},and_uc:{"12.12":"y"},samsung:{"4":"y","5.0-5.4":"y","6.2-6.4":"y","7.2-7.4":"y","8.2":"y","9.2":"y","10.1":"y","11.1-11.2":"y","12.0":"y","13.0":"y","14.0":"y"},and_qq:{"10.4":"y"},baidu:{"7.12":"y"},kaios:{"2.5":"y"}}}}var fC,ea=E(()=>{u();fC={ie:{prefix:"ms"},edge:{prefix:"webkit",prefix_exceptions:{"12":"ms","13":"ms","14":"ms","15":"ms","16":"ms","17":"ms","18":"ms"}},firefox:{prefix:"moz"},chrome:{prefix:"webkit"},safari:{prefix:"webkit"},opera:{prefix:"webkit",prefix_exceptions:{"9":"o","11":"o","12":"o","9.5-9.6":"o","10.0-10.1":"o","10.5":"o","10.6":"o","11.1":"o","11.5":"o","11.6":"o","12.1":"o"}},ios_saf:{prefix:"webkit"},op_mini:{prefix:"o"},android:{prefix:"webkit"},bb:{prefix:"webkit"},op_mob:{prefix:"o",prefix_exceptions:{"64":"webkit"}},and_chr:{prefix:"webkit"},and_ff:{prefix:"moz"},ie_mob:{prefix:"ms"},and_uc:{prefix:"webkit",prefix_exceptions:{"12.12":"webkit"}},samsung:{prefix:"webkit"},and_qq:{prefix:"webkit"},baidu:{prefix:"webkit"},kaios:{prefix:"moz"}}});var U0=b(()=>{u()});var _e=b(($9,Bt)=>{u();var{list:yu}=De();Bt.exports.error=function(t){let e=new Error(t);throw e.autoprefixer=!0,e};Bt.exports.uniq=function(t){return[...new Set(t)]};Bt.exports.removeNote=function(t){return t.includes(" ")?t.split(" ")[0]:t};Bt.exports.escapeRegexp=function(t){return t.replace(/[$()*+-.?[\\\]^{|}]/g,"\\$&")};Bt.exports.regexp=function(t,e=!0){return e&&(t=this.escapeRegexp(t)),new RegExp(`(^|[\\s,(])(${t}($|[\\s(,]))`,"gi")};Bt.exports.editList=function(t,e){let r=yu.comma(t),i=e(r,[]);if(r===i)return t;let n=t.match(/,\s*/);return n=n?n[0]:", ",i.join(n)};Bt.exports.splitSelector=function(t){return yu.comma(t).map(e=>yu.space(e).map(r=>r.split(/(?=\.|#)/g)))}});var Ft=b((j9,G0)=>{u();var pC=wu(),V0=(ea(),Zs).agents,dC=_e(),W0=class{static prefixes(){if(this.prefixesCache)return this.prefixesCache;this.prefixesCache=[];for(let e in V0)this.prefixesCache.push(`-${V0[e].prefix}-`);return this.prefixesCache=dC.uniq(this.prefixesCache).sort((e,r)=>r.length-e.length),this.prefixesCache}static withPrefix(e){return this.prefixesRegexp||(this.prefixesRegexp=new RegExp(this.prefixes().join("|"))),this.prefixesRegexp.test(e)}constructor(e,r,i,n){this.data=e,this.options=i||{},this.browserslistOpts=n||{},this.selected=this.parse(r)}parse(e){let r={};for(let i in this.browserslistOpts)r[i]=this.browserslistOpts[i];return r.path=this.options.from,pC(e,r)}prefix(e){let[r,i]=e.split(" "),n=this.data[r],s=n.prefix_exceptions&&n.prefix_exceptions[i];return s||(s=n.prefix),`-${s}-`}isSelected(e){return this.selected.includes(e)}};G0.exports=W0});var Ui=b((U9,H0)=>{u();H0.exports={prefix(t){let e=t.match(/^(-\w+-)/);return e?e[0]:""},unprefixed(t){return t.replace(/^-\w+-/,"")}}});var 
_r=b((V9,Q0)=>{u();var hC=Ft(),Y0=Ui(),mC=_e();function vu(t,e){let r=new t.constructor;for(let i of Object.keys(t||{})){let n=t[i];i==="parent"&&typeof n=="object"?e&&(r[i]=e):i==="source"||i===null?r[i]=n:Array.isArray(n)?r[i]=n.map(s=>vu(s,r)):i!=="_autoprefixerPrefix"&&i!=="_autoprefixerValues"&&i!=="proxyCache"&&(typeof n=="object"&&n!==null&&(n=vu(n,r)),r[i]=n)}return r}var ta=class{static hack(e){return this.hacks||(this.hacks={}),e.names.map(r=>(this.hacks[r]=e,this.hacks[r]))}static load(e,r,i){let n=this.hacks&&this.hacks[e];return n?new n(e,r,i):new this(e,r,i)}static clone(e,r){let i=vu(e);for(let n in r)i[n]=r[n];return i}constructor(e,r,i){this.prefixes=r,this.name=e,this.all=i}parentPrefix(e){let r;return typeof e._autoprefixerPrefix!="undefined"?r=e._autoprefixerPrefix:e.type==="decl"&&e.prop[0]==="-"?r=Y0.prefix(e.prop):e.type==="root"?r=!1:e.type==="rule"&&e.selector.includes(":-")&&/:(-\w+-)/.test(e.selector)?r=e.selector.match(/:(-\w+-)/)[1]:e.type==="atrule"&&e.name[0]==="-"?r=Y0.prefix(e.name):r=this.parentPrefix(e.parent),hC.prefixes().includes(r)||(r=!1),e._autoprefixerPrefix=r,e._autoprefixerPrefix}process(e,r){if(!this.check(e))return;let i=this.parentPrefix(e),n=this.prefixes.filter(a=>!i||i===mC.removeNote(a)),s=[];for(let a of n)this.add(e,a,s.concat([a]),r)&&s.push(a);return s}clone(e,r){return ta.clone(e,r)}};Q0.exports=ta});var j=b((W9,K0)=>{u();var gC=_r(),wC=Ft(),J0=_e(),X0=class extends gC{check(){return!0}prefixed(e,r){return r+e}normalize(e){return e}otherPrefixes(e,r){for(let i of wC.prefixes())if(i!==r&&e.includes(i))return!0;return!1}set(e,r){return e.prop=this.prefixed(e.prop,r),e}needCascade(e){return e._autoprefixerCascade||(e._autoprefixerCascade=this.all.options.cascade!==!1&&e.raw("before").includes(` -`)),e._autoprefixerCascade}maxPrefixed(e,r){if(r._autoprefixerMax)return r._autoprefixerMax;let i=0;for(let n of e)n=J0.removeNote(n),n.length>i&&(i=n.length);return r._autoprefixerMax=i,r._autoprefixerMax}calcBefore(e,r,i=""){let s=this.maxPrefixed(e,r)-J0.removeNote(i).length,a=r.raw("before");return s>0&&(a+=Array(s).fill(" ").join("")),a}restoreBefore(e){let r=e.raw("before").split(` -`),i=r[r.length-1];this.all.group(e).up(n=>{let s=n.raw("before").split(` -`),a=s[s.length-1];a.lengtha.prop===n.prop&&a.value===n.value)))return this.needCascade(e)&&(n.raws.before=this.calcBefore(i,e,r)),e.parent.insertBefore(e,n)}isAlready(e,r){let i=this.all.group(e).up(n=>n.prop===r);return i||(i=this.all.group(e).down(n=>n.prop===r)),i}add(e,r,i,n){let s=this.prefixed(e.prop,r);if(!(this.isAlready(e,s)||this.otherPrefixes(e.value,r)))return this.insert(e,r,i,n)}process(e,r){if(!this.needCascade(e)){super.process(e,r);return}let i=super.process(e,r);!i||!i.length||(this.restoreBefore(e),e.raws.before=this.calcBefore(i,e))}old(e,r){return[this.prefixed(e,r)]}};K0.exports=X0});var ew=b((G9,Z0)=>{u();Z0.exports=function t(e){return{mul:r=>new t(e*r),div:r=>new t(e/r),simplify:()=>new t(e),toString:()=>e.toString()}}});var iw=b((H9,rw)=>{u();var yC=ew(),vC=_r(),bu=_e(),bC=/(min|max)-resolution\s*:\s*\d*\.?\d+(dppx|dpcm|dpi|x)/gi,xC=/(min|max)-resolution(\s*:\s*)(\d*\.?\d+)(dppx|dpcm|dpi|x)/i,tw=class extends vC{prefixName(e,r){return e==="-moz-"?r+"--moz-device-pixel-ratio":e+r+"-device-pixel-ratio"}prefixQuery(e,r,i,n,s){return n=new yC(n),s==="dpi"?n=n.div(96):s==="dpcm"&&(n=n.mul(2.54).div(96)),n=n.simplify(),e==="-o-"&&(n=n.n+"/"+n.d),this.prefixName(e,r)+i+n}clean(e){if(!this.bad){this.bad=[];for(let r of 
this.prefixes)this.bad.push(this.prefixName(r,"min")),this.bad.push(this.prefixName(r,"max"))}e.params=bu.editList(e.params,r=>r.filter(i=>this.bad.every(n=>!i.includes(n))))}process(e){let r=this.parentPrefix(e),i=r?[r]:this.prefixes;e.params=bu.editList(e.params,(n,s)=>{for(let a of n){if(!a.includes("min-resolution")&&!a.includes("max-resolution")){s.push(a);continue}for(let o of i){let l=a.replace(bC,f=>{let c=f.match(xC);return this.prefixQuery(o,c[1],c[2],c[3],c[4])});s.push(l)}s.push(a)}return bu.uniq(s)})}};rw.exports=tw});var lw=b((Y9,ow)=>{u();var{list:kC}=De(),nw=$i(),SC=Ft(),sw=Ui(),aw=class{constructor(e){this.props=["transition","transition-property"],this.prefixes=e}add(e,r){let i,n,s=this.prefixes.add[e.prop],a=this.ruleVendorPrefixes(e),o=a||s&&s.prefixes||[],l=this.parse(e.value),f=l.map(d=>this.findProp(d)),c=[];if(f.some(d=>d[0]==="-"))return;for(let d of l){if(n=this.findProp(d),n[0]==="-")continue;let v=this.prefixes.add[n];if(!(!v||!v.prefixes))for(i of v.prefixes){if(a&&!a.some(x=>i.includes(x)))continue;let _=this.prefixes.prefixed(n,i);_!=="-ms-transform"&&!f.includes(_)&&(this.disabled(n,i)||c.push(this.clone(n,_,d)))}}l=l.concat(c);let p=this.stringify(l),m=this.stringify(this.cleanFromUnprefixed(l,"-webkit-"));if(o.includes("-webkit-")&&this.cloneBefore(e,`-webkit-${e.prop}`,m),this.cloneBefore(e,e.prop,m),o.includes("-o-")){let d=this.stringify(this.cleanFromUnprefixed(l,"-o-"));this.cloneBefore(e,`-o-${e.prop}`,d)}for(i of o)if(i!=="-webkit-"&&i!=="-o-"){let d=this.stringify(this.cleanOtherPrefixes(l,i));this.cloneBefore(e,i+e.prop,d)}p!==e.value&&!this.already(e,e.prop,p)&&(this.checkForWarning(r,e),e.cloneBefore(),e.value=p)}findProp(e){let r=e[0].value;if(/^\d/.test(r)){for(let[i,n]of e.entries())if(i!==0&&n.type==="word")return n.value}return r}already(e,r,i){return e.parent.some(n=>n.prop===r&&n.value===i)}cloneBefore(e,r,i){this.already(e,r,i)||e.cloneBefore({prop:r,value:i})}checkForWarning(e,r){if(r.prop!=="transition-property")return;let i=!1,n=!1;r.parent.each(s=>{if(s.type!=="decl"||s.prop.indexOf("transition-")!==0)return;let a=kC.comma(s.value);if(s.prop==="transition-property"){a.forEach(o=>{let l=this.prefixes.add[o];l&&l.prefixes&&l.prefixes.length>0&&(i=!0)});return}return n=n||a.length>1,!1}),i&&n&&r.warn(e,"Replace transition-property to transition, because Autoprefixer could not support any cases of transition-property and other transition-*")}remove(e){let r=this.parse(e.value);r=r.filter(a=>{let o=this.prefixes.remove[this.findProp(a)];return!o||!o.remove});let i=this.stringify(r);if(e.value===i)return;if(r.length===0){e.remove();return}let n=e.parent.some(a=>a.prop===e.prop&&a.value===i),s=e.parent.some(a=>a!==e&&a.prop===e.prop&&a.value.length>i.length);if(n||s){e.remove();return}e.value=i}parse(e){let r=nw(e),i=[],n=[];for(let s of r.nodes)n.push(s),s.type==="div"&&s.value===","&&(i.push(n),n=[]);return i.push(n),i.filter(s=>s.length>0)}stringify(e){if(e.length===0)return"";let r=[];for(let i of e)i[i.length-1].type!=="div"&&i.push(this.div(e)),r=r.concat(i);return r[0].type==="div"&&(r=r.slice(1)),r[r.length-1].type==="div"&&(r=r.slice(0,-2+1||void 0)),nw.stringify({nodes:r})}clone(e,r,i){let n=[],s=!1;for(let a of i)!s&&a.type==="word"&&a.value===e?(n.push({type:"word",value:r}),s=!0):n.push(a);return n}div(e){for(let r of e)for(let i of r)if(i.type==="div"&&i.value===",")return i;return{type:"div",value:",",after:" "}}cleanOtherPrefixes(e,r){return e.filter(i=>{let n=sw.prefix(this.findProp(i));return 
n===""||n===r})}cleanFromUnprefixed(e,r){let i=e.map(s=>this.findProp(s)).filter(s=>s.slice(0,r.length)===r).map(s=>this.prefixes.unprefixed(s)),n=[];for(let s of e){let a=this.findProp(s),o=sw.prefix(a);!i.includes(a)&&(o===r||o==="")&&n.push(s)}return n}disabled(e,r){let i=["order","justify-content","align-self","align-content"];if(e.includes("flex")||i.includes(e)){if(this.prefixes.options.flexbox===!1)return!0;if(this.prefixes.options.flexbox==="no-2009")return r.includes("2009")}}ruleVendorPrefixes(e){let{parent:r}=e;if(r.type!=="rule")return!1;if(!r.selector.includes(":-"))return!1;let i=SC.prefixes().filter(n=>r.selector.includes(":"+n));return i.length>0?i:!1}};ow.exports=aw});var Tr=b((Q9,fw)=>{u();var _C=_e(),uw=class{constructor(e,r,i,n){this.unprefixed=e,this.prefixed=r,this.string=i||r,this.regexp=n||_C.regexp(r)}check(e){return e.includes(this.string)?!!e.match(this.regexp):!1}};fw.exports=uw});var Fe=b((J9,pw)=>{u();var TC=_r(),OC=Tr(),EC=Ui(),AC=_e(),cw=class extends TC{static save(e,r){let i=r.prop,n=[];for(let s in r._autoprefixerValues){let a=r._autoprefixerValues[s];if(a===r.value)continue;let o,l=EC.prefix(i);if(l==="-pie-")continue;if(l===s){o=r.value=a,n.push(o);continue}let f=e.prefixed(i,s),c=r.parent;if(!c.every(v=>v.prop!==f)){n.push(o);continue}let p=a.replace(/\s+/," ");if(c.some(v=>v.prop===r.prop&&v.value.replace(/\s+/," ")===p)){n.push(o);continue}let d=this.clone(r,{value:a});o=r.parent.insertBefore(r,d),n.push(o)}return n}check(e){let r=e.value;return r.includes(this.name)?!!r.match(this.regexp()):!1}regexp(){return this.regexpCache||(this.regexpCache=AC.regexp(this.name))}replace(e,r){return e.replace(this.regexp(),`$1${r}$2`)}value(e){return e.raws.value&&e.raws.value.value===e.value?e.raws.value.raw:e.value}add(e,r){e._autoprefixerValues||(e._autoprefixerValues={});let i=e._autoprefixerValues[r]||this.value(e),n;do if(n=i,i=this.replace(i,r),i===!1)return;while(i!==n);e._autoprefixerValues[r]=i}old(e){return new OC(this.name,e+this.name)}};pw.exports=cw});var Nt=b((X9,dw)=>{u();dw.exports={}});var ku=b((K9,gw)=>{u();var hw=$i(),CC=Fe(),PC=Nt().insertAreas,qC=/(^|[^-])linear-gradient\(\s*(top|left|right|bottom)/i,DC=/(^|[^-])radial-gradient\(\s*\d+(\w*|%)\s+\d+(\w*|%)\s*,/i,IC=/(!\s*)?autoprefixer:\s*ignore\s+next/i,RC=/(!\s*)?autoprefixer\s*grid:\s*(on|off|(no-)?autoplace)/i,LC=["width","height","min-width","max-width","min-height","max-height","inline-size","min-inline-size","max-inline-size","block-size","min-block-size","max-block-size"];function xu(t){return t.parent.some(e=>e.prop==="grid-template"||e.prop==="grid-template-areas")}function MC(t){let e=t.parent.some(i=>i.prop==="grid-template-rows"),r=t.parent.some(i=>i.prop==="grid-template-columns");return e&&r}var mw=class{constructor(e){this.prefixes=e}add(e,r){let i=this.prefixes.add["@resolution"],n=this.prefixes.add["@keyframes"],s=this.prefixes.add["@viewport"],a=this.prefixes.add["@supports"];e.walkAtRules(c=>{if(c.name==="keyframes"){if(!this.disabled(c,r))return n&&n.process(c)}else if(c.name==="viewport"){if(!this.disabled(c,r))return s&&s.process(c)}else if(c.name==="supports"){if(this.prefixes.options.supports!==!1&&!this.disabled(c,r))return a.process(c)}else if(c.name==="media"&&c.params.includes("-resolution")&&!this.disabled(c,r))return i&&i.process(c)}),e.walkRules(c=>{if(!this.disabled(c,r))return this.prefixes.add.selectors.map(p=>p.process(c,r))});function o(c){return c.parent.nodes.some(p=>{if(p.type!=="decl")return!1;let 
m=p.prop==="display"&&/(inline-)?grid/.test(p.value),d=p.prop.startsWith("grid-template"),v=/^grid-([A-z]+-)?gap/.test(p.prop);return m||d||v})}function l(c){return c.parent.some(p=>p.prop==="display"&&/(inline-)?flex/.test(p.value))}let f=this.gridStatus(e,r)&&this.prefixes.add["grid-area"]&&this.prefixes.add["grid-area"].prefixes;return e.walkDecls(c=>{if(this.disabledDecl(c,r))return;let p=c.parent,m=c.prop,d=c.value;if(m==="grid-row-span"){r.warn("grid-row-span is not part of final Grid Layout. Use grid-row.",{node:c});return}else if(m==="grid-column-span"){r.warn("grid-column-span is not part of final Grid Layout. Use grid-column.",{node:c});return}else if(m==="display"&&d==="box"){r.warn("You should write display: flex by final spec instead of display: box",{node:c});return}else if(m==="text-emphasis-position")(d==="under"||d==="over")&&r.warn("You should use 2 values for text-emphasis-position For example, `under left` instead of just `under`.",{node:c});else if(/^(align|justify|place)-(items|content)$/.test(m)&&l(c))(d==="start"||d==="end")&&r.warn(`${d} value has mixed support, consider using flex-${d} instead`,{node:c});else if(m==="text-decoration-skip"&&d==="ink")r.warn("Replace text-decoration-skip: ink to text-decoration-skip-ink: auto, because spec had been changed",{node:c});else{if(f&&this.gridStatus(c,r))if(c.value==="subgrid"&&r.warn("IE does not support subgrid",{node:c}),/^(align|justify|place)-items$/.test(m)&&o(c)){let _=m.replace("-items","-self");r.warn(`IE does not support ${m} on grid containers. Try using ${_} on child elements instead: ${c.parent.selector} > * { ${_}: ${c.value} }`,{node:c})}else if(/^(align|justify|place)-content$/.test(m)&&o(c))r.warn(`IE does not support ${c.prop} on grid containers`,{node:c});else if(m==="display"&&c.value==="contents"){r.warn("Please do not use display: contents; if you have grid setting enabled",{node:c});return}else if(c.prop==="grid-gap"){let _=this.gridStatus(c,r);_==="autoplace"&&!MC(c)&&!xu(c)?r.warn("grid-gap only works if grid-template(-areas) is being used or both rows and columns have been declared and cells have not been manually placed inside the explicit grid",{node:c}):(_===!0||_==="no-autoplace")&&!xu(c)&&r.warn("grid-gap only works if grid-template(-areas) is being used",{node:c})}else if(m==="grid-auto-columns"){r.warn("grid-auto-columns is not supported by IE",{node:c});return}else if(m==="grid-auto-rows"){r.warn("grid-auto-rows is not supported by IE",{node:c});return}else if(m==="grid-auto-flow"){let _=p.some(y=>y.prop==="grid-template-rows"),x=p.some(y=>y.prop==="grid-template-columns");xu(c)?r.warn("grid-auto-flow is not supported by IE",{node:c}):d.includes("dense")?r.warn("grid-auto-flow: dense is not supported by IE",{node:c}):!_&&!x&&r.warn("grid-auto-flow works only if grid-template-rows and grid-template-columns are present in the same rule",{node:c});return}else if(d.includes("auto-fit")){r.warn("auto-fit value is not supported by IE",{node:c,word:"auto-fit"});return}else if(d.includes("auto-fill")){r.warn("auto-fill value is not supported by IE",{node:c,word:"auto-fill"});return}else m.startsWith("grid-template")&&d.includes("[")&&r.warn("Autoprefixer currently does not support line names. Try using grid-template-areas instead.",{node:c,word:"["});if(d.includes("radial-gradient"))if(DC.test(c.value))r.warn("Gradient has outdated direction syntax. 
New syntax is like `closest-side at 0 0` instead of `0 0, closest-side`.",{node:c});else{let _=hw(d);for(let x of _.nodes)if(x.type==="function"&&x.value==="radial-gradient")for(let y of x.nodes)y.type==="word"&&(y.value==="cover"?r.warn("Gradient has outdated direction syntax. Replace `cover` to `farthest-corner`.",{node:c}):y.value==="contain"&&r.warn("Gradient has outdated direction syntax. Replace `contain` to `closest-side`.",{node:c}))}d.includes("linear-gradient")&&qC.test(d)&&r.warn("Gradient has outdated direction syntax. New syntax is like `to left` instead of `right`.",{node:c})}LC.includes(c.prop)&&(c.value.includes("-fill-available")||(c.value.includes("fill-available")?r.warn("Replace fill-available to stretch, because spec had been changed",{node:c}):c.value.includes("fill")&&hw(d).nodes.some(x=>x.type==="word"&&x.value==="fill")&&r.warn("Replace fill to stretch, because spec had been changed",{node:c})));let v;if(c.prop==="transition"||c.prop==="transition-property")return this.prefixes.transition.add(c,r);if(c.prop==="align-self"){if(this.displayType(c)!=="grid"&&this.prefixes.options.flexbox!==!1&&(v=this.prefixes.add["align-self"],v&&v.prefixes&&v.process(c)),this.gridStatus(c,r)!==!1&&(v=this.prefixes.add["grid-row-align"],v&&v.prefixes))return v.process(c,r)}else if(c.prop==="justify-self"){if(this.gridStatus(c,r)!==!1&&(v=this.prefixes.add["grid-column-align"],v&&v.prefixes))return v.process(c,r)}else if(c.prop==="place-self"){if(v=this.prefixes.add["place-self"],v&&v.prefixes&&this.gridStatus(c,r)!==!1)return v.process(c,r)}else if(v=this.prefixes.add[c.prop],v&&v.prefixes)return v.process(c,r)}),this.gridStatus(e,r)&&PC(e,this.disabled),e.walkDecls(c=>{if(this.disabledValue(c,r))return;let p=this.prefixes.unprefixed(c.prop),m=this.prefixes.values("add",p);if(Array.isArray(m))for(let d of m)d.process&&d.process(c,r);CC.save(this.prefixes,c)})}remove(e,r){let i=this.prefixes.remove["@resolution"];e.walkAtRules((n,s)=>{this.prefixes.remove[`@${n.name}`]?this.disabled(n,r)||n.parent.removeChild(s):n.name==="media"&&n.params.includes("-resolution")&&i&&i.clean(n)});for(let n of this.prefixes.remove.selectors)e.walkRules((s,a)=>{n.check(s)&&(this.disabled(s,r)||s.parent.removeChild(a))});return e.walkDecls((n,s)=>{if(this.disabled(n,r))return;let a=n.parent,o=this.prefixes.unprefixed(n.prop);if((n.prop==="transition"||n.prop==="transition-property")&&this.prefixes.transition.remove(n),this.prefixes.remove[n.prop]&&this.prefixes.remove[n.prop].remove){let l=this.prefixes.group(n).down(f=>this.prefixes.normalize(f.prop)===o);if(o==="flex-flow"&&(l=!0),n.prop==="-webkit-box-orient"){let f={"flex-direction":!0,"flex-flow":!0};if(!n.parent.some(c=>f[c.prop]))return}if(l&&!this.withHackValue(n)){n.raw("before").includes(` -`)&&this.reduceSpaces(n),a.removeChild(s);return}}for(let l of this.prefixes.values("remove",o)){if(!l.check||!l.check(n.value))continue;if(o=l.unprefixed,this.prefixes.group(n).down(c=>c.value.includes(o))){a.removeChild(s);return}}})}withHackValue(e){return e.prop==="-webkit-background-clip"&&e.value==="text"}disabledValue(e,r){return 
this.gridStatus(e,r)===!1&&e.type==="decl"&&e.prop==="display"&&e.value.includes("grid")||this.prefixes.options.flexbox===!1&&e.type==="decl"&&e.prop==="display"&&e.value.includes("flex")||e.type==="decl"&&e.prop==="content"?!0:this.disabled(e,r)}disabledDecl(e,r){if(this.gridStatus(e,r)===!1&&e.type==="decl"&&(e.prop.includes("grid")||e.prop==="justify-items"))return!0;if(this.prefixes.options.flexbox===!1&&e.type==="decl"){let i=["order","justify-content","align-items","align-content"];if(e.prop.includes("flex")||i.includes(e.prop))return!0}return this.disabled(e,r)}disabled(e,r){if(!e)return!1;if(e._autoprefixerDisabled!==void 0)return e._autoprefixerDisabled;if(e.parent){let n=e.prev();if(n&&n.type==="comment"&&IC.test(n.text))return e._autoprefixerDisabled=!0,e._autoprefixerSelfDisabled=!0,!0}let i=null;if(e.nodes){let n;e.each(s=>{s.type==="comment"&&/(!\s*)?autoprefixer:\s*(off|on)/i.test(s.text)&&(typeof n!="undefined"?r.warn("Second Autoprefixer control comment was ignored. Autoprefixer applies control comment to whole block, not to next rules.",{node:s}):n=/on/i.test(s.text))}),n!==void 0&&(i=!n)}if(!e.nodes||i===null)if(e.parent){let n=this.disabled(e.parent,r);e.parent._autoprefixerSelfDisabled===!0?i=!1:i=n}else i=!1;return e._autoprefixerDisabled=i,i}reduceSpaces(e){let r=!1;if(this.prefixes.group(e).up(()=>(r=!0,!0)),r)return;let i=e.raw("before").split(` -`),n=i[i.length-1].length,s=!1;this.prefixes.group(e).down(a=>{i=a.raw("before").split(` -`);let o=i.length-1;i[o].length>n&&(s===!1&&(s=i[o].length-n),i[o]=i[o].slice(0,-s),a.raws.before=i.join(` -`))})}displayType(e){for(let r of e.parent.nodes)if(r.prop==="display"){if(r.value.includes("flex"))return"flex";if(r.value.includes("grid"))return"grid"}return!1}gridStatus(e,r){if(!e)return!1;if(e._autoprefixerGridStatus!==void 0)return e._autoprefixerGridStatus;let i=null;if(e.nodes){let n;e.each(s=>{if(s.type==="comment"&&RC.test(s.text)){let a=/:\s*autoplace/i.test(s.text),o=/no-autoplace/i.test(s.text);typeof n!="undefined"?r.warn("Second Autoprefixer grid control comment was ignored. 
Autoprefixer applies control comments to the whole block, not to the next rules.",{node:s}):a?n="autoplace":o?n=!0:n=/on/i.test(s.text)}}),n!==void 0&&(i=n)}if(e.type==="atrule"&&e.name==="supports"){let n=e.params;n.includes("grid")&&n.includes("auto")&&(i=!1)}if(!e.nodes||i===null)if(e.parent){let n=this.gridStatus(e.parent,r);e.parent._autoprefixerSelfDisabled===!0?i=!1:i=n}else typeof this.prefixes.options.grid!="undefined"?i=this.prefixes.options.grid:typeof g.env.AUTOPREFIXER_GRID!="undefined"?g.env.AUTOPREFIXER_GRID==="autoplace"?i="autoplace":i=!0:i=!1;return e._autoprefixerGridStatus=i,i}};gw.exports=mw});var yw=b((Z9,ww)=>{u();ww.exports={A:{A:{"2":"J D E F A B iB"},B:{"1":"C K L G M N O R S T U V W X Y Z a P b H"},C:{"1":"0 1 2 3 4 5 6 7 8 9 g h i j k l m n o p q r s t u v w x y z AB BB CB DB EB FB GB bB HB cB IB JB Q KB LB MB NB OB PB QB RB SB TB UB VB WB XB R S T kB U V W X Y Z a P b H dB","2":"jB aB I c J D E F A B C K L G M N O d e f lB mB"},D:{"1":"0 1 2 3 4 5 6 7 8 9 m n o p q r s t u v w x y z AB BB CB DB EB FB GB bB HB cB IB JB Q KB LB MB NB OB PB QB RB SB TB UB VB WB XB R S T U V W X Y Z a P b H dB nB oB","2":"I c J D E F A B C K L G M N O d e f g h i j k l"},E:{"1":"F A B C K L G tB fB YB ZB uB vB wB","2":"I c J D E pB eB qB rB sB"},F:{"1":"0 1 2 3 4 5 6 7 8 9 G M N O d e f g h i j k l m n o p q r s t u v w x y z AB BB CB DB EB FB GB HB IB JB Q KB LB MB NB OB PB QB RB SB TB UB VB WB XB ZB","2":"F B C xB yB zB 0B YB gB 1B"},G:{"1":"7B 8B 9B AC BC CC DC EC FC GC HC IC JC KC","2":"E eB 2B hB 3B 4B 5B 6B"},H:{"1":"LC"},I:{"1":"H QC RC","2":"aB I MC NC OC PC hB"},J:{"2":"D A"},K:{"1":"Q","2":"A B C YB gB ZB"},L:{"1":"H"},M:{"1":"P"},N:{"2":"A B"},O:{"1":"SC"},P:{"1":"I TC UC VC WC XC fB YC ZC aC bC"},Q:{"1":"cC"},R:{"1":"dC"},S:{"1":"eC"}},B:4,C:"CSS Feature Queries"}});var kw=b((ez,xw)=>{u();function vw(t){return t[t.length-1]}var bw={parse(t){let e=[""],r=[e];for(let i of t){if(i==="("){e=[""],vw(r).push(e),r.push(e);continue}if(i===")"){r.pop(),e=vw(r),e.push("");continue}e[e.length-1]+=i}return r[0]},stringify(t){let e="";for(let r of t){if(typeof r=="object"){e+=`(${bw.stringify(r)})`;continue}e+=r}return e}};xw.exports=bw});var Ew=b((tz,Ow)=>{u();var BC=yw(),{feature:FC}=(ea(),Zs),{parse:NC}=De(),zC=Ft(),Su=kw(),$C=Fe(),jC=_e(),Sw=FC(BC),_w=[];for(let t in Sw.stats){let e=Sw.stats[t];for(let r in e){let i=e[r];/y/.test(i)&&_w.push(t+" "+r)}}var Tw=class{constructor(e,r){this.Prefixes=e,this.all=r}prefixer(){if(this.prefixerCache)return this.prefixerCache;let e=this.all.browsers.selected.filter(i=>_w.includes(i)),r=new zC(this.all.browsers.data,e,this.all.options);return this.prefixerCache=new this.Prefixes(this.all.data,r,this.all.options),this.prefixerCache}parse(e){let r=e.split(":"),i=r[0],n=r[1];return n||(n=""),[i.trim(),n.trim()]}virtual(e){let[r,i]=this.parse(e),n=NC("a{}").first;return n.append({prop:r,value:i,raws:{before:""}}),n}prefixed(e){let r=this.virtual(e);if(this.disabled(r.first))return r.nodes;let i={warn:()=>null},n=this.prefixer().add[r.first.prop];n&&n.process&&n.process(r.first,i);for(let s of r.nodes){for(let a of this.prefixer().values("add",r.first.prop))a.process(s);$C.save(this.all,s)}return r.nodes}isNot(e){return typeof e=="string"&&/not\s*/i.test(e)}isOr(e){return typeof e=="string"&&/\s*or\s*/i.test(e)}isProp(e){return typeof e=="object"&&e.length===1&&typeof e[0]=="string"}isHack(e,r){return!new 
RegExp(`(\\(|\\s)${jC.escapeRegexp(r)}:`).test(e)}toRemove(e,r){let[i,n]=this.parse(e),s=this.all.unprefixed(i),a=this.all.cleaner();if(a.remove[i]&&a.remove[i].remove&&!this.isHack(r,s))return!0;for(let o of a.values("remove",s))if(o.check(n))return!0;return!1}remove(e,r){let i=0;for(;itypeof r!="object"?r:r.length===1&&typeof r[0]=="object"?this.cleanBrackets(r[0]):this.cleanBrackets(r))}convert(e){let r=[""];for(let i of e)r.push([`${i.prop}: ${i.value}`]),r.push(" or ");return r[r.length-1]="",r}normalize(e){if(typeof e!="object")return e;if(e=e.filter(r=>r!==""),typeof e[0]=="string"){let r=e[0].trim();if(r.includes(":")||r==="selector"||r==="not selector")return[Su.stringify(e)]}return e.map(r=>this.normalize(r))}add(e,r){return e.map(i=>{if(this.isProp(i)){let n=this.prefixed(i[0]);return n.length>1?this.convert(n):i}return typeof i=="object"?this.add(i,r):i})}process(e){let r=Su.parse(e.params);r=this.normalize(r),r=this.remove(r,e.params),r=this.add(r,e.params),r=this.cleanBrackets(r),e.params=Su.stringify(r)}disabled(e){if(!this.all.options.grid&&(e.prop==="display"&&e.value.includes("grid")||e.prop.includes("grid")||e.prop==="justify-items"))return!0;if(this.all.options.flexbox===!1){if(e.prop==="display"&&e.value.includes("flex"))return!0;let r=["order","justify-content","align-items","align-content"];if(e.prop.includes("flex")||r.includes(e.prop))return!0}return!1}};Ow.exports=Tw});var Pw=b((rz,Cw)=>{u();var Aw=class{constructor(e,r){this.prefix=r,this.prefixed=e.prefixed(this.prefix),this.regexp=e.regexp(this.prefix),this.prefixeds=e.possible().map(i=>[e.prefixed(i),e.regexp(i)]),this.unprefixed=e.name,this.nameRegexp=e.regexp()}isHack(e){let r=e.parent.index(e)+1,i=e.parent.nodes;for(;r{u();var{list:UC}=De(),VC=Pw(),WC=_r(),GC=Ft(),HC=_e(),qw=class extends WC{constructor(e,r,i){super(e,r,i);this.regexpCache=new Map}check(e){return e.selector.includes(this.name)?!!e.selector.match(this.regexp()):!1}prefixed(e){return this.name.replace(/^(\W*)/,`$1${e}`)}regexp(e){if(!this.regexpCache.has(e)){let r=e?this.prefixed(e):this.name;this.regexpCache.set(e,new RegExp(`(^|[^:"'=])${HC.escapeRegexp(r)}`,"gi"))}return this.regexpCache.get(e)}possible(){return GC.prefixes()}prefixeds(e){if(e._autoprefixerPrefixeds){if(e._autoprefixerPrefixeds[this.name])return e._autoprefixerPrefixeds}else e._autoprefixerPrefixeds={};let r={};if(e.selector.includes(",")){let n=UC.comma(e.selector).filter(s=>s.includes(this.name));for(let s of this.possible())r[s]=n.map(a=>this.replace(a,s)).join(", ")}else for(let i of this.possible())r[i]=this.replace(e.selector,i);return e._autoprefixerPrefixeds[this.name]=r,e._autoprefixerPrefixeds}already(e,r,i){let n=e.parent.index(e)-1;for(;n>=0;){let s=e.parent.nodes[n];if(s.type!=="rule")return!1;let a=!1;for(let o in r[this.name]){let l=r[this.name][o];if(s.selector===l){if(i===o)return!0;a=!0;break}}if(!a)return!1;n-=1}return!1}replace(e,r){return e.replace(this.regexp(),`$1${this.prefixed(r)}`)}add(e,r){let i=this.prefixeds(e);if(this.already(e,i,r))return;let n=this.clone(e,{selector:i[this.name][r]});e.parent.insertBefore(e,n)}old(e){return new VC(this,e)}};Dw.exports=qw});var Lw=b((nz,Rw)=>{u();var YC=_r(),Iw=class extends YC{add(e,r){let i=r+e.name;if(e.parent.some(a=>a.name===i&&a.params===e.params))return;let s=this.clone(e,{name:i});return e.parent.insertBefore(e,s)}process(e){let r=this.parentPrefix(e);for(let i of this.prefixes)(!r||r===i)&&this.add(e,i)}};Rw.exports=Iw});var Bw=b((sz,Mw)=>{u();var QC=Or(),_u=class extends QC{prefixed(e){return 
e==="-webkit-"?":-webkit-full-screen":e==="-moz-"?":-moz-full-screen":`:${e}fullscreen`}};_u.names=[":fullscreen"];Mw.exports=_u});var Nw=b((az,Fw)=>{u();var JC=Or(),Tu=class extends JC{possible(){return super.possible().concat(["-moz- old","-ms- old"])}prefixed(e){return e==="-webkit-"?"::-webkit-input-placeholder":e==="-ms-"?"::-ms-input-placeholder":e==="-ms- old"?":-ms-input-placeholder":e==="-moz- old"?":-moz-placeholder":`::${e}placeholder`}};Tu.names=["::placeholder"];Fw.exports=Tu});var $w=b((oz,zw)=>{u();var XC=Or(),Ou=class extends XC{prefixed(e){return e==="-ms-"?":-ms-input-placeholder":`:${e}placeholder-shown`}};Ou.names=[":placeholder-shown"];zw.exports=Ou});var Uw=b((lz,jw)=>{u();var KC=Or(),ZC=_e(),Eu=class extends KC{constructor(e,r,i){super(e,r,i);this.prefixes&&(this.prefixes=ZC.uniq(this.prefixes.map(n=>"-webkit-")))}prefixed(e){return e==="-webkit-"?"::-webkit-file-upload-button":`::${e}file-selector-button`}};Eu.names=["::file-selector-button"];jw.exports=Eu});var Ce=b((uz,Vw)=>{u();Vw.exports=function(t){let e;return t==="-webkit- 2009"||t==="-moz-"?e=2009:t==="-ms-"?e=2012:t==="-webkit-"&&(e="final"),t==="-webkit- 2009"&&(t="-webkit-"),[e,t]}});var Yw=b((fz,Hw)=>{u();var Ww=De().list,Gw=Ce(),e4=j(),Er=class extends e4{prefixed(e,r){let i;return[i,r]=Gw(r),i===2009?r+"box-flex":super.prefixed(e,r)}normalize(){return"flex"}set(e,r){let i=Gw(r)[0];if(i===2009)return e.value=Ww.space(e.value)[0],e.value=Er.oldValues[e.value]||e.value,super.set(e,r);if(i===2012){let n=Ww.space(e.value);n.length===3&&n[2]==="0"&&(e.value=n.slice(0,2).concat("0px").join(" "))}return super.set(e,r)}};Er.names=["flex","box-flex"];Er.oldValues={auto:"1",none:"0"};Hw.exports=Er});var Xw=b((cz,Jw)=>{u();var Qw=Ce(),t4=j(),Au=class extends t4{prefixed(e,r){let i;return[i,r]=Qw(r),i===2009?r+"box-ordinal-group":i===2012?r+"flex-order":super.prefixed(e,r)}normalize(){return"order"}set(e,r){return Qw(r)[0]===2009&&/\d/.test(e.value)?(e.value=(parseInt(e.value)+1).toString(),super.set(e,r)):super.set(e,r)}};Au.names=["order","flex-order","box-ordinal-group"];Jw.exports=Au});var Zw=b((pz,Kw)=>{u();var r4=j(),Cu=class extends r4{check(e){let r=e.value;return!r.toLowerCase().includes("alpha(")&&!r.includes("DXImageTransform.Microsoft")&&!r.includes("data:image/svg+xml")}};Cu.names=["filter"];Kw.exports=Cu});var ty=b((dz,ey)=>{u();var i4=j(),Pu=class extends i4{insert(e,r,i,n){if(r!=="-ms-")return super.insert(e,r,i);let s=this.clone(e),a=e.prop.replace(/end$/,"start"),o=r+e.prop.replace(/end$/,"span");if(!e.parent.some(l=>l.prop===o)){if(s.prop=o,e.value.includes("span"))s.value=e.value.replace(/span\s/i,"");else{let l;if(e.parent.walkDecls(a,f=>{l=f}),l){let f=Number(e.value)-Number(l.value)+"";s.value=f}else e.warn(n,`Can not prefix ${e.prop} (${a} is not found)`)}e.cloneBefore(s)}}};Pu.names=["grid-row-end","grid-column-end"];ey.exports=Pu});var iy=b((hz,ry)=>{u();var n4=j(),qu=class extends n4{check(e){return!e.value.split(/\s+/).some(r=>{let i=r.toLowerCase();return i==="reverse"||i==="alternate-reverse"})}};qu.names=["animation","animation-direction"];ry.exports=qu});var sy=b((mz,ny)=>{u();var s4=Ce(),a4=j(),Du=class extends a4{insert(e,r,i){let n;if([n,r]=s4(r),n!==2009)return super.insert(e,r,i);let s=e.value.split(/\s+/).filter(p=>p!=="wrap"&&p!=="nowrap"&&"wrap-reverse");if(s.length===0||e.parent.some(p=>p.prop===r+"box-orient"||p.prop===r+"box-direction"))return;let o=s[0],l=o.includes("row")?"horizontal":"vertical",f=o.includes("reverse")?"reverse":"normal",c=this.clone(e);return 
c.prop=r+"box-orient",c.value=l,this.needCascade(e)&&(c.raws.before=this.calcBefore(i,e,r)),e.parent.insertBefore(e,c),c=this.clone(e),c.prop=r+"box-direction",c.value=f,this.needCascade(e)&&(c.raws.before=this.calcBefore(i,e,r)),e.parent.insertBefore(e,c)}};Du.names=["flex-flow","box-direction","box-orient"];ny.exports=Du});var oy=b((gz,ay)=>{u();var o4=Ce(),l4=j(),Iu=class extends l4{normalize(){return"flex"}prefixed(e,r){let i;return[i,r]=o4(r),i===2009?r+"box-flex":i===2012?r+"flex-positive":super.prefixed(e,r)}};Iu.names=["flex-grow","flex-positive"];ay.exports=Iu});var uy=b((wz,ly)=>{u();var u4=Ce(),f4=j(),Ru=class extends f4{set(e,r){if(u4(r)[0]!==2009)return super.set(e,r)}};Ru.names=["flex-wrap"];ly.exports=Ru});var cy=b((yz,fy)=>{u();var c4=j(),Ar=Nt(),Lu=class extends c4{insert(e,r,i,n){if(r!=="-ms-")return super.insert(e,r,i);let s=Ar.parse(e),[a,o]=Ar.translate(s,0,2),[l,f]=Ar.translate(s,1,3);[["grid-row",a],["grid-row-span",o],["grid-column",l],["grid-column-span",f]].forEach(([c,p])=>{Ar.insertDecl(e,c,p)}),Ar.warnTemplateSelectorNotFound(e,n),Ar.warnIfGridRowColumnExists(e,n)}};Lu.names=["grid-area"];fy.exports=Lu});var dy=b((vz,py)=>{u();var p4=j(),Vi=Nt(),Mu=class extends p4{insert(e,r,i){if(r!=="-ms-")return super.insert(e,r,i);if(e.parent.some(a=>a.prop==="-ms-grid-row-align"))return;let[[n,s]]=Vi.parse(e);s?(Vi.insertDecl(e,"grid-row-align",n),Vi.insertDecl(e,"grid-column-align",s)):(Vi.insertDecl(e,"grid-row-align",n),Vi.insertDecl(e,"grid-column-align",n))}};Mu.names=["place-self"];py.exports=Mu});var my=b((bz,hy)=>{u();var d4=j(),Bu=class extends d4{check(e){let r=e.value;return!r.includes("/")||r.includes("span")}normalize(e){return e.replace("-start","")}prefixed(e,r){let i=super.prefixed(e,r);return r==="-ms-"&&(i=i.replace("-start","")),i}};Bu.names=["grid-row-start","grid-column-start"];hy.exports=Bu});var yy=b((xz,wy)=>{u();var gy=Ce(),h4=j(),Cr=class extends h4{check(e){return e.parent&&!e.parent.some(r=>r.prop&&r.prop.startsWith("grid-"))}prefixed(e,r){let i;return[i,r]=gy(r),i===2012?r+"flex-item-align":super.prefixed(e,r)}normalize(){return"align-self"}set(e,r){let i=gy(r)[0];if(i===2012)return e.value=Cr.oldValues[e.value]||e.value,super.set(e,r);if(i==="final")return super.set(e,r)}};Cr.names=["align-self","flex-item-align"];Cr.oldValues={"flex-end":"end","flex-start":"start"};wy.exports=Cr});var by=b((kz,vy)=>{u();var m4=j(),g4=_e(),Fu=class extends m4{constructor(e,r,i){super(e,r,i);this.prefixes&&(this.prefixes=g4.uniq(this.prefixes.map(n=>n==="-ms-"?"-webkit-":n)))}};Fu.names=["appearance"];vy.exports=Fu});var Sy=b((Sz,ky)=>{u();var xy=Ce(),w4=j(),Nu=class extends w4{normalize(){return"flex-basis"}prefixed(e,r){let i;return[i,r]=xy(r),i===2012?r+"flex-preferred-size":super.prefixed(e,r)}set(e,r){let i;if([i,r]=xy(r),i===2012||i==="final")return super.set(e,r)}};Nu.names=["flex-basis","flex-preferred-size"];ky.exports=Nu});var Ty=b((_z,_y)=>{u();var y4=j(),zu=class extends y4{normalize(){return this.name.replace("box-image","border")}prefixed(e,r){let i=super.prefixed(e,r);return r==="-webkit-"&&(i=i.replace("border","box-image")),i}};zu.names=["mask-border","mask-border-source","mask-border-slice","mask-border-width","mask-border-outset","mask-border-repeat","mask-box-image","mask-box-image-source","mask-box-image-slice","mask-box-image-width","mask-box-image-outset","mask-box-image-repeat"];_y.exports=zu});var Ey=b((Tz,Oy)=>{u();var v4=j(),st=class extends v4{insert(e,r,i){let 
n=e.prop==="mask-composite",s;n?s=e.value.split(","):s=e.value.match(st.regexp)||[],s=s.map(f=>f.trim()).filter(f=>f);let a=s.length,o;if(a&&(o=this.clone(e),o.value=s.map(f=>st.oldValues[f]||f).join(", "),s.includes("intersect")&&(o.value+=", xor"),o.prop=r+"mask-composite"),n)return a?(this.needCascade(e)&&(o.raws.before=this.calcBefore(i,e,r)),e.parent.insertBefore(e,o)):void 0;let l=this.clone(e);return l.prop=r+l.prop,a&&(l.value=l.value.replace(st.regexp,"")),this.needCascade(e)&&(l.raws.before=this.calcBefore(i,e,r)),e.parent.insertBefore(e,l),a?(this.needCascade(e)&&(o.raws.before=this.calcBefore(i,e,r)),e.parent.insertBefore(e,o)):e}};st.names=["mask","mask-composite"];st.oldValues={add:"source-over",subtract:"source-out",intersect:"source-in",exclude:"xor"};st.regexp=new RegExp(`\\s+(${Object.keys(st.oldValues).join("|")})\\b(?!\\))\\s*(?=[,])`,"ig");Oy.exports=st});var Py=b((Oz,Cy)=>{u();var Ay=Ce(),b4=j(),Pr=class extends b4{prefixed(e,r){let i;return[i,r]=Ay(r),i===2009?r+"box-align":i===2012?r+"flex-align":super.prefixed(e,r)}normalize(){return"align-items"}set(e,r){let i=Ay(r)[0];return(i===2009||i===2012)&&(e.value=Pr.oldValues[e.value]||e.value),super.set(e,r)}};Pr.names=["align-items","flex-align","box-align"];Pr.oldValues={"flex-end":"end","flex-start":"start"};Cy.exports=Pr});var Dy=b((Ez,qy)=>{u();var x4=j(),$u=class extends x4{set(e,r){return r==="-ms-"&&e.value==="contain"&&(e.value="element"),super.set(e,r)}insert(e,r,i){if(!(e.value==="all"&&r==="-ms-"))return super.insert(e,r,i)}};$u.names=["user-select"];qy.exports=$u});var Ly=b((Az,Ry)=>{u();var Iy=Ce(),k4=j(),ju=class extends k4{normalize(){return"flex-shrink"}prefixed(e,r){let i;return[i,r]=Iy(r),i===2012?r+"flex-negative":super.prefixed(e,r)}set(e,r){let i;if([i,r]=Iy(r),i===2012||i==="final")return super.set(e,r)}};ju.names=["flex-shrink","flex-negative"];Ry.exports=ju});var By=b((Cz,My)=>{u();var S4=j(),Uu=class extends S4{prefixed(e,r){return`${r}column-${e}`}normalize(e){return e.includes("inside")?"break-inside":e.includes("before")?"break-before":"break-after"}set(e,r){return(e.prop==="break-inside"&&e.value==="avoid-column"||e.value==="avoid-page")&&(e.value="avoid"),super.set(e,r)}insert(e,r,i){if(e.prop!=="break-inside")return super.insert(e,r,i);if(!(/region/i.test(e.value)||/page/i.test(e.value)))return super.insert(e,r,i)}};Uu.names=["break-inside","page-break-inside","column-break-inside","break-before","page-break-before","column-break-before","break-after","page-break-after","column-break-after"];My.exports=Uu});var Ny=b((Pz,Fy)=>{u();var _4=j(),Vu=class extends _4{prefixed(e,r){return r+"print-color-adjust"}normalize(){return"color-adjust"}};Vu.names=["color-adjust","print-color-adjust"];Fy.exports=Vu});var $y=b((qz,zy)=>{u();var T4=j(),qr=class extends T4{insert(e,r,i){if(r==="-ms-"){let n=this.set(this.clone(e),r);this.needCascade(e)&&(n.raws.before=this.calcBefore(i,e,r));let s="ltr";return e.parent.nodes.forEach(a=>{a.prop==="direction"&&(a.value==="rtl"||a.value==="ltr")&&(s=a.value)}),n.value=qr.msValues[s][e.value]||e.value,e.parent.insertBefore(e,n)}return super.insert(e,r,i)}};qr.names=["writing-mode"];qr.msValues={ltr:{"horizontal-tb":"lr-tb","vertical-rl":"tb-rl","vertical-lr":"tb-lr"},rtl:{"horizontal-tb":"rl-tb","vertical-rl":"bt-rl","vertical-lr":"bt-lr"}};zy.exports=qr});var Uy=b((Dz,jy)=>{u();var O4=j(),Wu=class extends O4{set(e,r){return e.value=e.value.replace(/\s+fill(\s)/,"$1"),super.set(e,r)}};Wu.names=["border-image"];jy.exports=Wu});var Gy=b((Iz,Wy)=>{u();var 
Vy=Ce(),E4=j(),Dr=class extends E4{prefixed(e,r){let i;return[i,r]=Vy(r),i===2012?r+"flex-line-pack":super.prefixed(e,r)}normalize(){return"align-content"}set(e,r){let i=Vy(r)[0];if(i===2012)return e.value=Dr.oldValues[e.value]||e.value,super.set(e,r);if(i==="final")return super.set(e,r)}};Dr.names=["align-content","flex-line-pack"];Dr.oldValues={"flex-end":"end","flex-start":"start","space-between":"justify","space-around":"distribute"};Wy.exports=Dr});var Yy=b((Rz,Hy)=>{u();var A4=j(),Ne=class extends A4{prefixed(e,r){return r==="-moz-"?r+(Ne.toMozilla[e]||e):super.prefixed(e,r)}normalize(e){return Ne.toNormal[e]||e}};Ne.names=["border-radius"];Ne.toMozilla={};Ne.toNormal={};for(let t of["top","bottom"])for(let e of["left","right"]){let r=`border-${t}-${e}-radius`,i=`border-radius-${t}${e}`;Ne.names.push(r),Ne.names.push(i),Ne.toMozilla[r]=i,Ne.toNormal[i]=r}Hy.exports=Ne});var Jy=b((Lz,Qy)=>{u();var C4=j(),Gu=class extends C4{prefixed(e,r){return e.includes("-start")?r+e.replace("-block-start","-before"):r+e.replace("-block-end","-after")}normalize(e){return e.includes("-before")?e.replace("-before","-block-start"):e.replace("-after","-block-end")}};Gu.names=["border-block-start","border-block-end","margin-block-start","margin-block-end","padding-block-start","padding-block-end","border-before","border-after","margin-before","margin-after","padding-before","padding-after"];Qy.exports=Gu});var Ky=b((Mz,Xy)=>{u();var P4=j(),{parseTemplate:q4,warnMissedAreas:D4,getGridGap:I4,warnGridGap:R4,inheritGridGap:L4}=Nt(),Hu=class extends P4{insert(e,r,i,n){if(r!=="-ms-")return super.insert(e,r,i);if(e.parent.some(d=>d.prop==="-ms-grid-rows"))return;let s=I4(e),a=L4(e,s),{rows:o,columns:l,areas:f}=q4({decl:e,gap:a||s}),c=Object.keys(f).length>0,p=Boolean(o),m=Boolean(l);return R4({gap:s,hasColumns:m,decl:e,result:n}),D4(f,e,n),(p&&m||c)&&e.cloneBefore({prop:"-ms-grid-rows",value:o,raws:{}}),m&&e.cloneBefore({prop:"-ms-grid-columns",value:l,raws:{}}),e}};Hu.names=["grid-template"];Xy.exports=Hu});var ev=b((Bz,Zy)=>{u();var M4=j(),Yu=class extends M4{prefixed(e,r){return r+e.replace("-inline","")}normalize(e){return e.replace(/(margin|padding|border)-(start|end)/,"$1-inline-$2")}};Yu.names=["border-inline-start","border-inline-end","margin-inline-start","margin-inline-end","padding-inline-start","padding-inline-end","border-start","border-end","margin-start","margin-end","padding-start","padding-end"];Zy.exports=Yu});var rv=b((Fz,tv)=>{u();var B4=j(),Qu=class extends B4{check(e){return!e.value.includes("flex-")&&e.value!=="baseline"}prefixed(e,r){return r+"grid-row-align"}normalize(){return"align-self"}};Qu.names=["grid-row-align"];tv.exports=Qu});var nv=b((Nz,iv)=>{u();var F4=j(),Ir=class extends F4{keyframeParents(e){let{parent:r}=e;for(;r;){if(r.type==="atrule"&&r.name==="keyframes")return!0;({parent:r}=r)}return!1}contain3d(e){if(e.prop==="transform-origin")return!1;for(let r of Ir.functions3d)if(e.value.includes(`${r}(`))return!0;return!1}set(e,r){return e=super.set(e,r),r==="-ms-"&&(e.value=e.value.replace(/rotatez/gi,"rotate")),e}insert(e,r,i){if(r==="-ms-"){if(!this.contain3d(e)&&!this.keyframeParents(e))return super.insert(e,r,i)}else if(r==="-o-"){if(!this.contain3d(e))return super.insert(e,r,i)}else return super.insert(e,r,i)}};Ir.names=["transform","transform-origin"];Ir.functions3d=["matrix3d","translate3d","translateZ","scale3d","scaleZ","rotate3d","rotateX","rotateY","perspective"];iv.exports=Ir});var ov=b((zz,av)=>{u();var sv=Ce(),N4=j(),Ju=class extends 
N4{normalize(){return"flex-direction"}insert(e,r,i){let n;if([n,r]=sv(r),n!==2009)return super.insert(e,r,i);if(e.parent.some(c=>c.prop===r+"box-orient"||c.prop===r+"box-direction"))return;let a=e.value,o,l;a==="inherit"||a==="initial"||a==="unset"?(o=a,l=a):(o=a.includes("row")?"horizontal":"vertical",l=a.includes("reverse")?"reverse":"normal");let f=this.clone(e);return f.prop=r+"box-orient",f.value=o,this.needCascade(e)&&(f.raws.before=this.calcBefore(i,e,r)),e.parent.insertBefore(e,f),f=this.clone(e),f.prop=r+"box-direction",f.value=l,this.needCascade(e)&&(f.raws.before=this.calcBefore(i,e,r)),e.parent.insertBefore(e,f)}old(e,r){let i;return[i,r]=sv(r),i===2009?[r+"box-orient",r+"box-direction"]:super.old(e,r)}};Ju.names=["flex-direction","box-direction","box-orient"];av.exports=Ju});var uv=b(($z,lv)=>{u();var z4=j(),Xu=class extends z4{check(e){return e.value==="pixelated"}prefixed(e,r){return r==="-ms-"?"-ms-interpolation-mode":super.prefixed(e,r)}set(e,r){return r!=="-ms-"?super.set(e,r):(e.prop="-ms-interpolation-mode",e.value="nearest-neighbor",e)}normalize(){return"image-rendering"}process(e,r){return super.process(e,r)}};Xu.names=["image-rendering","interpolation-mode"];lv.exports=Xu});var cv=b((jz,fv)=>{u();var $4=j(),j4=_e(),Ku=class extends $4{constructor(e,r,i){super(e,r,i);this.prefixes&&(this.prefixes=j4.uniq(this.prefixes.map(n=>n==="-ms-"?"-webkit-":n)))}};Ku.names=["backdrop-filter"];fv.exports=Ku});var dv=b((Uz,pv)=>{u();var U4=j(),V4=_e(),Zu=class extends U4{constructor(e,r,i){super(e,r,i);this.prefixes&&(this.prefixes=V4.uniq(this.prefixes.map(n=>n==="-ms-"?"-webkit-":n)))}check(e){return e.value.toLowerCase()==="text"}};Zu.names=["background-clip"];pv.exports=Zu});var mv=b((Vz,hv)=>{u();var W4=j(),G4=["none","underline","overline","line-through","blink","inherit","initial","unset"],ef=class extends W4{check(e){return e.value.split(/\s+/).some(r=>!G4.includes(r))}};ef.names=["text-decoration"];hv.exports=ef});var yv=b((Wz,wv)=>{u();var gv=Ce(),H4=j(),Rr=class extends H4{prefixed(e,r){let i;return[i,r]=gv(r),i===2009?r+"box-pack":i===2012?r+"flex-pack":super.prefixed(e,r)}normalize(){return"justify-content"}set(e,r){let i=gv(r)[0];if(i===2009||i===2012){let n=Rr.oldValues[e.value]||e.value;if(e.value=n,i!==2009||n!=="distribute")return super.set(e,r)}else if(i==="final")return super.set(e,r)}};Rr.names=["justify-content","flex-pack","box-pack"];Rr.oldValues={"flex-end":"end","flex-start":"start","space-between":"justify","space-around":"distribute"};wv.exports=Rr});var bv=b((Gz,vv)=>{u();var Y4=j(),tf=class extends Y4{set(e,r){let i=e.value.toLowerCase();return r==="-webkit-"&&!i.includes(" ")&&i!=="contain"&&i!=="cover"&&(e.value=e.value+" "+e.value),super.set(e,r)}};tf.names=["background-size"];vv.exports=tf});var kv=b((Hz,xv)=>{u();var Q4=j(),rf=Nt(),nf=class extends Q4{insert(e,r,i){if(r!=="-ms-")return super.insert(e,r,i);let n=rf.parse(e),[s,a]=rf.translate(n,0,1);n[0]&&n[0].includes("span")&&(a=n[0].join("").replace(/\D/g,"")),[[e.prop,s],[`${e.prop}-span`,a]].forEach(([l,f])=>{rf.insertDecl(e,l,f)})}};nf.names=["grid-row","grid-column"];xv.exports=nf});var Tv=b((Yz,_v)=>{u();var J4=j(),{prefixTrackProp:Sv,prefixTrackValue:X4,autoplaceGridItems:K4,getGridGap:Z4,inheritGridGap:eP}=Nt(),tP=ku(),sf=class extends J4{prefixed(e,r){return r==="-ms-"?Sv({prop:e,prefix:r}):super.prefixed(e,r)}normalize(e){return e.replace(/^grid-(rows|columns)/,"grid-template-$1")}insert(e,r,i,n){if(r!=="-ms-")return 
super.insert(e,r,i);let{parent:s,prop:a,value:o}=e,l=a.includes("rows"),f=a.includes("columns"),c=s.some(S=>S.prop==="grid-template"||S.prop==="grid-template-areas");if(c&&l)return!1;let p=new tP({options:{}}),m=p.gridStatus(s,n),d=Z4(e);d=eP(e,d)||d;let v=l?d.row:d.column;(m==="no-autoplace"||m===!0)&&!c&&(v=null);let _=X4({value:o,gap:v});e.cloneBefore({prop:Sv({prop:a,prefix:r}),value:_});let x=s.nodes.find(S=>S.prop==="grid-auto-flow"),y="row";if(x&&!p.disabled(x,n)&&(y=x.value.trim()),m==="autoplace"){let S=s.nodes.find(O=>O.prop==="grid-template-rows");if(!S&&c)return;if(!S&&!c){e.warn(n,"Autoplacement does not work without grid-template-rows property");return}!s.nodes.find(O=>O.prop==="grid-template-columns")&&!c&&e.warn(n,"Autoplacement does not work without grid-template-columns property"),f&&!c&&K4(e,n,d,y)}}};sf.names=["grid-template-rows","grid-template-columns","grid-rows","grid-columns"];_v.exports=sf});var Ev=b((Qz,Ov)=>{u();var rP=j(),af=class extends rP{check(e){return!e.value.includes("flex-")&&e.value!=="baseline"}prefixed(e,r){return r+"grid-column-align"}normalize(){return"justify-self"}};af.names=["grid-column-align"];Ov.exports=af});var Cv=b((Jz,Av)=>{u();var iP=j(),of=class extends iP{prefixed(e,r){return r+"scroll-chaining"}normalize(){return"overscroll-behavior"}set(e,r){return e.value==="auto"?e.value="chained":(e.value==="none"||e.value==="contain")&&(e.value="none"),super.set(e,r)}};of.names=["overscroll-behavior","scroll-chaining"];Av.exports=of});var Dv=b((Xz,qv)=>{u();var nP=j(),{parseGridAreas:sP,warnMissedAreas:aP,prefixTrackProp:oP,prefixTrackValue:Pv,getGridGap:lP,warnGridGap:uP,inheritGridGap:fP}=Nt();function cP(t){return t.trim().slice(1,-1).split(/["']\s*["']?/g)}var lf=class extends nP{insert(e,r,i,n){if(r!=="-ms-")return super.insert(e,r,i);let s=!1,a=!1,o=e.parent,l=lP(e);l=fP(e,l)||l,o.walkDecls(/-ms-grid-rows/,p=>p.remove()),o.walkDecls(/grid-template-(rows|columns)/,p=>{if(p.prop==="grid-template-rows"){a=!0;let{prop:m,value:d}=p;p.cloneBefore({prop:oP({prop:m,prefix:r}),value:Pv({value:d,gap:l.row})})}else s=!0});let f=cP(e.value);s&&!a&&l.row&&f.length>1&&e.cloneBefore({prop:"-ms-grid-rows",value:Pv({value:`repeat(${f.length}, auto)`,gap:l.row}),raws:{}}),uP({gap:l,hasColumns:s,decl:e,result:n});let c=sP({rows:f,gap:l});return aP(c,e,n),e}};lf.names=["grid-template-areas"];qv.exports=lf});var Rv=b((Kz,Iv)=>{u();var pP=j(),uf=class extends pP{set(e,r){return r==="-webkit-"&&(e.value=e.value.replace(/\s*(right|left)\s*/i,"")),super.set(e,r)}};uf.names=["text-emphasis-position"];Iv.exports=uf});var Mv=b((Zz,Lv)=>{u();var dP=j(),ff=class extends dP{set(e,r){return e.prop==="text-decoration-skip-ink"&&e.value==="auto"?(e.prop=r+"text-decoration-skip",e.value="ink",e):super.set(e,r)}};ff.names=["text-decoration-skip-ink","text-decoration-skip"];Lv.exports=ff});var jv=b((e$,$v)=>{u();"use strict";$v.exports={wrap:Bv,limit:Fv,validate:Nv,test:cf,curry:hP,name:zv};function Bv(t,e,r){var i=e-t;return((r-t)%i+i)%i+t}function Fv(t,e,r){return Math.max(t,Math.min(e,r))}function Nv(t,e,r,i,n){if(!cf(t,e,r,i,n))throw new Error(r+" is outside of range ["+t+","+e+")");return r}function cf(t,e,r,i,n){return!(re||n&&r===e||i&&r===t)}function zv(t,e,r,i){return(r?"(":"[")+t+","+e+(i?")":"]")}function hP(t,e,r,i){var n=zv.bind(null,t,e,r,i);return{wrap:Bv.bind(null,t,e),limit:Fv.bind(null,t,e),validate:function(s){return Nv(t,e,s,r,i)},test:function(s){return cf(t,e,s,r,i)},toString:n,name:n}}});var Wv=b((t$,Vv)=>{u();var 
pf=$i(),mP=jv(),gP=Tr(),wP=Fe(),yP=_e(),Uv=/top|left|right|bottom/gi,gt=class extends wP{replace(e,r){let i=pf(e);for(let n of i.nodes)if(n.type==="function"&&n.value===this.name)if(n.nodes=this.newDirection(n.nodes),n.nodes=this.normalize(n.nodes),r==="-webkit- old"){if(!this.oldWebkit(n))return!1}else n.nodes=this.convertDirection(n.nodes),n.value=r+n.value;return i.toString()}replaceFirst(e,...r){return r.map(n=>n===" "?{type:"space",value:n}:{type:"word",value:n}).concat(e.slice(1))}normalizeUnit(e,r){return`${parseFloat(e)/r*360}deg`}normalize(e){if(!e[0])return e;if(/-?\d+(.\d+)?grad/.test(e[0].value))e[0].value=this.normalizeUnit(e[0].value,400);else if(/-?\d+(.\d+)?rad/.test(e[0].value))e[0].value=this.normalizeUnit(e[0].value,2*Math.PI);else if(/-?\d+(.\d+)?turn/.test(e[0].value))e[0].value=this.normalizeUnit(e[0].value,1);else if(e[0].value.includes("deg")){let r=parseFloat(e[0].value);r=mP.wrap(0,360,r),e[0].value=`${r}deg`}return e[0].value==="0deg"?e=this.replaceFirst(e,"to"," ","top"):e[0].value==="90deg"?e=this.replaceFirst(e,"to"," ","right"):e[0].value==="180deg"?e=this.replaceFirst(e,"to"," ","bottom"):e[0].value==="270deg"&&(e=this.replaceFirst(e,"to"," ","left")),e}newDirection(e){if(e[0].value==="to"||(Uv.lastIndex=0,!Uv.test(e[0].value)))return e;e.unshift({type:"word",value:"to"},{type:"space",value:" "});for(let r=2;r0&&(e[0].value==="to"?this.fixDirection(e):e[0].value.includes("deg")?this.fixAngle(e):this.isRadial(e)&&this.fixRadial(e)),e}fixDirection(e){e.splice(0,2);for(let r of e){if(r.type==="div")break;r.type==="word"&&(r.value=this.revertDirection(r.value))}}fixAngle(e){let r=e[0].value;r=parseFloat(r),r=Math.abs(450-r)%360,r=this.roundFloat(r,3),e[0].value=`${r}deg`}fixRadial(e){let r=[],i=[],n,s,a,o,l;for(o=0;o{u();var vP=Tr(),bP=Fe();function Gv(t){return new RegExp(`(^|[\\s,(])(${t}($|[\\s),]))`,"gi")}var df=class extends bP{regexp(){return this.regexpCache||(this.regexpCache=Gv(this.name)),this.regexpCache}isStretch(){return this.name==="stretch"||this.name==="fill"||this.name==="fill-available"}replace(e,r){return r==="-moz-"&&this.isStretch()?e.replace(this.regexp(),"$1-moz-available$3"):r==="-webkit-"&&this.isStretch()?e.replace(this.regexp(),"$1-webkit-fill-available$3"):super.replace(e,r)}old(e){let r=e+this.name;return this.isStretch()&&(e==="-moz-"?r="-moz-available":e==="-webkit-"&&(r="-webkit-fill-available")),new vP(this.name,r,r,Gv(r))}add(e,r){if(!(e.prop.includes("grid")&&r!=="-webkit-"))return super.add(e,r)}};df.names=["max-content","min-content","fit-content","fill","fill-available","stretch"];Hv.exports=df});var Xv=b((i$,Jv)=>{u();var Qv=Tr(),xP=Fe(),hf=class extends xP{replace(e,r){return r==="-webkit-"?e.replace(this.regexp(),"$1-webkit-optimize-contrast"):r==="-moz-"?e.replace(this.regexp(),"$1-moz-crisp-edges"):super.replace(e,r)}old(e){return e==="-webkit-"?new Qv(this.name,"-webkit-optimize-contrast"):e==="-moz-"?new Qv(this.name,"-moz-crisp-edges"):super.old(e)}};hf.names=["pixelated"];Jv.exports=hf});var Zv=b((n$,Kv)=>{u();var kP=Fe(),mf=class extends kP{replace(e,r){let i=super.replace(e,r);return r==="-webkit-"&&(i=i.replace(/("[^"]+"|'[^']+')(\s+\d+\w)/gi,"url($1)$2")),i}};mf.names=["image-set"];Kv.exports=mf});var tb=b((s$,eb)=>{u();var SP=De().list,_P=Fe(),gf=class extends _P{replace(e,r){return SP.space(e).map(i=>{if(i.slice(0,+this.name.length+1)!==this.name+"(")return i;let n=i.lastIndexOf(")"),s=i.slice(n+1),a=i.slice(this.name.length+1,n);if(r==="-webkit-"){let 
o=a.match(/\d*.?\d+%?/);o?(a=a.slice(o[0].length).trim(),a+=`, ${o[0]}`):a+=", 0.5"}return r+this.name+"("+a+")"+s}).join(" ")}};gf.names=["cross-fade"];eb.exports=gf});var ib=b((a$,rb)=>{u();var TP=Ce(),OP=Tr(),EP=Fe(),wf=class extends EP{constructor(e,r){super(e,r);e==="display-flex"&&(this.name="flex")}check(e){return e.prop==="display"&&e.value===this.name}prefixed(e){let r,i;return[r,e]=TP(e),r===2009?this.name==="flex"?i="box":i="inline-box":r===2012?this.name==="flex"?i="flexbox":i="inline-flexbox":r==="final"&&(i=this.name),e+i}replace(e,r){return this.prefixed(r)}old(e){let r=this.prefixed(e);if(!!r)return new OP(this.name,r)}};wf.names=["display-flex","inline-flex"];rb.exports=wf});var sb=b((o$,nb)=>{u();var AP=Fe(),yf=class extends AP{constructor(e,r){super(e,r);e==="display-grid"&&(this.name="grid")}check(e){return e.prop==="display"&&e.value===this.name}};yf.names=["display-grid","inline-grid"];nb.exports=yf});var ob=b((l$,ab)=>{u();var CP=Fe(),vf=class extends CP{constructor(e,r){super(e,r);e==="filter-function"&&(this.name="filter")}};vf.names=["filter","filter-function"];ab.exports=vf});var cb=b((u$,fb)=>{u();var lb=Ui(),U=j(),ub=iw(),PP=lw(),qP=ku(),DP=Ew(),bf=Ft(),Lr=Or(),IP=Lw(),at=Fe(),Mr=_e(),RP=Bw(),LP=Nw(),MP=$w(),BP=Uw(),FP=Yw(),NP=Xw(),zP=Zw(),$P=ty(),jP=iy(),UP=sy(),VP=oy(),WP=uy(),GP=cy(),HP=dy(),YP=my(),QP=yy(),JP=by(),XP=Sy(),KP=Ty(),ZP=Ey(),e5=Py(),t5=Dy(),r5=Ly(),i5=By(),n5=Ny(),s5=$y(),a5=Uy(),o5=Gy(),l5=Yy(),u5=Jy(),f5=Ky(),c5=ev(),p5=rv(),d5=nv(),h5=ov(),m5=uv(),g5=cv(),w5=dv(),y5=mv(),v5=yv(),b5=bv(),x5=kv(),k5=Tv(),S5=Ev(),_5=Cv(),T5=Dv(),O5=Rv(),E5=Mv(),A5=Wv(),C5=Yv(),P5=Xv(),q5=Zv(),D5=tb(),I5=ib(),R5=sb(),L5=ob();Lr.hack(RP);Lr.hack(LP);Lr.hack(MP);Lr.hack(BP);U.hack(FP);U.hack(NP);U.hack(zP);U.hack($P);U.hack(jP);U.hack(UP);U.hack(VP);U.hack(WP);U.hack(GP);U.hack(HP);U.hack(YP);U.hack(QP);U.hack(JP);U.hack(XP);U.hack(KP);U.hack(ZP);U.hack(e5);U.hack(t5);U.hack(r5);U.hack(i5);U.hack(n5);U.hack(s5);U.hack(a5);U.hack(o5);U.hack(l5);U.hack(u5);U.hack(f5);U.hack(c5);U.hack(p5);U.hack(d5);U.hack(h5);U.hack(m5);U.hack(g5);U.hack(w5);U.hack(y5);U.hack(v5);U.hack(b5);U.hack(x5);U.hack(k5);U.hack(S5);U.hack(_5);U.hack(T5);U.hack(O5);U.hack(E5);at.hack(A5);at.hack(C5);at.hack(P5);at.hack(q5);at.hack(D5);at.hack(I5);at.hack(R5);at.hack(L5);var xf=new Map,Wi=class{constructor(e,r,i={}){this.data=e,this.browsers=r,this.options=i,[this.add,this.remove]=this.preprocess(this.select(this.data)),this.transition=new PP(this),this.processor=new qP(this)}cleaner(){if(this.cleanerCache)return this.cleanerCache;if(this.browsers.selected.length){let e=new bf(this.browsers.data,[]);this.cleanerCache=new Wi(this.data,e,this.options)}else return this;return this.cleanerCache}select(e){let r={add:{},remove:{}};for(let i in e){let n=e[i],s=n.browsers.map(l=>{let f=l.split(" ");return{browser:`${f[0]} ${f[1]}`,note:f[2]}}),a=s.filter(l=>l.note).map(l=>`${this.browsers.prefix(l.browser)} ${l.note}`);a=Mr.uniq(a),s=s.filter(l=>this.browsers.isSelected(l.browser)).map(l=>{let f=this.browsers.prefix(l.browser);return l.note?`${f} ${l.note}`:f}),s=this.sort(Mr.uniq(s)),this.options.flexbox==="no-2009"&&(s=s.filter(l=>!l.includes("2009")));let o=n.browsers.map(l=>this.browsers.prefix(l));n.mistakes&&(o=o.concat(n.mistakes)),o=o.concat(a),o=Mr.uniq(o),s.length?(r.add[i]=s,s.length!s.includes(l)))):r.remove[i]=o}return r}sort(e){return e.sort((r,i)=>{let n=Mr.removeNote(r).length,s=Mr.removeNote(i).length;return n===s?i.length-r.length:s-n})}preprocess(e){let 
r={selectors:[],"@supports":new DP(Wi,this)};for(let n in e.add){let s=e.add[n];if(n==="@keyframes"||n==="@viewport")r[n]=new IP(n,s,this);else if(n==="@resolution")r[n]=new ub(n,s,this);else if(this.data[n].selector)r.selectors.push(Lr.load(n,s,this));else{let a=this.data[n].props;if(a){let o=at.load(n,s,this);for(let l of a)r[l]||(r[l]={values:[]}),r[l].values.push(o)}else{let o=r[n]&&r[n].values||[];r[n]=U.load(n,s,this),r[n].values=o}}}let i={selectors:[]};for(let n in e.remove){let s=e.remove[n];if(this.data[n].selector){let a=Lr.load(n,s);for(let o of s)i.selectors.push(a.old(o))}else if(n==="@keyframes"||n==="@viewport")for(let a of s){let o=`@${a}${n.slice(1)}`;i[o]={remove:!0}}else if(n==="@resolution")i[n]=new ub(n,s,this);else{let a=this.data[n].props;if(a){let o=at.load(n,[],this);for(let l of s){let f=o.old(l);if(f)for(let c of a)i[c]||(i[c]={}),i[c].values||(i[c].values=[]),i[c].values.push(f)}}else for(let o of s){let l=this.decl(n).old(n,o);if(n==="align-self"){let f=r[n]&&r[n].prefixes;if(f){if(o==="-webkit- 2009"&&f.includes("-webkit-"))continue;if(o==="-webkit-"&&f.includes("-webkit- 2009"))continue}}for(let f of l)i[f]||(i[f]={}),i[f].remove=!0}}}return[r,i]}decl(e){return xf.has(e)||xf.set(e,U.load(e)),xf.get(e)}unprefixed(e){let r=this.normalize(lb.unprefixed(e));return r==="flex-direction"&&(r="flex-flow"),r}normalize(e){return this.decl(e).normalize(e)}prefixed(e,r){return e=lb.unprefixed(e),this.decl(e).prefixed(e,r)}values(e,r){let i=this[e],n=i["*"]&&i["*"].values,s=i[r]&&i[r].values;return n&&s?Mr.uniq(n.concat(s)):n||s||[]}group(e){let r=e.parent,i=r.index(e),{length:n}=r.nodes,s=this.unprefixed(e.prop),a=(o,l)=>{for(i+=o;i>=0&&i{u();pb.exports={"backface-visibility":{mistakes:["-ms-","-o-"],feature:"transforms3d",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1"]},"backdrop-filter":{feature:"css-backdrop-filter",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1"]},element:{props:["background","background-image","border-image","mask","list-style","list-style-image","content","mask-image"],feature:"css-element-function",browsers:["firefox 89"]},"user-select":{mistakes:["-khtml-"],feature:"user-select-none",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1"]},"background-clip":{feature:"background-clip-text",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},hyphens:{feature:"css-hyphens",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1"]},":fullscreen":{selector:!0,feature:"fullscreen",browsers:["and_chr 92","and_uc 12.12","safari 14.1"]},"::backdrop":{selector:!0,feature:"fullscreen",browsers:["and_chr 92","and_uc 12.12","safari 14.1"]},"::file-selector-button":{selector:!0,feature:"fullscreen",browsers:["safari 14.1"]},"tab-size":{feature:"css3-tabsize",browsers:["firefox 89"]},fill:{props:["width","min-width","max-width","height","min-height","max-height","inline-size","min-inline-size","max-inline-size","block-size","min-block-size","max-block-size","grid","grid-template","grid-template-rows","grid-template-columns","grid-auto-columns","grid-auto-rows"],feature:"intrinsic-width",browsers:["and_chr 92","chrome 91","chrome 92","edge 91","samsung 
14.0"]},"fill-available":{props:["width","min-width","max-width","height","min-height","max-height","inline-size","min-inline-size","max-inline-size","block-size","min-block-size","max-block-size","grid","grid-template","grid-template-rows","grid-template-columns","grid-auto-columns","grid-auto-rows"],feature:"intrinsic-width",browsers:["and_chr 92","chrome 91","chrome 92","edge 91","samsung 14.0"]},stretch:{props:["width","min-width","max-width","height","min-height","max-height","inline-size","min-inline-size","max-inline-size","block-size","min-block-size","max-block-size","grid","grid-template","grid-template-rows","grid-template-columns","grid-auto-columns","grid-auto-rows"],feature:"intrinsic-width",browsers:["firefox 89"]},"fit-content":{props:["width","min-width","max-width","height","min-height","max-height","inline-size","min-inline-size","max-inline-size","block-size","min-block-size","max-block-size","grid","grid-template","grid-template-rows","grid-template-columns","grid-auto-columns","grid-auto-rows"],feature:"intrinsic-width",browsers:["firefox 89"]},"text-decoration-style":{feature:"text-decoration",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7"]},"text-decoration-color":{feature:"text-decoration",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7"]},"text-decoration-line":{feature:"text-decoration",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7"]},"text-decoration":{feature:"text-decoration",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7"]},"text-decoration-skip":{feature:"text-decoration",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7"]},"text-decoration-skip-ink":{feature:"text-decoration",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7"]},"text-size-adjust":{feature:"text-size-adjust",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7"]},"mask-clip":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-composite":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-image":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-origin":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-repeat":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-border-repeat":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-border-source":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},mask:{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-position":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-size":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 
14.5-14.7","safari 14.1","samsung 14.0"]},"mask-border":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-border-outset":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-border-width":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"mask-border-slice":{feature:"css-masks",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"clip-path":{feature:"css-clip-path",browsers:["and_uc 12.12","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"box-decoration-break":{feature:"css-boxdecorationbreak",browsers:["and_chr 92","chrome 91","chrome 92","edge 91","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"@resolution":{feature:"css-media-resolution",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1"]},"border-inline-start":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"border-inline-end":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"margin-inline-start":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"margin-inline-end":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"padding-inline-start":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"padding-inline-end":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"border-block-start":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"border-block-end":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"margin-block-start":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"margin-block-end":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"padding-block-start":{feature:"css-logical-props",browsers:["and_uc 12.12"]},"padding-block-end":{feature:"css-logical-props",browsers:["and_uc 12.12"]},appearance:{feature:"css-appearance",browsers:["and_uc 12.12","ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 14.1","samsung 14.0"]},"image-set":{props:["background","background-image","border-image","cursor","mask","mask-image","list-style","list-style-image","content"],feature:"css-image-set",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","samsung 14.0"]},"cross-fade":{props:["background","background-image","border-image","mask","list-style","list-style-image","content","mask-image"],feature:"css-cross-fade",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","samsung 14.0"]},"text-emphasis":{feature:"text-emphasis",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","samsung 14.0"]},"text-emphasis-position":{feature:"text-emphasis",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","samsung 14.0"]},"text-emphasis-style":{feature:"text-emphasis",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","samsung 14.0"]},"text-emphasis-color":{feature:"text-emphasis",browsers:["and_chr 92","and_uc 12.12","chrome 91","chrome 92","edge 91","samsung 14.0"]},":any-link":{selector:!0,feature:"css-any-link",browsers:["and_uc 12.12"]},isolate:{props:["unicode-bidi"],feature:"css-unicode-bidi",browsers:["ios_saf 14.0-14.4","ios_saf 14.5-14.7","safari 
14.1"]},"color-adjust":{feature:"css-color-adjust",browsers:["chrome 91","chrome 92","edge 91","safari 14.1"]}}});var mb=b((c$,hb)=>{u();hb.exports={}});var vb=b((p$,yb)=>{u();var M5=wu(),{agents:B5}=(ea(),Zs),kf=U0(),F5=Ft(),N5=cb(),z5=db(),$5=mb(),gb={browsers:B5,prefixes:z5},wb=` - Replace Autoprefixer \`browsers\` option to Browserslist config. - Use \`browserslist\` key in \`package.json\` or \`.browserslistrc\` file. - - Using \`browsers\` option can cause errors. Browserslist config can - be used for Babel, Autoprefixer, postcss-normalize and other tools. - - If you really need to use option, rename it to \`overrideBrowserslist\`. - - Learn more at: - https://github.com/browserslist/browserslist#readme - https://twitter.com/browserslist - -`;function j5(t){return Object.prototype.toString.apply(t)==="[object Object]"}var Sf=new Map;function U5(t,e){e.browsers.selected.length!==0&&(e.add.selectors.length>0||Object.keys(e.add).length>2||t.warn(`Autoprefixer target browsers do not need any prefixes.You do not need Autoprefixer anymore. -Check your Browserslist config to be sure that your targets are set up correctly. - - Learn more at: - https://github.com/postcss/autoprefixer#readme - https://github.com/browserslist/browserslist#readme - -`))}yb.exports=Br;function Br(...t){let e;if(t.length===1&&j5(t[0])?(e=t[0],t=void 0):t.length===0||t.length===1&&!t[0]?t=void 0:t.length<=2&&(Array.isArray(t[0])||!t[0])?(e=t[1],t=t[0]):typeof t[t.length-1]=="object"&&(e=t.pop()),e||(e={}),e.browser)throw new Error("Change `browser` option to `overrideBrowserslist` in Autoprefixer");if(e.browserslist)throw new Error("Change `browserslist` option to `overrideBrowserslist` in Autoprefixer");e.overrideBrowserslist?t=e.overrideBrowserslist:e.browsers&&(typeof console!="undefined"&&console.warn&&(kf.red?console.warn(kf.red(wb.replace(/`[^`]+`/g,n=>kf.yellow(n.slice(1,-1))))):console.warn(wb)),t=e.browsers);let r={ignoreUnknownVersions:e.ignoreUnknownVersions,stats:e.stats,env:e.env};function i(n){let s=gb,a=new F5(s.browsers,t,n,r),o=a.selected.join(", ")+JSON.stringify(e);return Sf.has(o)||Sf.set(o,new N5(s.prefixes,a,e)),Sf.get(o)}return{postcssPlugin:"autoprefixer",prepare(n){let s=i({from:n.opts.from,env:e.env});return{OnceExit(a){U5(n,s),e.remove!==!1&&s.processor.remove(a,n),e.add!==!1&&s.processor.add(a,n)}}},info(n){return n=n||{},n.from=n.from||g.cwd(),$5(i(n))},options:e,browsers:t}}Br.postcss=!0;Br.data=gb;Br.defaults=M5.defaults;Br.info=()=>Br().info()});var 
xb=b((d$,bb)=>{u();bb.exports={aqua:/#00ffff(ff)?(?!\w)|#0ff(f)?(?!\w)/gi,azure:/#f0ffff(ff)?(?!\w)/gi,beige:/#f5f5dc(ff)?(?!\w)/gi,bisque:/#ffe4c4(ff)?(?!\w)/gi,black:/#000000(ff)?(?!\w)|#000(f)?(?!\w)/gi,blue:/#0000ff(ff)?(?!\w)|#00f(f)?(?!\w)/gi,brown:/#a52a2a(ff)?(?!\w)/gi,coral:/#ff7f50(ff)?(?!\w)/gi,cornsilk:/#fff8dc(ff)?(?!\w)/gi,crimson:/#dc143c(ff)?(?!\w)/gi,cyan:/#00ffff(ff)?(?!\w)|#0ff(f)?(?!\w)/gi,darkblue:/#00008b(ff)?(?!\w)/gi,darkcyan:/#008b8b(ff)?(?!\w)/gi,darkgrey:/#a9a9a9(ff)?(?!\w)/gi,darkred:/#8b0000(ff)?(?!\w)/gi,deeppink:/#ff1493(ff)?(?!\w)/gi,dimgrey:/#696969(ff)?(?!\w)/gi,gold:/#ffd700(ff)?(?!\w)/gi,green:/#008000(ff)?(?!\w)/gi,grey:/#808080(ff)?(?!\w)/gi,honeydew:/#f0fff0(ff)?(?!\w)/gi,hotpink:/#ff69b4(ff)?(?!\w)/gi,indigo:/#4b0082(ff)?(?!\w)/gi,ivory:/#fffff0(ff)?(?!\w)/gi,khaki:/#f0e68c(ff)?(?!\w)/gi,lavender:/#e6e6fa(ff)?(?!\w)/gi,lime:/#00ff00(ff)?(?!\w)|#0f0(f)?(?!\w)/gi,linen:/#faf0e6(ff)?(?!\w)/gi,maroon:/#800000(ff)?(?!\w)/gi,moccasin:/#ffe4b5(ff)?(?!\w)/gi,navy:/#000080(ff)?(?!\w)/gi,oldlace:/#fdf5e6(ff)?(?!\w)/gi,olive:/#808000(ff)?(?!\w)/gi,orange:/#ffa500(ff)?(?!\w)/gi,orchid:/#da70d6(ff)?(?!\w)/gi,peru:/#cd853f(ff)?(?!\w)/gi,pink:/#ffc0cb(ff)?(?!\w)/gi,plum:/#dda0dd(ff)?(?!\w)/gi,purple:/#800080(ff)?(?!\w)/gi,red:/#ff0000(ff)?(?!\w)|#f00(f)?(?!\w)/gi,salmon:/#fa8072(ff)?(?!\w)/gi,seagreen:/#2e8b57(ff)?(?!\w)/gi,seashell:/#fff5ee(ff)?(?!\w)/gi,sienna:/#a0522d(ff)?(?!\w)/gi,silver:/#c0c0c0(ff)?(?!\w)/gi,skyblue:/#87ceeb(ff)?(?!\w)/gi,snow:/#fffafa(ff)?(?!\w)/gi,tan:/#d2b48c(ff)?(?!\w)/gi,teal:/#008080(ff)?(?!\w)/gi,thistle:/#d8bfd8(ff)?(?!\w)/gi,tomato:/#ff6347(ff)?(?!\w)/gi,violet:/#ee82ee(ff)?(?!\w)/gi,wheat:/#f5deb3(ff)?(?!\w)/gi,white:/#ffffff(ff)?(?!\w)|#fff(f)?(?!\w)/gi}});var Sb=b((h$,kb)=>{u();var _f=xb(),Tf={whitespace:/\s+/g,urlHexPairs:/%[\dA-F]{2}/g,quotes:/"/g};function V5(t){return t.trim().replace(Tf.whitespace," ")}function W5(t){return encodeURIComponent(t).replace(Tf.urlHexPairs,H5)}function G5(t){return Object.keys(_f).forEach(function(e){_f[e].test(t)&&(t=t.replace(_f[e],e))}),t}function H5(t){switch(t){case"%20":return" ";case"%3D":return"=";case"%3A":return":";case"%2F":return"/";default:return t.toLowerCase()}}function Of(t){if(typeof t!="string")throw new TypeError("Expected a string, but received "+typeof t);t.charCodeAt(0)===65279&&(t=t.slice(1));var e=G5(V5(t)).replace(Tf.quotes,"'");return"data:image/svg+xml,"+W5(e)}Of.toSrcset=function(e){return Of(e).replace(/ /g,"%20")};kb.exports=Of});var Ef={};Ve(Ef,{default:()=>Y5});var _b,Y5,Af=E(()=>{u();En();_b=he(Dn()),Y5=Ot(_b.default.theme)});var Cb=b((g$,Ab)=>{u();var ra=Sb(),Q5=(br(),vr).default,Tb=(Af(),Ef).default,zt=(Wr(),_n).default,[J5,{lineHeight:X5}]=Tb.fontSize.base,{spacing:wt,borderWidth:Ob,borderRadius:Eb}=Tb,K5=Q5.withOptions(function(t={strategy:void 0}){return function({addBase:e,addComponents:r,theme:i}){let n=t.strategy===void 0?["base","class"]:[t.strategy],s=[{base:["[type='text']","[type='email']","[type='url']","[type='password']","[type='number']","[type='date']","[type='datetime-local']","[type='month']","[type='search']","[type='tel']","[type='time']","[type='week']","[multiple]","textarea","select"],class:[".form-input",".form-textarea",".form-select",".form-multiselect"],styles:{appearance:"none","background-color":"#fff","border-color":i("colors.gray.500",zt.gray[500]),"border-width":Ob.DEFAULT,"border-radius":Eb.none,"padding-top":wt[2],"padding-right":wt[3],"padding-bottom":wt[2],"padding-left":wt[3],"font-size":J5,"line-height":X5,"--tw-shadow":"0 0 
#0000","&:focus":{outline:"2px solid transparent","outline-offset":"2px","--tw-ring-inset":"var(--tw-empty,/*!*/ /*!*/)","--tw-ring-offset-width":"0px","--tw-ring-offset-color":"#fff","--tw-ring-color":i("colors.blue.600",zt.blue[600]),"--tw-ring-offset-shadow":"var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color)","--tw-ring-shadow":"var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color)","box-shadow":"var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow)","border-color":i("colors.blue.600",zt.blue[600])}}},{base:["input::placeholder","textarea::placeholder"],class:[".form-input::placeholder",".form-textarea::placeholder"],styles:{color:i("colors.gray.500",zt.gray[500]),opacity:"1"}},{base:["::-webkit-datetime-edit-fields-wrapper"],class:[".form-input::-webkit-datetime-edit-fields-wrapper"],styles:{padding:"0"}},{base:["::-webkit-date-and-time-value"],class:[".form-input::-webkit-date-and-time-value"],styles:{"min-height":"1.5em"}},{base:["::-webkit-datetime-edit","::-webkit-datetime-edit-year-field","::-webkit-datetime-edit-month-field","::-webkit-datetime-edit-day-field","::-webkit-datetime-edit-hour-field","::-webkit-datetime-edit-minute-field","::-webkit-datetime-edit-second-field","::-webkit-datetime-edit-millisecond-field","::-webkit-datetime-edit-meridiem-field"],class:[".form-input::-webkit-datetime-edit",".form-input::-webkit-datetime-edit-year-field",".form-input::-webkit-datetime-edit-month-field",".form-input::-webkit-datetime-edit-day-field",".form-input::-webkit-datetime-edit-hour-field",".form-input::-webkit-datetime-edit-minute-field",".form-input::-webkit-datetime-edit-second-field",".form-input::-webkit-datetime-edit-millisecond-field",".form-input::-webkit-datetime-edit-meridiem-field"],styles:{"padding-top":0,"padding-bottom":0}},{base:["select"],class:[".form-select"],styles:{"background-image":`url("${ra(``)}")`,"background-position":`right ${wt[2]} center`,"background-repeat":"no-repeat","background-size":"1.5em 1.5em","padding-right":wt[10],"print-color-adjust":"exact"}},{base:["[multiple]"],class:null,styles:{"background-image":"initial","background-position":"initial","background-repeat":"unset","background-size":"initial","padding-right":wt[3],"print-color-adjust":"unset"}},{base:["[type='checkbox']","[type='radio']"],class:[".form-checkbox",".form-radio"],styles:{appearance:"none",padding:"0","print-color-adjust":"exact",display:"inline-block","vertical-align":"middle","background-origin":"border-box","user-select":"none","flex-shrink":"0",height:wt[4],width:wt[4],color:i("colors.blue.600",zt.blue[600]),"background-color":"#fff","border-color":i("colors.gray.500",zt.gray[500]),"border-width":Ob.DEFAULT,"--tw-shadow":"0 0 #0000"}},{base:["[type='checkbox']"],class:[".form-checkbox"],styles:{"border-radius":Eb.none}},{base:["[type='radio']"],class:[".form-radio"],styles:{"border-radius":"100%"}},{base:["[type='checkbox']:focus","[type='radio']:focus"],class:[".form-checkbox:focus",".form-radio:focus"],styles:{outline:"2px solid transparent","outline-offset":"2px","--tw-ring-inset":"var(--tw-empty,/*!*/ /*!*/)","--tw-ring-offset-width":"2px","--tw-ring-offset-color":"#fff","--tw-ring-color":i("colors.blue.600",zt.blue[600]),"--tw-ring-offset-shadow":"var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color)","--tw-ring-shadow":"var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color)","box-shadow":"var(--tw-ring-offset-shadow), 
var(--tw-ring-shadow), var(--tw-shadow)"}},{base:["[type='checkbox']:checked","[type='radio']:checked"],class:[".form-checkbox:checked",".form-radio:checked"],styles:{"border-color":"transparent","background-color":"currentColor","background-size":"100% 100%","background-position":"center","background-repeat":"no-repeat"}},{base:["[type='checkbox']:checked"],class:[".form-checkbox:checked"],styles:{"background-image":`url("${ra('')}")`}},{base:["[type='radio']:checked"],class:[".form-radio:checked"],styles:{"background-image":`url("${ra('')}")`}},{base:["[type='checkbox']:checked:hover","[type='checkbox']:checked:focus","[type='radio']:checked:hover","[type='radio']:checked:focus"],class:[".form-checkbox:checked:hover",".form-checkbox:checked:focus",".form-radio:checked:hover",".form-radio:checked:focus"],styles:{"border-color":"transparent","background-color":"currentColor"}},{base:["[type='checkbox']:indeterminate"],class:[".form-checkbox:indeterminate"],styles:{"background-image":`url("${ra('')}")`,"border-color":"transparent","background-color":"currentColor","background-size":"100% 100%","background-position":"center","background-repeat":"no-repeat"}},{base:["[type='checkbox']:indeterminate:hover","[type='checkbox']:indeterminate:focus"],class:[".form-checkbox:indeterminate:hover",".form-checkbox:indeterminate:focus"],styles:{"border-color":"transparent","background-color":"currentColor"}},{base:["[type='file']"],class:null,styles:{background:"unset","border-color":"inherit","border-width":"0","border-radius":"0",padding:"0","font-size":"unset","line-height":"inherit"}},{base:["[type='file']:focus"],class:null,styles:{outline:["1px solid ButtonText","1px auto -webkit-focus-ring-color"]}}],a=o=>s.map(l=>l[o]===null?null:{[l[o]]:l.styles}).filter(Boolean);n.includes("base")&&e(a("base")),n.includes("class")&&r(a("class"))}});Ab.exports=K5});var n1=b((Ji,zr)=>{u();var Z5=200,Pb="__lodash_hash_undefined__",eq=800,tq=16,qb=9007199254740991,Db="[object Arguments]",rq="[object Array]",iq="[object AsyncFunction]",nq="[object Boolean]",sq="[object Date]",aq="[object Error]",Ib="[object Function]",oq="[object GeneratorFunction]",lq="[object Map]",uq="[object Number]",fq="[object Null]",Rb="[object Object]",cq="[object Proxy]",pq="[object RegExp]",dq="[object Set]",hq="[object String]",mq="[object Undefined]",gq="[object WeakMap]",wq="[object ArrayBuffer]",yq="[object DataView]",vq="[object Float32Array]",bq="[object Float64Array]",xq="[object Int8Array]",kq="[object Int16Array]",Sq="[object Int32Array]",_q="[object Uint8Array]",Tq="[object Uint8ClampedArray]",Oq="[object Uint16Array]",Eq="[object Uint32Array]",Aq=/[\\^$.*+?()[\]{}|]/g,Cq=/^\[object .+?Constructor\]$/,Pq=/^(?:0|[1-9]\d*)$/,ne={};ne[vq]=ne[bq]=ne[xq]=ne[kq]=ne[Sq]=ne[_q]=ne[Tq]=ne[Oq]=ne[Eq]=!0;ne[Db]=ne[rq]=ne[wq]=ne[nq]=ne[yq]=ne[sq]=ne[aq]=ne[Ib]=ne[lq]=ne[uq]=ne[Rb]=ne[pq]=ne[dq]=ne[hq]=ne[gq]=!1;var Lb=typeof global=="object"&&global&&global.Object===Object&&global,qq=typeof self=="object"&&self&&self.Object===Object&&self,Gi=Lb||qq||Function("return this")(),Mb=typeof Ji=="object"&&Ji&&!Ji.nodeType&&Ji,Hi=Mb&&typeof zr=="object"&&zr&&!zr.nodeType&&zr,Bb=Hi&&Hi.exports===Mb,Cf=Bb&&Lb.process,Fb=function(){try{var t=Hi&&Hi.require&&Hi.require("util").types;return t||Cf&&Cf.binding&&Cf.binding("util")}catch(e){}}(),Nb=Fb&&Fb.isTypedArray;function Dq(t,e,r){switch(r.length){case 0:return t.call(e);case 1:return t.call(e,r[0]);case 2:return t.call(e,r[0],r[1]);case 3:return t.call(e,r[0],r[1],r[2])}return t.apply(e,r)}function 
Iq(t,e){for(var r=-1,i=Array(t);++r-1}function t3(t,e){var r=this.__data__,i=oa(r,t);return i<0?(++this.size,r.push([t,e])):r[i][1]=e,this}vt.prototype.clear=Xq;vt.prototype.delete=Kq;vt.prototype.get=Zq;vt.prototype.has=e3;vt.prototype.set=t3;function Fr(t){var e=-1,r=t==null?0:t.length;for(this.clear();++e1?r[n-1]:void 0,a=n>2?r[2]:void 0;for(s=t.length>3&&typeof s=="function"?(n--,s):void 0,a&&P3(r[0],r[1],a)&&(s=n<3?void 0:s,n=1),e=Object(e);++i-1&&t%1==0&&t0){if(++e>=eq)return arguments[0]}else e=0;return t.apply(void 0,arguments)}}function F3(t){if(t!=null){try{return na.call(t)}catch(e){}try{return t+""}catch(e){}}return""}function fa(t,e){return t===e||t!==t&&e!==e}var Lf=Qb(function(){return arguments}())?Qb:function(t){return Qi(t)&&yt.call(t,"callee")&&!$q.call(t,"callee")},Mf=Array.isArray;function Bf(t){return t!=null&&e1(t.length)&&!Ff(t)}function N3(t){return Qi(t)&&Bf(t)}var Zb=Uq||V3;function Ff(t){if(!Jt(t))return!1;var e=la(t);return e==Ib||e==oq||e==iq||e==cq}function e1(t){return typeof t=="number"&&t>-1&&t%1==0&&t<=qb}function Jt(t){var e=typeof t;return t!=null&&(e=="object"||e=="function")}function Qi(t){return t!=null&&typeof t=="object"}function z3(t){if(!Qi(t)||la(t)!=Rb)return!1;var e=Wb(t);if(e===null)return!0;var r=yt.call(e,"constructor")&&e.constructor;return typeof r=="function"&&r instanceof r&&na.call(r)==Nq}var t1=Nb?Rq(Nb):g3;function $3(t){return T3(t,r1(t))}function r1(t){return Bf(t)?p3(t,!0):w3(t)}var j3=O3(function(t,e,r){Jb(t,e,r)});function U3(t){return function(){return t}}function i1(t){return t}function V3(){return!1}zr.exports=j3});var a1=b((w$,s1)=>{u();function W3(){if(!arguments.length)return[];var t=arguments[0];return G3(t)?t:[t]}var G3=Array.isArray;s1.exports=W3});var l1=b((y$,o1)=>{u();var k=(Wr(),_n).default,$=t=>t.toFixed(7).replace(/(\.[0-9]+?)0+$/,"$1").replace(/\.0$/,""),ot=t=>`${$(t/16)}rem`,h=(t,e)=>`${$(t/e)}em`,Nf={sm:{css:[{fontSize:ot(14),lineHeight:$(24/14),p:{marginTop:h(16,14),marginBottom:h(16,14)},'[class~="lead"]':{fontSize:h(18,14),lineHeight:$(28/18),marginTop:h(16,18),marginBottom:h(16,18)},blockquote:{marginTop:h(24,18),marginBottom:h(24,18),paddingLeft:h(20,18)},h1:{fontSize:h(30,14),marginTop:"0",marginBottom:h(24,30),lineHeight:$(36/30)},h2:{fontSize:h(20,14),marginTop:h(32,20),marginBottom:h(16,20),lineHeight:$(28/20)},h3:{fontSize:h(18,14),marginTop:h(28,18),marginBottom:h(8,18),lineHeight:$(28/18)},h4:{marginTop:h(20,14),marginBottom:h(8,14),lineHeight:$(20/14)},img:{marginTop:h(24,14),marginBottom:h(24,14)},video:{marginTop:h(24,14),marginBottom:h(24,14)},figure:{marginTop:h(24,14),marginBottom:h(24,14)},"figure > *":{marginTop:"0",marginBottom:"0"},figcaption:{fontSize:h(12,14),lineHeight:$(16/12),marginTop:h(8,12)},code:{fontSize:h(12,14)},"h2 code":{fontSize:h(18,20)},"h3 code":{fontSize:h(16,18)},pre:{fontSize:h(12,14),lineHeight:$(20/12),marginTop:h(20,12),marginBottom:h(20,12),borderRadius:ot(4),paddingTop:h(8,12),paddingRight:h(12,12),paddingBottom:h(8,12),paddingLeft:h(12,12)},ol:{marginTop:h(16,14),marginBottom:h(16,14),paddingLeft:h(22,14)},ul:{marginTop:h(16,14),marginBottom:h(16,14),paddingLeft:h(22,14)},li:{marginTop:h(4,14),marginBottom:h(4,14)},"ol > li":{paddingLeft:h(6,14)},"ul > li":{paddingLeft:h(6,14)},"> ul > li p":{marginTop:h(8,14),marginBottom:h(8,14)},"> ul > li > *:first-child":{marginTop:h(16,14)},"> ul > li > *:last-child":{marginBottom:h(16,14)},"> ol > li > *:first-child":{marginTop:h(16,14)},"> ol > li > *:last-child":{marginBottom:h(16,14)},"ul ul, ul ol, ol ul, ol 
ol":{marginTop:h(8,14),marginBottom:h(8,14)},hr:{marginTop:h(40,14),marginBottom:h(40,14)},"hr + *":{marginTop:"0"},"h2 + *":{marginTop:"0"},"h3 + *":{marginTop:"0"},"h4 + *":{marginTop:"0"},table:{fontSize:h(12,14),lineHeight:$(18/12)},"thead th":{paddingRight:h(12,12),paddingBottom:h(8,12),paddingLeft:h(12,12)},"thead th:first-child":{paddingLeft:"0"},"thead th:last-child":{paddingRight:"0"},"tbody td, tfoot td":{paddingTop:h(8,12),paddingRight:h(12,12),paddingBottom:h(8,12),paddingLeft:h(12,12)},"tbody td:first-child, tfoot td:first-child":{paddingLeft:"0"},"tbody td:last-child, tfoot td:last-child":{paddingRight:"0"}},{"> :first-child":{marginTop:"0"},"> :last-child":{marginBottom:"0"}}]},base:{css:[{fontSize:ot(16),lineHeight:$(28/16),p:{marginTop:h(20,16),marginBottom:h(20,16)},'[class~="lead"]':{fontSize:h(20,16),lineHeight:$(32/20),marginTop:h(24,20),marginBottom:h(24,20)},blockquote:{marginTop:h(32,20),marginBottom:h(32,20),paddingLeft:h(20,20)},h1:{fontSize:h(36,16),marginTop:"0",marginBottom:h(32,36),lineHeight:$(40/36)},h2:{fontSize:h(24,16),marginTop:h(48,24),marginBottom:h(24,24),lineHeight:$(32/24)},h3:{fontSize:h(20,16),marginTop:h(32,20),marginBottom:h(12,20),lineHeight:$(32/20)},h4:{marginTop:h(24,16),marginBottom:h(8,16),lineHeight:$(24/16)},img:{marginTop:h(32,16),marginBottom:h(32,16)},video:{marginTop:h(32,16),marginBottom:h(32,16)},figure:{marginTop:h(32,16),marginBottom:h(32,16)},"figure > *":{marginTop:"0",marginBottom:"0"},figcaption:{fontSize:h(14,16),lineHeight:$(20/14),marginTop:h(12,14)},code:{fontSize:h(14,16)},"h2 code":{fontSize:h(21,24)},"h3 code":{fontSize:h(18,20)},pre:{fontSize:h(14,16),lineHeight:$(24/14),marginTop:h(24,14),marginBottom:h(24,14),borderRadius:ot(6),paddingTop:h(12,14),paddingRight:h(16,14),paddingBottom:h(12,14),paddingLeft:h(16,14)},ol:{marginTop:h(20,16),marginBottom:h(20,16),paddingLeft:h(26,16)},ul:{marginTop:h(20,16),marginBottom:h(20,16),paddingLeft:h(26,16)},li:{marginTop:h(8,16),marginBottom:h(8,16)},"ol > li":{paddingLeft:h(6,16)},"ul > li":{paddingLeft:h(6,16)},"> ul > li p":{marginTop:h(12,16),marginBottom:h(12,16)},"> ul > li > *:first-child":{marginTop:h(20,16)},"> ul > li > *:last-child":{marginBottom:h(20,16)},"> ol > li > *:first-child":{marginTop:h(20,16)},"> ol > li > *:last-child":{marginBottom:h(20,16)},"ul ul, ul ol, ol ul, ol ol":{marginTop:h(12,16),marginBottom:h(12,16)},hr:{marginTop:h(48,16),marginBottom:h(48,16)},"hr + *":{marginTop:"0"},"h2 + *":{marginTop:"0"},"h3 + *":{marginTop:"0"},"h4 + *":{marginTop:"0"},table:{fontSize:h(14,16),lineHeight:$(24/14)},"thead th":{paddingRight:h(8,14),paddingBottom:h(8,14),paddingLeft:h(8,14)},"thead th:first-child":{paddingLeft:"0"},"thead th:last-child":{paddingRight:"0"},"tbody td, tfoot td":{paddingTop:h(8,14),paddingRight:h(8,14),paddingBottom:h(8,14),paddingLeft:h(8,14)},"tbody td:first-child, tfoot td:first-child":{paddingLeft:"0"},"tbody td:last-child, tfoot td:last-child":{paddingRight:"0"}},{"> :first-child":{marginTop:"0"},"> 
:last-child":{marginBottom:"0"}}]},lg:{css:[{fontSize:ot(18),lineHeight:$(32/18),p:{marginTop:h(24,18),marginBottom:h(24,18)},'[class~="lead"]':{fontSize:h(22,18),lineHeight:$(32/22),marginTop:h(24,22),marginBottom:h(24,22)},blockquote:{marginTop:h(40,24),marginBottom:h(40,24),paddingLeft:h(24,24)},h1:{fontSize:h(48,18),marginTop:"0",marginBottom:h(40,48),lineHeight:$(48/48)},h2:{fontSize:h(30,18),marginTop:h(56,30),marginBottom:h(32,30),lineHeight:$(40/30)},h3:{fontSize:h(24,18),marginTop:h(40,24),marginBottom:h(16,24),lineHeight:$(36/24)},h4:{marginTop:h(32,18),marginBottom:h(8,18),lineHeight:$(28/18)},img:{marginTop:h(32,18),marginBottom:h(32,18)},video:{marginTop:h(32,18),marginBottom:h(32,18)},figure:{marginTop:h(32,18),marginBottom:h(32,18)},"figure > *":{marginTop:"0",marginBottom:"0"},figcaption:{fontSize:h(16,18),lineHeight:$(24/16),marginTop:h(16,16)},code:{fontSize:h(16,18)},"h2 code":{fontSize:h(26,30)},"h3 code":{fontSize:h(21,24)},pre:{fontSize:h(16,18),lineHeight:$(28/16),marginTop:h(32,16),marginBottom:h(32,16),borderRadius:ot(6),paddingTop:h(16,16),paddingRight:h(24,16),paddingBottom:h(16,16),paddingLeft:h(24,16)},ol:{marginTop:h(24,18),marginBottom:h(24,18),paddingLeft:h(28,18)},ul:{marginTop:h(24,18),marginBottom:h(24,18),paddingLeft:h(28,18)},li:{marginTop:h(12,18),marginBottom:h(12,18)},"ol > li":{paddingLeft:h(8,18)},"ul > li":{paddingLeft:h(8,18)},"> ul > li p":{marginTop:h(16,18),marginBottom:h(16,18)},"> ul > li > *:first-child":{marginTop:h(24,18)},"> ul > li > *:last-child":{marginBottom:h(24,18)},"> ol > li > *:first-child":{marginTop:h(24,18)},"> ol > li > *:last-child":{marginBottom:h(24,18)},"ul ul, ul ol, ol ul, ol ol":{marginTop:h(16,18),marginBottom:h(16,18)},hr:{marginTop:h(56,18),marginBottom:h(56,18)},"hr + *":{marginTop:"0"},"h2 + *":{marginTop:"0"},"h3 + *":{marginTop:"0"},"h4 + *":{marginTop:"0"},table:{fontSize:h(16,18),lineHeight:$(24/16)},"thead th":{paddingRight:h(12,16),paddingBottom:h(12,16),paddingLeft:h(12,16)},"thead th:first-child":{paddingLeft:"0"},"thead th:last-child":{paddingRight:"0"},"tbody td, tfoot td":{paddingTop:h(12,16),paddingRight:h(12,16),paddingBottom:h(12,16),paddingLeft:h(12,16)},"tbody td:first-child, tfoot td:first-child":{paddingLeft:"0"},"tbody td:last-child, tfoot td:last-child":{paddingRight:"0"}},{"> :first-child":{marginTop:"0"},"> :last-child":{marginBottom:"0"}}]},xl:{css:[{fontSize:ot(20),lineHeight:$(36/20),p:{marginTop:h(24,20),marginBottom:h(24,20)},'[class~="lead"]':{fontSize:h(24,20),lineHeight:$(36/24),marginTop:h(24,24),marginBottom:h(24,24)},blockquote:{marginTop:h(48,30),marginBottom:h(48,30),paddingLeft:h(32,30)},h1:{fontSize:h(56,20),marginTop:"0",marginBottom:h(48,56),lineHeight:$(56/56)},h2:{fontSize:h(36,20),marginTop:h(56,36),marginBottom:h(32,36),lineHeight:$(40/36)},h3:{fontSize:h(30,20),marginTop:h(48,30),marginBottom:h(20,30),lineHeight:$(40/30)},h4:{marginTop:h(36,20),marginBottom:h(12,20),lineHeight:$(32/20)},img:{marginTop:h(40,20),marginBottom:h(40,20)},video:{marginTop:h(40,20),marginBottom:h(40,20)},figure:{marginTop:h(40,20),marginBottom:h(40,20)},"figure > *":{marginTop:"0",marginBottom:"0"},figcaption:{fontSize:h(18,20),lineHeight:$(28/18),marginTop:h(18,18)},code:{fontSize:h(18,20)},"h2 code":{fontSize:h(31,36)},"h3 
code":{fontSize:h(27,30)},pre:{fontSize:h(18,20),lineHeight:$(32/18),marginTop:h(36,18),marginBottom:h(36,18),borderRadius:ot(8),paddingTop:h(20,18),paddingRight:h(24,18),paddingBottom:h(20,18),paddingLeft:h(24,18)},ol:{marginTop:h(24,20),marginBottom:h(24,20),paddingLeft:h(32,20)},ul:{marginTop:h(24,20),marginBottom:h(24,20),paddingLeft:h(32,20)},li:{marginTop:h(12,20),marginBottom:h(12,20)},"ol > li":{paddingLeft:h(8,20)},"ul > li":{paddingLeft:h(8,20)},"> ul > li p":{marginTop:h(16,20),marginBottom:h(16,20)},"> ul > li > *:first-child":{marginTop:h(24,20)},"> ul > li > *:last-child":{marginBottom:h(24,20)},"> ol > li > *:first-child":{marginTop:h(24,20)},"> ol > li > *:last-child":{marginBottom:h(24,20)},"ul ul, ul ol, ol ul, ol ol":{marginTop:h(16,20),marginBottom:h(16,20)},hr:{marginTop:h(56,20),marginBottom:h(56,20)},"hr + *":{marginTop:"0"},"h2 + *":{marginTop:"0"},"h3 + *":{marginTop:"0"},"h4 + *":{marginTop:"0"},table:{fontSize:h(18,20),lineHeight:$(28/18)},"thead th":{paddingRight:h(12,18),paddingBottom:h(16,18),paddingLeft:h(12,18)},"thead th:first-child":{paddingLeft:"0"},"thead th:last-child":{paddingRight:"0"},"tbody td, tfoot td":{paddingTop:h(16,18),paddingRight:h(12,18),paddingBottom:h(16,18),paddingLeft:h(12,18)},"tbody td:first-child, tfoot td:first-child":{paddingLeft:"0"},"tbody td:last-child, tfoot td:last-child":{paddingRight:"0"}},{"> :first-child":{marginTop:"0"},"> :last-child":{marginBottom:"0"}}]},"2xl":{css:[{fontSize:ot(24),lineHeight:$(40/24),p:{marginTop:h(32,24),marginBottom:h(32,24)},'[class~="lead"]':{fontSize:h(30,24),lineHeight:$(44/30),marginTop:h(32,30),marginBottom:h(32,30)},blockquote:{marginTop:h(64,36),marginBottom:h(64,36),paddingLeft:h(40,36)},h1:{fontSize:h(64,24),marginTop:"0",marginBottom:h(56,64),lineHeight:$(64/64)},h2:{fontSize:h(48,24),marginTop:h(72,48),marginBottom:h(40,48),lineHeight:$(52/48)},h3:{fontSize:h(36,24),marginTop:h(56,36),marginBottom:h(24,36),lineHeight:$(44/36)},h4:{marginTop:h(40,24),marginBottom:h(16,24),lineHeight:$(36/24)},img:{marginTop:h(48,24),marginBottom:h(48,24)},video:{marginTop:h(48,24),marginBottom:h(48,24)},figure:{marginTop:h(48,24),marginBottom:h(48,24)},"figure > *":{marginTop:"0",marginBottom:"0"},figcaption:{fontSize:h(20,24),lineHeight:$(32/20),marginTop:h(20,20)},code:{fontSize:h(20,24)},"h2 code":{fontSize:h(42,48)},"h3 code":{fontSize:h(32,36)},pre:{fontSize:h(20,24),lineHeight:$(36/20),marginTop:h(40,20),marginBottom:h(40,20),borderRadius:ot(8),paddingTop:h(24,20),paddingRight:h(32,20),paddingBottom:h(24,20),paddingLeft:h(32,20)},ol:{marginTop:h(32,24),marginBottom:h(32,24),paddingLeft:h(38,24)},ul:{marginTop:h(32,24),marginBottom:h(32,24),paddingLeft:h(38,24)},li:{marginTop:h(12,24),marginBottom:h(12,24)},"ol > li":{paddingLeft:h(10,24)},"ul > li":{paddingLeft:h(10,24)},"> ul > li p":{marginTop:h(20,24),marginBottom:h(20,24)},"> ul > li > *:first-child":{marginTop:h(32,24)},"> ul > li > *:last-child":{marginBottom:h(32,24)},"> ol > li > *:first-child":{marginTop:h(32,24)},"> ol > li > *:last-child":{marginBottom:h(32,24)},"ul ul, ul ol, ol ul, ol ol":{marginTop:h(16,24),marginBottom:h(16,24)},hr:{marginTop:h(72,24),marginBottom:h(72,24)},"hr + *":{marginTop:"0"},"h2 + *":{marginTop:"0"},"h3 + *":{marginTop:"0"},"h4 + *":{marginTop:"0"},table:{fontSize:h(20,24),lineHeight:$(28/20)},"thead th":{paddingRight:h(12,20),paddingBottom:h(16,20),paddingLeft:h(12,20)},"thead th:first-child":{paddingLeft:"0"},"thead th:last-child":{paddingRight:"0"},"tbody td, tfoot 
td":{paddingTop:h(16,20),paddingRight:h(12,20),paddingBottom:h(16,20),paddingLeft:h(12,20)},"tbody td:first-child, tfoot td:first-child":{paddingLeft:"0"},"tbody td:last-child, tfoot td:last-child":{paddingRight:"0"}},{"> :first-child":{marginTop:"0"},"> :last-child":{marginBottom:"0"}}]},invert:{css:{"--tw-prose-body":"var(--tw-prose-invert-body)","--tw-prose-headings":"var(--tw-prose-invert-headings)","--tw-prose-lead":"var(--tw-prose-invert-lead)","--tw-prose-links":"var(--tw-prose-invert-links)","--tw-prose-bold":"var(--tw-prose-invert-bold)","--tw-prose-counters":"var(--tw-prose-invert-counters)","--tw-prose-bullets":"var(--tw-prose-invert-bullets)","--tw-prose-hr":"var(--tw-prose-invert-hr)","--tw-prose-quotes":"var(--tw-prose-invert-quotes)","--tw-prose-quote-borders":"var(--tw-prose-invert-quote-borders)","--tw-prose-captions":"var(--tw-prose-invert-captions)","--tw-prose-code":"var(--tw-prose-invert-code)","--tw-prose-pre-code":"var(--tw-prose-invert-pre-code)","--tw-prose-pre-bg":"var(--tw-prose-invert-pre-bg)","--tw-prose-th-borders":"var(--tw-prose-invert-th-borders)","--tw-prose-td-borders":"var(--tw-prose-invert-td-borders)"}},slate:{css:{"--tw-prose-body":k.slate[700],"--tw-prose-headings":k.slate[900],"--tw-prose-lead":k.slate[600],"--tw-prose-links":k.slate[900],"--tw-prose-bold":k.slate[900],"--tw-prose-counters":k.slate[500],"--tw-prose-bullets":k.slate[300],"--tw-prose-hr":k.slate[200],"--tw-prose-quotes":k.slate[900],"--tw-prose-quote-borders":k.slate[200],"--tw-prose-captions":k.slate[500],"--tw-prose-code":k.slate[900],"--tw-prose-pre-code":k.slate[200],"--tw-prose-pre-bg":k.slate[800],"--tw-prose-th-borders":k.slate[300],"--tw-prose-td-borders":k.slate[200],"--tw-prose-invert-body":k.slate[300],"--tw-prose-invert-headings":k.white,"--tw-prose-invert-lead":k.slate[400],"--tw-prose-invert-links":k.white,"--tw-prose-invert-bold":k.white,"--tw-prose-invert-counters":k.slate[400],"--tw-prose-invert-bullets":k.slate[600],"--tw-prose-invert-hr":k.slate[700],"--tw-prose-invert-quotes":k.slate[100],"--tw-prose-invert-quote-borders":k.slate[700],"--tw-prose-invert-captions":k.slate[400],"--tw-prose-invert-code":k.white,"--tw-prose-invert-pre-code":k.slate[300],"--tw-prose-invert-pre-bg":"rgb(0 0 0 / 50%)","--tw-prose-invert-th-borders":k.slate[600],"--tw-prose-invert-td-borders":k.slate[700]}},gray:{css:{"--tw-prose-body":k.gray[700],"--tw-prose-headings":k.gray[900],"--tw-prose-lead":k.gray[600],"--tw-prose-links":k.gray[900],"--tw-prose-bold":k.gray[900],"--tw-prose-counters":k.gray[500],"--tw-prose-bullets":k.gray[300],"--tw-prose-hr":k.gray[200],"--tw-prose-quotes":k.gray[900],"--tw-prose-quote-borders":k.gray[200],"--tw-prose-captions":k.gray[500],"--tw-prose-code":k.gray[900],"--tw-prose-pre-code":k.gray[200],"--tw-prose-pre-bg":k.gray[800],"--tw-prose-th-borders":k.gray[300],"--tw-prose-td-borders":k.gray[200],"--tw-prose-invert-body":k.gray[300],"--tw-prose-invert-headings":k.white,"--tw-prose-invert-lead":k.gray[400],"--tw-prose-invert-links":k.white,"--tw-prose-invert-bold":k.white,"--tw-prose-invert-counters":k.gray[400],"--tw-prose-invert-bullets":k.gray[600],"--tw-prose-invert-hr":k.gray[700],"--tw-prose-invert-quotes":k.gray[100],"--tw-prose-invert-quote-borders":k.gray[700],"--tw-prose-invert-captions":k.gray[400],"--tw-prose-invert-code":k.white,"--tw-prose-invert-pre-code":k.gray[300],"--tw-prose-invert-pre-bg":"rgb(0 0 0 / 
50%)","--tw-prose-invert-th-borders":k.gray[600],"--tw-prose-invert-td-borders":k.gray[700]}},zinc:{css:{"--tw-prose-body":k.zinc[700],"--tw-prose-headings":k.zinc[900],"--tw-prose-lead":k.zinc[600],"--tw-prose-links":k.zinc[900],"--tw-prose-bold":k.zinc[900],"--tw-prose-counters":k.zinc[500],"--tw-prose-bullets":k.zinc[300],"--tw-prose-hr":k.zinc[200],"--tw-prose-quotes":k.zinc[900],"--tw-prose-quote-borders":k.zinc[200],"--tw-prose-captions":k.zinc[500],"--tw-prose-code":k.zinc[900],"--tw-prose-pre-code":k.zinc[200],"--tw-prose-pre-bg":k.zinc[800],"--tw-prose-th-borders":k.zinc[300],"--tw-prose-td-borders":k.zinc[200],"--tw-prose-invert-body":k.zinc[300],"--tw-prose-invert-headings":k.white,"--tw-prose-invert-lead":k.zinc[400],"--tw-prose-invert-links":k.white,"--tw-prose-invert-bold":k.white,"--tw-prose-invert-counters":k.zinc[400],"--tw-prose-invert-bullets":k.zinc[600],"--tw-prose-invert-hr":k.zinc[700],"--tw-prose-invert-quotes":k.zinc[100],"--tw-prose-invert-quote-borders":k.zinc[700],"--tw-prose-invert-captions":k.zinc[400],"--tw-prose-invert-code":k.white,"--tw-prose-invert-pre-code":k.zinc[300],"--tw-prose-invert-pre-bg":"rgb(0 0 0 / 50%)","--tw-prose-invert-th-borders":k.zinc[600],"--tw-prose-invert-td-borders":k.zinc[700]}},neutral:{css:{"--tw-prose-body":k.neutral[700],"--tw-prose-headings":k.neutral[900],"--tw-prose-lead":k.neutral[600],"--tw-prose-links":k.neutral[900],"--tw-prose-bold":k.neutral[900],"--tw-prose-counters":k.neutral[500],"--tw-prose-bullets":k.neutral[300],"--tw-prose-hr":k.neutral[200],"--tw-prose-quotes":k.neutral[900],"--tw-prose-quote-borders":k.neutral[200],"--tw-prose-captions":k.neutral[500],"--tw-prose-code":k.neutral[900],"--tw-prose-pre-code":k.neutral[200],"--tw-prose-pre-bg":k.neutral[800],"--tw-prose-th-borders":k.neutral[300],"--tw-prose-td-borders":k.neutral[200],"--tw-prose-invert-body":k.neutral[300],"--tw-prose-invert-headings":k.white,"--tw-prose-invert-lead":k.neutral[400],"--tw-prose-invert-links":k.white,"--tw-prose-invert-bold":k.white,"--tw-prose-invert-counters":k.neutral[400],"--tw-prose-invert-bullets":k.neutral[600],"--tw-prose-invert-hr":k.neutral[700],"--tw-prose-invert-quotes":k.neutral[100],"--tw-prose-invert-quote-borders":k.neutral[700],"--tw-prose-invert-captions":k.neutral[400],"--tw-prose-invert-code":k.white,"--tw-prose-invert-pre-code":k.neutral[300],"--tw-prose-invert-pre-bg":"rgb(0 0 0 / 
50%)","--tw-prose-invert-th-borders":k.neutral[600],"--tw-prose-invert-td-borders":k.neutral[700]}},stone:{css:{"--tw-prose-body":k.stone[700],"--tw-prose-headings":k.stone[900],"--tw-prose-lead":k.stone[600],"--tw-prose-links":k.stone[900],"--tw-prose-bold":k.stone[900],"--tw-prose-counters":k.stone[500],"--tw-prose-bullets":k.stone[300],"--tw-prose-hr":k.stone[200],"--tw-prose-quotes":k.stone[900],"--tw-prose-quote-borders":k.stone[200],"--tw-prose-captions":k.stone[500],"--tw-prose-code":k.stone[900],"--tw-prose-pre-code":k.stone[200],"--tw-prose-pre-bg":k.stone[800],"--tw-prose-th-borders":k.stone[300],"--tw-prose-td-borders":k.stone[200],"--tw-prose-invert-body":k.stone[300],"--tw-prose-invert-headings":k.white,"--tw-prose-invert-lead":k.stone[400],"--tw-prose-invert-links":k.white,"--tw-prose-invert-bold":k.white,"--tw-prose-invert-counters":k.stone[400],"--tw-prose-invert-bullets":k.stone[600],"--tw-prose-invert-hr":k.stone[700],"--tw-prose-invert-quotes":k.stone[100],"--tw-prose-invert-quote-borders":k.stone[700],"--tw-prose-invert-captions":k.stone[400],"--tw-prose-invert-code":k.white,"--tw-prose-invert-pre-code":k.stone[300],"--tw-prose-invert-pre-bg":"rgb(0 0 0 / 50%)","--tw-prose-invert-th-borders":k.stone[600],"--tw-prose-invert-td-borders":k.stone[700]}},red:{css:{"--tw-prose-links":k.red[600],"--tw-prose-invert-links":k.red[500]}},orange:{css:{"--tw-prose-links":k.orange[600],"--tw-prose-invert-links":k.orange[500]}},amber:{css:{"--tw-prose-links":k.amber[600],"--tw-prose-invert-links":k.amber[500]}},yellow:{css:{"--tw-prose-links":k.yellow[600],"--tw-prose-invert-links":k.yellow[500]}},lime:{css:{"--tw-prose-links":k.lime[600],"--tw-prose-invert-links":k.lime[500]}},green:{css:{"--tw-prose-links":k.green[600],"--tw-prose-invert-links":k.green[500]}},emerald:{css:{"--tw-prose-links":k.emerald[600],"--tw-prose-invert-links":k.emerald[500]}},teal:{css:{"--tw-prose-links":k.teal[600],"--tw-prose-invert-links":k.teal[500]}},cyan:{css:{"--tw-prose-links":k.cyan[600],"--tw-prose-invert-links":k.cyan[500]}},sky:{css:{"--tw-prose-links":k.sky[600],"--tw-prose-invert-links":k.sky[500]}},blue:{css:{"--tw-prose-links":k.blue[600],"--tw-prose-invert-links":k.blue[500]}},indigo:{css:{"--tw-prose-links":k.indigo[600],"--tw-prose-invert-links":k.indigo[500]}},violet:{css:{"--tw-prose-links":k.violet[600],"--tw-prose-invert-links":k.violet[500]}},purple:{css:{"--tw-prose-links":k.purple[600],"--tw-prose-invert-links":k.purple[500]}},fuchsia:{css:{"--tw-prose-links":k.fuchsia[600],"--tw-prose-invert-links":k.fuchsia[500]}},pink:{css:{"--tw-prose-links":k.pink[600],"--tw-prose-invert-links":k.pink[500]}},rose:{css:{"--tw-prose-links":k.rose[600],"--tw-prose-invert-links":k.rose[500]}}};o1.exports={DEFAULT:{css:[{color:"var(--tw-prose-body)",maxWidth:"65ch",p:{},'[class~="lead"]':{color:"var(--tw-prose-lead)"},a:{color:"var(--tw-prose-links)",textDecoration:"underline",fontWeight:"500"},strong:{color:"var(--tw-prose-bold)",fontWeight:"600"},"a strong":{color:"inherit"},"blockquote strong":{color:"inherit"},"thead th strong":{color:"inherit"},ol:{listStyleType:"decimal"},'ol[type="A"]':{listStyleType:"upper-alpha"},'ol[type="a"]':{listStyleType:"lower-alpha"},'ol[type="A" s]':{listStyleType:"upper-alpha"},'ol[type="a" s]':{listStyleType:"lower-alpha"},'ol[type="I"]':{listStyleType:"upper-roman"},'ol[type="i"]':{listStyleType:"lower-roman"},'ol[type="I" s]':{listStyleType:"upper-roman"},'ol[type="i" 
s]':{listStyleType:"lower-roman"},'ol[type="1"]':{listStyleType:"decimal"},ul:{listStyleType:"disc"},"ol > li::marker":{fontWeight:"400",color:"var(--tw-prose-counters)"},"ul > li::marker":{color:"var(--tw-prose-bullets)"},hr:{borderColor:"var(--tw-prose-hr)",borderTopWidth:1},blockquote:{fontWeight:"500",fontStyle:"italic",color:"var(--tw-prose-quotes)",borderLeftWidth:"0.25rem",borderLeftColor:"var(--tw-prose-quote-borders)",quotes:'"\\201C""\\201D""\\2018""\\2019"'},"blockquote p:first-of-type::before":{content:"open-quote"},"blockquote p:last-of-type::after":{content:"close-quote"},h1:{color:"var(--tw-prose-headings)",fontWeight:"800"},"h1 strong":{fontWeight:"900",color:"inherit"},h2:{color:"var(--tw-prose-headings)",fontWeight:"700"},"h2 strong":{fontWeight:"800",color:"inherit"},h3:{color:"var(--tw-prose-headings)",fontWeight:"600"},"h3 strong":{fontWeight:"700",color:"inherit"},h4:{color:"var(--tw-prose-headings)",fontWeight:"600"},"h4 strong":{fontWeight:"700",color:"inherit"},img:{},"figure > *":{},figcaption:{color:"var(--tw-prose-captions)"},code:{color:"var(--tw-prose-code)",fontWeight:"600"},"code::before":{content:'"`"'},"code::after":{content:'"`"'},"a code":{color:"inherit"},"h1 code":{color:"inherit"},"h2 code":{color:"inherit"},"h3 code":{color:"inherit"},"h4 code":{color:"inherit"},"blockquote code":{color:"inherit"},"thead th code":{color:"inherit"},pre:{color:"var(--tw-prose-pre-code)",backgroundColor:"var(--tw-prose-pre-bg)",overflowX:"auto",fontWeight:"400"},"pre code":{backgroundColor:"transparent",borderWidth:"0",borderRadius:"0",padding:"0",fontWeight:"inherit",color:"inherit",fontSize:"inherit",fontFamily:"inherit",lineHeight:"inherit"},"pre code::before":{content:"none"},"pre code::after":{content:"none"},table:{width:"100%",tableLayout:"auto",textAlign:"left",marginTop:h(32,16),marginBottom:h(32,16)},thead:{borderBottomWidth:"1px",borderBottomColor:"var(--tw-prose-th-borders)"},"thead th":{color:"var(--tw-prose-headings)",fontWeight:"600",verticalAlign:"bottom"},"tbody tr":{borderBottomWidth:"1px",borderBottomColor:"var(--tw-prose-td-borders)"},"tbody tr:last-child":{borderBottomWidth:"0"},"tbody td":{verticalAlign:"baseline"},tfoot:{borderTopWidth:"1px",borderTopColor:"var(--tw-prose-th-borders)"},"tfoot td":{verticalAlign:"top"}},Nf.gray.css,...Nf.base.css]},...Nf}});var p1=b((v$,c1)=>{u();var H3="[object Object]";function Y3(t){var e=!1;if(t!=null&&typeof t.toString!="function")try{e=!!(t+"")}catch(r){}return e}function Q3(t,e){return function(r){return t(e(r))}}var J3=Function.prototype,u1=Object.prototype,f1=J3.toString,X3=u1.hasOwnProperty,K3=f1.call(Object),Z3=u1.toString,eD=Q3(Object.getPrototypeOf,Object);function tD(t){return!!t&&typeof t=="object"}function rD(t){if(!tD(t)||Z3.call(t)!=H3||Y3(t))return!1;var e=eD(t);if(e===null)return!0;var r=X3.call(e,"constructor")&&e.constructor;return typeof r=="function"&&r instanceof r&&f1.call(r)==K3}c1.exports=rD});var zf=b((ca,d1)=>{u();"use strict";ca.__esModule=!0;ca.default=sD;function iD(t){for(var e=t.toLowerCase(),r="",i=!1,n=0;n<6&&e[n]!==void 0;n++){var s=e.charCodeAt(n),a=s>=97&&s<=102||s>=48&&s<=57;if(i=s===32,!a)break;r+=e[n]}if(r.length!==0){var o=parseInt(r,16),l=o>=55296&&o<=57343;return l||o===0||o>1114111?["\uFFFD",r.length+(i?1:0)]:[String.fromCodePoint(o),r.length+(i?1:0)]}}var nD=/\\/;function sD(t){var e=nD.test(t);if(!e)return t;for(var r="",i=0;i{u();"use strict";pa.__esModule=!0;pa.default=aD;function aD(t){for(var e=arguments.length,r=new Array(e>1?e-1:0),i=1;i0;){var 
n=r.shift();if(!t[n])return;t=t[n]}return t}h1.exports=pa.default});var w1=b((da,g1)=>{u();"use strict";da.__esModule=!0;da.default=oD;function oD(t){for(var e=arguments.length,r=new Array(e>1?e-1:0),i=1;i0;){var n=r.shift();t[n]||(t[n]={}),t=t[n]}}g1.exports=da.default});var v1=b((ha,y1)=>{u();"use strict";ha.__esModule=!0;ha.default=lD;function lD(t){for(var e="",r=t.indexOf("/*"),i=0;r>=0;){e=e+t.slice(i,r);var n=t.indexOf("*/",r+2);if(n<0)return e;i=n+2,r=t.indexOf("/*",i)}return e=e+t.slice(i),e}y1.exports=ha.default});var Xi=b(lt=>{u();"use strict";lt.__esModule=!0;lt.stripComments=lt.ensureObject=lt.getProp=lt.unesc=void 0;var uD=ma(zf());lt.unesc=uD.default;var fD=ma(m1());lt.getProp=fD.default;var cD=ma(w1());lt.ensureObject=cD.default;var pD=ma(v1());lt.stripComments=pD.default;function ma(t){return t&&t.__esModule?t:{default:t}}});var bt=b((Ki,k1)=>{u();"use strict";Ki.__esModule=!0;Ki.default=void 0;var b1=Xi();function x1(t,e){for(var r=0;ri||this.source.end.linen||this.source.end.line===i&&this.source.end.column{u();"use strict";re.__esModule=!0;re.UNIVERSAL=re.ATTRIBUTE=re.CLASS=re.COMBINATOR=re.COMMENT=re.ID=re.NESTING=re.PSEUDO=re.ROOT=re.SELECTOR=re.STRING=re.TAG=void 0;var gD="tag";re.TAG=gD;var wD="string";re.STRING=wD;var yD="selector";re.SELECTOR=yD;var vD="root";re.ROOT=vD;var bD="pseudo";re.PSEUDO=bD;var xD="nesting";re.NESTING=xD;var kD="id";re.ID=kD;var SD="comment";re.COMMENT=SD;var _D="combinator";re.COMBINATOR=_D;var TD="class";re.CLASS=TD;var OD="attribute";re.ATTRIBUTE=OD;var ED="universal";re.UNIVERSAL=ED});var ga=b((Zi,O1)=>{u();"use strict";Zi.__esModule=!0;Zi.default=void 0;var AD=PD(bt()),xt=CD(xe());function S1(){if(typeof WeakMap!="function")return null;var t=new WeakMap;return S1=function(){return t},t}function CD(t){if(t&&t.__esModule)return t;if(t===null||typeof t!="object"&&typeof t!="function")return{default:t};var e=S1();if(e&&e.has(t))return e.get(t);var r={},i=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var n in t)if(Object.prototype.hasOwnProperty.call(t,n)){var s=i?Object.getOwnPropertyDescriptor(t,n):null;s&&(s.get||s.set)?Object.defineProperty(r,n,s):r[n]=t[n]}return r.default=t,e&&e.set(t,r),r}function PD(t){return t&&t.__esModule?t:{default:t}}function qD(t,e){var r;if(typeof Symbol=="undefined"||t[Symbol.iterator]==null){if(Array.isArray(t)||(r=DD(t))||e&&t&&typeof t.length=="number"){r&&(t=r);var i=0;return function(){return i>=t.length?{done:!0}:{done:!1,value:t[i++]}}}throw new TypeError(`Invalid attempt to iterate non-iterable instance. 
-In order to be iterable, non-array objects must have a [Symbol.iterator]() method.`)}return r=t[Symbol.iterator](),r.next.bind(r)}function DD(t,e){if(!!t){if(typeof t=="string")return _1(t,e);var r=Object.prototype.toString.call(t).slice(8,-1);if(r==="Object"&&t.constructor&&(r=t.constructor.name),r==="Map"||r==="Set")return Array.from(t);if(r==="Arguments"||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r))return _1(t,e)}}function _1(t,e){(e==null||e>t.length)&&(e=t.length);for(var r=0,i=new Array(e);r=n&&(this.indexes[a]=s-1);return this},r.removeAll=function(){for(var n=qD(this.nodes),s;!(s=n()).done;){var a=s.value;a.parent=void 0}return this.nodes=[],this},r.empty=function(){return this.removeAll()},r.insertAfter=function(n,s){s.parent=this;var a=this.index(n);this.nodes.splice(a+1,0,s),s.parent=this;var o;for(var l in this.indexes)o=this.indexes[l],a<=o&&(this.indexes[l]=o+1);return this},r.insertBefore=function(n,s){s.parent=this;var a=this.index(n);this.nodes.splice(a,0,s),s.parent=this;var o;for(var l in this.indexes)o=this.indexes[l],o<=a&&(this.indexes[l]=o+1);return this},r._findChildAtPosition=function(n,s){var a=void 0;return this.each(function(o){if(o.atPosition){var l=o.atPosition(n,s);if(l)return a=l,!1}else if(o.isAtPosition(n,s))return a=o,!1}),a},r.atPosition=function(n,s){if(this.isAtPosition(n,s))return this._findChildAtPosition(n,s)||this},r._inferEndPosition=function(){this.last&&this.last.source&&this.last.source.end&&(this.source=this.source||{},this.source.end=this.source.end||{},Object.assign(this.source.end,this.last.source.end))},r.each=function(n){this.lastEach||(this.lastEach=0),this.indexes||(this.indexes={}),this.lastEach++;var s=this.lastEach;if(this.indexes[s]=0,!!this.length){for(var a,o;this.indexes[s]{u();"use strict";en.__esModule=!0;en.default=void 0;var MD=FD(ga()),BD=xe();function FD(t){return t&&t.__esModule?t:{default:t}}function E1(t,e){for(var r=0;r{u();"use strict";tn.__esModule=!0;tn.default=void 0;var jD=VD(ga()),UD=xe();function VD(t){return t&&t.__esModule?t:{default:t}}function WD(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Vf(t,e)}function Vf(t,e){return Vf=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},Vf(t,e)}var GD=function(t){WD(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=UD.SELECTOR,i}return e}(jD.default);tn.default=GD;C1.exports=tn.default});var Hf=b((rn,D1)=>{u();"use strict";rn.__esModule=!0;rn.default=void 0;var HD=P1(Vt()),YD=Xi(),QD=P1(bt()),JD=xe();function P1(t){return t&&t.__esModule?t:{default:t}}function q1(t,e){for(var r=0;r{u();"use strict";nn.__esModule=!0;nn.default=void 0;var eI=rI(bt()),tI=xe();function rI(t){return t&&t.__esModule?t:{default:t}}function iI(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Yf(t,e)}function Yf(t,e){return Yf=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},Yf(t,e)}var nI=function(t){iI(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=tI.COMMENT,i}return e}(eI.default);nn.default=nI;I1.exports=nn.default});var Xf=b((sn,R1)=>{u();"use strict";sn.__esModule=!0;sn.default=void 0;var sI=oI(bt()),aI=xe();function oI(t){return t&&t.__esModule?t:{default:t}}function lI(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Jf(t,e)}function Jf(t,e){return Jf=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},Jf(t,e)}var uI=function(t){lI(e,t);function e(i){var n;return n=t.call(this,i)||this,n.type=aI.ID,n}var r=e.prototype;return 
r.valueToString=function(){return"#"+t.prototype.valueToString.call(this)},e}(sI.default);sn.default=uI;R1.exports=sn.default});var wa=b((an,B1)=>{u();"use strict";an.__esModule=!0;an.default=void 0;var fI=L1(Vt()),cI=Xi(),pI=L1(bt());function L1(t){return t&&t.__esModule?t:{default:t}}function M1(t,e){for(var r=0;r{u();"use strict";on.__esModule=!0;on.default=void 0;var gI=yI(wa()),wI=xe();function yI(t){return t&&t.__esModule?t:{default:t}}function vI(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,Zf(t,e)}function Zf(t,e){return Zf=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},Zf(t,e)}var bI=function(t){vI(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=wI.TAG,i}return e}(gI.default);on.default=bI;F1.exports=on.default});var rc=b((ln,N1)=>{u();"use strict";ln.__esModule=!0;ln.default=void 0;var xI=SI(bt()),kI=xe();function SI(t){return t&&t.__esModule?t:{default:t}}function _I(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,tc(t,e)}function tc(t,e){return tc=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},tc(t,e)}var TI=function(t){_I(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=kI.STRING,i}return e}(xI.default);ln.default=TI;N1.exports=ln.default});var nc=b((un,z1)=>{u();"use strict";un.__esModule=!0;un.default=void 0;var OI=AI(ga()),EI=xe();function AI(t){return t&&t.__esModule?t:{default:t}}function CI(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,ic(t,e)}function ic(t,e){return ic=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},ic(t,e)}var PI=function(t){CI(e,t);function e(i){var n;return n=t.call(this,i)||this,n.type=EI.PSEUDO,n}var r=e.prototype;return r.toString=function(){var n=this.length?"("+this.map(String).join(",")+")":"";return[this.rawSpaceBefore,this.stringifyProperty("value"),n,this.rawSpaceAfter].join("")},e}(OI.default);un.default=PI;z1.exports=un.default});var fc=b(pn=>{u();"use strict";pn.__esModule=!0;pn.unescapeValue=lc;pn.default=void 0;var fn=ac(Vt()),qI=ac(zf()),DI=ac(wa()),II=xe(),sc;function ac(t){return t&&t.__esModule?t:{default:t}}function $1(t,e){for(var r=0;r0&&!n.quoted&&o.before.length===0&&!(n.spaces.value&&n.spaces.value.after)&&(o.before=" "),j1(a,o)}))),s.push("]"),s.push(this.rawSpaceAfter),s.join("")},RI(e,[{key:"quoted",get:function(){var n=this.quoteMark;return n==="'"||n==='"'},set:function(n){FI()}},{key:"quoteMark",get:function(){return this._quoteMark},set:function(n){if(!this._constructed){this._quoteMark=n;return}this._quoteMark!==n&&(this._quoteMark=n,this._syncRawValue())}},{key:"qualifiedAttribute",get:function(){return this.qualifiedName(this.raws.attribute||this.attribute)}},{key:"insensitiveFlag",get:function(){return this.insensitive?"i":""}},{key:"value",get:function(){return this._value},set:function(n){if(this._constructed){var s=lc(n),a=s.deprecatedUsage,o=s.unescaped,l=s.quoteMark;if(a&&BI(),o===this._value&&l===this._quoteMark)return;this._value=o,this._quoteMark=l,this._syncRawValue()}else this._value=n}},{key:"attribute",get:function(){return this._attribute},set:function(n){this._handleEscapes("attribute",n),this._attribute=n}}]),e}(DI.default);pn.default=ya;ya.NO_QUOTE=null;ya.SINGLE_QUOTE="'";ya.DOUBLE_QUOTE='"';var uc=(sc={"'":{quotes:"single",wrap:!0},'"':{quotes:"double",wrap:!0}},sc[null]={isIdentifier:!0},sc);function j1(t,e){return""+e.before+t+e.after}});var pc=b((dn,U1)=>{u();"use strict";dn.__esModule=!0;dn.default=void 0;var $I=UI(wa()),jI=xe();function UI(t){return 
t&&t.__esModule?t:{default:t}}function VI(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,cc(t,e)}function cc(t,e){return cc=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},cc(t,e)}var WI=function(t){VI(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=jI.UNIVERSAL,i.value="*",i}return e}($I.default);dn.default=WI;U1.exports=dn.default});var hc=b((hn,V1)=>{u();"use strict";hn.__esModule=!0;hn.default=void 0;var GI=YI(bt()),HI=xe();function YI(t){return t&&t.__esModule?t:{default:t}}function QI(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,dc(t,e)}function dc(t,e){return dc=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},dc(t,e)}var JI=function(t){QI(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=HI.COMBINATOR,i}return e}(GI.default);hn.default=JI;V1.exports=hn.default});var gc=b((mn,W1)=>{u();"use strict";mn.__esModule=!0;mn.default=void 0;var XI=ZI(bt()),KI=xe();function ZI(t){return t&&t.__esModule?t:{default:t}}function e6(t,e){t.prototype=Object.create(e.prototype),t.prototype.constructor=t,mc(t,e)}function mc(t,e){return mc=Object.setPrototypeOf||function(i,n){return i.__proto__=n,i},mc(t,e)}var t6=function(t){e6(e,t);function e(r){var i;return i=t.call(this,r)||this,i.type=KI.NESTING,i.value="&",i}return e}(XI.default);mn.default=t6;W1.exports=mn.default});var H1=b((va,G1)=>{u();"use strict";va.__esModule=!0;va.default=r6;function r6(t){return t.sort(function(e,r){return e-r})}G1.exports=va.default});var wc=b(B=>{u();"use strict";B.__esModule=!0;B.combinator=B.word=B.comment=B.str=B.tab=B.newline=B.feed=B.cr=B.backslash=B.bang=B.slash=B.doubleQuote=B.singleQuote=B.space=B.greaterThan=B.pipe=B.equals=B.plus=B.caret=B.tilde=B.dollar=B.closeSquare=B.openSquare=B.closeParenthesis=B.openParenthesis=B.semicolon=B.colon=B.comma=B.at=B.asterisk=B.ampersand=void 0;var i6=38;B.ampersand=i6;var n6=42;B.asterisk=n6;var s6=64;B.at=s6;var a6=44;B.comma=a6;var o6=58;B.colon=o6;var l6=59;B.semicolon=l6;var u6=40;B.openParenthesis=u6;var f6=41;B.closeParenthesis=f6;var c6=91;B.openSquare=c6;var p6=93;B.closeSquare=p6;var d6=36;B.dollar=d6;var h6=126;B.tilde=h6;var m6=94;B.caret=m6;var g6=43;B.plus=g6;var w6=61;B.equals=w6;var y6=124;B.pipe=y6;var v6=62;B.greaterThan=v6;var b6=32;B.space=b6;var Y1=39;B.singleQuote=Y1;var x6=34;B.doubleQuote=x6;var k6=47;B.slash=k6;var S6=33;B.bang=S6;var _6=92;B.backslash=_6;var T6=13;B.cr=T6;var O6=12;B.feed=O6;var E6=10;B.newline=E6;var A6=9;B.tab=A6;var C6=Y1;B.str=C6;var P6=-1;B.comment=P6;var q6=-2;B.word=q6;var D6=-3;B.combinator=D6});var X1=b(gn=>{u();"use strict";gn.__esModule=!0;gn.default=N6;gn.FIELDS=void 0;var D=I6(wc()),$r,X;function Q1(){if(typeof WeakMap!="function")return null;var t=new WeakMap;return Q1=function(){return t},t}function I6(t){if(t&&t.__esModule)return t;if(t===null||typeof t!="object"&&typeof t!="function")return{default:t};var e=Q1();if(e&&e.has(t))return e.get(t);var r={},i=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var n in t)if(Object.prototype.hasOwnProperty.call(t,n)){var s=i?Object.getOwnPropertyDescriptor(t,n):null;s&&(s.get||s.set)?Object.defineProperty(r,n,s):r[n]=t[n]}return r.default=t,e&&e.set(t,r),r}var 
R6=($r={},$r[D.tab]=!0,$r[D.newline]=!0,$r[D.cr]=!0,$r[D.feed]=!0,$r),L6=(X={},X[D.space]=!0,X[D.tab]=!0,X[D.newline]=!0,X[D.cr]=!0,X[D.feed]=!0,X[D.ampersand]=!0,X[D.asterisk]=!0,X[D.bang]=!0,X[D.comma]=!0,X[D.colon]=!0,X[D.semicolon]=!0,X[D.openParenthesis]=!0,X[D.closeParenthesis]=!0,X[D.openSquare]=!0,X[D.closeSquare]=!0,X[D.singleQuote]=!0,X[D.doubleQuote]=!0,X[D.plus]=!0,X[D.pipe]=!0,X[D.tilde]=!0,X[D.greaterThan]=!0,X[D.equals]=!0,X[D.dollar]=!0,X[D.caret]=!0,X[D.slash]=!0,X),yc={},J1="0123456789abcdefABCDEF";for(ba=0;ba0?(S=a+_,T=y-x[_].length):(S=a,T=s),P=D.comment,a=S,m=S,p=y-T):f===D.slash?(y=o,P=f,m=a,p=o-s,l=y+1):(y=M6(r,o),P=D.word,m=a,p=y-s),l=y+1;break}e.push([P,a,o-s,m,p,o,l]),T&&(s=T,T=null),o=l}return e}});var sx=b((wn,nx)=>{u();"use strict";wn.__esModule=!0;wn.default=void 0;var z6=ze(Uf()),vc=ze(Wf()),$6=ze(Hf()),K1=ze(Qf()),j6=ze(Xf()),U6=ze(ec()),bc=ze(rc()),V6=ze(nc()),Z1=xa(fc()),W6=ze(pc()),xc=ze(hc()),G6=ze(gc()),H6=ze(H1()),C=xa(X1()),R=xa(wc()),Y6=xa(xe()),le=Xi(),Xt,kc;function ex(){if(typeof WeakMap!="function")return null;var t=new WeakMap;return ex=function(){return t},t}function xa(t){if(t&&t.__esModule)return t;if(t===null||typeof t!="object"&&typeof t!="function")return{default:t};var e=ex();if(e&&e.has(t))return e.get(t);var r={},i=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var n in t)if(Object.prototype.hasOwnProperty.call(t,n)){var s=i?Object.getOwnPropertyDescriptor(t,n):null;s&&(s.get||s.set)?Object.defineProperty(r,n,s):r[n]=t[n]}return r.default=t,e&&e.set(t,r),r}function ze(t){return t&&t.__esModule?t:{default:t}}function tx(t,e){for(var r=0;r0){var a=this.current.last;if(a){var o=this.convertWhitespaceNodesToSpace(s),l=o.space,f=o.rawSpace;f!==void 0&&(a.rawSpaceAfter+=f),a.spaces.after+=l}else s.forEach(function(P){return i.newNode(P)})}return}var c=this.currToken,p=void 0;n>this.position&&(p=this.parseWhitespaceEquivalentTokens(n));var m;if(this.isNamedCombinator()?m=this.namedCombinator():this.currToken[C.FIELDS.TYPE]===R.combinator?(m=new xc.default({value:this.content(),source:jr(this.currToken),sourceIndex:this.currToken[C.FIELDS.START_POS]}),this.position++):Sc[this.currToken[C.FIELDS.TYPE]]||p||this.unexpected(),m){if(p){var d=this.convertWhitespaceNodesToSpace(p),v=d.space,_=d.rawSpace;m.spaces.before=v,m.rawSpaceBefore=_}}else{var x=this.convertWhitespaceNodesToSpace(p,!0),y=x.space,S=x.rawSpace;S||(S=y);var T={},O={spaces:{}};y.endsWith(" ")&&S.endsWith(" ")?(T.before=y.slice(0,y.length-1),O.spaces.before=S.slice(0,S.length-1)):y.startsWith(" ")&&S.startsWith(" ")?(T.after=y.slice(1),O.spaces.after=S.slice(1)):O.value=S,m=new xc.default({value:" ",source:_c(c,this.tokens[this.position-1]),sourceIndex:c[C.FIELDS.START_POS],spaces:T,raws:O})}return this.currToken&&this.currToken[C.FIELDS.TYPE]===R.space&&(m.spaces.after=this.optionalSpace(this.content()),this.position++),this.newNode(m)},e.comma=function(){if(this.position===this.tokens.length-1){this.root.trailingComma=!0,this.position++;return}this.current._inferEndPosition();var i=new vc.default({source:{start:rx(this.tokens[this.position+1])}});this.current.parent.append(i),this.current=i,this.position++},e.comment=function(){var i=this.currToken;this.newNode(new K1.default({value:this.content(),source:jr(i),sourceIndex:i[C.FIELDS.START_POS]})),this.position++},e.error=function(i,n){throw this.root.error(i,n)},e.missingBackslash=function(){return this.error("Expected a backslash preceding the 
semicolon.",{index:this.currToken[C.FIELDS.START_POS]})},e.missingParenthesis=function(){return this.expected("opening parenthesis",this.currToken[C.FIELDS.START_POS])},e.missingSquareBracket=function(){return this.expected("opening square bracket",this.currToken[C.FIELDS.START_POS])},e.unexpected=function(){return this.error("Unexpected '"+this.content()+"'. Escaping special characters with \\ may help.",this.currToken[C.FIELDS.START_POS])},e.namespace=function(){var i=this.prevToken&&this.content(this.prevToken)||!0;if(this.nextToken[C.FIELDS.TYPE]===R.word)return this.position++,this.word(i);if(this.nextToken[C.FIELDS.TYPE]===R.asterisk)return this.position++,this.universal(i)},e.nesting=function(){if(this.nextToken){var i=this.content(this.nextToken);if(i==="|"){this.position++;return}}var n=this.currToken;this.newNode(new G6.default({value:this.content(),source:jr(n),sourceIndex:n[C.FIELDS.START_POS]})),this.position++},e.parentheses=function(){var i=this.current.last,n=1;if(this.position++,i&&i.type===Y6.PSEUDO){var s=new vc.default({source:{start:rx(this.tokens[this.position-1])}}),a=this.current;for(i.append(s),this.current=s;this.position1&&i.nextToken&&i.nextToken[C.FIELDS.TYPE]===R.openParenthesis&&i.error("Misplaced parenthesis.",{index:i.nextToken[C.FIELDS.START_POS]})});else return this.expected(["pseudo-class","pseudo-element"],this.currToken[C.FIELDS.START_POS])},e.space=function(){var i=this.content();this.position===0||this.prevToken[C.FIELDS.TYPE]===R.comma||this.prevToken[C.FIELDS.TYPE]===R.openParenthesis||this.current.nodes.every(function(n){return n.type==="comment"})?(this.spaces=this.optionalSpace(i),this.position++):this.position===this.tokens.length-1||this.nextToken[C.FIELDS.TYPE]===R.comma||this.nextToken[C.FIELDS.TYPE]===R.closeParenthesis?(this.current.last.spaces.after=this.optionalSpace(i),this.position++):this.combinator()},e.string=function(){var i=this.currToken;this.newNode(new bc.default({value:this.content(),source:jr(i),sourceIndex:i[C.FIELDS.START_POS]})),this.position++},e.universal=function(i){var n=this.nextToken;if(n&&this.content(n)==="|")return this.position++,this.namespace();var s=this.currToken;this.newNode(new W6.default({value:this.content(),source:jr(s),sourceIndex:s[C.FIELDS.START_POS]}),i),this.position++},e.splitWord=function(i,n){for(var s=this,a=this.nextToken,o=this.content();a&&~[R.dollar,R.caret,R.equals,R.word].indexOf(a[C.FIELDS.TYPE]);){this.position++;var l=this.content();if(o+=l,l.lastIndexOf("\\")===l.length-1){var f=this.nextToken;f&&f[C.FIELDS.TYPE]===R.space&&(o+=this.requiredSpace(this.content(f)),this.position++)}a=this.nextToken}var c=Tc(o,".").filter(function(v){var _=o[v-1]==="\\",x=/^\d+\.\d+%$/.test(o);return!_&&!x}),p=Tc(o,"#").filter(function(v){return o[v-1]!=="\\"}),m=Tc(o,"#{");m.length&&(p=p.filter(function(v){return!~m.indexOf(v)}));var d=(0,H6.default)(X6([0].concat(c,p)));d.forEach(function(v,_){var x=d[_+1]||o.length,y=o.slice(v,x);if(_===0&&n)return n.call(s,y,d.length);var S,T=s.currToken,O=T[C.FIELDS.START_POS]+d[_],P=Kt(T[1],T[2]+v,T[3],T[2]+(x-1));if(~c.indexOf(v)){var N={value:y.slice(1),source:P,sourceIndex:O};S=new $6.default(Ur(N,"value"))}else if(~p.indexOf(v)){var z={value:y.slice(1),source:P,sourceIndex:O};S=new j6.default(Ur(z,"value"))}else{var F={value:y,source:P,sourceIndex:O};Ur(F,"value"),S=new U6.default(F)}s.newNode(S,i),i=null}),this.position++},e.word=function(i){var n=this.nextToken;return 
n&&this.content(n)==="|"?(this.position++,this.namespace()):this.splitWord(i)},e.loop=function(){for(;this.position{u();"use strict";yn.__esModule=!0;yn.default=void 0;var Z6=eR(sx());function eR(t){return t&&t.__esModule?t:{default:t}}var tR=function(){function t(r,i){this.func=r||function(){},this.funcRes=null,this.options=i}var e=t.prototype;return e._shouldUpdateSelector=function(i,n){n===void 0&&(n={});var s=Object.assign({},this.options,n);return s.updateSelector===!1?!1:typeof i!="string"},e._isLossy=function(i){i===void 0&&(i={});var n=Object.assign({},this.options,i);return n.lossless===!1},e._root=function(i,n){n===void 0&&(n={});var s=new Z6.default(i,this._parseOptions(n));return s.root},e._parseOptions=function(i){return{lossy:this._isLossy(i)}},e._run=function(i,n){var s=this;return n===void 0&&(n={}),new Promise(function(a,o){try{var l=s._root(i,n);Promise.resolve(s.func(l)).then(function(f){var c=void 0;return s._shouldUpdateSelector(i,n)&&(c=l.toString(),i.selector=c),{transform:f,root:l,string:c}}).then(a,o)}catch(f){o(f);return}})},e._runSync=function(i,n){n===void 0&&(n={});var s=this._root(i,n),a=this.func(s);if(a&&typeof a.then=="function")throw new Error("Selector processor returned a promise to a synchronous call.");var o=void 0;return n.updateSelector&&typeof i!="string"&&(o=s.toString(),i.selector=o),{transform:a,root:s,string:o}},e.ast=function(i,n){return this._run(i,n).then(function(s){return s.root})},e.astSync=function(i,n){return this._runSync(i,n).root},e.transform=function(i,n){return this._run(i,n).then(function(s){return s.transform})},e.transformSync=function(i,n){return this._runSync(i,n).transform},e.process=function(i,n){return this._run(i,n).then(function(s){return s.string||s.root.toString()})},e.processSync=function(i,n){var s=this._runSync(i,n);return s.string||s.root.toString()},t}();yn.default=tR;ax.exports=yn.default});var lx=b(ie=>{u();"use strict";ie.__esModule=!0;ie.universal=ie.tag=ie.string=ie.selector=ie.root=ie.pseudo=ie.nesting=ie.id=ie.comment=ie.combinator=ie.className=ie.attribute=void 0;var rR=$e(fc()),iR=$e(Hf()),nR=$e(hc()),sR=$e(Qf()),aR=$e(Xf()),oR=$e(gc()),lR=$e(nc()),uR=$e(Uf()),fR=$e(Wf()),cR=$e(rc()),pR=$e(ec()),dR=$e(pc());function $e(t){return t&&t.__esModule?t:{default:t}}var hR=function(e){return new rR.default(e)};ie.attribute=hR;var mR=function(e){return new iR.default(e)};ie.className=mR;var gR=function(e){return new nR.default(e)};ie.combinator=gR;var wR=function(e){return new sR.default(e)};ie.comment=wR;var yR=function(e){return new aR.default(e)};ie.id=yR;var vR=function(e){return new oR.default(e)};ie.nesting=vR;var bR=function(e){return new lR.default(e)};ie.pseudo=bR;var xR=function(e){return new uR.default(e)};ie.root=xR;var kR=function(e){return new fR.default(e)};ie.selector=kR;var SR=function(e){return new cR.default(e)};ie.string=SR;var _R=function(e){return new pR.default(e)};ie.tag=_R;var TR=function(e){return new dR.default(e)};ie.universal=TR});var px=b(H=>{u();"use strict";H.__esModule=!0;H.isNode=Oc;H.isPseudoElement=cx;H.isPseudoClass=MR;H.isContainer=BR;H.isNamespace=FR;H.isUniversal=H.isTag=H.isString=H.isSelector=H.isRoot=H.isPseudo=H.isNesting=H.isIdentifier=H.isComment=H.isCombinator=H.isClassName=H.isAttribute=void 0;var ue=xe(),Pe,OR=(Pe={},Pe[ue.ATTRIBUTE]=!0,Pe[ue.CLASS]=!0,Pe[ue.COMBINATOR]=!0,Pe[ue.COMMENT]=!0,Pe[ue.ID]=!0,Pe[ue.NESTING]=!0,Pe[ue.PSEUDO]=!0,Pe[ue.ROOT]=!0,Pe[ue.SELECTOR]=!0,Pe[ue.STRING]=!0,Pe[ue.TAG]=!0,Pe[ue.UNIVERSAL]=!0,Pe);function Oc(t){return typeof 
t=="object"&&OR[t.type]}function je(t,e){return Oc(e)&&e.type===t}var ux=je.bind(null,ue.ATTRIBUTE);H.isAttribute=ux;var ER=je.bind(null,ue.CLASS);H.isClassName=ER;var AR=je.bind(null,ue.COMBINATOR);H.isCombinator=AR;var CR=je.bind(null,ue.COMMENT);H.isComment=CR;var PR=je.bind(null,ue.ID);H.isIdentifier=PR;var qR=je.bind(null,ue.NESTING);H.isNesting=qR;var Ec=je.bind(null,ue.PSEUDO);H.isPseudo=Ec;var DR=je.bind(null,ue.ROOT);H.isRoot=DR;var IR=je.bind(null,ue.SELECTOR);H.isSelector=IR;var RR=je.bind(null,ue.STRING);H.isString=RR;var fx=je.bind(null,ue.TAG);H.isTag=fx;var LR=je.bind(null,ue.UNIVERSAL);H.isUniversal=LR;function cx(t){return Ec(t)&&t.value&&(t.value.startsWith("::")||t.value.toLowerCase()===":before"||t.value.toLowerCase()===":after"||t.value.toLowerCase()===":first-letter"||t.value.toLowerCase()===":first-line")}function MR(t){return Ec(t)&&!cx(t)}function BR(t){return!!(Oc(t)&&t.walk)}function FR(t){return ux(t)||fx(t)}});var dx=b(Je=>{u();"use strict";Je.__esModule=!0;var Ac=xe();Object.keys(Ac).forEach(function(t){t==="default"||t==="__esModule"||t in Je&&Je[t]===Ac[t]||(Je[t]=Ac[t])});var Cc=lx();Object.keys(Cc).forEach(function(t){t==="default"||t==="__esModule"||t in Je&&Je[t]===Cc[t]||(Je[t]=Cc[t])});var Pc=px();Object.keys(Pc).forEach(function(t){t==="default"||t==="__esModule"||t in Je&&Je[t]===Pc[t]||(Je[t]=Pc[t])})});var gx=b((vn,mx)=>{u();"use strict";vn.__esModule=!0;vn.default=void 0;var NR=jR(ox()),zR=$R(dx());function hx(){if(typeof WeakMap!="function")return null;var t=new WeakMap;return hx=function(){return t},t}function $R(t){if(t&&t.__esModule)return t;if(t===null||typeof t!="object"&&typeof t!="function")return{default:t};var e=hx();if(e&&e.has(t))return e.get(t);var r={},i=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var n in t)if(Object.prototype.hasOwnProperty.call(t,n)){var s=i?Object.getOwnPropertyDescriptor(t,n):null;s&&(s.get||s.set)?Object.defineProperty(r,n,s):r[n]=t[n]}return r.default=t,e&&e.set(t,r),r}function jR(t){return t&&t.__esModule?t:{default:t}}var qc=function(e){return new NR.default(e)};Object.assign(qc,zR);delete qc.__esModule;var UR=qc;vn.default=UR;mx.exports=vn.default});var vx=b((A$,yx)=>{u();var VR=p1(),wx=gx(),WR=wx();yx.exports={isUsableColor(t,e){return VR(e)&&t!=="gray"&&e[600]},commonTrailingPseudos(t){let e=WR.astSync(t),r=[];for(let[n,s]of e.nodes.entries())for(let[a,o]of[...s.nodes].reverse().entries()){if(o.type!=="pseudo"||!o.value.startsWith("::"))break;r[a]=r[a]||[],r[a][n]=o}let i=wx.selector();for(let n of r){if(!n)continue;if(new Set([...n.map(a=>a.value)]).size>1)break;n.forEach(a=>a.remove()),i.prepend(n[0])}return i.nodes.length?[i.toString(),e.toString()]:[null,t]}}});var Sx=b((C$,kx)=>{u();var GR=(br(),vr).default,HR=n1(),YR=a1(),QR=l1(),{commonTrailingPseudos:JR}=vx(),bx={};function Dc(t,{className:e,modifier:r,prefix:i}){let n=i(`.not-${e}`).slice(1),s=t.startsWith(">")?`${r==="DEFAULT"?`.${e}`:`.${e}-${r}`} `:"",[a,o]=JR(t);return a?`:where(${s}${o}):not(:where([class~="${n}"] *))${a}`:`:where(${s}${t}):not(:where([class~="${n}"] *))`}function xx(t){return typeof t=="object"&&t!==null}function XR(t={},{target:e,className:r,modifier:i,prefix:n}){function s(a,o){return e==="legacy"?[a,o]:Array.isArray(o)?[a,o]:xx(o)?Object.values(o).some(xx)?[Dc(a,{className:r,modifier:i,prefix:n}),o,Object.fromEntries(Object.entries(o).map(([f,c])=>s(f,c)))]:[Dc(a,{className:r,modifier:i,prefix:n}),o]:[a,o]}return 
Object.fromEntries(Object.entries(HR({},...Object.keys(t).filter(a=>bx[a]).map(a=>bx[a](t[a])),...YR(t.css||{}))).map(([a,o])=>s(a,o)))}kx.exports=GR.withOptions(({className:t="prose",target:e="modern"}={})=>function({addVariant:r,addComponents:i,theme:n,prefix:s}){let a=n("typography"),o={className:t,prefix:s};for(let[l,...f]of[["headings","h1","h2","h3","h4","h5","h6","th"],["h1"],["h2"],["h3"],["h4"],["h5"],["h6"],["p"],["a"],["blockquote"],["figure"],["figcaption"],["strong"],["em"],["code"],["pre"],["ol"],["ul"],["li"],["table"],["thead"],["tr"],["th"],["td"],["img"],["video"],["hr"],["lead",'[class~="lead"]']]){f=f.length===0?[l]:f;let c=e==="legacy"?f.map(p=>`& ${p}`):f.join(", ");r(`${t}-${l}`,e==="legacy"?c:`& :is(${Dc(c,o)})`)}i(Object.keys(a).map(l=>({[l==="DEFAULT"?`.${t}`:`.${t}-${l}`]:XR(a[l],{target:e,className:t,modifier:l,prefix:s})})))},()=>({theme:{typography:QR}}))});var Ax=b((P$,Ex)=>{u();var KR=(br(),vr).default,_x={position:"relative",paddingBottom:"calc(var(--tw-aspect-h) / var(--tw-aspect-w) * 100%)"},Tx={position:"absolute",height:"100%",width:"100%",top:"0",right:"0",bottom:"0",left:"0"},Ox={".aspect-none":{position:"static",paddingBottom:"0"},".aspect-none > *":{position:"static",height:"auto",width:"auto",top:"auto",right:"auto",bottom:"auto",left:"auto"}},ZR=KR(function({addComponents:t,matchComponents:e,theme:r,variants:i,e:n}){let s=r("aspectRatio");if(e){e({"aspect-w":l=>[{..._x,"--tw-aspect-w":l},{"> *":Tx}],"aspect-h":l=>({"--tw-aspect-h":l})},{values:s}),t(Ox);return}let a=Object.entries(s).map(([l,f])=>`.${n(`aspect-w-${l}`)}`).join(`, -`),o=Object.entries(s).map(([l,f])=>`.${n(`aspect-w-${l}`)} > *`).join(`, -`);t([{[a]:_x,[o]:Tx},Ox,Object.entries(s).map(([l,f])=>({[`.${n(`aspect-w-${l}`)}`]:{"--tw-aspect-w":f}})),Object.entries(s).map(([l,f])=>({[`.${n(`aspect-h-${l}`)}`]:{"--tw-aspect-h":f}}))],i("aspectRatio"))},{theme:{aspectRatio:{1:"1",2:"2",3:"3",4:"4",5:"5",6:"6",7:"7",8:"8",9:"9",10:"10",11:"11",12:"12",13:"13",14:"14",15:"15",16:"16"}},variants:{aspectRatio:["responsive"]}});Ex.exports=ZR});var Cx={};Ve(Cx,{default:()=>eL});var eL,Px=E(()=>{u();eL=[Cb(),Sx(),Ax(),Wl()]});var Dx={};Ve(Dx,{default:()=>tL});var qx,tL,Ix=E(()=>{u();En();qx=he(Dn()),tL=Ot(qx.default)});u();"use strict";var rL=kt($0()),iL=kt(De()),nL=kt(vb()),sL=kt((Px(),Cx)),aL=kt((Af(),Ef)),oL=kt((Ix(),Dx)),lL=kt((Wr(),_n)),uL=kt((br(),vr)),fL=kt((ja(),Pp));function kt(t){return t&&t.__esModule?t:{default:t}}console.warn("cdn.tailwindcss.com should not be used in production. 
To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation");var ka="tailwind",Ic="text/tailwindcss",Rx="/template.html",Zt,Lx=!0,Mx=0,Rc=new Set,Lc,Bx="",Fx=(t=!1)=>({get(e,r){return(!t||r==="config")&&typeof e[r]=="object"&&e[r]!==null?new Proxy(e[r],Fx()):e[r]},set(e,r,i){return e[r]=i,(!t||r==="config")&&Mc(!0),!0}});window[ka]=new Proxy({config:{},defaultTheme:aL.default,defaultConfig:oL.default,colors:lL.default,plugin:uL.default,resolveConfig:fL.default},Fx(!0));function Nx(t){Lc.observe(t,{attributes:!0,attributeFilter:["type"],characterData:!0,subtree:!0,childList:!0})}new MutationObserver(async t=>{let e=!1;if(!Lc){Lc=new MutationObserver(async()=>await Mc(!0));for(let r of document.querySelectorAll(`style[type="${Ic}"]`))Nx(r)}for(let r of t)for(let i of r.addedNodes)i.nodeType===1&&i.tagName==="STYLE"&&i.getAttribute("type")===Ic&&(Nx(i),e=!0);await Mc(e)}).observe(document.documentElement,{attributes:!0,attributeFilter:["class"],childList:!0,subtree:!0});async function Mc(t=!1){t&&(Mx++,Rc.clear());let e="";for(let i of document.querySelectorAll(`style[type="${Ic}"]`))e+=i.textContent;let r=new Set;for(let i of document.querySelectorAll("[class]"))for(let n of i.classList)Rc.has(n)||r.add(n);if(document.body&&(Lx||r.size>0||e!==Bx||!Zt||!Zt.isConnected)){for(let n of r)Rc.add(n);Lx=!1,Bx=e,self[Rx]=Array.from(r).join(" ");let i=(0,iL.default)([(0,rL.default)({...window[ka].config,_hash:Mx,content:[Rx],plugins:[...sL.default,...Array.isArray(window[ka].config.plugins)?window[ka].config.plugins:[]]}),(0,nL.default)({remove:!1})]).process(`@tailwind base;@tailwind components;@tailwind utilities;${e}`).css;(!Zt||!Zt.isConnected)&&(Zt=document.createElement("style"),document.head.append(Zt)),Zt.textContent=i}}})(); -/*! https://mths.be/cssesc v3.0.0 by @mathias */ diff --git a/spaces/jdhuka/StaticHTML5PlayCanvas/style.css b/spaces/jdhuka/StaticHTML5PlayCanvas/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/jdhuka/StaticHTML5PlayCanvas/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/PublicKey/ECC.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/PublicKey/ECC.py deleted file mode 100644 index dbb29d5b9138211321db0379dd76c98d76fc4415..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/PublicKey/ECC.py +++ /dev/null @@ -1,1800 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2015, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. 
Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from __future__ import print_function - -import re -import struct -import binascii -from collections import namedtuple - -from Crypto.Util.py3compat import bord, tobytes, tostr, bchr, is_string -from Crypto.Util.number import bytes_to_long, long_to_bytes - -from Crypto.Math.Numbers import Integer -from Crypto.Util.asn1 import (DerObjectId, DerOctetString, DerSequence, - DerBitString) - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, - SmartPointer, c_size_t, c_uint8_ptr, - c_ulonglong, null_pointer) - -from Crypto.PublicKey import (_expand_subject_public_key_info, - _create_subject_public_key_info, - _extract_subject_public_key_info) - -from Crypto.Hash import SHA512, SHAKE256 - -from Crypto.Random import get_random_bytes -from Crypto.Random.random import getrandbits - - -_ec_lib = load_pycryptodome_raw_lib("Crypto.PublicKey._ec_ws", """ -typedef void EcContext; -typedef void EcPoint; -int ec_ws_new_context(EcContext **pec_ctx, - const uint8_t *modulus, - const uint8_t *b, - const uint8_t *order, - size_t len, - uint64_t seed); -void ec_free_context(EcContext *ec_ctx); -int ec_ws_new_point(EcPoint **pecp, - const uint8_t *x, - const uint8_t *y, - size_t len, - const EcContext *ec_ctx); -void ec_ws_free_point(EcPoint *ecp); -int ec_ws_get_xy(uint8_t *x, - uint8_t *y, - size_t len, - const EcPoint *ecp); -int ec_ws_double(EcPoint *p); -int ec_ws_add(EcPoint *ecpa, EcPoint *ecpb); -int ec_ws_scalar(EcPoint *ecp, - const uint8_t *k, - size_t len, - uint64_t seed); -int ec_ws_clone(EcPoint **pecp2, const EcPoint *ecp); -int ec_ws_cmp(const EcPoint *ecp1, const EcPoint *ecp2); -int ec_ws_neg(EcPoint *p); -""") - -_ed25519_lib = load_pycryptodome_raw_lib("Crypto.PublicKey._ed25519", """ -typedef void Point; -int ed25519_new_point(Point **out, - const uint8_t x[32], - const uint8_t y[32], - size_t modsize, - const void *context); -int ed25519_clone(Point **P, const Point *Q); -void ed25519_free_point(Point *p); -int ed25519_cmp(const Point *p1, const Point *p2); -int ed25519_neg(Point *p); -int ed25519_get_xy(uint8_t *xb, uint8_t *yb, size_t modsize, Point *p); -int ed25519_double(Point *p); -int ed25519_add(Point *P1, const Point *P2); -int ed25519_scalar(Point *P, const uint8_t *scalar, size_t scalar_len, uint64_t seed); -""") - -_ed448_lib = load_pycryptodome_raw_lib("Crypto.PublicKey._ed448", """ -typedef void EcContext; -typedef void PointEd448; -int ed448_new_context(EcContext **pec_ctx); -void 
ed448_context(EcContext *ec_ctx); -void ed448_free_context(EcContext *ec_ctx); -int ed448_new_point(PointEd448 **out, - const uint8_t x[56], - const uint8_t y[56], - size_t len, - const EcContext *context); -int ed448_clone(PointEd448 **P, const PointEd448 *Q); -void ed448_free_point(PointEd448 *p); -int ed448_cmp(const PointEd448 *p1, const PointEd448 *p2); -int ed448_neg(PointEd448 *p); -int ed448_get_xy(uint8_t *xb, uint8_t *yb, size_t len, const PointEd448 *p); -int ed448_double(PointEd448 *p); -int ed448_add(PointEd448 *P1, const PointEd448 *P2); -int ed448_scalar(PointEd448 *P, const uint8_t *scalar, size_t scalar_len, uint64_t seed); -""") - - -def lib_func(ecc_obj, func_name): - if ecc_obj._curve.desc == "Ed25519": - result = getattr(_ed25519_lib, "ed25519_" + func_name) - elif ecc_obj._curve.desc == "Ed448": - result = getattr(_ed448_lib, "ed448_" + func_name) - else: - result = getattr(_ec_lib, "ec_ws_" + func_name) - return result - -# -# _curves is a database of curve parameters. Items are indexed by their -# human-friendly name, suchas "P-256". Each item has the following fields: -# - p: the prime number that defines the finite field for all modulo operations -# - b: the constant in the Short Weierstrass curve equation -# - order: the number of elements in the group with the generator below -# - Gx the affine coordinate X of the generator point -# - Gy the affine coordinate Y of the generator point -# - G the generator, as an EccPoint object -# - modulus_bits the minimum number of bits for encoding the modulus p -# - oid an ASCII string with the registered ASN.1 Object ID -# - context a raw pointer to memory holding a context for all curve operations (can be NULL) -# - desc an ASCII string describing the curve -# - openssh the ASCII string used in OpenSSH id files for public keys on this curve -# - name the ASCII string which is also a valid key in _curves - - -_Curve = namedtuple("_Curve", "p b order Gx Gy G modulus_bits oid context desc openssh name") -_curves = {} - - -p192_names = ["p192", "NIST P-192", "P-192", "prime192v1", "secp192r1", - "nistp192"] - - -def init_p192(): - p = 0xfffffffffffffffffffffffffffffffeffffffffffffffff - b = 0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1 - order = 0xffffffffffffffffffffffff99def836146bc9b1b4d22831 - Gx = 0x188da80eb03090f67cbf20eb43a18800f4ff0afd82ff1012 - Gy = 0x07192b95ffc8da78631011ed6b24cdd573f977a11e794811 - - p192_modulus = long_to_bytes(p, 24) - p192_b = long_to_bytes(b, 24) - p192_order = long_to_bytes(order, 24) - - ec_p192_context = VoidPointer() - result = _ec_lib.ec_ws_new_context(ec_p192_context.address_of(), - c_uint8_ptr(p192_modulus), - c_uint8_ptr(p192_b), - c_uint8_ptr(p192_order), - c_size_t(len(p192_modulus)), - c_ulonglong(getrandbits(64)) - ) - if result: - raise ImportError("Error %d initializing P-192 context" % result) - - context = SmartPointer(ec_p192_context.get(), _ec_lib.ec_free_context) - p192 = _Curve(Integer(p), - Integer(b), - Integer(order), - Integer(Gx), - Integer(Gy), - None, - 192, - "1.2.840.10045.3.1.1", # ANSI X9.62 / SEC2 - context, - "NIST P-192", - "ecdsa-sha2-nistp192", - "p192") - global p192_names - _curves.update(dict.fromkeys(p192_names, p192)) - - -init_p192() -del init_p192 - - -p224_names = ["p224", "NIST P-224", "P-224", "prime224v1", "secp224r1", - "nistp224"] - - -def init_p224(): - p = 0xffffffffffffffffffffffffffffffff000000000000000000000001 - b = 0xb4050a850c04b3abf54132565044b0b7d7bfd8ba270b39432355ffb4 - order = 
0xffffffffffffffffffffffffffff16a2e0b8f03e13dd29455c5c2a3d - Gx = 0xb70e0cbd6bb4bf7f321390b94a03c1d356c21122343280d6115c1d21 - Gy = 0xbd376388b5f723fb4c22dfe6cd4375a05a07476444d5819985007e34 - - p224_modulus = long_to_bytes(p, 28) - p224_b = long_to_bytes(b, 28) - p224_order = long_to_bytes(order, 28) - - ec_p224_context = VoidPointer() - result = _ec_lib.ec_ws_new_context(ec_p224_context.address_of(), - c_uint8_ptr(p224_modulus), - c_uint8_ptr(p224_b), - c_uint8_ptr(p224_order), - c_size_t(len(p224_modulus)), - c_ulonglong(getrandbits(64)) - ) - if result: - raise ImportError("Error %d initializing P-224 context" % result) - - context = SmartPointer(ec_p224_context.get(), _ec_lib.ec_free_context) - p224 = _Curve(Integer(p), - Integer(b), - Integer(order), - Integer(Gx), - Integer(Gy), - None, - 224, - "1.3.132.0.33", # SEC 2 - context, - "NIST P-224", - "ecdsa-sha2-nistp224", - "p224") - global p224_names - _curves.update(dict.fromkeys(p224_names, p224)) - - -init_p224() -del init_p224 - - -p256_names = ["p256", "NIST P-256", "P-256", "prime256v1", "secp256r1", - "nistp256"] - - -def init_p256(): - p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff - b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b - order = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551 - Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296 - Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5 - - p256_modulus = long_to_bytes(p, 32) - p256_b = long_to_bytes(b, 32) - p256_order = long_to_bytes(order, 32) - - ec_p256_context = VoidPointer() - result = _ec_lib.ec_ws_new_context(ec_p256_context.address_of(), - c_uint8_ptr(p256_modulus), - c_uint8_ptr(p256_b), - c_uint8_ptr(p256_order), - c_size_t(len(p256_modulus)), - c_ulonglong(getrandbits(64)) - ) - if result: - raise ImportError("Error %d initializing P-256 context" % result) - - context = SmartPointer(ec_p256_context.get(), _ec_lib.ec_free_context) - p256 = _Curve(Integer(p), - Integer(b), - Integer(order), - Integer(Gx), - Integer(Gy), - None, - 256, - "1.2.840.10045.3.1.7", # ANSI X9.62 / SEC2 - context, - "NIST P-256", - "ecdsa-sha2-nistp256", - "p256") - global p256_names - _curves.update(dict.fromkeys(p256_names, p256)) - - -init_p256() -del init_p256 - - -p384_names = ["p384", "NIST P-384", "P-384", "prime384v1", "secp384r1", - "nistp384"] - - -def init_p384(): - p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffff0000000000000000ffffffff - b = 0xb3312fa7e23ee7e4988e056be3f82d19181d9c6efe8141120314088f5013875ac656398d8a2ed19d2a85c8edd3ec2aef - order = 0xffffffffffffffffffffffffffffffffffffffffffffffffc7634d81f4372ddf581a0db248b0a77aecec196accc52973 - Gx = 0xaa87ca22be8b05378eb1c71ef320ad746e1d3b628ba79b9859f741e082542a385502f25dbf55296c3a545e3872760aB7 - Gy = 0x3617de4a96262c6f5d9e98bf9292dc29f8f41dbd289a147ce9da3113b5f0b8c00a60b1ce1d7e819d7a431d7c90ea0e5F - - p384_modulus = long_to_bytes(p, 48) - p384_b = long_to_bytes(b, 48) - p384_order = long_to_bytes(order, 48) - - ec_p384_context = VoidPointer() - result = _ec_lib.ec_ws_new_context(ec_p384_context.address_of(), - c_uint8_ptr(p384_modulus), - c_uint8_ptr(p384_b), - c_uint8_ptr(p384_order), - c_size_t(len(p384_modulus)), - c_ulonglong(getrandbits(64)) - ) - if result: - raise ImportError("Error %d initializing P-384 context" % result) - - context = SmartPointer(ec_p384_context.get(), _ec_lib.ec_free_context) - p384 = _Curve(Integer(p), - Integer(b), - Integer(order), - Integer(Gx), - 
Integer(Gy), - None, - 384, - "1.3.132.0.34", # SEC 2 - context, - "NIST P-384", - "ecdsa-sha2-nistp384", - "p384") - global p384_names - _curves.update(dict.fromkeys(p384_names, p384)) - - -init_p384() -del init_p384 - - -p521_names = ["p521", "NIST P-521", "P-521", "prime521v1", "secp521r1", - "nistp521"] - - -def init_p521(): - p = 0x000001ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff - b = 0x00000051953eb9618e1c9a1f929a21a0b68540eea2da725b99b315f3b8b489918ef109e156193951ec7e937b1652c0bd3bb1bf073573df883d2c34f1ef451fd46b503f00 - order = 0x000001fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffa51868783bf2f966b7fcc0148f709a5d03bb5c9b8899c47aebb6fb71e91386409 - Gx = 0x000000c6858e06b70404e9cd9e3ecb662395b4429c648139053fb521f828af606b4d3dbaa14b5e77efe75928fe1dc127a2ffa8de3348b3c1856a429bf97e7e31c2e5bd66 - Gy = 0x0000011839296a789a3bc0045c8a5fb42c7d1bd998f54449579b446817afbd17273e662c97ee72995ef42640c550b9013fad0761353c7086a272c24088be94769fd16650 - - p521_modulus = long_to_bytes(p, 66) - p521_b = long_to_bytes(b, 66) - p521_order = long_to_bytes(order, 66) - - ec_p521_context = VoidPointer() - result = _ec_lib.ec_ws_new_context(ec_p521_context.address_of(), - c_uint8_ptr(p521_modulus), - c_uint8_ptr(p521_b), - c_uint8_ptr(p521_order), - c_size_t(len(p521_modulus)), - c_ulonglong(getrandbits(64)) - ) - if result: - raise ImportError("Error %d initializing P-521 context" % result) - - context = SmartPointer(ec_p521_context.get(), _ec_lib.ec_free_context) - p521 = _Curve(Integer(p), - Integer(b), - Integer(order), - Integer(Gx), - Integer(Gy), - None, - 521, - "1.3.132.0.35", # SEC 2 - context, - "NIST P-521", - "ecdsa-sha2-nistp521", - "p521") - global p521_names - _curves.update(dict.fromkeys(p521_names, p521)) - - -init_p521() -del init_p521 - - -ed25519_names = ["ed25519", "Ed25519"] - - -def init_ed25519(): - p = 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed # 2**255 - 19 - order = 0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed - Gx = 0x216936d3cd6e53fec0a4e231fdd6dc5c692cc7609525a7b2c9562d608f25d51a - Gy = 0x6666666666666666666666666666666666666666666666666666666666666658 - - ed25519 = _Curve(Integer(p), - None, - Integer(order), - Integer(Gx), - Integer(Gy), - None, - 255, - "1.3.101.112", # RFC8410 - None, - "Ed25519", # Used throughout; do not change - "ssh-ed25519", - "ed25519") - global ed25519_names - _curves.update(dict.fromkeys(ed25519_names, ed25519)) - - -init_ed25519() -del init_ed25519 - - -ed448_names = ["ed448", "Ed448"] - - -def init_ed448(): - p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffffffff # 2**448 - 2**224 - 1 - order = 0x3fffffffffffffffffffffffffffffffffffffffffffffffffffffff7cca23e9c44edb49aed63690216cc2728dc58f552378c292ab5844f3 - Gx = 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e - Gy = 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14 - - ed448_context = VoidPointer() - result = _ed448_lib.ed448_new_context(ed448_context.address_of()) - if result: - raise ImportError("Error %d initializing Ed448 context" % result) - - context = SmartPointer(ed448_context.get(), _ed448_lib.ed448_free_context) - - ed448 = _Curve(Integer(p), - None, - Integer(order), - Integer(Gx), - Integer(Gy), - None, - 448, - "1.3.101.113", # RFC8410 - 
context, - "Ed448", # Used throughout; do not change - None, - "ed448") - global ed448_names - _curves.update(dict.fromkeys(ed448_names, ed448)) - - -init_ed448() -del init_ed448 - - -class UnsupportedEccFeature(ValueError): - pass - - -class EccPoint(object): - """A class to model a point on an Elliptic Curve. - - The class supports operators for: - - * Adding two points: ``R = S + T`` - * In-place addition: ``S += T`` - * Negating a point: ``R = -T`` - * Comparing two points: ``if S == T: ...`` or ``if S != T: ...`` - * Multiplying a point by a scalar: ``R = S*k`` - * In-place multiplication by a scalar: ``T *= k`` - - :ivar x: The affine X-coordinate of the ECC point - :vartype x: integer - - :ivar y: The affine Y-coordinate of the ECC point - :vartype y: integer - - :ivar xy: The tuple with affine X- and Y- coordinates - """ - - def __init__(self, x, y, curve="p256"): - - try: - self._curve = _curves[curve] - except KeyError: - raise ValueError("Unknown curve name %s" % str(curve)) - self._curve_name = curve - - modulus_bytes = self.size_in_bytes() - - xb = long_to_bytes(x, modulus_bytes) - yb = long_to_bytes(y, modulus_bytes) - if len(xb) != modulus_bytes or len(yb) != modulus_bytes: - raise ValueError("Incorrect coordinate length") - - new_point = lib_func(self, "new_point") - free_func = lib_func(self, "free_point") - - self._point = VoidPointer() - try: - context = self._curve.context.get() - except AttributeError: - context = null_pointer - result = new_point(self._point.address_of(), - c_uint8_ptr(xb), - c_uint8_ptr(yb), - c_size_t(modulus_bytes), - context) - - if result: - if result == 15: - raise ValueError("The EC point does not belong to the curve") - raise ValueError("Error %d while instantiating an EC point" % result) - - # Ensure that object disposal of this Python object will (eventually) - # free the memory allocated by the raw library for the EC point - self._point = SmartPointer(self._point.get(), free_func) - - def set(self, point): - clone = lib_func(self, "clone") - free_func = lib_func(self, "free_point") - - self._point = VoidPointer() - result = clone(self._point.address_of(), - point._point.get()) - - if result: - raise ValueError("Error %d while cloning an EC point" % result) - - self._point = SmartPointer(self._point.get(), free_func) - return self - - def __eq__(self, point): - if not isinstance(point, EccPoint): - return False - - cmp_func = lib_func(self, "cmp") - return 0 == cmp_func(self._point.get(), point._point.get()) - - # Only needed for Python 2 - def __ne__(self, point): - return not self == point - - def __neg__(self): - neg_func = lib_func(self, "neg") - np = self.copy() - result = neg_func(np._point.get()) - if result: - raise ValueError("Error %d while inverting an EC point" % result) - return np - - def copy(self): - """Return a copy of this point.""" - x, y = self.xy - np = EccPoint(x, y, self._curve_name) - return np - - def _is_eddsa(self): - return self._curve.name in ("ed25519", "ed448") - - def is_point_at_infinity(self): - """``True`` if this is the *point-at-infinity*.""" - - if self._is_eddsa(): - return self.x == 0 - else: - return self.xy == (0, 0) - - def point_at_infinity(self): - """Return the *point-at-infinity* for the curve.""" - - if self._is_eddsa(): - return EccPoint(0, 1, self._curve_name) - else: - return EccPoint(0, 0, self._curve_name) - - @property - def x(self): - return self.xy[0] - - @property - def y(self): - return self.xy[1] - - @property - def xy(self): - modulus_bytes = self.size_in_bytes() - xb = 
bytearray(modulus_bytes) - yb = bytearray(modulus_bytes) - get_xy = lib_func(self, "get_xy") - result = get_xy(c_uint8_ptr(xb), - c_uint8_ptr(yb), - c_size_t(modulus_bytes), - self._point.get()) - if result: - raise ValueError("Error %d while encoding an EC point" % result) - - return (Integer(bytes_to_long(xb)), Integer(bytes_to_long(yb))) - - def size_in_bytes(self): - """Size of each coordinate, in bytes.""" - return (self.size_in_bits() + 7) // 8 - - def size_in_bits(self): - """Size of each coordinate, in bits.""" - return self._curve.modulus_bits - - def double(self): - """Double this point (in-place operation). - - Returns: - This same object (to enable chaining). - """ - - double_func = lib_func(self, "double") - result = double_func(self._point.get()) - if result: - raise ValueError("Error %d while doubling an EC point" % result) - return self - - def __iadd__(self, point): - """Add a second point to this one""" - - add_func = lib_func(self, "add") - result = add_func(self._point.get(), point._point.get()) - if result: - if result == 16: - raise ValueError("EC points are not on the same curve") - raise ValueError("Error %d while adding two EC points" % result) - return self - - def __add__(self, point): - """Return a new point, the addition of this one and another""" - - np = self.copy() - np += point - return np - - def __imul__(self, scalar): - """Multiply this point by a scalar""" - - scalar_func = lib_func(self, "scalar") - if scalar < 0: - raise ValueError("Scalar multiplication is only defined for non-negative integers") - sb = long_to_bytes(scalar) - result = scalar_func(self._point.get(), - c_uint8_ptr(sb), - c_size_t(len(sb)), - c_ulonglong(getrandbits(64))) - if result: - raise ValueError("Error %d during scalar multiplication" % result) - return self - - def __mul__(self, scalar): - """Return a new point, the scalar product of this one""" - - np = self.copy() - np *= scalar - return np - - def __rmul__(self, left_hand): - return self.__mul__(left_hand) - - -# Last piece of initialization -p192_G = EccPoint(_curves['p192'].Gx, _curves['p192'].Gy, "p192") -p192 = _curves['p192']._replace(G=p192_G) -_curves.update(dict.fromkeys(p192_names, p192)) -del p192_G, p192, p192_names - -p224_G = EccPoint(_curves['p224'].Gx, _curves['p224'].Gy, "p224") -p224 = _curves['p224']._replace(G=p224_G) -_curves.update(dict.fromkeys(p224_names, p224)) -del p224_G, p224, p224_names - -p256_G = EccPoint(_curves['p256'].Gx, _curves['p256'].Gy, "p256") -p256 = _curves['p256']._replace(G=p256_G) -_curves.update(dict.fromkeys(p256_names, p256)) -del p256_G, p256, p256_names - -p384_G = EccPoint(_curves['p384'].Gx, _curves['p384'].Gy, "p384") -p384 = _curves['p384']._replace(G=p384_G) -_curves.update(dict.fromkeys(p384_names, p384)) -del p384_G, p384, p384_names - -p521_G = EccPoint(_curves['p521'].Gx, _curves['p521'].Gy, "p521") -p521 = _curves['p521']._replace(G=p521_G) -_curves.update(dict.fromkeys(p521_names, p521)) -del p521_G, p521, p521_names - -ed25519_G = EccPoint(_curves['Ed25519'].Gx, _curves['Ed25519'].Gy, "Ed25519") -ed25519 = _curves['Ed25519']._replace(G=ed25519_G) -_curves.update(dict.fromkeys(ed25519_names, ed25519)) -del ed25519_G, ed25519, ed25519_names - -ed448_G = EccPoint(_curves['Ed448'].Gx, _curves['Ed448'].Gy, "Ed448") -ed448 = _curves['Ed448']._replace(G=ed448_G) -_curves.update(dict.fromkeys(ed448_names, ed448)) -del ed448_G, ed448, ed448_names - - -class EccKey(object): - r"""Class defining an ECC key. - Do not instantiate directly. 
- Use :func:`generate`, :func:`construct` or :func:`import_key` instead. - - :ivar curve: The name of the curve as defined in the `ECC table`_. - :vartype curve: string - - :ivar pointQ: an ECC point representating the public component. - :vartype pointQ: :class:`EccPoint` - - :ivar d: A scalar that represents the private component - in NIST P curves. It is smaller than the - order of the generator point. - :vartype d: integer - - :ivar seed: A seed that representats the private component - in EdDSA curves - (Ed25519, 32 bytes; Ed448, 57 bytes). - :vartype seed: bytes - """ - - def __init__(self, **kwargs): - """Create a new ECC key - - Keywords: - curve : string - The name of the curve. - d : integer - Mandatory for a private key one NIST P curves. - It must be in the range ``[1..order-1]``. - seed : bytes - Mandatory for a private key on the Ed25519 (32 bytes) - or Ed448 (57 bytes) curve. - point : EccPoint - Mandatory for a public key. If provided for a private key, - the implementation will NOT check whether it matches ``d``. - - Only one parameter among ``d``, ``seed`` or ``point`` may be used. - """ - - kwargs_ = dict(kwargs) - curve_name = kwargs_.pop("curve", None) - self._d = kwargs_.pop("d", None) - self._seed = kwargs_.pop("seed", None) - self._point = kwargs_.pop("point", None) - if curve_name is None and self._point: - curve_name = self._point._curve_name - if kwargs_: - raise TypeError("Unknown parameters: " + str(kwargs_)) - - if curve_name not in _curves: - raise ValueError("Unsupported curve (%s)" % curve_name) - self._curve = _curves[curve_name] - self.curve = self._curve.desc - - count = int(self._d is not None) + int(self._seed is not None) - - if count == 0: - if self._point is None: - raise ValueError("At lest one between parameters 'point', 'd' or 'seed' must be specified") - return - - if count == 2: - raise ValueError("Parameters d and seed are mutually exclusive") - - # NIST P curves work with d, EdDSA works with seed - - if not self._is_eddsa(): - if self._seed is not None: - raise ValueError("Parameter 'seed' can only be used with Ed25519 or Ed448") - self._d = Integer(self._d) - if not 1 <= self._d < self._curve.order: - raise ValueError("Parameter d must be an integer smaller than the curve order") - else: - if self._d is not None: - raise ValueError("Parameter d can only be used with NIST P curves") - # RFC 8032, 5.1.5 - if self._curve.name == "ed25519": - if len(self._seed) != 32: - raise ValueError("Parameter seed must be 32 bytes long for Ed25519") - seed_hash = SHA512.new(self._seed).digest() # h - self._prefix = seed_hash[32:] - tmp = bytearray(seed_hash[:32]) - tmp[0] &= 0xF8 - tmp[31] = (tmp[31] & 0x7F) | 0x40 - # RFC 8032, 5.2.5 - elif self._curve.name == "ed448": - if len(self._seed) != 57: - raise ValueError("Parameter seed must be 57 bytes long for Ed448") - seed_hash = SHAKE256.new(self._seed).read(114) # h - self._prefix = seed_hash[57:] - tmp = bytearray(seed_hash[:57]) - tmp[0] &= 0xFC - tmp[55] |= 0x80 - tmp[56] = 0 - self._d = Integer.from_bytes(tmp, byteorder='little') - - def _is_eddsa(self): - return self._curve.desc in ("Ed25519", "Ed448") - - def __eq__(self, other): - if not isinstance(other, EccKey): - return False - - if other.has_private() != self.has_private(): - return False - - return other.pointQ == self.pointQ - - def __repr__(self): - if self.has_private(): - if self._is_eddsa(): - extra = ", seed=%s" % tostr(binascii.hexlify(self._seed)) - else: - extra = ", d=%d" % int(self._d) - else: - extra = "" - x, y = self.pointQ.xy - 
return "EccKey(curve='%s', point_x=%d, point_y=%d%s)" % (self._curve.desc, x, y, extra) - - def has_private(self): - """``True`` if this key can be used for making signatures or decrypting data.""" - - return self._d is not None - - # ECDSA - def _sign(self, z, k): - assert 0 < k < self._curve.order - - order = self._curve.order - blind = Integer.random_range(min_inclusive=1, - max_exclusive=order) - - blind_d = self._d * blind - inv_blind_k = (blind * k).inverse(order) - - r = (self._curve.G * k).x % order - s = inv_blind_k * (blind * z + blind_d * r) % order - return (r, s) - - # ECDSA - def _verify(self, z, rs): - order = self._curve.order - sinv = rs[1].inverse(order) - point1 = self._curve.G * ((sinv * z) % order) - point2 = self.pointQ * ((sinv * rs[0]) % order) - return (point1 + point2).x == rs[0] - - @property - def d(self): - if not self.has_private(): - raise ValueError("This is not a private ECC key") - return self._d - - @property - def seed(self): - if not self.has_private(): - raise ValueError("This is not a private ECC key") - return self._seed - - @property - def pointQ(self): - if self._point is None: - self._point = self._curve.G * self._d - return self._point - - def public_key(self): - """A matching ECC public key. - - Returns: - a new :class:`EccKey` object - """ - - return EccKey(curve=self._curve.desc, point=self.pointQ) - - def _export_SEC1(self, compress): - if self._is_eddsa(): - raise ValueError("SEC1 format is unsupported for EdDSA curves") - - # See 2.2 in RFC5480 and 2.3.3 in SEC1 - # - # The first byte is: - # - 0x02: compressed, only X-coordinate, Y-coordinate is even - # - 0x03: compressed, only X-coordinate, Y-coordinate is odd - # - 0x04: uncompressed, X-coordinate is followed by Y-coordinate - # - # PAI is in theory encoded as 0x00. 
- - modulus_bytes = self.pointQ.size_in_bytes() - - if compress: - if self.pointQ.y.is_odd(): - first_byte = b'\x03' - else: - first_byte = b'\x02' - public_key = (first_byte + - self.pointQ.x.to_bytes(modulus_bytes)) - else: - public_key = (b'\x04' + - self.pointQ.x.to_bytes(modulus_bytes) + - self.pointQ.y.to_bytes(modulus_bytes)) - return public_key - - def _export_eddsa(self): - x, y = self.pointQ.xy - if self._curve.name == "ed25519": - result = bytearray(y.to_bytes(32, byteorder='little')) - result[31] = ((x & 1) << 7) | result[31] - elif self._curve.name == "ed448": - result = bytearray(y.to_bytes(57, byteorder='little')) - result[56] = (x & 1) << 7 - else: - raise ValueError("Not an EdDSA key to export") - return bytes(result) - - def _export_subjectPublicKeyInfo(self, compress): - if self._is_eddsa(): - oid = self._curve.oid - public_key = self._export_eddsa() - params = None - else: - oid = "1.2.840.10045.2.1" # unrestricted - public_key = self._export_SEC1(compress) - params = DerObjectId(self._curve.oid) - - return _create_subject_public_key_info(oid, - public_key, - params) - - def _export_rfc5915_private_der(self, include_ec_params=True): - - assert self.has_private() - - # ECPrivateKey ::= SEQUENCE { - # version INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1), - # privateKey OCTET STRING, - # parameters [0] ECParameters {{ NamedCurve }} OPTIONAL, - # publicKey [1] BIT STRING OPTIONAL - # } - - # Public key - uncompressed form - modulus_bytes = self.pointQ.size_in_bytes() - public_key = (b'\x04' + - self.pointQ.x.to_bytes(modulus_bytes) + - self.pointQ.y.to_bytes(modulus_bytes)) - - seq = [1, - DerOctetString(self.d.to_bytes(modulus_bytes)), - DerObjectId(self._curve.oid, explicit=0), - DerBitString(public_key, explicit=1)] - - if not include_ec_params: - del seq[2] - - return DerSequence(seq).encode() - - def _export_pkcs8(self, **kwargs): - from Crypto.IO import PKCS8 - - if kwargs.get('passphrase', None) is not None and 'protection' not in kwargs: - raise ValueError("At least the 'protection' parameter should be present") - - if self._is_eddsa(): - oid = self._curve.oid - private_key = DerOctetString(self._seed).encode() - params = None - else: - oid = "1.2.840.10045.2.1" # unrestricted - private_key = self._export_rfc5915_private_der(include_ec_params=False) - params = DerObjectId(self._curve.oid) - - result = PKCS8.wrap(private_key, - oid, - key_params=params, - **kwargs) - return result - - def _export_public_pem(self, compress): - from Crypto.IO import PEM - - encoded_der = self._export_subjectPublicKeyInfo(compress) - return PEM.encode(encoded_der, "PUBLIC KEY") - - def _export_private_pem(self, passphrase, **kwargs): - from Crypto.IO import PEM - - encoded_der = self._export_rfc5915_private_der() - return PEM.encode(encoded_der, "EC PRIVATE KEY", passphrase, **kwargs) - - def _export_private_clear_pkcs8_in_clear_pem(self): - from Crypto.IO import PEM - - encoded_der = self._export_pkcs8() - return PEM.encode(encoded_der, "PRIVATE KEY") - - def _export_private_encrypted_pkcs8_in_clear_pem(self, passphrase, **kwargs): - from Crypto.IO import PEM - - assert passphrase - if 'protection' not in kwargs: - raise ValueError("At least the 'protection' parameter should be present") - encoded_der = self._export_pkcs8(passphrase=passphrase, **kwargs) - return PEM.encode(encoded_der, "ENCRYPTED PRIVATE KEY") - - def _export_openssh(self, compress): - if self.has_private(): - raise ValueError("Cannot export OpenSSH private keys") - - desc = self._curve.openssh - - if desc is None: - 
raise ValueError("Cannot export %s keys as OpenSSH" % self._curve.name) - elif desc == "ssh-ed25519": - public_key = self._export_eddsa() - comps = (tobytes(desc), tobytes(public_key)) - else: - modulus_bytes = self.pointQ.size_in_bytes() - - if compress: - first_byte = 2 + self.pointQ.y.is_odd() - public_key = (bchr(first_byte) + - self.pointQ.x.to_bytes(modulus_bytes)) - else: - public_key = (b'\x04' + - self.pointQ.x.to_bytes(modulus_bytes) + - self.pointQ.y.to_bytes(modulus_bytes)) - - middle = desc.split("-")[2] - comps = (tobytes(desc), tobytes(middle), public_key) - - blob = b"".join([struct.pack(">I", len(x)) + x for x in comps]) - return desc + " " + tostr(binascii.b2a_base64(blob)) - - def export_key(self, **kwargs): - """Export this ECC key. - - Args: - format (string): - The format to use for encoding the key: - - - ``'DER'``. The key will be encoded in ASN.1 DER format (binary). - For a public key, the ASN.1 ``subjectPublicKeyInfo`` structure - defined in `RFC5480`_ will be used. - For a private key, the ASN.1 ``ECPrivateKey`` structure defined - in `RFC5915`_ is used instead (possibly within a PKCS#8 envelope, - see the ``use_pkcs8`` flag below). - - ``'PEM'``. The key will be encoded in a PEM_ envelope (ASCII). - - ``'OpenSSH'``. The key will be encoded in the OpenSSH_ format - (ASCII, public keys only). - - ``'SEC1'``. The public key (i.e., the EC point) will be encoded - into ``bytes`` according to Section 2.3.3 of `SEC1`_ - (which is a subset of the older X9.62 ITU standard). - Only for NIST P-curves. - - ``'raw'``. The public key will be encoded as ``bytes``, - without any metadata. - - * For NIST P-curves: equivalent to ``'SEC1'``. - * For EdDSA curves: ``bytes`` in the format defined in `RFC8032`_. - - passphrase (byte string or string): - The passphrase to use for protecting the private key. - - use_pkcs8 (boolean): - Only relevant for private keys. - - If ``True`` (default and recommended), the `PKCS#8`_ representation - will be used. It must be ``True`` for EdDSA curves. - - protection (string): - When a private key is exported with password-protection - and PKCS#8 (both ``DER`` and ``PEM`` formats), this parameter MUST be - present and be a valid algorithm supported by :mod:`Crypto.IO.PKCS8`. - It is recommended to use ``PBKDF2WithHMAC-SHA1AndAES128-CBC``. - - compress (boolean): - If ``True``, the method returns a more compact representation - of the public key, with the X-coordinate only. - - If ``False`` (default), the method returns the full public key. - - This parameter is ignored for EdDSA curves, as compression is - mandatory. - - .. warning:: - If you don't provide a passphrase, the private key will be - exported in the clear! - - .. note:: - When exporting a private key with password-protection and `PKCS#8`_ - (both ``DER`` and ``PEM`` formats), any extra parameters - to ``export_key()`` will be passed to :mod:`Crypto.IO.PKCS8`. - - .. _PEM: http://www.ietf.org/rfc/rfc1421.txt - .. _`PEM encryption`: http://www.ietf.org/rfc/rfc1423.txt - .. _OpenSSH: http://www.openssh.com/txt/rfc5656.txt - .. _RFC5480: https://tools.ietf.org/html/rfc5480 - .. _SEC1: https://www.secg.org/sec1-v2.pdf - - Returns: - A multi-line string (for ``'PEM'`` and ``'OpenSSH'``) or - ``bytes`` (for ``'DER'``, ``'SEC1'``, and ``'raw'``) with the encoded key. 
- """ - - args = kwargs.copy() - ext_format = args.pop("format") - if ext_format not in ("PEM", "DER", "OpenSSH", "SEC1", "raw"): - raise ValueError("Unknown format '%s'" % ext_format) - - compress = args.pop("compress", False) - - if self.has_private(): - passphrase = args.pop("passphrase", None) - if is_string(passphrase): - passphrase = tobytes(passphrase) - if not passphrase: - raise ValueError("Empty passphrase") - use_pkcs8 = args.pop("use_pkcs8", True) - - if not use_pkcs8 and self._is_eddsa(): - raise ValueError("'pkcs8' must be True for EdDSA curves") - - if ext_format == "PEM": - if use_pkcs8: - if passphrase: - return self._export_private_encrypted_pkcs8_in_clear_pem(passphrase, **args) - else: - return self._export_private_clear_pkcs8_in_clear_pem() - else: - return self._export_private_pem(passphrase, **args) - elif ext_format == "DER": - # DER - if passphrase and not use_pkcs8: - raise ValueError("Private keys can only be encrpyted with DER using PKCS#8") - if use_pkcs8: - return self._export_pkcs8(passphrase=passphrase, **args) - else: - return self._export_rfc5915_private_der() - else: - raise ValueError("Private keys cannot be exported " - "in the '%s' format" % ext_format) - else: # Public key - if args: - raise ValueError("Unexpected parameters: '%s'" % args) - if ext_format == "PEM": - return self._export_public_pem(compress) - elif ext_format == "DER": - return self._export_subjectPublicKeyInfo(compress) - elif ext_format == "SEC1": - return self._export_SEC1(compress) - elif ext_format == "raw": - if self._curve.name in ('ed25519', 'ed448'): - return self._export_eddsa() - else: - return self._export_SEC1(compress) - else: - return self._export_openssh(compress) - - -def generate(**kwargs): - """Generate a new private key on the given curve. - - Args: - - curve (string): - Mandatory. It must be a curve name defined in the `ECC table`_. - - randfunc (callable): - Optional. The RNG to read randomness from. - If ``None``, :func:`Crypto.Random.get_random_bytes` is used. - """ - - curve_name = kwargs.pop("curve") - curve = _curves[curve_name] - randfunc = kwargs.pop("randfunc", get_random_bytes) - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - if _curves[curve_name].name == "ed25519": - seed = randfunc(32) - new_key = EccKey(curve=curve_name, seed=seed) - elif _curves[curve_name].name == "ed448": - seed = randfunc(57) - new_key = EccKey(curve=curve_name, seed=seed) - else: - d = Integer.random_range(min_inclusive=1, - max_exclusive=curve.order, - randfunc=randfunc) - new_key = EccKey(curve=curve_name, d=d) - - return new_key - - -def construct(**kwargs): - """Build a new ECC key (private or public) starting - from some base components. - - In most cases, you will already have an existing key - which you can read in with :func:`import_key` instead - of this function. - - Args: - curve (string): - Mandatory. The name of the elliptic curve, as defined in the `ECC table`_. - - d (integer): - Mandatory for a private key and a NIST P-curve (e.g., P-256): - the integer in the range ``[1..order-1]`` that represents the key. - - seed (bytes): - Mandatory for a private key and an EdDSA curve. - It must be 32 bytes for Ed25519, and 57 bytes for Ed448. - - point_x (integer): - Mandatory for a public key: the X coordinate (affine) of the ECC point. - - point_y (integer): - Mandatory for a public key: the Y coordinate (affine) of the ECC point. 
- - Returns: - :class:`EccKey` : a new ECC key object - """ - - curve_name = kwargs["curve"] - curve = _curves[curve_name] - point_x = kwargs.pop("point_x", None) - point_y = kwargs.pop("point_y", None) - - if "point" in kwargs: - raise TypeError("Unknown keyword: point") - - if None not in (point_x, point_y): - # ValueError is raised if the point is not on the curve - kwargs["point"] = EccPoint(point_x, point_y, curve_name) - - new_key = EccKey(**kwargs) - - # Validate that the private key matches the public one - # because EccKey will not do that automatically - if new_key.has_private() and 'point' in kwargs: - pub_key = curve.G * new_key.d - if pub_key.xy != (point_x, point_y): - raise ValueError("Private and public ECC keys do not match") - - return new_key - - -def _import_public_der(ec_point, curve_oid=None, curve_name=None): - """Convert an encoded EC point into an EccKey object - - ec_point: byte string with the EC point (SEC1-encoded) - curve_oid: string with the name the curve - curve_name: string with the OID of the curve - - Either curve_id or curve_name must be specified - - """ - - for _curve_name, curve in _curves.items(): - if curve_oid and curve.oid == curve_oid: - break - if curve_name == _curve_name: - break - else: - if curve_oid: - raise UnsupportedEccFeature("Unsupported ECC curve (OID: %s)" % curve_oid) - else: - raise UnsupportedEccFeature("Unsupported ECC curve (%s)" % curve_name) - - # See 2.2 in RFC5480 and 2.3.3 in SEC1 - # The first byte is: - # - 0x02: compressed, only X-coordinate, Y-coordinate is even - # - 0x03: compressed, only X-coordinate, Y-coordinate is odd - # - 0x04: uncompressed, X-coordinate is followed by Y-coordinate - # - # PAI is in theory encoded as 0x00. - - modulus_bytes = curve.p.size_in_bytes() - point_type = bord(ec_point[0]) - - # Uncompressed point - if point_type == 0x04: - if len(ec_point) != (1 + 2 * modulus_bytes): - raise ValueError("Incorrect EC point length") - x = Integer.from_bytes(ec_point[1:modulus_bytes+1]) - y = Integer.from_bytes(ec_point[modulus_bytes+1:]) - # Compressed point - elif point_type in (0x02, 0x03): - if len(ec_point) != (1 + modulus_bytes): - raise ValueError("Incorrect EC point length") - x = Integer.from_bytes(ec_point[1:]) - # Right now, we only support Short Weierstrass curves - y = (x**3 - x*3 + curve.b).sqrt(curve.p) - if point_type == 0x02 and y.is_odd(): - y = curve.p - y - if point_type == 0x03 and y.is_even(): - y = curve.p - y - else: - raise ValueError("Incorrect EC point encoding") - - return construct(curve=_curve_name, point_x=x, point_y=y) - - -def _import_subjectPublicKeyInfo(encoded, *kwargs): - """Convert a subjectPublicKeyInfo into an EccKey object""" - - # See RFC5480 - - # Parse the generic subjectPublicKeyInfo structure - oid, ec_point, params = _expand_subject_public_key_info(encoded) - - nist_p_oids = ( - "1.2.840.10045.2.1", # id-ecPublicKey (unrestricted) - "1.3.132.1.12", # id-ecDH - "1.3.132.1.13" # id-ecMQV - ) - eddsa_oids = { - "1.3.101.112": ("Ed25519", _import_ed25519_public_key), # id-Ed25519 - "1.3.101.113": ("Ed448", _import_ed448_public_key) # id-Ed448 - } - - if oid in nist_p_oids: - # See RFC5480 - - # Parameters are mandatory and encoded as ECParameters - # ECParameters ::= CHOICE { - # namedCurve OBJECT IDENTIFIER - # -- implicitCurve NULL - # -- specifiedCurve SpecifiedECDomain - # } - # implicitCurve and specifiedCurve are not supported (as per RFC) - if not params: - raise ValueError("Missing ECC parameters for ECC OID %s" % oid) - try: - curve_oid = 
DerObjectId().decode(params).value - except ValueError: - raise ValueError("Error decoding namedCurve") - - # ECPoint ::= OCTET STRING - return _import_public_der(ec_point, curve_oid=curve_oid) - - elif oid in eddsa_oids: - # See RFC8410 - curve_name, import_eddsa_public_key = eddsa_oids[oid] - - # Parameters must be absent - if params: - raise ValueError("Unexpected ECC parameters for ECC OID %s" % oid) - - x, y = import_eddsa_public_key(ec_point) - return construct(point_x=x, point_y=y, curve=curve_name) - else: - raise UnsupportedEccFeature("Unsupported ECC OID: %s" % oid) - - -def _import_rfc5915_der(encoded, passphrase, curve_oid=None): - - # See RFC5915 https://tools.ietf.org/html/rfc5915 - # - # ECPrivateKey ::= SEQUENCE { - # version INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1), - # privateKey OCTET STRING, - # parameters [0] ECParameters {{ NamedCurve }} OPTIONAL, - # publicKey [1] BIT STRING OPTIONAL - # } - - private_key = DerSequence().decode(encoded, nr_elements=(3, 4)) - if private_key[0] != 1: - raise ValueError("Incorrect ECC private key version") - - try: - parameters = DerObjectId(explicit=0).decode(private_key[2]).value - if curve_oid is not None and parameters != curve_oid: - raise ValueError("Curve mismatch") - curve_oid = parameters - except ValueError: - pass - - if curve_oid is None: - raise ValueError("No curve found") - - for curve_name, curve in _curves.items(): - if curve.oid == curve_oid: - break - else: - raise UnsupportedEccFeature("Unsupported ECC curve (OID: %s)" % curve_oid) - - scalar_bytes = DerOctetString().decode(private_key[1]).payload - modulus_bytes = curve.p.size_in_bytes() - if len(scalar_bytes) != modulus_bytes: - raise ValueError("Private key is too small") - d = Integer.from_bytes(scalar_bytes) - - # Decode public key (if any) - if len(private_key) > 2: - public_key_enc = DerBitString(explicit=1).decode(private_key[-1]).value - public_key = _import_public_der(public_key_enc, curve_oid=curve_oid) - point_x = public_key.pointQ.x - point_y = public_key.pointQ.y - else: - point_x = point_y = None - - return construct(curve=curve_name, d=d, point_x=point_x, point_y=point_y) - - -def _import_pkcs8(encoded, passphrase): - from Crypto.IO import PKCS8 - - algo_oid, private_key, params = PKCS8.unwrap(encoded, passphrase) - - nist_p_oids = ( - "1.2.840.10045.2.1", # id-ecPublicKey (unrestricted) - "1.3.132.1.12", # id-ecDH - "1.3.132.1.13" # id-ecMQV - ) - eddsa_oids = { - "1.3.101.112": "Ed25519", # id-Ed25519 - "1.3.101.113": "Ed448", # id-Ed448 - } - - if algo_oid in nist_p_oids: - curve_oid = DerObjectId().decode(params).value - return _import_rfc5915_der(private_key, passphrase, curve_oid) - elif algo_oid in eddsa_oids: - if params is not None: - raise ValueError("EdDSA ECC private key must not have parameters") - curve_oid = None - seed = DerOctetString().decode(private_key).payload - return construct(curve=eddsa_oids[algo_oid], seed=seed) - else: - raise UnsupportedEccFeature("Unsupported ECC purpose (OID: %s)" % algo_oid) - - -def _import_x509_cert(encoded, *kwargs): - - sp_info = _extract_subject_public_key_info(encoded) - return _import_subjectPublicKeyInfo(sp_info) - - -def _import_der(encoded, passphrase): - - try: - return _import_subjectPublicKeyInfo(encoded, passphrase) - except UnsupportedEccFeature as err: - raise err - except (ValueError, TypeError, IndexError): - pass - - try: - return _import_x509_cert(encoded, passphrase) - except UnsupportedEccFeature as err: - raise err - except (ValueError, TypeError, IndexError): - pass - - try: - 
return _import_rfc5915_der(encoded, passphrase) - except UnsupportedEccFeature as err: - raise err - except (ValueError, TypeError, IndexError): - pass - - try: - return _import_pkcs8(encoded, passphrase) - except UnsupportedEccFeature as err: - raise err - except (ValueError, TypeError, IndexError): - pass - - raise ValueError("Not an ECC DER key") - - -def _import_openssh_public(encoded): - parts = encoded.split(b' ') - if len(parts) not in (2, 3): - raise ValueError("Not an openssh public key") - - try: - keystring = binascii.a2b_base64(parts[1]) - - keyparts = [] - while len(keystring) > 4: - lk = struct.unpack(">I", keystring[:4])[0] - keyparts.append(keystring[4:4 + lk]) - keystring = keystring[4 + lk:] - - if parts[0] != keyparts[0]: - raise ValueError("Mismatch in openssh public key") - - # NIST P curves - if parts[0].startswith(b"ecdsa-sha2-"): - - for curve_name, curve in _curves.items(): - if curve.openssh is None: - continue - if not curve.openssh.startswith("ecdsa-sha2"): - continue - middle = tobytes(curve.openssh.split("-")[2]) - if keyparts[1] == middle: - break - else: - raise ValueError("Unsupported ECC curve: " + middle) - - ecc_key = _import_public_der(keyparts[2], curve_oid=curve.oid) - - # EdDSA - elif parts[0] == b"ssh-ed25519": - x, y = _import_ed25519_public_key(keyparts[1]) - ecc_key = construct(curve="Ed25519", point_x=x, point_y=y) - else: - raise ValueError("Unsupported SSH key type: " + parts[0]) - - except (IndexError, TypeError, binascii.Error): - raise ValueError("Error parsing SSH key type: " + parts[0]) - - return ecc_key - - -def _import_openssh_private_ecc(data, password): - - from ._openssh import (import_openssh_private_generic, - read_bytes, read_string, check_padding) - - key_type, decrypted = import_openssh_private_generic(data, password) - - eddsa_keys = { - "ssh-ed25519": ("Ed25519", _import_ed25519_public_key, 32), - } - - # https://datatracker.ietf.org/doc/html/draft-miller-ssh-agent-04 - if key_type.startswith("ecdsa-sha2"): - - ecdsa_curve_name, decrypted = read_string(decrypted) - if ecdsa_curve_name not in _curves: - raise UnsupportedEccFeature("Unsupported ECC curve %s" % ecdsa_curve_name) - curve = _curves[ecdsa_curve_name] - modulus_bytes = (curve.modulus_bits + 7) // 8 - - public_key, decrypted = read_bytes(decrypted) - - if bord(public_key[0]) != 4: - raise ValueError("Only uncompressed OpenSSH EC keys are supported") - if len(public_key) != 2 * modulus_bytes + 1: - raise ValueError("Incorrect public key length") - - point_x = Integer.from_bytes(public_key[1:1+modulus_bytes]) - point_y = Integer.from_bytes(public_key[1+modulus_bytes:]) - - private_key, decrypted = read_bytes(decrypted) - d = Integer.from_bytes(private_key) - - params = {'d': d, 'curve': ecdsa_curve_name} - - elif key_type in eddsa_keys: - - curve_name, import_eddsa_public_key, seed_len = eddsa_keys[key_type] - - public_key, decrypted = read_bytes(decrypted) - point_x, point_y = import_eddsa_public_key(public_key) - - private_public_key, decrypted = read_bytes(decrypted) - seed = private_public_key[:seed_len] - - params = {'seed': seed, 'curve': curve_name} - else: - raise ValueError("Unsupport SSH agent key type:" + key_type) - - _, padded = read_string(decrypted) # Comment - check_padding(padded) - - return construct(point_x=point_x, point_y=point_y, **params) - - -def _import_ed25519_public_key(encoded): - """Import an Ed25519 ECC public key, encoded as raw bytes as described - in RFC8032_. - - Args: - encoded (bytes): - The Ed25519 public key to import. 
It must be 32 bytes long. - - Returns: - :class:`EccKey` : a new ECC key object - - Raises: - ValueError: when the given key cannot be parsed. - - .. _RFC8032: https://datatracker.ietf.org/doc/html/rfc8032 - """ - - if len(encoded) != 32: - raise ValueError("Incorrect length. Only Ed25519 public keys are supported.") - - p = Integer(0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed) # 2**255 - 19 - d = 37095705934669439343138083508754565189542113879843219016388785533085940283555 - - y = bytearray(encoded) - x_lsb = y[31] >> 7 - y[31] &= 0x7F - point_y = Integer.from_bytes(y, byteorder='little') - if point_y >= p: - raise ValueError("Invalid Ed25519 key (y)") - if point_y == 1: - return 0, 1 - - u = (point_y**2 - 1) % p - v = ((point_y**2 % p) * d + 1) % p - try: - v_inv = v.inverse(p) - x2 = (u * v_inv) % p - point_x = Integer._tonelli_shanks(x2, p) - if (point_x & 1) != x_lsb: - point_x = p - point_x - except ValueError: - raise ValueError("Invalid Ed25519 public key") - return point_x, point_y - - -def _import_ed448_public_key(encoded): - """Import an Ed448 ECC public key, encoded as raw bytes as described - in RFC8032_. - - Args: - encoded (bytes): - The Ed448 public key to import. It must be 57 bytes long. - - Returns: - :class:`EccKey` : a new ECC key object - - Raises: - ValueError: when the given key cannot be parsed. - - .. _RFC8032: https://datatracker.ietf.org/doc/html/rfc8032 - """ - - if len(encoded) != 57: - raise ValueError("Incorrect length. Only Ed448 public keys are supported.") - - p = Integer(0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffffffff) # 2**448 - 2**224 - 1 - d = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffff6756 - - y = encoded[:56] - x_lsb = bord(encoded[56]) >> 7 - point_y = Integer.from_bytes(y, byteorder='little') - if point_y >= p: - raise ValueError("Invalid Ed448 key (y)") - if point_y == 1: - return 0, 1 - - u = (point_y**2 - 1) % p - v = ((point_y**2 % p) * d - 1) % p - try: - v_inv = v.inverse(p) - x2 = (u * v_inv) % p - point_x = Integer._tonelli_shanks(x2, p) - if (point_x & 1) != x_lsb: - point_x = p - point_x - except ValueError: - raise ValueError("Invalid Ed448 public key") - return point_x, point_y - - -def import_key(encoded, passphrase=None, curve_name=None): - """Import an ECC key (public or private). - - Args: - encoded (bytes or multi-line string): - The ECC key to import. - The function will try to automatically detect the right format. - - Supported formats for an ECC **public** key: - - * X.509 certificate: binary (DER) or ASCII (PEM). - * X.509 ``subjectPublicKeyInfo``: binary (DER) or ASCII (PEM). - * SEC1_ (or X9.62), as ``bytes``. NIST P curves only. - You must also provide the ``curve_name`` (with a value from the `ECC table`_) - * OpenSSH line, defined in RFC5656_ and RFC8709_ (ASCII). - This is normally the content of files like ``~/.ssh/id_ecdsa.pub``. - - Supported formats for an ECC **private** key: - - * A binary ``ECPrivateKey`` structure, as defined in `RFC5915`_ (DER). - NIST P curves only. - * A `PKCS#8`_ structure (or the more recent Asymmetric Key Package, RFC5958_): binary (DER) or ASCII (PEM). - * `OpenSSH 6.5`_ and newer versions (ASCII). - - Private keys can be in the clear or password-protected. - - For details about the PEM encoding, see `RFC1421`_/`RFC1423`_. - - passphrase (byte string): - The passphrase to use for decrypting a private key. 
- Encryption may be applied protected at the PEM level (not recommended) - or at the PKCS#8 level (recommended). - This parameter is ignored if the key in input is not encrypted. - - curve_name (string): - For a SEC1 encoding only. This is the name of the curve, - as defined in the `ECC table`_. - - .. note:: - - To import EdDSA private and public keys, when encoded as raw ``bytes``, use: - - * :func:`Crypto.Signature.eddsa.import_public_key`, or - * :func:`Crypto.Signature.eddsa.import_private_key`. - - Returns: - :class:`EccKey` : a new ECC key object - - Raises: - ValueError: when the given key cannot be parsed (possibly because - the pass phrase is wrong). - - .. _RFC1421: https://datatracker.ietf.org/doc/html/rfc1421 - .. _RFC1423: https://datatracker.ietf.org/doc/html/rfc1423 - .. _RFC5915: https://datatracker.ietf.org/doc/html/rfc5915 - .. _RFC5656: https://datatracker.ietf.org/doc/html/rfc5656 - .. _RFC8709: https://datatracker.ietf.org/doc/html/rfc8709 - .. _RFC5958: https://datatracker.ietf.org/doc/html/rfc5958 - .. _`PKCS#8`: https://datatracker.ietf.org/doc/html/rfc5208 - .. _`OpenSSH 6.5`: https://flak.tedunangst.com/post/new-openssh-key-format-and-bcrypt-pbkdf - .. _SEC1: https://www.secg.org/sec1-v2.pdf - """ - - from Crypto.IO import PEM - - encoded = tobytes(encoded) - if passphrase is not None: - passphrase = tobytes(passphrase) - - # PEM - if encoded.startswith(b'-----BEGIN OPENSSH PRIVATE KEY'): - text_encoded = tostr(encoded) - openssh_encoded, marker, enc_flag = PEM.decode(text_encoded, passphrase) - result = _import_openssh_private_ecc(openssh_encoded, passphrase) - return result - - elif encoded.startswith(b'-----'): - - text_encoded = tostr(encoded) - - # Remove any EC PARAMETERS section - # Ignore its content because the curve type must be already given in the key - ecparams_start = "-----BEGIN EC PARAMETERS-----" - ecparams_end = "-----END EC PARAMETERS-----" - text_encoded = re.sub(ecparams_start + ".*?" 
+ ecparams_end, "", - text_encoded, - flags=re.DOTALL) - - der_encoded, marker, enc_flag = PEM.decode(text_encoded, passphrase) - if enc_flag: - passphrase = None - try: - result = _import_der(der_encoded, passphrase) - except UnsupportedEccFeature as uef: - raise uef - except ValueError: - raise ValueError("Invalid DER encoding inside the PEM file") - return result - - # OpenSSH - if encoded.startswith((b'ecdsa-sha2-', b'ssh-ed25519')): - return _import_openssh_public(encoded) - - # DER - if len(encoded) > 0 and bord(encoded[0]) == 0x30: - return _import_der(encoded, passphrase) - - # SEC1 - if len(encoded) > 0 and bord(encoded[0]) in (0x02, 0x03, 0x04): - if curve_name is None: - raise ValueError("No curve name was provided") - return _import_public_der(encoded, curve_name=curve_name) - - raise ValueError("ECC key format is not supported") - - -if __name__ == "__main__": - - import time - - d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd - - point = _curves['p256'].G.copy() - count = 3000 - - start = time.time() - for x in range(count): - pointX = point * d - print("(P-256 G)", (time.time() - start) / count * 1000, "ms") - - start = time.time() - for x in range(count): - pointX = pointX * d - print("(P-256 arbitrary point)", (time.time() - start) / count * 1000, "ms") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/DHCID.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/DHCID.py deleted file mode 100644 index 65f858977c248f025cb5116b8b29163583da92c5..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/DHCID.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2006, 2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -import base64 - -import dns.exception -import dns.immutable -import dns.rdata - - -@dns.immutable.immutable -class DHCID(dns.rdata.Rdata): - - """DHCID record""" - - # see: RFC 4701 - - __slots__ = ["data"] - - def __init__(self, rdclass, rdtype, data): - super().__init__(rdclass, rdtype) - self.data = self._as_bytes(data) - - def to_text(self, origin=None, relativize=True, **kw): - return dns.rdata._base64ify(self.data, **kw) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - b64 = tok.concatenate_remaining_identifiers().encode() - data = base64.b64decode(b64) - return cls(rdclass, rdtype, data) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - file.write(self.data) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - data = parser.get_remaining() - return cls(rdclass, rdtype, data) diff --git a/spaces/johngoad/prompt-extend/README.md b/spaces/johngoad/prompt-extend/README.md deleted file mode 100644 index d8e38ea3a526ab1f57292d4b527d8300c8a4d55c..0000000000000000000000000000000000000000 --- a/spaces/johngoad/prompt-extend/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Prompt Extend -emoji: ✍️ -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jskalbg/ChatDev01/camel/agents/base.py b/spaces/jskalbg/ChatDev01/camel/agents/base.py deleted file mode 100644 index 5f46beb1946b786dcf741a75b7fff567e042b369..0000000000000000000000000000000000000000 --- a/spaces/jskalbg/ChatDev01/camel/agents/base.py +++ /dev/null @@ -1,28 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from abc import ABC, abstractmethod - - -class BaseAgent(ABC): - r"""An abstract base class for all CAMEL agents.""" - - @abstractmethod - def reset(self) -> None: - r"""Resets the agent to its initial state.""" - pass - - @abstractmethod - def step(self) -> None: - r"""Performs a single step of the agent.""" - pass diff --git a/spaces/jskalbg/ChatDev01/camel/messages/chat_messages.py b/spaces/jskalbg/ChatDev01/camel/messages/chat_messages.py deleted file mode 100644 index 1a9406344fe519d47d90c987fdd9fc6e91bdad72..0000000000000000000000000000000000000000 --- a/spaces/jskalbg/ChatDev01/camel/messages/chat_messages.py +++ /dev/null @@ -1,89 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from dataclasses import dataclass -from typing import Dict, Optional - -from camel.messages import BaseMessage -from camel.typing import RoleType - - -@dataclass -class ChatMessage(BaseMessage): - r"""Base class for chat messages used in CAMEL chat system. - - Args: - role_name (str): The name of the user or assistant role. - role_type (RoleType): The type of role, either - :obj:`RoleType.ASSISTANT` or :obj:`RoleType.USER`. - meta_dict (Optional[Dict[str, str]]): Additional metadata dictionary - for the message. - role (str): The role of the message in OpenAI chat system. - content (str): The content of the message. (default: :obj:`""`) - """ - role_name: str - role_type: RoleType - meta_dict: Optional[Dict[str, str]] - role: str - content: str = "" - - def set_user_role_at_backend(self: BaseMessage): - return self.__class__( - role_name=self.role_name, - role_type=self.role_type, - meta_dict=self.meta_dict, - role="user", - content=self.content, - ) - - -@dataclass -class AssistantChatMessage(ChatMessage): - r"""Class for chat messages from the assistant role used in CAMEL chat - system. - - Attributes: - role_name (str): The name of the assistant role. - role_type (RoleType): The type of role, always - :obj:`RoleType.ASSISTANT`. - meta_dict (Optional[Dict[str, str]]): Additional metadata dictionary - for the message. - role (str): The role of the message in OpenAI chat system. - (default: :obj:`"assistant"`) - content (str): The content of the message. (default: :obj:`""`) - """ - role_name: str - role_type: RoleType = RoleType.ASSISTANT - meta_dict: Optional[Dict[str, str]] = None - role: str = "user" - content: str = "" - - -@dataclass -class UserChatMessage(ChatMessage): - r"""Class for chat messages from the user role used in CAMEL chat system. - - Args: - role_name (str): The name of the user role. - role_type (RoleType): The type of role, always :obj:`RoleType.USER`. - meta_dict (Optional[Dict[str, str]]): Additional metadata dictionary - for the message. - role (str): The role of the message in OpenAI chat system. - (default: :obj:`"user"`) - content (str): The content of the message. (default: :obj:`""`) - """ - role_name: str - role_type: RoleType = RoleType.USER - meta_dict: Optional[Dict[str, str]] = None - role: str = "user" - content: str = "" diff --git a/spaces/jskalbg/ChatDev01/online_log/static/js/main.js b/spaces/jskalbg/ChatDev01/online_log/static/js/main.js deleted file mode 100644 index 3776dae87524b5fea0ca2f2e5d40b2bf3e3cd0ca..0000000000000000000000000000000000000000 --- a/spaces/jskalbg/ChatDev01/online_log/static/js/main.js +++ /dev/null @@ -1,111 +0,0 @@ -function scrollToBottom() { - var scrollContainer = document.getElementById('chat-box'); - scrollContainer.scrollTop = scrollContainer.scrollHeight; -} - -function append_message(role, text, avatarUrl) { - - var message_container = $("
").addClass("message-container"); - var avatar_element = $("").addClass("avatar"); - var role_element = $("

").addClass("role").text(role); - - if (avatarUrl) { - avatar_element.css("background-image", `url(${avatarUrl})`); - } else { - avatar_element.css("background-color", "green"); - } - - message_container.append(role_element); - message_container.append(avatar_element); - - var parsedText = role === 'System' ? parseSystemMessage(text) : parseCodeBlocks(text, role); - - message_container.append(parsedText); - - $("#chat-box").append(message_container); - scrollToBottom(); -} - -function parseCodeBlocks(text, role) { - var parts = text.split(/(```[\s\S]*?```)/g); - var parsedText = $("
").addClass("message-text"); - parts.forEach(part => { - if (part.startsWith("```") && role != "System") { - var trimmedBlock = part.trim(); - var language = trimmedBlock.match(/^```(\w+)/); - if (language) { - language = language[1]; - var codeContent = trimmedBlock.replace(/^```(\w+)/, '').replace(/```$/, ''); - var codeBlockHTML = ` -
-
${role} - ${language}
-
${hljs.highlightAuto(codeContent, [language]).value}
-
- `; - parsedText.append(codeBlockHTML); - } - } else { - parsedText.append(marked(_.escape(part), {breaks: true})); - } - }); - return parsedText; -} - - -function get_new_messages() { - - $.getJSON("/get_messages", function (data) { - var lastDisplayedMessageIndex = $("#chat-box .message-container").length; - - for (var i = lastDisplayedMessageIndex; i < data.length; i++) { - var role = data[i].role; - var text = data[i].text; - var avatarUrl = data[i].avatarUrl; - - append_message(role, text, avatarUrl); - - } - }); -} - -function parseSystemMessage(text) { - var message = $("
").addClass("message-text").addClass("system-message"); - var firstLine = text.split('\n')[0]; - var collapsed = true; - - var messageContent = $("
").html(marked(firstLine, { breaks: true })).addClass("original-markdown"); - var originalMarkdown = $("
").html(marked(text, { breaks: true })).addClass("original-markdown"); - - var expandButton = $("") - .addClass("expand-button") - .text("Expand") - .click(function () { - if (collapsed) { - messageContent.hide(); - originalMarkdown.show(); - expandButton.text("Collapse"); - } else { - messageContent.show(); - originalMarkdown.hide(); - expandButton.text("Expand"); - } - collapsed = !collapsed; - }); - - message.append(messageContent); - message.append(originalMarkdown); - message.append(expandButton); - - originalMarkdown.hide(); - - return message; -} - - -$(document).ready(function () { - get_new_messages(); - setInterval(function () { - get_new_messages(); - }, 1000); -}); - diff --git a/spaces/jungwoonshin/deepfake_detection_reimplementation/app.py b/spaces/jungwoonshin/deepfake_detection_reimplementation/app.py deleted file mode 100644 index c00489dacb1ce1d0486346312b2c3be65649b3fe..0000000000000000000000000000000000000000 --- a/spaces/jungwoonshin/deepfake_detection_reimplementation/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import gradio as gr -import argparse -import os -import re -import time - -import torch -import pandas as pd - -# import os, sys -# root_folder = os.path.abspath( -# os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) -# ) -# sys.path.append(root_folder) -from kernel_utils import VideoReader, FaceExtractor, confident_strategy, predict_on_video_set -from classifiers import DeepFakeClassifier -import gradio as gr - - - -def predict(video): - - frames_per_video = 32 - video_reader = VideoReader() - video_read_fn = lambda x: video_reader.read_frames(x, num_frames=frames_per_video) - face_extractor = FaceExtractor(video_read_fn) - input_size = 380 - strategy = confident_strategy - - # test_videos = sorted([x for x in os.listdir(args.test_dir) if x[-4:] == ".mp4"])[video_index] - # print(f"Predicting {video_index} videos") - predictions = predict_on_video_set(face_extractor=face_extractor, input_size=input_size, models=models, - strategy=strategy, frames_per_video=frames_per_video, videos=video, - num_workers=6, test_dir=args.test_dir) - return predictions - -def get_args_models(): - parser = argparse.ArgumentParser("Predict test videos") - arg = parser.add_argument - arg('--weights-dir', type=str, default=".", help="path to directory with checkpoints") - arg('--models', type=str, default='classifier_DeepFakeClassifier_tf_efficientnet_b7_ns_1_best_dice', help="checkpoint files") # nargs='+', - arg('--test-dir', type=str, default='test_dataset', help="path to directory with videos") - arg('--output', type=str, required=False, help="path to output csv", default="submission.csv") - args = parser.parse_args() - - models = [] - # model_paths = [os.path.join(args.weights_dir, model) for model in args.models] - model_paths = [os.path.join(args.weights_dir, args.models)] - for path in model_paths: - model = DeepFakeClassifier(encoder="tf_efficientnet_b7_ns").to("cpu") - print("loading state dict {}".format(path)) - checkpoint = torch.load(path, map_location="cpu") - state_dict = checkpoint.get("state_dict", checkpoint) - model.load_state_dict({re.sub("^module.", "", k): v for k, v in state_dict.items()}, strict=True) - model.eval() - del checkpoint - models.append(model) - return args, models - -def greet(name): - return "Hello " + name + "!!" 
- -if __name__ == '__main__': - global args, models - args, models = get_args_models() - - # stime = time.time() - # print("Elapsed:", time.time() - stime) - - demo = gr.Interface(fn=predict, inputs="video", outputs="text") - demo.launch() \ No newline at end of file diff --git a/spaces/justest/gpt4free/models_for_langchain/model.py b/spaces/justest/gpt4free/models_for_langchain/model.py deleted file mode 100644 index 0fdd170f92f9d03e4065cda5d49a896dfe4cfc94..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/models_for_langchain/model.py +++ /dev/null @@ -1,67 +0,0 @@ -from typing import Any, List, Mapping, Optional -from g4f.Provider import ( - Ails, - You, - Bing, - Yqcloud, - Theb, - Aichat, - Bard, - Vercel, - Forefront, - Lockchat, - Liaobots, - H2o, - ChatgptLogin, - DeepAi, - GetGpt -) -import g4f -from langchain.callbacks.manager import CallbackManagerForLLMRun -from langchain.llms.base import LLM -provider_dict = { - 'Ails': Ails, - 'You': You, - 'Bing': Bing, - 'Yqcloud': Yqcloud, - 'Theb': Theb, - 'Aichat': Aichat, - 'Bard': Bard, - 'Vercel': Vercel, - 'Forefront': Forefront, - 'Lockchat': Lockchat, - 'Liaobots': Liaobots, - 'H2o': H2o, - 'ChatgptLogin': ChatgptLogin, - 'DeepAi': DeepAi, - 'GetGpt': GetGpt -} - -class CustomLLM(LLM): - model_name: str="gpt-3.5-turbo" - provider_name: str="GetGpt" - @property - def _llm_type(self) -> str: - return "custom" - - def _call( - self, - prompt: str, - stop: Optional[List[str]] = None, - run_manager: Optional[CallbackManagerForLLMRun] = None, - model_name = 'gpt-3.5-turbo', - provider = GetGpt - ) -> str: - if stop is not None: - raise ValueError("stop kwargs are not permitted.") - bot_msg = g4f.ChatCompletion.create(model=self.model_name, - provider=provider_dict[self.provider_name], - messages=[{"role": "user", - "content": prompt}], - stream=False) - return bot_msg - - @property - def _identifying_params(self) -> Mapping[str, Any]: - """Get the identifying parameters.""" - return {"model:": "gpt-3.5-turbo"} \ No newline at end of file diff --git a/spaces/kangvcar/RealChar/client/web/src/components/Auth/styles.css b/spaces/kangvcar/RealChar/client/web/src/components/Auth/styles.css deleted file mode 100644 index 8c65b61bdd793ad6212387554f3090aec5bdb32f..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/client/web/src/components/Auth/styles.css +++ /dev/null @@ -1,45 +0,0 @@ -.auth-btn { - transition: background-color .3s, box-shadow .3s; - - padding: 5px 8px; - border: none; - border-radius: 8px; - - color: #757575; - font-size: 14px; - font-weight: 500; - font-family: -apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Oxygen,Ubuntu,Cantarell,"Fira Sans","Droid Sans","Helvetica Neue",sans-serif; - - &:hover { - box-shadow: 0 -1px 0 rgba(255, 255, 255, 0.9), 0 2px 4px rgba(255, 255, 255, 0.7); - cursor: pointer; - } - - &:active { - background-color: #eeeeee; - } - - &:focus { - outline: none; - box-shadow: - 0 -1px 0 rgba(0, 0, 0, .04), - 0 2px 4px rgba(0, 0, 0, .25), - 0 0 0 3px #c8dafc; - } - - &:disabled { - filter: grayscale(100%); - background-color: #ebebeb; - box-shadow: 0 -1px 0 rgba(0, 0, 0, .04), 0 1px 1px rgba(0, 0, 0, .25); - cursor: not-allowed; - } - } - -.signout-container { - display: flex; - align-items: center; - gap: 10px; -} - - - \ No newline at end of file diff --git a/spaces/kastan/ai-teaching-assistant-beta/clip_for_ppts.py b/spaces/kastan/ai-teaching-assistant-beta/clip_for_ppts.py deleted file mode 100644 index 
093c95ea8fc88bd622653ecf65329c4ba13d588c..0000000000000000000000000000000000000000 --- a/spaces/kastan/ai-teaching-assistant-beta/clip_for_ppts.py +++ /dev/null @@ -1,158 +0,0 @@ -import os - -import clip -import torch -from PIL import Image - -# import sys -# from pptx import Presentation -# from pptx.enum.shapes import MSO_SHAPE_TYPE -# import time - - -class ClipImage: - - def __init__(self, path_of_ppt_folders, path_to_save_image_features, mode='image', device='cuda'): - """ - :param input_image_path: path of the input image (mode = 'image') or the actual text to be searched (mode='text') - :param path_of_ppt_folders: path of the folder containing all the ppt folders - :param path_to_save_image_features: path to save the image features - :param mode: 'image' or 'text' based on the type of input - :param device: device to run the model on - """ - print("HEADS UPP -- ALWAYS using CPU for this 'spaces' version of the project. Otherwise we get FP32/16 conflicts.") - # device = "cuda" if torch.cuda.is_available() else "cpu" - device = "cpu" - # Path - directory = 'input_features' - path = os.path.join(path_to_save_image_features, directory) - if not os.path.exists(path): - # Create the directory - os.mkdir(path) - print("Directory '% s' created" % directory) - - self.res = [] - if not os.path.isdir(path_of_ppt_folders): - raise TypeError(f"{path_of_ppt_folders} is not a directory. Please only enter a directory") - - # if mode == 'image' and not os.path.exists(input_image_path): - # raise FileNotFoundError(f"{input_image_path} does not exist.") - if not os.path.exists(path_to_save_image_features) or not os.path.isdir(path_to_save_image_features): - raise FileNotFoundError(f"{path_to_save_image_features} is not a directory or doesn't exist.") - self.mode = mode - self.path_of_ppt_folders = path_of_ppt_folders - self.path_to_save_image_features = path_to_save_image_features - self.device = device - - # consider ViT-L/14 should be the best one - self.model, self.preprocess = clip.load('ViT-B/32', self.device) - - #print("👉 RUNNING CLIP'S ONE-TIME ENCODING STEP... will be slow the first time, and hopefully only the first time.") - # passing in an image as a cheap hack, to make one funciton work for initial embedding. - #self.calculate_similarity('/home/rsalvi/chatbotai/rohan/ai-teaching-assistant-uiuc/lecture_slides/001/Slide1.jpeg') - #print("🔥 DONE with CLIP's ONE TIME ENCODING") - - def text_to_image_search(self, search_text: str, top_k_to_return: int = 4): - """ Written after the fact by kastan, so that we don't have to call init every time. """ - assert type(search_text) == str, f"Must provide a single string, instead I got type {type(search_text)}" - # self.create_input_features(search_text, mode='text') - self.mode = 'text' - return self.calculate_similarity(search_text, top_k_to_return) - - # TODO: WIP. - def image_to_images_search(self, input_image, top_k_to_return: int = 4): - """ Written after the fact by kastan, so that we don't have to call init every time. 
""" - self.mode = 'image' - return self.calculate_similarity(input_image, top_k_to_return) - - def create_input_features(self, input_text_or_img): - if self.mode == 'image': - # Load the image - #input_image = Image.open(input_text_or_img) # Not needed as image comes from gradio in PIL format - # Preprocess the image - input_arr = torch.cat([self.preprocess(input_text_or_img).unsqueeze(0)]).to(self.device) - - elif self.mode == 'text': - # Preprocess the text - input_arr = torch.cat([clip.tokenize(f"{input_text_or_img}")]).to(self.device) - - # Encode the image or text - with torch.no_grad(): - if self.mode == 'image': - input_features = self.model.encode_image(input_arr) - elif self.mode == 'text': - input_features = self.model.encode_text(input_arr) - input_features /= input_features.norm(dim=-1, keepdim=True) - return input_features - - def new_most_similar_slide_file(self, top_k: int): - # Sort the results - ans = sorted(self.res, key=lambda x: x[2], reverse=True) - return ans[:top_k] - - def calculate_similarity(self, input_text_or_img, topk_val: int = 4): - ## Similarities across folders - self.res = [] - all_similarities = [] - slide_numbers = [] - # Create the input features - input_features = self.create_input_features(input_text_or_img) - - # Iterate through all the folders - ppts = list(os.listdir(self.path_of_ppt_folders)) - #start_time = time.monotonic() - for i in ppts: - # Get the path of the folder containing the ppt images - imgs = list(os.listdir(os.path.join(self.path_of_ppt_folders, i))) - slide_numbers.append(imgs) - # Iterate through all the images and preprocess them - - # Check if the preprocessed file exists and load it - img_flag = os.path.exists(self.path_to_save_image_features + '/input_features' + "/slides_" + i + "_tensor.pt") - if img_flag: - image_features = torch.load(self.path_to_save_image_features + '/input_features' + "/slides_" + i + "_tensor.pt", - map_location=self.device) - else: - # Encode the images and save the encoding - with torch.no_grad(): - image_input = torch.cat([ - self.preprocess(Image.open(os.path.join(self.path_of_ppt_folders, i, image))).unsqueeze(0) for image in imgs - ]).to(self.device) - image_features = self.model.encode_image(image_input) - image_features /= image_features.norm(dim=-1, keepdim=True) - torch.save(image_features, self.path_to_save_image_features + '/input_features' + "/slides_" + i + "_tensor.pt") - print("Saved the image features (for faster future loading) to: ", self.path_to_save_image_features + "/slides_" + i + "_tensor.pt") - - # Calculate the similarity between the input image and the images in the folder - - # TODO: THIS REQUIRES REFACTOR. We're only looking in a SINGLE FOLDER. need to APPEND to similarity. 
- if self.mode == 'image': - similarity = (100.0 * input_features @ image_features.T).softmax(dim=-1) - all_similarities.append((i, similarity)) - elif self.mode == 'text': - similarity = (100.0 * input_features @ image_features.T).softmax(dim=-1) - all_similarities.append((i, similarity)) - - ## Looking over all the folders - similarity_results = [] - - for j in range(0, len(all_similarities)): - folder_name = all_similarities[j][0] - folder_values = all_similarities[j][1][0] - for i in range(0, len(folder_values)): - self.res.append((folder_name, slide_numbers[j][i], folder_values[i])) - - #print(self.res) - - return self.new_most_similar_slide_file(topk_val) - # Return the sorted results - - -# if __name__ == "__main__": - -# demo = ClipImage('/home/rsalvi/chatbotai/rohan/ai-teaching-assistant-uiuc/lecture_slides','/home/rsalvi/chatbotai/rohan/ai-teaching-assistant-uiuc') -# #op = demo.image_to_images_search('/home/rsalvi/chatbotai/rohan/ai-teaching-assistant-uiuc/lecture_slides/01c/Slide5.jpeg') -# op = demo.text_to_image_search("Unsigned Bit Pattern") -# print(op) -# op = demo.text_to_image_search("Graycode") -# print(op) \ No newline at end of file diff --git a/spaces/kepl/gpt/g4f/active_providers.py b/spaces/kepl/gpt/g4f/active_providers.py deleted file mode 100644 index cc3857dbaf1a9020fde2c72d52c490b23f678dc0..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/g4f/active_providers.py +++ /dev/null @@ -1,124 +0,0 @@ -import uuid -import g4f -from g4f import ChatCompletion - -TEST_PROMPT = "Generate a sentence with 'ocean'" -EXPECTED_RESPONSE_CONTAINS = "ocean" - - -class Provider: - def __init__(self, name, models): - """ - Initialize the provider with its name and models. - """ - self.name = name - self.models = models if isinstance(models, list) else [models] - - def __str__(self): - return self.name - - -class ModelProviderManager: - def __init__(self): - """ - Initialize the manager that manages the working (active) providers for each model. - """ - self._working_model_providers = {} - - def add_provider(self, model, provider_name): - """ - Add a provider to the working provider list of the specified model. - """ - if model not in self._working_model_providers: - self._working_model_providers[model] = [] - self._working_model_providers[model].append(provider_name) - - def get_working_providers(self): - """ - Return the currently active providers for each model. - """ - return self._working_model_providers - - -def _fetch_providers_having_models(): - """ - Get providers that have models from g4f.Providers. - """ - model_providers = [] - - for provider_name in dir(g4f.Provider): - provider = getattr(g4f.Provider, provider_name) - - if _is_provider_applicable(provider): - model_providers.append(Provider(provider_name, provider.model)) - - return model_providers - - -def _is_provider_applicable(provider): - """ - Check if the provider has a model and doesn't require authentication. - """ - return (hasattr(provider, 'model') and - hasattr(provider, '_create_completion') and - hasattr(provider, 'needs_auth') and - not provider.needs_auth) - - -def _generate_test_messages(): - """ - Generate messages for testing. - """ - return [{"role": "system", "content": "You are a trained AI assistant."}, - {"role": "user", "content": TEST_PROMPT}] - - -def _manage_chat_completion(manager, model_providers, test_messages): - """ - Generate chat completion for each provider's models and handle positive and negative results. 
- """ - for provider in model_providers: - for model in provider.models: - try: - response = _generate_chat_response( - provider.name, model, test_messages) - if EXPECTED_RESPONSE_CONTAINS in response.lower(): - _print_success_response(provider, model) - manager.add_provider(model, provider.name) - else: - raise Exception(f"Unexpected response: {response}") - except Exception as error: - _print_error_response(provider, model, error) - - -def _generate_chat_response(provider_name, model, test_messages): - """ - Generate a chat response given a provider name, a model, and test messages. - """ - return ChatCompletion.create( - model=model, - messages=test_messages, - chatId=str(uuid.uuid4()), - provider=getattr(g4f.Provider, provider_name) - ) - - -def _print_success_response(provider, model): - print(f"\u2705 [{provider}] - [{model}]: Success") - - -def _print_error_response(provider, model, error): - print(f"\u26D4 [{provider}] - [{model}]: Error - {str(error)}") - - -def get_active_model_providers(): - """ - Get providers that are currently working (active). - """ - model_providers = _fetch_providers_having_models() - test_messages = _generate_test_messages() - manager = ModelProviderManager() - - _manage_chat_completion(manager, model_providers, test_messages) - - return manager.get_working_providers() diff --git a/spaces/keras-io/EDSR/README.md b/spaces/keras-io/EDSR/README.md deleted file mode 100644 index 227547321a05da19cc51856f78ffea6e11bc7413..0000000000000000000000000000000000000000 --- a/spaces/keras-io/EDSR/README.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: EDSR Keras -emoji: 🚀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.18.0 -python_version: 3.10.9 -app_file: app.py -pinned: false -license: mit ---- - -This space is the demo for the EDSR (Enhanced Deep Residual Networks for Single Image Super-Resolution) model. This model surpassed the performace of the current available SOTA models. - -Paper Link - https://arxiv.org/pdf/1707.02921 - -Keras Example link - https://keras.io/examples/vision/edsr/ - - -TODO: - -Hack to make this work for any image size. Currently the model takes input of image size 150 x 150. -We pad the input image with transparant pixels so that it is a square image, which is a multiple of 150 x 150 -Then we chop the image into multiple 150 x 150 sub images -Upscale it and stich it together. - -The output image might look a bit off, because each sub-image dosent have data about other sub-images. -This approach assumes that the subimage has enough data about its surroundings diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/audio.py b/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/audio.py deleted file mode 100644 index 2fcb77ad1d3a85f523e24f84691886736a5686cb..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/audio.py +++ /dev/null @@ -1,107 +0,0 @@ -from scipy.ndimage.morphology import binary_dilation -from speaker_encoder.params_data import * -from pathlib import Path -from typing import Optional, Union -import numpy as np -import webrtcvad -import librosa -import struct - -int16_max = (2 ** 15) - 1 - - -def preprocess_wav(fpath_or_wav: Union[str, Path, np.ndarray], - source_sr: Optional[int] = None): - """ - Applies the preprocessing operations used in training the Speaker Encoder to a waveform - either on disk or in memory. The waveform will be resampled to match the data hyperparameters. 
- - :param fpath_or_wav: either a filepath to an audio file (many extensions are supported, not - just .wav), either the waveform as a numpy array of floats. - :param source_sr: if passing an audio waveform, the sampling rate of the waveform before - preprocessing. After preprocessing, the waveform's sampling rate will match the data - hyperparameters. If passing a filepath, the sampling rate will be automatically detected and - this argument will be ignored. - """ - # Load the wav from disk if needed - if isinstance(fpath_or_wav, str) or isinstance(fpath_or_wav, Path): - wav, source_sr = librosa.load(fpath_or_wav, sr=None) - else: - wav = fpath_or_wav - - # Resample the wav if needed - if source_sr is not None and source_sr != sampling_rate: - wav = librosa.resample(wav, source_sr, sampling_rate) - - # Apply the preprocessing: normalize volume and shorten long silences - wav = normalize_volume(wav, audio_norm_target_dBFS, increase_only=True) - wav = trim_long_silences(wav) - - return wav - - -def wav_to_mel_spectrogram(wav): - """ - Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform. - Note: this not a log-mel spectrogram. - """ - frames = librosa.feature.melspectrogram( - y=wav, - sr=sampling_rate, - n_fft=int(sampling_rate * mel_window_length / 1000), - hop_length=int(sampling_rate * mel_window_step / 1000), - n_mels=mel_n_channels - ) - return frames.astype(np.float32).T - - -def trim_long_silences(wav): - """ - Ensures that segments without voice in the waveform remain no longer than a - threshold determined by the VAD parameters in params.py. - - :param wav: the raw waveform as a numpy array of floats - :return: the same waveform with silences trimmed away (length <= original wav length) - """ - # Compute the voice detection window size - samples_per_window = (vad_window_length * sampling_rate) // 1000 - - # Trim the end of the audio to have a multiple of the window size - wav = wav[:len(wav) - (len(wav) % samples_per_window)] - - # Convert the float waveform to 16-bit mono PCM - pcm_wave = struct.pack("%dh" % len(wav), *(np.round(wav * int16_max)).astype(np.int16)) - - # Perform voice activation detection - voice_flags = [] - vad = webrtcvad.Vad(mode=3) - for window_start in range(0, len(wav), samples_per_window): - window_end = window_start + samples_per_window - voice_flags.append(vad.is_speech(pcm_wave[window_start * 2:window_end * 2], - sample_rate=sampling_rate)) - voice_flags = np.array(voice_flags) - - # Smooth the voice detection with a moving average - def moving_average(array, width): - array_padded = np.concatenate((np.zeros((width - 1) // 2), array, np.zeros(width // 2))) - ret = np.cumsum(array_padded, dtype=float) - ret[width:] = ret[width:] - ret[:-width] - return ret[width - 1:] / width - - audio_mask = moving_average(voice_flags, vad_moving_average_width) - audio_mask = np.round(audio_mask).astype(np.bool) - - # Dilate the voiced regions - audio_mask = binary_dilation(audio_mask, np.ones(vad_max_silence_length + 1)) - audio_mask = np.repeat(audio_mask, samples_per_window) - - return wav[audio_mask == True] - - -def normalize_volume(wav, target_dBFS, increase_only=False, decrease_only=False): - if increase_only and decrease_only: - raise ValueError("Both increase only and decrease only are set") - dBFS_change = target_dBFS - 10 * np.log10(np.mean(wav ** 2)) - if (dBFS_change < 0 and increase_only) or (dBFS_change > 0 and decrease_only): - return wav - return wav * (10 ** (dBFS_change / 20)) diff --git 
a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/facerecon_model.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/facerecon_model.py deleted file mode 100644 index 7de8ca6eebc50ff1ed52c5ba37d31b43f977b5e1..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/facerecon_model.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This script defines the face reconstruction model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import torch -from src.face3d.models.base_model import BaseModel -from src.face3d.models import networks -from src.face3d.models.bfm import ParametricFaceModel -from src.face3d.models.losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss -from src.face3d.util import util -from src.face3d.util.nvdiffrast import MeshRenderer -# from src.face3d.util.preprocess import estimate_norm_torch - -import trimesh -from scipy.io import savemat - -class FaceReconModel(BaseModel): - - @staticmethod - def modify_commandline_options(parser, is_train=False): - """ Configures options specific for CUT model - """ - # net structure and parameters - parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='network structure') - parser.add_argument('--init_path', type=str, default='./checkpoints/init_model/resnet50-0676ba61.pth') - parser.add_argument('--use_last_fc', type=util.str2bool, nargs='?', const=True, default=False, help='zero initialize the last fc') - parser.add_argument('--bfm_folder', type=str, default='./checkpoints/BFM_Fitting/') - parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model') - - # renderer parameters - parser.add_argument('--focal', type=float, default=1015.) - parser.add_argument('--center', type=float, default=112.) - parser.add_argument('--camera_d', type=float, default=10.) - parser.add_argument('--z_near', type=float, default=5.) - parser.add_argument('--z_far', type=float, default=15.) 
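-        # The defaults above describe the pinhole camera used for rendering: the
-        # rasteriser works on a 2 * center = 224 px image, and __init__ derives its
-        # field of view from these values as 2 * arctan(center / focal).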
- - if is_train: - # training parameters - parser.add_argument('--net_recog', type=str, default='r50', choices=['r18', 'r43', 'r50'], help='face recog network structure') - parser.add_argument('--net_recog_path', type=str, default='checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth') - parser.add_argument('--use_crop_face', type=util.str2bool, nargs='?', const=True, default=False, help='use crop mask for photo loss') - parser.add_argument('--use_predef_M', type=util.str2bool, nargs='?', const=True, default=False, help='use predefined M for predicted face') - - - # augmentation parameters - parser.add_argument('--shift_pixs', type=float, default=10., help='shift pixels') - parser.add_argument('--scale_delta', type=float, default=0.1, help='delta scale factor') - parser.add_argument('--rot_angle', type=float, default=10., help='rot angles, degree') - - # loss weights - parser.add_argument('--w_feat', type=float, default=0.2, help='weight for feat loss') - parser.add_argument('--w_color', type=float, default=1.92, help='weight for loss loss') - parser.add_argument('--w_reg', type=float, default=3.0e-4, help='weight for reg loss') - parser.add_argument('--w_id', type=float, default=1.0, help='weight for id_reg loss') - parser.add_argument('--w_exp', type=float, default=0.8, help='weight for exp_reg loss') - parser.add_argument('--w_tex', type=float, default=1.7e-2, help='weight for tex_reg loss') - parser.add_argument('--w_gamma', type=float, default=10.0, help='weight for gamma loss') - parser.add_argument('--w_lm', type=float, default=1.6e-3, help='weight for lm loss') - parser.add_argument('--w_reflc', type=float, default=5.0, help='weight for reflc loss') - - opt, _ = parser.parse_known_args() - parser.set_defaults( - focal=1015., center=112., camera_d=10., use_last_fc=False, z_near=5., z_far=15. - ) - if is_train: - parser.set_defaults( - use_crop_face=True, use_predef_M=False - ) - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. 
- - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - - self.visual_names = ['output_vis'] - self.model_names = ['net_recon'] - self.parallel_names = self.model_names + ['renderer'] - - self.facemodel = ParametricFaceModel( - bfm_folder=opt.bfm_folder, camera_distance=opt.camera_d, focal=opt.focal, center=opt.center, - is_train=self.isTrain, default_name=opt.bfm_model - ) - - fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi - self.renderer = MeshRenderer( - rasterize_fov=fov, znear=opt.z_near, zfar=opt.z_far, rasterize_size=int(2 * opt.center) - ) - - if self.isTrain: - self.loss_names = ['all', 'feat', 'color', 'lm', 'reg', 'gamma', 'reflc'] - - self.net_recog = networks.define_net_recog( - net_recog=opt.net_recog, pretrained_path=opt.net_recog_path - ) - # loss func name: (compute_%s_loss) % loss_name - self.compute_feat_loss = perceptual_loss - self.comupte_color_loss = photo_loss - self.compute_lm_loss = landmark_loss - self.compute_reg_loss = reg_loss - self.compute_reflc_loss = reflectance_loss - - self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr) - self.optimizers = [self.optimizer] - self.parallel_names += ['net_recog'] - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - self.input_img = input['imgs'].to(self.device) - self.atten_mask = input['msks'].to(self.device) if 'msks' in input else None - self.gt_lm = input['lms'].to(self.device) if 'lms' in input else None - self.trans_m = input['M'].to(self.device) if 'M' in input else None - self.image_paths = input['im_paths'] if 'im_paths' in input else None - - def forward(self, output_coeff, device): - self.facemodel.to(device) - self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = \ - self.facemodel.compute_for_render(output_coeff) - self.pred_mask, _, self.pred_face = self.renderer( - self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color) - - self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff) - - - def compute_losses(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - - assert self.net_recog.training == False - trans_m = self.trans_m - if not self.opt.use_predef_M: - trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2]) - - pred_feat = self.net_recog(self.pred_face, trans_m) - gt_feat = self.net_recog(self.input_img, self.trans_m) - self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat) - - face_mask = self.pred_mask - if self.opt.use_crop_face: - face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf) - - face_mask = face_mask.detach() - self.loss_color = self.opt.w_color * self.comupte_color_loss( - self.pred_face, self.input_img, self.atten_mask * face_mask) - - loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt) - self.loss_reg = self.opt.w_reg * loss_reg - self.loss_gamma = self.opt.w_gamma * loss_gamma - - self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, self.gt_lm) - - self.loss_reflc = self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, 
self.facemodel.skin_mask) - - self.loss_all = self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma \ - + self.loss_lm + self.loss_reflc - - - def optimize_parameters(self, isTrain=True): - self.forward() - self.compute_losses() - """Update network weights; it will be called in every training iteration.""" - if isTrain: - self.optimizer.zero_grad() - self.loss_all.backward() - self.optimizer.step() - - def compute_visuals(self): - with torch.no_grad(): - input_img_numpy = 255. * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy() - output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img - output_vis_numpy_raw = 255. * output_vis.detach().cpu().permute(0, 2, 3, 1).numpy() - - if self.gt_lm is not None: - gt_lm_numpy = self.gt_lm.cpu().numpy() - pred_lm_numpy = self.pred_lm.detach().cpu().numpy() - output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, 'b') - output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, 'r') - - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw, output_vis_numpy), axis=-2) - else: - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw), axis=-2) - - self.output_vis = torch.tensor( - output_vis_numpy / 255., dtype=torch.float32 - ).permute(0, 3, 1, 2).to(self.device) - - def save_mesh(self, name): - - recon_shape = self.pred_vertex # get reconstructed shape - recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space - recon_shape = recon_shape.cpu().numpy()[0] - recon_color = self.pred_color - recon_color = recon_color.cpu().numpy()[0] - tri = self.facemodel.face_buf.cpu().numpy() - mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri, vertex_colors=np.clip(255. * recon_color, 0, 255).astype(np.uint8)) - mesh.export(name) - - def save_coeff(self,name): - - pred_coeffs = {key:self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict} - pred_lm = self.pred_lm.cpu().numpy() - pred_lm = np.stack([pred_lm[:,:,0],self.input_img.shape[2]-1-pred_lm[:,:,1]],axis=2) # transfer to image coordinate - pred_coeffs['lm68'] = pred_lm - savemat(name,pred_coeffs) - - - diff --git a/spaces/kevinwang676/VoiceChangers/config.py b/spaces/kevinwang676/VoiceChangers/config.py deleted file mode 100644 index e07d93cf81ea0d72ffe318cc37bc1064bc94533b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/config.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch - -import util - -device = ( - 'cuda:0' if torch.cuda.is_available() - else ( - 'mps' if util.has_mps() - else 'cpu' - ) -) -is_half = util.is_half(device) - -x_pad = 3 if is_half else 1 -x_query = 10 if is_half else 6 -x_center = 60 if is_half else 38 -x_max = 65 if is_half else 41 diff --git a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_depth.py b/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_depth.py deleted file mode 100644 index 75f173659f81b686e9b638a897222dcee43a2427..0000000000000000000000000000000000000000 --- a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_depth.py +++ /dev/null @@ -1,228 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from diffusers import ControlNetModel -from PIL import Image -from transformers import pipeline - -from 
diffusion_webui.diffusion_models.controlnet.controlnet_inpaint.pipeline_stable_diffusion_controlnet_inpaint import ( - StableDiffusionControlNetInpaintPipeline, -) -from diffusion_webui.utils.model_list import ( - controlnet_depth_model_list, - stable_inpiant_model_list, -) -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_LIST, - get_scheduler_list, -) - -# https://github.com/mikonvergence/ControlNetInpaint - - -class StableDiffusionControlInpaintNetDepthGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path, controlnet_model_path, scheduler): - if self.pipe is None: - controlnet = ControlNetModel.from_pretrained( - controlnet_model_path, torch_dtype=torch.float16 - ) - self.pipe = ( - StableDiffusionControlNetInpaintPipeline.from_pretrained( - pretrained_model_name_or_path=stable_model_path, - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - ) - - self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def load_image(self, image_path): - image = np.array(image_path) - image = Image.fromarray(image) - return image - - def controlnet_inpaint_depth(self, image_path: str): - depth_estimator = pipeline("depth-estimation") - image = image_path["image"].convert("RGB").resize((512, 512)) - image = depth_estimator(image)["depth"] - image = np.array(image) - image = image[:, :, None] - image = np.concatenate([image, image, image], axis=2) - image = Image.fromarray(image) - - return image - - def generate_image( - self, - image_path: str, - stable_model_path: str, - controlnet_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - controlnet_conditioning_scale: int, - scheduler: str, - seed_generator: int, - ): - normal_image = image_path["image"].convert("RGB").resize((512, 512)) - mask_image = image_path["mask"].convert("RGB").resize((512, 512)) - - normal_image = self.load_image(image_path=normal_image) - mask_image = self.load_image(image_path=mask_image) - - control_image = self.controlnet_inpaint_depth(image_path=image_path) - - pipe = self.load_model( - stable_model_path=stable_model_path, - controlnet_model_path=controlnet_model_path, - scheduler=scheduler, - ) - - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - output = pipe( - prompt=prompt, - image=normal_image, - mask_image=mask_image, - control_image=control_image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - controlnet_conditioning_scale=controlnet_conditioning_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - controlnet_depth_inpaint_image_file = gr.Image( - source="upload", - tool="sketch", - elem_id="image_upload", - type="pil", - label="Upload", - ) - - controlnet_depth_inpaint_prompt = gr.Textbox( - lines=1, placeholder="Prompt", show_label=False - ) - - controlnet_depth_inpaint_negative_prompt = gr.Textbox( - lines=1, - show_label=False, - placeholder="Negative Prompt", - ) - with gr.Row(): - with gr.Column(): - controlnet_depth_inpaint_stable_model_id = ( - gr.Dropdown( - choices=stable_inpiant_model_list, - 
value=stable_inpiant_model_list[0], - label="Stable Model Id", - ) - ) - - controlnet_depth_inpaint_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - - controlnet_depth_inpaint_num_inference_step = ( - gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - ) - controlnet_depth_inpaint_num_images_per_prompt = ( - gr.Slider( - minimum=1, - maximum=10, - step=1, - value=1, - label="Number Of Images", - ) - ) - with gr.Row(): - with gr.Column(): - controlnet_depth_inpaint_model_id = gr.Dropdown( - choices=controlnet_depth_model_list, - value=controlnet_depth_model_list[0], - label="Controlnet Model Id", - ) - controlnet_depth_inpaint_scheduler = ( - gr.Dropdown( - choices=SCHEDULER_LIST, - value=SCHEDULER_LIST[0], - label="Scheduler", - ) - ) - controlnet_depth_inpaint_controlnet_conditioning_scale = gr.Slider( - minimum=0.1, - maximum=1.0, - step=0.1, - value=0.5, - label="Controlnet Conditioning Scale", - ) - - controlnet_depth_inpaint_seed_generator = ( - gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed Generator", - ) - ) - - controlnet_depth_inpaint_predict = gr.Button( - value="Generator" - ) - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - controlnet_depth_inpaint_predict.click( - fn=StableDiffusionControlInpaintNetDepthGenerator().generate_image, - inputs=[ - controlnet_depth_inpaint_image_file, - controlnet_depth_inpaint_stable_model_id, - controlnet_depth_inpaint_model_id, - controlnet_depth_inpaint_prompt, - controlnet_depth_inpaint_negative_prompt, - controlnet_depth_inpaint_num_images_per_prompt, - controlnet_depth_inpaint_guidance_scale, - controlnet_depth_inpaint_num_inference_step, - controlnet_depth_inpaint_controlnet_conditioning_scale, - controlnet_depth_inpaint_scheduler, - controlnet_depth_inpaint_seed_generator, - ], - outputs=[output_image], - ) diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/utils/symbols.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/utils/symbols.py deleted file mode 100644 index 2036dded914cc5490d556a2022b40e57e584b742..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/utils/symbols.py +++ /dev/null @@ -1,18 +0,0 @@ -""" -Defines the set of symbols used in text input to the model. - -The default is a set of ASCII characters that works well for English or text that has been run -through Unidecode. For other data, you can modify _characters. See TRAINING_DATA.md for details. -""" -# from . import cmudict - -_pad = "_" -_eos = "~" -_characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890!\'(),-.:;? ' - -#_characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz12340!\'(),-.:;? 
' # use this old one if you want to train old model -# Prepend "@" to ARPAbet symbols to ensure uniqueness (some are the same as uppercase letters): -#_arpabet = ["@' + s for s in cmudict.valid_symbols] - -# Export all symbols: -symbols = [_pad, _eos] + list(_characters) #+ _arpabet diff --git a/spaces/koajoel/PolyFormer/criterions/label_smoothed_cross_entropy.py b/spaces/koajoel/PolyFormer/criterions/label_smoothed_cross_entropy.py deleted file mode 100644 index 718adc1a97a49a6846ce3cc14dff9efc816d575c..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/criterions/label_smoothed_cross_entropy.py +++ /dev/null @@ -1,394 +0,0 @@ -# ------------------------------------------------------------------------ -# Modified from OFA (https://github.com/OFA-Sys/OFA) -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. -# ------------------------------------------------------------------------ -# Modifications Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. -# SPDX-License-Identifier: Apache-2.0 - -import math -from dataclasses import dataclass, field -from typing import Optional - -import torch -import torch.nn.functional as F -import numpy as np -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - - -@dataclass -class AdjustLabelSmoothedCrossEntropyCriterionConfig(FairseqDataclass): - label_smoothing: float = field( - default=0.0, - metadata={"help": "epsilon for label smoothing, 0 means no label smoothing"}, - ) - report_accuracy: bool = field( - default=False, - metadata={"help": "report accuracy metric"}, - ) - det_weight: float = field( - default=1.0, - metadata={"help": "weight of detection loss"}, - ) - cls_weight: float = field( - default=1.0, - metadata={"help": "weight of classification loss"}, - ) - - ignore_prefix_size: int = field( - default=0, - metadata={"help": "Ignore first N tokens"}, - ) - ignore_eos: bool = field( - default=False, - metadata={"help": "Ignore eos token"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - drop_worst_ratio: float = field( - default=0.0, - metadata={"help": "ratio for discarding bad samples"}, - ) - drop_worst_after: int = field( - default=0, - metadata={"help": "steps for discarding bad samples"}, - ) - use_rdrop: bool = field( - default=False, metadata={"help": "use R-Drop"} - ) - reg_alpha: float = field( - default=1.0, metadata={"help": "weight for R-Drop"} - ) - sample_patch_num: int = field( - default=196, metadata={"help": "sample patches for v1"} - ) - constraint_range: Optional[str] = field( - default=None, - metadata={"help": "constraint range"} - ) - - -def construct_rdrop_sample(x): - if isinstance(x, dict): - for key in x: - x[key] = construct_rdrop_sample(x[key]) - return x - elif isinstance(x, torch.Tensor): - return x.repeat(2, *([1] * (x.dim() - 1))) - elif isinstance(x, int): - return x * 2 - elif isinstance(x, np.ndarray): - return x.repeat(2) - else: - raise NotImplementedError - - -def kl_loss(p, q): - p_loss = F.kl_div(p, torch.exp(q), reduction='sum') - q_loss = F.kl_div(q, torch.exp(p), reduction='sum') - loss = (p_loss + q_loss) / 2 - return loss - - -def label_smoothed_nll_loss( - lprobs, target, epsilon, update_num, reduce=True, - drop_worst_ratio=0.0, drop_worst_after=0, use_rdrop=False, reg_alpha=1.0, - constraint_masks=None, 
constraint_start=None, constraint_end=None -): - if target.dim() == lprobs.dim() - 1: - target = target.unsqueeze(-1) - nll_loss = -lprobs.gather(dim=-1, index=target).squeeze(-1) - if constraint_masks is not None: - smooth_loss = -lprobs.masked_fill(~constraint_masks, 0).sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / (constraint_masks.sum(1) - 1 + 1e-6) - elif constraint_start is not None and constraint_end is not None: - constraint_range = [0, 1, 2, 3] + list(range(constraint_start, constraint_end)) - smooth_loss = -lprobs[:, constraint_range].sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / (len(constraint_range) - 1 + 1e-6) - else: - smooth_loss = -lprobs.sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / (lprobs.size(-1) - 1) - loss = (1.0 - epsilon - eps_i) * nll_loss + eps_i * smooth_loss - if drop_worst_ratio > 0 and update_num > drop_worst_after: - if use_rdrop: - true_batch_size = loss.size(0) // 2 - _, indices = torch.topk(loss[:true_batch_size], k=int(true_batch_size * (1 - drop_worst_ratio)), largest=False) - loss = torch.cat([loss[indices], loss[indices+true_batch_size]]) - nll_loss = torch.cat([nll_loss[indices], nll_loss[indices+true_batch_size]]) - lprobs = torch.cat([lprobs[indices], lprobs[indices+true_batch_size]]) - else: - loss, indices = torch.topk(loss, k=int(loss.shape[0] * (1 - drop_worst_ratio)), largest=False) - nll_loss = nll_loss[indices] - lprobs = lprobs[indices] - - - ntokens = loss.numel() - nll_loss = nll_loss.sum() - - loss = loss.sum() - if use_rdrop: - true_batch_size = lprobs.size(0) // 2 - p = lprobs[:true_batch_size] - q = lprobs[true_batch_size:] - if constraint_start is not None and constraint_end is not None: - constraint_range = [0, 1, 2, 3] + list(range(constraint_start, constraint_end)) - p = p[:, constraint_range] - q = q[:, constraint_range] - loss += kl_loss(p, q) * reg_alpha - - return loss, nll_loss, ntokens - -@register_criterion( - "adjust_label_smoothed_cross_entropy", dataclass=AdjustLabelSmoothedCrossEntropyCriterionConfig -) -class AdjustLabelSmoothedCrossEntropyCriterion(FairseqCriterion): - def __init__( - self, - task, - sentence_avg, - label_smoothing, - ignore_prefix_size=0, - ignore_eos=False, - report_accuracy=False, - drop_worst_ratio=0, - drop_worst_after=0, - use_rdrop=False, - reg_alpha=1.0, - sample_patch_num=196, - constraint_range=None, - det_weight=1.0, - cls_weight=1.0 - ): - super().__init__(task) - self.sentence_avg = sentence_avg - self.eps = label_smoothing - self.ignore_prefix_size = ignore_prefix_size - self.ignore_eos = ignore_eos - self.report_accuracy = report_accuracy - self.drop_worst_ratio = drop_worst_ratio - self.drop_worst_after = drop_worst_after - self.use_rdrop = use_rdrop - self.reg_alpha = reg_alpha - self.sample_patch_num = sample_patch_num - - self.det_weight = det_weight - self.cls_weight = cls_weight - - self.constraint_start = None - self.constraint_end = None - if constraint_range is not None: - constraint_start, constraint_end = constraint_range.split(',') - self.constraint_start = int(constraint_start) - self.constraint_end = int(constraint_end) - - def forward(self, model, sample, update_num=0, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - if isinstance(sample, list): - if self.sample_patch_num > 0: - sample[0]['net_input']['sample_patch_num'] = self.sample_patch_num - loss_v1, sample_size_v1, logging_output_v1 = self.forward(model, sample[0], update_num, reduce) - loss_v2, sample_size_v2, logging_output_v2 = self.forward(model, sample[1], update_num, reduce) - loss = loss_v1 / sample_size_v1 + loss_v2 / sample_size_v2 - sample_size = 1 - logging_output = { - "loss": loss.data, - "loss_v1": loss_v1.data, - "loss_v2": loss_v2.data, - "nll_loss": logging_output_v1["nll_loss"].data / sample_size_v1 + logging_output_v2[ - "nll_loss"].data / sample_size_v2, - "ntokens": logging_output_v1["ntokens"] + logging_output_v2["ntokens"], - "nsentences": logging_output_v1["nsentences"] + logging_output_v2["nsentences"], - "sample_size": 1, - "sample_size_v1": sample_size_v1, - "sample_size_v2": sample_size_v2, - } - return loss, sample_size, logging_output - - if self.use_rdrop: - construct_rdrop_sample(sample) - - net_output = model(**sample["net_input"]) - loss, nll_loss, ntokens = self.compute_loss(model, net_output, sample, update_num, det_weight=self.det_weight, - cls_weight=self.cls_weight, reduce=reduce) - sample_size = ( - sample["target"].size(0) - ) - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - } - if self.report_accuracy: - n_correct, total = self.compute_accuracy(model, net_output, sample) - logging_output["n_correct"] = utils.item(n_correct.data) - logging_output["total"] = utils.item(total.data) - return loss, sample_size, logging_output - - def get_lprobs_and_target(self, model, net_output, sample): - conf = sample['conf'][:, None, None] if 'conf' in sample and sample['conf'] is not None else 1 - constraint_masks = None - if "constraint_masks" in sample and sample["constraint_masks"] is not None: - constraint_masks = sample["constraint_masks"] - net_output[0].masked_fill_(~constraint_masks, -math.inf) - if self.constraint_start is not None and self.constraint_end is not None: - net_output[0][:, :, 4:self.constraint_start] = -math.inf - net_output[0][:, :, self.constraint_end:] = -math.inf - lprobs = model.get_normalized_probs(net_output, log_probs=True) * conf - target = sample["token_type"] - if self.ignore_prefix_size > 0: - lprobs = lprobs[:, self.ignore_prefix_size:, :].contiguous() - target = target[:, self.ignore_prefix_size:].contiguous() - if constraint_masks is not None: - constraint_masks = constraint_masks[:, self.ignore_prefix_size:, :].contiguous() - if self.ignore_eos: - bsz, seq_len, embed_dim = lprobs.size() - eos_indices = target.eq(self.task.tgt_dict.eos()) - lprobs = lprobs[~eos_indices].reshape(bsz, seq_len - 1, embed_dim) - target = target[~eos_indices].reshape(bsz, seq_len - 1) - if constraint_masks is not None: - constraint_masks = constraint_masks[~eos_indices].reshape(bsz, seq_len - 1, embed_dim) - if constraint_masks is not None: - constraint_masks = constraint_masks.view(-1, constraint_masks.size(-1)) - - # index = torch.zeros(lprobs.shape[:2]).to(lprobs.device) - # index[:, :4] = 1 # 1 indicates the location of detection results - - return lprobs.view(-1, lprobs.size(-1)), target.view(-1), constraint_masks, None # index.view(-1) - - def compute_loss(self, model, net_output, sample, update_num, 
det_weight=1.0, cls_weight=1.0, reduce=True): - b = sample['target'].shape[0] - lprobs, target, constraint_masks, index = self.get_lprobs_and_target(model, net_output, sample) - if constraint_masks is not None: - constraint_masks = constraint_masks[target != -1] - # index = index[target != self.padding_idx] - lprobs = lprobs[target != -1] - target = target[target != -1] - - loss_cls, nll_loss, ntokens = label_smoothed_nll_loss( - lprobs, - target, - self.eps, - update_num, - reduce=reduce, - drop_worst_ratio=self.drop_worst_ratio, - drop_worst_after=self.drop_worst_after, - use_rdrop=self.use_rdrop, - reg_alpha=self.reg_alpha, - constraint_masks=constraint_masks, - constraint_start=self.constraint_start, - constraint_end=self.constraint_end - ) - loss_cls = cls_weight * loss_cls/b - - # compute regression loss - token_type = sample["token_type"] - token_type = torch.stack([token_type, token_type], -1) - target = sample["target"] - index = torch.zeros_like(target).to(target.device) - index[:, :2, :] = 1 # the first two tokens are bbox points; 1 indicates the location of detection results - - target = target[token_type == 0] - index = index[token_type == 0] - regression_output = net_output[1].squeeze(-1) - regression_output = regression_output[token_type == 0] - - loss_reg = F.l1_loss(target[index == 1], regression_output[index == 1]) * det_weight - if (index == 0).any(): - loss_reg += F.l1_loss(target[index == 0], regression_output[index == 0]) - - loss = loss_reg + loss_cls - if update_num % 5000 == 1: - print(f"loss_reg: {loss_reg.item()} loss_cls: {loss_cls.item()}") - - return loss, nll_loss, ntokens - - def compute_accuracy(self, model, net_output, sample): - lprobs, target = self.get_lprobs_and_target(model, net_output, sample) - mask = target.ne(self.padding_idx) - n_correct = torch.sum( - lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask)) - ) - total = torch.sum(mask) - return n_correct, total - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - loss_sum_v1 = sum(log.get("loss_v1", 0) for log in logging_outputs) - loss_sum_v2 = sum(log.get("loss_v2", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - sample_size_v1 = sum(log.get("sample_size_v1", 0) for log in logging_outputs) - sample_size_v2 = sum(log.get("sample_size_v2", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size, sample_size, round=3 - ) - metrics.log_scalar( - "loss_v1", loss_sum_v1 / max(sample_size_v1, 1), max(sample_size_v1, 1), round=3 - ) - metrics.log_scalar( - "loss_v2", loss_sum_v2 / max(sample_size_v2, 1), max(sample_size_v2, 1), round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / sample_size, ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - metrics.log_scalar( - "ntokens", ntokens, 1, round=3 - ) - metrics.log_scalar( - "nsentences", nsentences, 1, round=3 - ) - metrics.log_scalar( - "sample_size", sample_size, 1, round=3 - ) - metrics.log_scalar( - "sample_size_v1", sample_size_v1, 1, round=3 - ) - metrics.log_scalar( - "sample_size_v2", sample_size_v2, 1, round=3 - ) - 
- total = utils.item(sum(log.get("total", 0) for log in logging_outputs)) - if total > 0: - metrics.log_scalar("total", total) - n_correct = utils.item( - sum(log.get("n_correct", 0) for log in logging_outputs) - ) - metrics.log_scalar("n_correct", n_correct) - metrics.log_derived( - "accuracy", - lambda meters: round( - meters["n_correct"].sum * 100.0 / meters["total"].sum, 3 - ) - if meters["total"].sum > 0 - else float("nan"), - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/criss/mining/mine.py b/spaces/koajoel/PolyFormer/fairseq/examples/criss/mining/mine.py deleted file mode 100644 index c872da196fe0df776622365748ad7963fee1f0a0..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/criss/mining/mine.py +++ /dev/null @@ -1,240 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import argparse -import glob -from subprocess import check_call - -try: - import faiss - - has_faiss = True -except ImportError: - has_faiss = False -import numpy as np - - -GB = 1024 * 1024 * 1024 - - -def call(cmd): - print(cmd) - check_call(cmd, shell=True) - - -def get_batches(directory, lang, prefix="all_avg_pool"): - print(f"Finding in {directory}/{prefix}.{lang}*") - files = glob.glob(f"{directory}/{prefix}.{lang}*") - emb_files = [] - txt_files = [] - for emb_fi in files: - emb_files.append(emb_fi) - txt_fi = emb_fi.replace(prefix, "sentences") - txt_files.append(txt_fi) - return emb_files, txt_files - - -def load_batch(emb_file, dim): - embeddings = np.fromfile(emb_file, dtype=np.float32) - num_rows = int(embeddings.shape[0] / dim) - embeddings = embeddings.reshape((num_rows, dim)) - faiss.normalize_L2(embeddings) - return embeddings - - -def knnGPU_sharded(x_batches_f, y_batches_f, dim, k, direction="x2y"): - if not has_faiss: - raise ImportError("Please install Faiss") - sims = [] - inds = [] - xfrom = 0 - xto = 0 - for x_batch_f in x_batches_f: - yfrom = 0 - yto = 0 - x_batch = load_batch(x_batch_f, dim) - xto = xfrom + x_batch.shape[0] - bsims, binds = [], [] - for y_batch_f in y_batches_f: - y_batch = load_batch(y_batch_f, dim) - neighbor_size = min(k, y_batch.shape[0]) - yto = yfrom + y_batch.shape[0] - print("{}-{} -> {}-{}".format(xfrom, xto, yfrom, yto)) - idx = faiss.IndexFlatIP(dim) - idx = faiss.index_cpu_to_all_gpus(idx) - idx.add(y_batch) - bsim, bind = idx.search(x_batch, neighbor_size) - - bsims.append(bsim) - binds.append(bind + yfrom) - yfrom += y_batch.shape[0] - del idx - del y_batch - bsims = np.concatenate(bsims, axis=1) - binds = np.concatenate(binds, axis=1) - aux = np.argsort(-bsims, axis=1) - sim_batch = np.zeros((x_batch.shape[0], k), dtype=np.float32) - ind_batch = np.zeros((x_batch.shape[0], k), dtype=np.int64) - for i in range(x_batch.shape[0]): - for j in range(k): - sim_batch[i, j] = bsims[i, aux[i, j]] - ind_batch[i, j] = binds[i, aux[i, j]] - sims.append(sim_batch) - inds.append(ind_batch) - xfrom += x_batch.shape[0] - del x_batch - sim = np.concatenate(sims, axis=0) - ind = np.concatenate(inds, axis=0) - return sim, ind - - -def score(sim, fwd_mean, bwd_mean, margin): - return margin(sim, 
(fwd_mean + bwd_mean) / 2) - - -def score_candidates( - sim_mat, candidate_inds, fwd_mean, bwd_mean, margin, verbose=False -): - print(" - scoring {:d} candidates".format(sim_mat.shape[0])) - scores = np.zeros(candidate_inds.shape) - for i in range(scores.shape[0]): - for j in range(scores.shape[1]): - k = int(candidate_inds[i, j]) - scores[i, j] = score(sim_mat[i, j], fwd_mean[i], bwd_mean[k], margin) - return scores - - -def load_text(files): - all_sentences = [] - for fi in files: - with open(fi) as sentence_fi: - for line in sentence_fi: - all_sentences.append(line.strip()) - print(f"Read {len(all_sentences)} sentences") - return all_sentences - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Mine bitext") - parser.add_argument("--src-lang", help="Source language") - parser.add_argument("--tgt-lang", help="Target language") - parser.add_argument( - "--dict-path", help="Path to dictionary file", default="dict.txt" - ) - parser.add_argument( - "--spm-path", help="Path to SPM model file", default="sentence.bpe.model" - ) - parser.add_argument("--dim", type=int, default=1024, help="Embedding dimension") - parser.add_argument("--mem", type=int, default=5, help="Memory in GB") - parser.add_argument("--src-dir", help="Source directory") - parser.add_argument("--tgt-dir", help="Target directory") - parser.add_argument("--output", help="Output path") - parser.add_argument( - "--neighborhood", type=int, default=4, help="Embedding dimension" - ) - parser.add_argument( - "--threshold", type=float, default=1.06, help="Threshold on mined bitext" - ) - parser.add_argument( - "--valid-size", - type=int, - default=2000, - help="Number of sentences used for validation set", - ) - parser.add_argument( - "--min-count", - type=int, - default=50000, - help="Min num sentences used for each language", - ) - args = parser.parse_args() - - x_batches_f, x_sents_f = get_batches(args.src_dir, args.src_lang) - y_batches_f, y_sents_f = get_batches(args.tgt_dir, args.tgt_lang) - margin = lambda a, b: a / b - y2x_sim, y2x_ind = knnGPU_sharded( - y_batches_f, x_batches_f, args.dim, args.neighborhood, direction="y2x" - ) - x2y_sim, x2y_ind = knnGPU_sharded( - x_batches_f, y_batches_f, args.dim, args.neighborhood, direction="x2y" - ) - - x2y_mean = x2y_sim.mean(axis=1) - y2x_mean = y2x_sim.mean(axis=1) - fwd_scores = score_candidates(x2y_sim, x2y_ind, x2y_mean, y2x_mean, margin) - bwd_scores = score_candidates(y2x_sim, y2x_ind, y2x_mean, x2y_mean, margin) - fwd_best = x2y_ind[np.arange(x2y_sim.shape[0]), fwd_scores.argmax(axis=1)] - bwd_best = y2x_ind[np.arange(y2x_sim.shape[0]), bwd_scores.argmax(axis=1)] - indices = np.stack( - ( - np.concatenate((np.arange(x2y_ind.shape[0]), bwd_best)), - np.concatenate((fwd_best, np.arange(y2x_ind.shape[0]))), - ), - axis=1, - ) - scores = np.concatenate((fwd_scores.max(axis=1), bwd_scores.max(axis=1))) - - x_sentences = load_text(x_sents_f) - y_sentences = load_text(y_sents_f) - - threshold = args.threshold - min_count = args.min_count - seen_src, seen_trg = set(), set() - directory = args.output - call(f"mkdir -p {directory}") - src_out = open( - f"{directory}/all.{args.src_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - tgt_out = open( - f"{directory}/all.{args.tgt_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - scores_out = open( - f"{directory}/all.scores", mode="w", encoding="utf-8", errors="surrogateescape" - ) - count = 0 - for i in np.argsort(-scores): - src_ind, trg_ind = indices[i] - if 
src_ind not in seen_src and trg_ind not in seen_trg: - seen_src.add(src_ind) - seen_trg.add(trg_ind) - if scores[i] > threshold or count < min_count: - if x_sentences[src_ind]: - print(scores[i], file=scores_out) - print(x_sentences[src_ind], file=src_out) - print(y_sentences[trg_ind], file=tgt_out) - count += 1 - else: - print(f"Ignoring sentence: {x_sentences[src_ind]}") - src_out.close() - tgt_out.close() - scores_out.close() - - print(f"Found {count} pairs for threshold={threshold}") - with open(f"{directory}/all.{args.src_lang}") as all_s, open( - f"{directory}/all.{args.tgt_lang}" - ) as all_t, open(f"{directory}/valid.{args.src_lang}", "w") as valid_s, open( - f"{directory}/valid.{args.tgt_lang}", "w" - ) as valid_t, open( - f"{directory}/train.{args.src_lang}", "w" - ) as train_s, open( - f"{directory}/train.{args.tgt_lang}", "w" - ) as train_t: - count = 0 - for s_line, t_line in zip(all_s, all_t): - s_line = s_line.split("\t")[1] - t_line = t_line.split("\t")[1] - if count >= args.valid_size: - train_s.write(s_line) - train_t.write(t_line) - else: - valid_s.write(s_line) - valid_t.write(t_line) - count += 1 diff --git a/spaces/konfuzio-com/PP-OCRv3-ch/app.py b/spaces/konfuzio-com/PP-OCRv3-ch/app.py deleted file mode 100644 index 7629742fcfd5c2531064afc27fc5fbb30ddd1df7..0000000000000000000000000000000000000000 --- a/spaces/konfuzio-com/PP-OCRv3-ch/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import tempfile -import os - -import gradio as gr -import paddlehub as hub -from PIL import Image - -pp_ocrv3 = hub.Module(name="ch_pp-ocrv3") - -def inference(img): - with tempfile.TemporaryDirectory() as tempdir_name: - pp_ocrv3.recognize_text(paths=[img],use_gpu=False,output_dir=tempdir_name,visualization=True) - result_names = os.listdir(tempdir_name) - output_image = Image.open(os.path.join(tempdir_name, result_names[0])) - return [output_image] - -title="ch_PP-OCRv3" -description="ch_PP-OCRv3 is a practical ultra-lightweight OCR system developed by PaddleOCR." 
- -examples=[['test.png']] - -gr.Interface(inference,gr.inputs.Image(type="filepath"),outputs=[gr.Gallery(label="Result", show_label=False).style(grid=[1, 1], height="auto")],title=title,description=description,examples=examples).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/krrishD/vasudevgupta_bigbird-roberta-natural-questions/README.md b/spaces/krrishD/vasudevgupta_bigbird-roberta-natural-questions/README.md deleted file mode 100644 index 7fe8bbb848d0694e4348ab256f45b009982fb233..0000000000000000000000000000000000000000 --- a/spaces/krrishD/vasudevgupta_bigbird-roberta-natural-questions/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Vasudevgupta Bigbird-roberta-natural-questions -emoji: 🐠 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/otlLib/optimize/gpos.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/otlLib/optimize/gpos.py deleted file mode 100644 index 0acd9ed04c141c532cf7fafda220b3a898106415..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/otlLib/optimize/gpos.py +++ /dev/null @@ -1,452 +0,0 @@ -import logging -import os -from collections import defaultdict, namedtuple -from functools import reduce -from itertools import chain -from math import log2 -from typing import DefaultDict, Dict, Iterable, List, Sequence, Tuple - -from fontTools.config import OPTIONS -from fontTools.misc.intTools import bit_count, bit_indices -from fontTools.ttLib import TTFont -from fontTools.ttLib.tables import otBase, otTables - -log = logging.getLogger(__name__) - -COMPRESSION_LEVEL = OPTIONS[f"{__name__}:COMPRESSION_LEVEL"] - -# Kept because ufo2ft depends on it, to be removed once ufo2ft uses the config instead -# https://github.com/fonttools/fonttools/issues/2592 -GPOS_COMPACT_MODE_ENV_KEY = "FONTTOOLS_GPOS_COMPACT_MODE" -GPOS_COMPACT_MODE_DEFAULT = str(COMPRESSION_LEVEL.default) - - -def _compression_level_from_env() -> int: - env_level = GPOS_COMPACT_MODE_DEFAULT - if GPOS_COMPACT_MODE_ENV_KEY in os.environ: - import warnings - - warnings.warn( - f"'{GPOS_COMPACT_MODE_ENV_KEY}' environment variable is deprecated. " - "Please set the 'fontTools.otlLib.optimize.gpos:COMPRESSION_LEVEL' option " - "in TTFont.cfg.", - DeprecationWarning, - ) - - env_level = os.environ[GPOS_COMPACT_MODE_ENV_KEY] - if len(env_level) == 1 and env_level in "0123456789": - return int(env_level) - raise ValueError(f"Bad {GPOS_COMPACT_MODE_ENV_KEY}={env_level}") - - -def compact(font: TTFont, level: int) -> TTFont: - # Ideal plan: - # 1. Find lookups of Lookup Type 2: Pair Adjustment Positioning Subtable - # https://docs.microsoft.com/en-us/typography/opentype/spec/gpos#lookup-type-2-pair-adjustment-positioning-subtable - # 2. Extract glyph-glyph kerning and class-kerning from all present subtables - # 3. Regroup into different subtable arrangements - # 4. Put back into the lookup - # - # Actual implementation: - # 2. Only class kerning is optimized currently - # 3. 
If the input kerning is already in several subtables, the subtables - # are not grouped together first; instead each subtable is treated - # independently, so currently this step is: - # Split existing subtables into more smaller subtables - gpos = font["GPOS"] - for lookup in gpos.table.LookupList.Lookup: - if lookup.LookupType == 2: - compact_lookup(font, level, lookup) - elif lookup.LookupType == 9 and lookup.SubTable[0].ExtensionLookupType == 2: - compact_ext_lookup(font, level, lookup) - return font - - -def compact_lookup(font: TTFont, level: int, lookup: otTables.Lookup) -> None: - new_subtables = compact_pair_pos(font, level, lookup.SubTable) - lookup.SubTable = new_subtables - lookup.SubTableCount = len(new_subtables) - - -def compact_ext_lookup(font: TTFont, level: int, lookup: otTables.Lookup) -> None: - new_subtables = compact_pair_pos( - font, level, [ext_subtable.ExtSubTable for ext_subtable in lookup.SubTable] - ) - new_ext_subtables = [] - for subtable in new_subtables: - ext_subtable = otTables.ExtensionPos() - ext_subtable.Format = 1 - ext_subtable.ExtSubTable = subtable - new_ext_subtables.append(ext_subtable) - lookup.SubTable = new_ext_subtables - lookup.SubTableCount = len(new_ext_subtables) - - -def compact_pair_pos( - font: TTFont, level: int, subtables: Sequence[otTables.PairPos] -) -> Sequence[otTables.PairPos]: - new_subtables = [] - for subtable in subtables: - if subtable.Format == 1: - # Not doing anything to Format 1 (yet?) - new_subtables.append(subtable) - elif subtable.Format == 2: - new_subtables.extend(compact_class_pairs(font, level, subtable)) - return new_subtables - - -def compact_class_pairs( - font: TTFont, level: int, subtable: otTables.PairPos -) -> List[otTables.PairPos]: - from fontTools.otlLib.builder import buildPairPosClassesSubtable - - subtables = [] - classes1: DefaultDict[int, List[str]] = defaultdict(list) - for g in subtable.Coverage.glyphs: - classes1[subtable.ClassDef1.classDefs.get(g, 0)].append(g) - classes2: DefaultDict[int, List[str]] = defaultdict(list) - for g, i in subtable.ClassDef2.classDefs.items(): - classes2[i].append(g) - all_pairs = {} - for i, class1 in enumerate(subtable.Class1Record): - for j, class2 in enumerate(class1.Class2Record): - if is_really_zero(class2): - continue - all_pairs[(tuple(sorted(classes1[i])), tuple(sorted(classes2[j])))] = ( - getattr(class2, "Value1", None), - getattr(class2, "Value2", None), - ) - grouped_pairs = cluster_pairs_by_class2_coverage_custom_cost(font, all_pairs, level) - for pairs in grouped_pairs: - subtables.append(buildPairPosClassesSubtable(pairs, font.getReverseGlyphMap())) - return subtables - - -def is_really_zero(class2: otTables.Class2Record) -> bool: - v1 = getattr(class2, "Value1", None) - v2 = getattr(class2, "Value2", None) - return (v1 is None or v1.getEffectiveFormat() == 0) and ( - v2 is None or v2.getEffectiveFormat() == 0 - ) - - -Pairs = Dict[ - Tuple[Tuple[str, ...], Tuple[str, ...]], - Tuple[otBase.ValueRecord, otBase.ValueRecord], -] - -# Adapted from https://github.com/fonttools/fonttools/blob/f64f0b42f2d1163b2d85194e0979def539f5dca3/Lib/fontTools/ttLib/tables/otTables.py#L935-L958 -def _getClassRanges(glyphIDs: Iterable[int]): - glyphIDs = sorted(glyphIDs) - last = glyphIDs[0] - ranges = [[last]] - for glyphID in glyphIDs[1:]: - if glyphID != last + 1: - ranges[-1].append(last) - ranges.append([glyphID]) - last = glyphID - ranges[-1].append(last) - return ranges, glyphIDs[0], glyphIDs[-1] - - -# Adapted from 
https://github.com/fonttools/fonttools/blob/f64f0b42f2d1163b2d85194e0979def539f5dca3/Lib/fontTools/ttLib/tables/otTables.py#L960-L989 -def _classDef_bytes( - class_data: List[Tuple[List[Tuple[int, int]], int, int]], - class_ids: List[int], - coverage=False, -): - if not class_ids: - return 0 - first_ranges, min_glyph_id, max_glyph_id = class_data[class_ids[0]] - range_count = len(first_ranges) - for i in class_ids[1:]: - data = class_data[i] - range_count += len(data[0]) - min_glyph_id = min(min_glyph_id, data[1]) - max_glyph_id = max(max_glyph_id, data[2]) - glyphCount = max_glyph_id - min_glyph_id + 1 - # https://docs.microsoft.com/en-us/typography/opentype/spec/chapter2#class-definition-table-format-1 - format1_bytes = 6 + glyphCount * 2 - # https://docs.microsoft.com/en-us/typography/opentype/spec/chapter2#class-definition-table-format-2 - format2_bytes = 4 + range_count * 6 - return min(format1_bytes, format2_bytes) - - -ClusteringContext = namedtuple( - "ClusteringContext", - [ - "lines", - "all_class1", - "all_class1_data", - "all_class2_data", - "valueFormat1_bytes", - "valueFormat2_bytes", - ], -) - - -class Cluster: - # TODO(Python 3.7): Turn this into a dataclass - # ctx: ClusteringContext - # indices: int - # Caches - # TODO(Python 3.8): use functools.cached_property instead of the - # manually cached properties, and remove the cache fields listed below. - # _indices: Optional[List[int]] = None - # _column_indices: Optional[List[int]] = None - # _cost: Optional[int] = None - - __slots__ = "ctx", "indices_bitmask", "_indices", "_column_indices", "_cost" - - def __init__(self, ctx: ClusteringContext, indices_bitmask: int): - self.ctx = ctx - self.indices_bitmask = indices_bitmask - self._indices = None - self._column_indices = None - self._cost = None - - @property - def indices(self): - if self._indices is None: - self._indices = bit_indices(self.indices_bitmask) - return self._indices - - @property - def column_indices(self): - if self._column_indices is None: - # Indices of columns that have a 1 in at least 1 line - # => binary OR all the lines - bitmask = reduce(int.__or__, (self.ctx.lines[i] for i in self.indices)) - self._column_indices = bit_indices(bitmask) - return self._column_indices - - @property - def width(self): - # Add 1 because Class2=0 cannot be used but needs to be encoded. - return len(self.column_indices) + 1 - - @property - def cost(self): - if self._cost is None: - self._cost = ( - # 2 bytes to store the offset to this subtable in the Lookup table above - 2 - # Contents of the subtable - # From: https://docs.microsoft.com/en-us/typography/opentype/spec/gpos#pair-adjustment-positioning-format-2-class-pair-adjustment - # uint16 posFormat Format identifier: format = 2 - + 2 - # Offset16 coverageOffset Offset to Coverage table, from beginning of PairPos subtable. - + 2 - + self.coverage_bytes - # uint16 valueFormat1 ValueRecord definition — for the first glyph of the pair (may be zero). - + 2 - # uint16 valueFormat2 ValueRecord definition — for the second glyph of the pair (may be zero). - + 2 - # Offset16 classDef1Offset Offset to ClassDef table, from beginning of PairPos subtable — for the first glyph of the pair. - + 2 - + self.classDef1_bytes - # Offset16 classDef2Offset Offset to ClassDef table, from beginning of PairPos subtable — for the second glyph of the pair. - + 2 - + self.classDef2_bytes - # uint16 class1Count Number of classes in classDef1 table — includes Class 0. 
- + 2 - # uint16 class2Count Number of classes in classDef2 table — includes Class 0. - + 2 - # Class1Record class1Records[class1Count] Array of Class1 records, ordered by classes in classDef1. - + (self.ctx.valueFormat1_bytes + self.ctx.valueFormat2_bytes) - * len(self.indices) - * self.width - ) - return self._cost - - @property - def coverage_bytes(self): - format1_bytes = ( - # From https://docs.microsoft.com/en-us/typography/opentype/spec/chapter2#coverage-format-1 - # uint16 coverageFormat Format identifier — format = 1 - # uint16 glyphCount Number of glyphs in the glyph array - 4 - # uint16 glyphArray[glyphCount] Array of glyph IDs — in numerical order - + sum(len(self.ctx.all_class1[i]) for i in self.indices) * 2 - ) - ranges = sorted( - chain.from_iterable(self.ctx.all_class1_data[i][0] for i in self.indices) - ) - merged_range_count = 0 - last = None - for (start, end) in ranges: - if last is not None and start != last + 1: - merged_range_count += 1 - last = end - format2_bytes = ( - # From https://docs.microsoft.com/en-us/typography/opentype/spec/chapter2#coverage-format-2 - # uint16 coverageFormat Format identifier — format = 2 - # uint16 rangeCount Number of RangeRecords - 4 - # RangeRecord rangeRecords[rangeCount] Array of glyph ranges — ordered by startGlyphID. - # uint16 startGlyphID First glyph ID in the range - # uint16 endGlyphID Last glyph ID in the range - # uint16 startCoverageIndex Coverage Index of first glyph ID in range - + merged_range_count * 6 - ) - return min(format1_bytes, format2_bytes) - - @property - def classDef1_bytes(self): - # We can skip encoding one of the Class1 definitions, and use - # Class1=0 to represent it instead, because Class1 is gated by the - # Coverage definition. Use Class1=0 for the highest byte savings. - # Going through all options takes too long, pick the biggest class - # = what happens in otlLib.builder.ClassDefBuilder.classes() - biggest_index = max(self.indices, key=lambda i: len(self.ctx.all_class1[i])) - return _classDef_bytes( - self.ctx.all_class1_data, [i for i in self.indices if i != biggest_index] - ) - - @property - def classDef2_bytes(self): - # All Class2 need to be encoded because we can't use Class2=0 - return _classDef_bytes(self.ctx.all_class2_data, self.column_indices) - - -def cluster_pairs_by_class2_coverage_custom_cost( - font: TTFont, - pairs: Pairs, - compression: int = 5, -) -> List[Pairs]: - if not pairs: - # The subtable was actually empty? 
- return [pairs] - - # Sorted for reproducibility/determinism - all_class1 = sorted(set(pair[0] for pair in pairs)) - all_class2 = sorted(set(pair[1] for pair in pairs)) - - # Use Python's big ints for binary vectors representing each line - lines = [ - sum( - 1 << i if (class1, class2) in pairs else 0 - for i, class2 in enumerate(all_class2) - ) - for class1 in all_class1 - ] - - # Map glyph names to ids and work with ints throughout for ClassDef formats - name_to_id = font.getReverseGlyphMap() - # Each entry in the arrays below is (range_count, min_glyph_id, max_glyph_id) - all_class1_data = [ - _getClassRanges(name_to_id[name] for name in cls) for cls in all_class1 - ] - all_class2_data = [ - _getClassRanges(name_to_id[name] for name in cls) for cls in all_class2 - ] - - format1 = 0 - format2 = 0 - for pair, value in pairs.items(): - format1 |= value[0].getEffectiveFormat() if value[0] else 0 - format2 |= value[1].getEffectiveFormat() if value[1] else 0 - valueFormat1_bytes = bit_count(format1) * 2 - valueFormat2_bytes = bit_count(format2) * 2 - - ctx = ClusteringContext( - lines, - all_class1, - all_class1_data, - all_class2_data, - valueFormat1_bytes, - valueFormat2_bytes, - ) - - cluster_cache: Dict[int, Cluster] = {} - - def make_cluster(indices: int) -> Cluster: - cluster = cluster_cache.get(indices, None) - if cluster is not None: - return cluster - cluster = Cluster(ctx, indices) - cluster_cache[indices] = cluster - return cluster - - def merge(cluster: Cluster, other: Cluster) -> Cluster: - return make_cluster(cluster.indices_bitmask | other.indices_bitmask) - - # Agglomerative clustering by hand, checking the cost gain of the new - # cluster against the previously separate clusters - # Start with 1 cluster per line - # cluster = set of lines = new subtable - clusters = [make_cluster(1 << i) for i in range(len(lines))] - - # Cost of 1 cluster with everything - # `(1 << len) - 1` gives a bitmask full of 1's of length `len` - cost_before_splitting = make_cluster((1 << len(lines)) - 1).cost - log.debug(f" len(clusters) = {len(clusters)}") - - while len(clusters) > 1: - lowest_cost_change = None - best_cluster_index = None - best_other_index = None - best_merged = None - for i, cluster in enumerate(clusters): - for j, other in enumerate(clusters[i + 1 :]): - merged = merge(cluster, other) - cost_change = merged.cost - cluster.cost - other.cost - if lowest_cost_change is None or cost_change < lowest_cost_change: - lowest_cost_change = cost_change - best_cluster_index = i - best_other_index = i + 1 + j - best_merged = merged - assert lowest_cost_change is not None - assert best_cluster_index is not None - assert best_other_index is not None - assert best_merged is not None - - # If the best merge we found is still taking down the file size, then - # there's no question: we must do it, because it's beneficial in both - # ways (lower file size and lower number of subtables). However, if the - # best merge we found is not reducing file size anymore, then we need to - # look at the other stop criteria = the compression factor. - if lowest_cost_change > 0: - # Stop critera: check whether we should keep merging. - # Compute size reduction brought by splitting - cost_after_splitting = sum(c.cost for c in clusters) - # size_reduction so that after = before * (1 - size_reduction) - # E.g. before = 1000, after = 800, 1 - 800/1000 = 0.2 - size_reduction = 1 - cost_after_splitting / cost_before_splitting - - # Force more merging by taking into account the compression number. 
- # Target behaviour: compression number = 1 to 9, default 5 like gzip - # - 1 = accept to add 1 subtable to reduce size by 50% - # - 5 = accept to add 5 subtables to reduce size by 50% - # See https://github.com/harfbuzz/packtab/blob/master/Lib/packTab/__init__.py#L690-L691 - # Given the size reduction we have achieved so far, compute how many - # new subtables are acceptable. - max_new_subtables = -log2(1 - size_reduction) * compression - log.debug( - f" len(clusters) = {len(clusters):3d} size_reduction={size_reduction:5.2f} max_new_subtables={max_new_subtables}", - ) - if compression == 9: - # Override level 9 to mean: create any number of subtables - max_new_subtables = len(clusters) - - # If we have managed to take the number of new subtables below the - # threshold, then we can stop. - if len(clusters) <= max_new_subtables + 1: - break - - # No reason to stop yet, do the merge and move on to the next. - del clusters[best_other_index] - clusters[best_cluster_index] = best_merged - - # All clusters are final; turn bitmasks back into the "Pairs" format - pairs_by_class1: Dict[Tuple[str, ...], Pairs] = defaultdict(dict) - for pair, values in pairs.items(): - pairs_by_class1[pair[0]][pair] = values - pairs_groups: List[Pairs] = [] - for cluster in clusters: - pairs_group: Pairs = dict() - for i in cluster.indices: - class1 = all_class1[i] - pairs_group.update(pairs_by_class1[class1]) - pairs_groups.append(pairs_group) - return pairs_groups diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_block/table.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_block/table.py deleted file mode 100644 index e3db8584f53a99fd010bc8627bee930e96f23c97..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_block/table.py +++ /dev/null @@ -1,238 +0,0 @@ -# GFM table, https://github.github.com/gfm/#tables-extension- -import re - -from ..common.utils import charCodeAt, isSpace -from .state_block import StateBlock - -headerLineRe = re.compile(r"^:?-+:?$") -enclosingPipesRe = re.compile(r"^\||\|$") - - -def getLine(state: StateBlock, line: int): - pos = state.bMarks[line] + state.tShift[line] - maximum = state.eMarks[line] - - # return state.src.substr(pos, max - pos) - return state.src[pos:maximum] - - -def escapedSplit(string): - result = [] - pos = 0 - max = len(string) - isEscaped = False - lastPos = 0 - current = "" - ch = charCodeAt(string, pos) - - while pos < max: - if ch == 0x7C: # /* | */ - if not isEscaped: - # pipe separating cells, '|' - result.append(current + string[lastPos:pos]) - current = "" - lastPos = pos + 1 - else: - # escaped pipe, '\|' - current += string[lastPos : pos - 1] - lastPos = pos - - isEscaped = ch == 0x5C # /* \ */ - pos += 1 - - ch = charCodeAt(string, pos) - - result.append(current + string[lastPos:]) - - return result - - -def table(state: StateBlock, startLine: int, endLine: int, silent: bool): - tbodyLines = None - - # should have at least two lines - if startLine + 2 > endLine: - return False - - nextLine = startLine + 1 - - if state.sCount[nextLine] < state.blkIndent: - return False - - # if it's indented more than 3 spaces, it should be a code block - if state.sCount[nextLine] - state.blkIndent >= 4: - return False - - # first character of the second line should be '|', '-', ':', - # and no other characters are allowed but spaces; - # basically, this is the equivalent of /^[-:|][-:|\s]*$/ regexp - 
- pos = state.bMarks[nextLine] + state.tShift[nextLine] - if pos >= state.eMarks[nextLine]: - return False - first_ch = state.srcCharCode[pos] - pos += 1 - if first_ch not in {0x7C, 0x2D, 0x3A}: # not in {"|", "-", ":"} - return False - - if pos >= state.eMarks[nextLine]: - return False - second_ch = state.srcCharCode[pos] - pos += 1 - # not in {"|", "-", ":"} and not space - if second_ch not in {0x7C, 0x2D, 0x3A} and not isSpace(second_ch): - return False - - # if first character is '-', then second character must not be a space - # (due to parsing ambiguity with list) - if first_ch == 0x2D and isSpace(second_ch): - return False - - while pos < state.eMarks[nextLine]: - ch = state.srcCharCode[pos] - - # /* | */ /* - */ /* : */ - if ch not in {0x7C, 0x2D, 0x3A} and not isSpace(ch): - return False - - pos += 1 - - lineText = getLine(state, startLine + 1) - - columns = lineText.split("|") - aligns = [] - for i in range(len(columns)): - t = columns[i].strip() - if not t: - # allow empty columns before and after table, but not in between columns; - # e.g. allow ` |---| `, disallow ` ---||--- ` - if i == 0 or i == len(columns) - 1: - continue - else: - return False - - if not headerLineRe.search(t): - return False - if charCodeAt(t, len(t) - 1) == 0x3A: # /* : */ - # /* : */ - aligns.append("center" if charCodeAt(t, 0) == 0x3A else "right") - elif charCodeAt(t, 0) == 0x3A: # /* : */ - aligns.append("left") - else: - aligns.append("") - - lineText = getLine(state, startLine).strip() - if "|" not in lineText: - return False - if state.sCount[startLine] - state.blkIndent >= 4: - return False - columns = escapedSplit(lineText) - if columns and columns[0] == "": - columns.pop(0) - if columns and columns[-1] == "": - columns.pop() - - # header row will define an amount of columns in the entire table, - # and align row should be exactly the same (the rest of the rows can differ) - columnCount = len(columns) - if columnCount == 0 or columnCount != len(aligns): - return False - - if silent: - return True - - oldParentType = state.parentType - state.parentType = "table" - - # use 'blockquote' lists for termination because it's - # the most similar to tables - terminatorRules = state.md.block.ruler.getRules("blockquote") - - token = state.push("table_open", "table", 1) - token.map = tableLines = [startLine, 0] - - token = state.push("thead_open", "thead", 1) - token.map = [startLine, startLine + 1] - - token = state.push("tr_open", "tr", 1) - token.map = [startLine, startLine + 1] - - for i in range(len(columns)): - token = state.push("th_open", "th", 1) - if aligns[i]: - token.attrs = {"style": "text-align:" + aligns[i]} - - token = state.push("inline", "", 0) - # note in markdown-it this map was removed in v12.0.0 however, we keep it, - # since it is helpful to propagate to children tokens - token.map = [startLine, startLine + 1] - token.content = columns[i].strip() - token.children = [] - - token = state.push("th_close", "th", -1) - - token = state.push("tr_close", "tr", -1) - token = state.push("thead_close", "thead", -1) - - nextLine = startLine + 2 - while nextLine < endLine: - if state.sCount[nextLine] < state.blkIndent: - break - - terminate = False - for i in range(len(terminatorRules)): - if terminatorRules[i](state, nextLine, endLine, True): - terminate = True - break - - if terminate: - break - lineText = getLine(state, nextLine).strip() - if not lineText: - break - if state.sCount[nextLine] - state.blkIndent >= 4: - break - columns = escapedSplit(lineText) - if columns and columns[0] == "": - 
columns.pop(0) - if columns and columns[-1] == "": - columns.pop() - - if nextLine == startLine + 2: - token = state.push("tbody_open", "tbody", 1) - token.map = tbodyLines = [startLine + 2, 0] - - token = state.push("tr_open", "tr", 1) - token.map = [nextLine, nextLine + 1] - - for i in range(columnCount): - token = state.push("td_open", "td", 1) - if aligns[i]: - token.attrs = {"style": "text-align:" + aligns[i]} - - token = state.push("inline", "", 0) - # note in markdown-it this map was removed in v12.0.0 however, we keep it, - # since it is helpful to propagate to children tokens - token.map = [nextLine, nextLine + 1] - try: - token.content = columns[i].strip() if columns[i] else "" - except IndexError: - token.content = "" - token.children = [] - - token = state.push("td_close", "td", -1) - - token = state.push("tr_close", "tr", -1) - - nextLine += 1 - - if tbodyLines: - token = state.push("tbody_close", "tbody", -1) - tbodyLines[1] = nextLine - - token = state.push("table_close", "table", -1) - - tableLines[1] = nextLine - state.parentType = oldParentType - state.line = nextLine - return True diff --git a/spaces/lanhuan1111/hello_world/README.md b/spaces/lanhuan1111/hello_world/README.md deleted file mode 100644 index 217b82939e158f653912881920c2a3a7ceeb2bec..0000000000000000000000000000000000000000 --- a/spaces/lanhuan1111/hello_world/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hello World -emoji: 🏆 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lavrtishakov/EleutherAI-gpt-j-6B/app.py b/spaces/lavrtishakov/EleutherAI-gpt-j-6B/app.py deleted file mode 100644 index 4a01f04f8d058a138f66decb2a9da8580d991771..0000000000000000000000000000000000000000 --- a/spaces/lavrtishakov/EleutherAI-gpt-j-6B/app.py +++ /dev/null @@ -1,5 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/EleutherAI/gpt-j-6B").launch() - -import requests \ No newline at end of file diff --git a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/img_util.py b/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/img_util.py deleted file mode 100644 index d409a132ff216e6943a276fb5d8cd5f410824883..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/img_util.py +++ /dev/null @@ -1,170 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import torch -from torchvision.utils import make_grid - - -def img2tensor(imgs, bgr2rgb=True, float32=True): - """Numpy array to tensor. - - Args: - imgs (list[ndarray] | ndarray): Input images. - bgr2rgb (bool): Whether to change bgr to rgb. - float32 (bool): Whether to change to float32. - - Returns: - list[tensor] | tensor: Tensor images. If returned results only have - one element, just return tensor. - """ - - def _totensor(img, bgr2rgb, float32): - if img.shape[2] == 3 and bgr2rgb: - if img.dtype == 'float64': - img = img.astype('float32') - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = torch.from_numpy(img.transpose(2, 0, 1)) - if float32: - img = img.float() - return img - - if isinstance(imgs, list): - return [_totensor(img, bgr2rgb, float32) for img in imgs] - else: - return _totensor(imgs, bgr2rgb, float32) - - -def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)): - """Convert torch Tensors into image numpy arrays. - - After clamping to [min, max], values will be normalized to [0, 1]. 
- - Args: - tensor (Tensor or list[Tensor]): Accept shapes: - 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); - 2) 3D Tensor of shape (3/1 x H x W); - 3) 2D Tensor of shape (H x W). - Tensor channel should be in RGB order. - rgb2bgr (bool): Whether to change rgb to bgr. - out_type (numpy type): output types. If ``np.uint8``, transform outputs - to uint8 type with range [0, 255]; otherwise, float type with - range [0, 1]. Default: ``np.uint8``. - min_max (tuple[int]): min and max values for clamp. - - Returns: - (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of - shape (H x W). The channel order is BGR. - """ - if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): - raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}') - - if torch.is_tensor(tensor): - tensor = [tensor] - result = [] - for _tensor in tensor: - _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max) - _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) - - n_dim = _tensor.dim() - if n_dim == 4: - img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy() - img_np = img_np.transpose(1, 2, 0) - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 3: - img_np = _tensor.numpy() - img_np = img_np.transpose(1, 2, 0) - if img_np.shape[2] == 1: # gray image - img_np = np.squeeze(img_np, axis=2) - else: - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 2: - img_np = _tensor.numpy() - else: - raise TypeError('Only support 4D, 3D or 2D tensor. ' f'But received with dimension: {n_dim}') - if out_type == np.uint8: - # Unlike MATLAB, numpy.unit8() WILL NOT round by default. - img_np = (img_np * 255.0).round() - img_np = img_np.astype(out_type) - result.append(img_np) - if len(result) == 1: - result = result[0] - return result - - -def tensor2img_fast(tensor, rgb2bgr=True, min_max=(0, 1)): - """This implementation is slightly faster than tensor2img. - It now only supports torch tensor with shape (1, c, h, w). - - Args: - tensor (Tensor): Now only support torch tensor with (1, c, h, w). - rgb2bgr (bool): Whether to change rgb to bgr. Default: True. - min_max (tuple[int]): min and max values for clamp. - """ - output = tensor.squeeze(0).detach().clamp_(*min_max).permute(1, 2, 0) - output = (output - min_max[0]) / (min_max[1] - min_max[0]) * 255 - output = output.type(torch.uint8).cpu().numpy() - if rgb2bgr: - output = cv2.cvtColor(output, cv2.COLOR_RGB2BGR) - return output - - -def imfrombytes(content, flag='color', float32=False): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale` and `unchanged`. - float32 (bool): Whether to change to float32., If True, will also norm - to [0, 1]. Default: False. - - Returns: - ndarray: Loaded image array. - """ - img_np = np.frombuffer(content, np.uint8) - imread_flags = {'color': cv2.IMREAD_COLOR, 'grayscale': cv2.IMREAD_GRAYSCALE, 'unchanged': cv2.IMREAD_UNCHANGED} - img = cv2.imdecode(img_np, imread_flags[flag]) - if float32: - img = img.astype(np.float32) / 255. - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv's :func:`imwrite` interface. 
- auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. - """ - if auto_mkdir: - dir_name = os.path.abspath(os.path.dirname(file_path)) - os.makedirs(dir_name, exist_ok=True) - return cv2.imwrite(file_path, img, params) - - -def crop_border(imgs, crop_border): - """Crop borders of images. - - Args: - imgs (list[ndarray] | ndarray): Images with shape (h, w, c). - crop_border (int): Crop border for each end of height and weight. - - Returns: - list[ndarray]: Cropped images. - """ - if crop_border == 0: - return imgs - else: - if isinstance(imgs, list): - return [v[crop_border:-crop_border, crop_border:-crop_border, ...] for v in imgs] - else: - return imgs[crop_border:-crop_border, crop_border:-crop_border, ...] diff --git a/spaces/leurez/moss/src/views/chat/hooks/useScroll.ts b/spaces/leurez/moss/src/views/chat/hooks/useScroll.ts deleted file mode 100644 index 16987adbe152a8a2d6e519883133e8157024c10a..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/src/views/chat/hooks/useScroll.ts +++ /dev/null @@ -1,44 +0,0 @@ -import type { Ref } from 'vue' -import { nextTick, ref } from 'vue' - -type ScrollElement = HTMLDivElement | null - -interface ScrollReturn { - scrollRef: Ref - scrollToBottom: () => Promise - scrollToTop: () => Promise - scrollToBottomIfAtBottom: () => Promise -} - -export function useScroll(): ScrollReturn { - const scrollRef = ref(null) - - const scrollToBottom = async () => { - await nextTick() - if (scrollRef.value) - scrollRef.value.scrollTop = scrollRef.value.scrollHeight - } - - const scrollToTop = async () => { - await nextTick() - if (scrollRef.value) - scrollRef.value.scrollTop = 0 - } - - const scrollToBottomIfAtBottom = async () => { - await nextTick() - if (scrollRef.value) { - const threshold = 100 // 阈值,表示滚动条到底部的距离阈值 - const distanceToBottom = scrollRef.value.scrollHeight - scrollRef.value.scrollTop - scrollRef.value.clientHeight - if (distanceToBottom <= threshold) - scrollRef.value.scrollTop = scrollRef.value.scrollHeight - } - } - - return { - scrollRef, - scrollToBottom, - scrollToTop, - scrollToBottomIfAtBottom, - } -} diff --git a/spaces/lewisliuX123/wechatglm_demo/common/log.py b/spaces/lewisliuX123/wechatglm_demo/common/log.py deleted file mode 100644 index e00456e93b09f41ff5c1688883f2c72c201b38a5..0000000000000000000000000000000000000000 --- a/spaces/lewisliuX123/wechatglm_demo/common/log.py +++ /dev/null @@ -1,16 +0,0 @@ -import logging -import sys - - -def _get_logger(): - log = logging.getLogger('log') - log.setLevel(logging.INFO) - console_handle = logging.StreamHandler(sys.stdout) - console_handle.setFormatter(logging.Formatter('[%(levelname)s][%(asctime)s][%(filename)s:%(lineno)d] - %(message)s', - datefmt='%Y-%m-%d %H:%M:%S')) - log.addHandler(console_handle) - return log - - -# 日志句柄 -logger = _get_logger() \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Corel Draw X7 Free Download Full Version With Crack For Windows 8.1 64 Bit BEST.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Corel Draw X7 Free Download Full Version With Crack For Windows 8.1 64 Bit BEST.md deleted file mode 100644 index 9a3dc0eafe3e07ef69b9a2b8d5d3d4ee300df88c..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Corel Draw X7 Free Download Full Version With Crack For Windows 8.1 64 Bit BEST.md +++ /dev/null @@ -1,24 +0,0 @@ -

corel draw x7 free download full version with crack for windows 8.1 64 bit


Download ===> https://bytlly.com/2uGxgT



-
-Minimum hardware requirements for the software are:  a computer with a CPU running at least 2 GHz, 1 GB of RAM, and 2 GB of disk space. Due to its size, disk space required can be much more than the other elements.  - -Commercial applications - -Academy Award winner is available with Gold Awards from IM Global, which are awarded by the International Movie Database (IMDb). - -With over five million students in 190 countries, and millions of downloads of their textbooks, Camtasia delivers an affordable and easy to use solution for classroom teachers. There are a variety of free Camtasia components that can be used to create video tutorials for internal use and for students to use outside of class. Students can also integrate their Camtasia videos into their blogs and other social media sites.  - -Microsoft's Express Editions are offered at no cost to students and faculty for educational, outreach, and professional use. A limited number of downloads are available from the Microsoft Downloads site. - -Portable Document Format (PDF) is a format for electronic document files (including PostScript and Portable Document Format files) that has emerged as the standard for electronic documents, in particular for technical manuals and reference works. - -Open Document Format (ODF) is a file format used by some office software and peer-to-peer file sharing programs for document interchange. ODF is a specification by The OpenOffice.org project to create a single, open, collaborative office suite, including word processor, spreadsheet, presentation program, graphics, database, drawing and other applications. ODF conforms to the ISO standard ISO/IEC 29500, which specifies file format for electronic documents. ODF files are not proprietary and, for this reason, can be read and converted by other programs than OpenOffice.org. Other office software that supports OpenOffice.org (such as Microsoft Office) can also read and convert ODF files. - -The ISO/IEC 29500 standard for Portable Document Format (PDF) is based on the Open Document Format (ODF). The two are essentially equivalent, and a PDF file is usually a reformatted ODF file. In particular, the conversion from PDF to ODF can be done with the free PDF Converter provided with OpenOffice.org. - -The Portable Document Format (PDF) standard is based on the OpenDocument Format standard (ODF), which was based on the Open Office format specification, first version. - -In 4fefd39f24
-
-
-

diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/FruticulturaManuelAgustipdf.md b/spaces/lincquiQcaudo/Top-20-Diffusion/FruticulturaManuelAgustipdf.md deleted file mode 100644 index ad15bb40a9562210675a682809c41d9b38729f82..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/FruticulturaManuelAgustipdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

FruticulturaManuelAgustipdf


Download Zip ✺✺✺ https://bytlly.com/2uGy2t



-
-Similar books: Manual De Fruticultura Pdf Gratis, Fruticultura Manuel Agustí Pdf Gratis, Libros De Fruticultura to download for free, manual de fruticultura ... 4d29de3e1b
-
-
-

diff --git a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py b/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py deleted file mode 100644 index 46ae777cc97af41a531cba4e5d1ff31f2efcb468..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_mbf.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "cosface" -config.network = "mbf" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.1 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 2e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/glint360k" -config.num_classes = 360232 -config.num_image = 17091657 -config.num_epoch = 20 -config.warmup_epoch = -1 -config.decay_epoch = [8, 12, 15, 18] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/modules/enhancer.py b/spaces/lllqqq/so-vits-svc-models-pcr/modules/enhancer.py deleted file mode 100644 index 37676311f7d8dc4ddc2a5244dedc27b2437e04f5..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/modules/enhancer.py +++ /dev/null @@ -1,105 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from vdecoder.nsf_hifigan.nvSTFT import STFT -from vdecoder.nsf_hifigan.models import load_model -from torchaudio.transforms import Resample - -class Enhancer: - def __init__(self, enhancer_type, enhancer_ckpt, device=None): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = device - - if enhancer_type == 'nsf-hifigan': - self.enhancer = NsfHifiGAN(enhancer_ckpt, device=self.device) - else: - raise ValueError(f" [x] Unknown enhancer: {enhancer_type}") - - self.resample_kernel = {} - self.enhancer_sample_rate = self.enhancer.sample_rate() - self.enhancer_hop_size = self.enhancer.hop_size() - - def enhance(self, - audio, # 1, T - sample_rate, - f0, # 1, n_frames, 1 - hop_size, - adaptive_key = 0, - silence_front = 0 - ): - # enhancer start time - start_frame = int(silence_front * sample_rate / hop_size) - real_silence_front = start_frame * hop_size / sample_rate - audio = audio[:, int(np.round(real_silence_front * sample_rate)) : ] - f0 = f0[: , start_frame :, :] - - # adaptive parameters - adaptive_factor = 2 ** ( -adaptive_key / 12) - adaptive_sample_rate = 100 * int(np.round(self.enhancer_sample_rate / adaptive_factor / 100)) - real_factor = self.enhancer_sample_rate / adaptive_sample_rate - - # resample the ddsp output - if sample_rate == adaptive_sample_rate: - audio_res = audio - else: - key_str = str(sample_rate) + str(adaptive_sample_rate) - if key_str not in self.resample_kernel: - self.resample_kernel[key_str] = Resample(sample_rate, adaptive_sample_rate, lowpass_filter_width = 128).to(self.device) - audio_res = self.resample_kernel[key_str](audio) - - n_frames = int(audio_res.size(-1) // self.enhancer_hop_size + 1) - - # resample f0 - f0_np = f0.squeeze(0).squeeze(-1).cpu().numpy() - f0_np *= real_factor - time_org = (hop_size / sample_rate) * np.arange(len(f0_np)) / real_factor - time_frame = (self.enhancer_hop_size / self.enhancer_sample_rate) * np.arange(n_frames) - f0_res = np.interp(time_frame, time_org, f0_np, left=f0_np[0], right=f0_np[-1]) - f0_res = 
torch.from_numpy(f0_res).unsqueeze(0).float().to(self.device) # 1, n_frames - - # enhance - enhanced_audio, enhancer_sample_rate = self.enhancer(audio_res, f0_res) - - # resample the enhanced output - if adaptive_factor != 0: - key_str = str(adaptive_sample_rate) + str(enhancer_sample_rate) - if key_str not in self.resample_kernel: - self.resample_kernel[key_str] = Resample(adaptive_sample_rate, enhancer_sample_rate, lowpass_filter_width = 128).to(self.device) - enhanced_audio = self.resample_kernel[key_str](enhanced_audio) - - # pad the silence frames - if start_frame > 0: - enhanced_audio = F.pad(enhanced_audio, (int(np.round(enhancer_sample_rate * real_silence_front)), 0)) - - return enhanced_audio, enhancer_sample_rate - - -class NsfHifiGAN(torch.nn.Module): - def __init__(self, model_path, device=None): - super().__init__() - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = device - print('| Load HifiGAN: ', model_path) - self.model, self.h = load_model(model_path, device=self.device) - - def sample_rate(self): - return self.h.sampling_rate - - def hop_size(self): - return self.h.hop_size - - def forward(self, audio, f0): - stft = STFT( - self.h.sampling_rate, - self.h.num_mels, - self.h.n_fft, - self.h.win_size, - self.h.hop_size, - self.h.fmin, - self.h.fmax) - with torch.no_grad(): - mel = stft.get_mel(audio) - enhanced_audio = self.model(mel, f0[:,:mel.size(-1)]).view(-1) - return enhanced_audio, self.h.sampling_rate \ No newline at end of file diff --git a/spaces/lojban/text-to-speech/README.md b/spaces/lojban/text-to-speech/README.md deleted file mode 100644 index a226399c1ac41953f5e3a7396f7ce12688bb1f21..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Lojban text-to-speech -emoji: 🌼⚙️ -colorFrom: green -colorTo: yellow -task: text-to-speech -tags: - - audio - - text-to-speech -language: jbo -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false -license: mit ---- - -tts for Lojban using [VITS TTS models](https://github.com/jaywalnut310/vits). \ No newline at end of file diff --git a/spaces/lost123/DeepDanbooru_string/README.md b/spaces/lost123/DeepDanbooru_string/README.md deleted file mode 100644 index 4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000 --- a/spaces/lost123/DeepDanbooru_string/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: DeepDanbooru String -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -duplicated_from: NoCrypt/DeepDanbooru_string ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/lu2000/anything-midjourney-v4-1/app.py b/spaces/lu2000/anything-midjourney-v4-1/app.py deleted file mode 100644 index 262436d8b50f87b0953c645576cc3184b3b27b43..0000000000000000000000000000000000000000 --- a/spaces/lu2000/anything-midjourney-v4-1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Joeythemonster/anything-midjourney-v-4-1").launch() \ No newline at end of file diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/detail/init.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/detail/init.h deleted file mode 100644 index 3ef78c1179f5b533c3ba3f637420c8125d632a7f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/detail/init.h +++ /dev/null @@ -1,336 +0,0 @@ -/* - pybind11/detail/init.h: init factory function implementation and support code. - - Copyright (c) 2017 Jason Rhinelander - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#pragma once - -#include "class.h" - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) -PYBIND11_NAMESPACE_BEGIN(detail) - -template <> -class type_caster { -public: - bool load(handle h, bool) { - value = reinterpret_cast(h.ptr()); - return true; - } - - template using cast_op_type = value_and_holder &; - operator value_and_holder &() { return *value; } - static constexpr auto name = _(); - -private: - value_and_holder *value = nullptr; -}; - -PYBIND11_NAMESPACE_BEGIN(initimpl) - -inline void no_nullptr(void *ptr) { - if (!ptr) throw type_error("pybind11::init(): factory function returned nullptr"); -} - -// Implementing functions for all forms of py::init<...> and py::init(...) -template using Cpp = typename Class::type; -template using Alias = typename Class::type_alias; -template using Holder = typename Class::holder_type; - -template using is_alias_constructible = std::is_constructible, Cpp &&>; - -// Takes a Cpp pointer and returns true if it actually is a polymorphic Alias instance. -template = 0> -bool is_alias(Cpp *ptr) { - return dynamic_cast *>(ptr) != nullptr; -} -// Failing fallback version of the above for a no-alias class (always returns false) -template -constexpr bool is_alias(void *) { return false; } - -// Constructs and returns a new object; if the given arguments don't map to a constructor, we fall -// back to brace aggregate initiailization so that for aggregate initialization can be used with -// py::init, e.g. `py::init` to initialize a `struct T { int a; int b; }`. For -// non-aggregate types, we need to use an ordinary T(...) constructor (invoking as `T{...}` usually -// works, but will not do the expected thing when `T` has an `initializer_list` constructor). -template ::value, int> = 0> -inline Class *construct_or_initialize(Args &&...args) { return new Class(std::forward(args)...); } -template ::value, int> = 0> -inline Class *construct_or_initialize(Args &&...args) { return new Class{std::forward(args)...}; } - -// Attempts to constructs an alias using a `Alias(Cpp &&)` constructor. This allows types with -// an alias to provide only a single Cpp factory function as long as the Alias can be -// constructed from an rvalue reference of the base Cpp type. This means that Alias classes -// can, when appropriate, simply define a `Alias(Cpp &&)` constructor rather than needing to -// inherit all the base class constructors. 
-template -void construct_alias_from_cpp(std::true_type /*is_alias_constructible*/, - value_and_holder &v_h, Cpp &&base) { - v_h.value_ptr() = new Alias(std::move(base)); -} -template -[[noreturn]] void construct_alias_from_cpp(std::false_type /*!is_alias_constructible*/, - value_and_holder &, Cpp &&) { - throw type_error("pybind11::init(): unable to convert returned instance to required " - "alias class: no `Alias(Class &&)` constructor available"); -} - -// Error-generating fallback for factories that don't match one of the below construction -// mechanisms. -template -void construct(...) { - static_assert(!std::is_same::value /* always false */, - "pybind11::init(): init function must return a compatible pointer, " - "holder, or value"); -} - -// Pointer return v1: the factory function returns a class pointer for a registered class. -// If we don't need an alias (because this class doesn't have one, or because the final type is -// inherited on the Python side) we can simply take over ownership. Otherwise we need to try to -// construct an Alias from the returned base instance. -template -void construct(value_and_holder &v_h, Cpp *ptr, bool need_alias) { - no_nullptr(ptr); - if (Class::has_alias && need_alias && !is_alias(ptr)) { - // We're going to try to construct an alias by moving the cpp type. Whether or not - // that succeeds, we still need to destroy the original cpp pointer (either the - // moved away leftover, if the alias construction works, or the value itself if we - // throw an error), but we can't just call `delete ptr`: it might have a special - // deleter, or might be shared_from_this. So we construct a holder around it as if - // it was a normal instance, then steal the holder away into a local variable; thus - // the holder and destruction happens when we leave the C++ scope, and the holder - // class gets to handle the destruction however it likes. - v_h.value_ptr() = ptr; - v_h.set_instance_registered(true); // To prevent init_instance from registering it - v_h.type->init_instance(v_h.inst, nullptr); // Set up the holder - Holder temp_holder(std::move(v_h.holder>())); // Steal the holder - v_h.type->dealloc(v_h); // Destroys the moved-out holder remains, resets value ptr to null - v_h.set_instance_registered(false); - - construct_alias_from_cpp(is_alias_constructible{}, v_h, std::move(*ptr)); - } else { - // Otherwise the type isn't inherited, so we don't need an Alias - v_h.value_ptr() = ptr; - } -} - -// Pointer return v2: a factory that always returns an alias instance ptr. We simply take over -// ownership of the pointer. -template = 0> -void construct(value_and_holder &v_h, Alias *alias_ptr, bool) { - no_nullptr(alias_ptr); - v_h.value_ptr() = static_cast *>(alias_ptr); -} - -// Holder return: copy its pointer, and move or copy the returned holder into the new instance's -// holder. This also handles types like std::shared_ptr and std::unique_ptr where T is a -// derived type (through those holder's implicit conversion from derived class holder constructors). 
-template -void construct(value_and_holder &v_h, Holder holder, bool need_alias) { - auto *ptr = holder_helper>::get(holder); - no_nullptr(ptr); - // If we need an alias, check that the held pointer is actually an alias instance - if (Class::has_alias && need_alias && !is_alias(ptr)) - throw type_error("pybind11::init(): construction failed: returned holder-wrapped instance " - "is not an alias instance"); - - v_h.value_ptr() = ptr; - v_h.type->init_instance(v_h.inst, &holder); -} - -// return-by-value version 1: returning a cpp class by value. If the class has an alias and an -// alias is required the alias must have an `Alias(Cpp &&)` constructor so that we can construct -// the alias from the base when needed (i.e. because of Python-side inheritance). When we don't -// need it, we simply move-construct the cpp value into a new instance. -template -void construct(value_and_holder &v_h, Cpp &&result, bool need_alias) { - static_assert(std::is_move_constructible>::value, - "pybind11::init() return-by-value factory function requires a movable class"); - if (Class::has_alias && need_alias) - construct_alias_from_cpp(is_alias_constructible{}, v_h, std::move(result)); - else - v_h.value_ptr() = new Cpp(std::move(result)); -} - -// return-by-value version 2: returning a value of the alias type itself. We move-construct an -// Alias instance (even if no the python-side inheritance is involved). The is intended for -// cases where Alias initialization is always desired. -template -void construct(value_and_holder &v_h, Alias &&result, bool) { - static_assert(std::is_move_constructible>::value, - "pybind11::init() return-by-alias-value factory function requires a movable alias class"); - v_h.value_ptr() = new Alias(std::move(result)); -} - -// Implementing class for py::init<...>() -template -struct constructor { - template = 0> - static void execute(Class &cl, const Extra&... extra) { - cl.def("__init__", [](value_and_holder &v_h, Args... args) { - v_h.value_ptr() = construct_or_initialize>(std::forward(args)...); - }, is_new_style_constructor(), extra...); - } - - template , Args...>::value, int> = 0> - static void execute(Class &cl, const Extra&... extra) { - cl.def("__init__", [](value_and_holder &v_h, Args... args) { - if (Py_TYPE(v_h.inst) == v_h.type->type) - v_h.value_ptr() = construct_or_initialize>(std::forward(args)...); - else - v_h.value_ptr() = construct_or_initialize>(std::forward(args)...); - }, is_new_style_constructor(), extra...); - } - - template , Args...>::value, int> = 0> - static void execute(Class &cl, const Extra&... extra) { - cl.def("__init__", [](value_and_holder &v_h, Args... args) { - v_h.value_ptr() = construct_or_initialize>(std::forward(args)...); - }, is_new_style_constructor(), extra...); - } -}; - -// Implementing class for py::init_alias<...>() -template struct alias_constructor { - template , Args...>::value, int> = 0> - static void execute(Class &cl, const Extra&... extra) { - cl.def("__init__", [](value_and_holder &v_h, Args... 
args) { - v_h.value_ptr() = construct_or_initialize>(std::forward(args)...); - }, is_new_style_constructor(), extra...); - } -}; - -// Implementation class for py::init(Func) and py::init(Func, AliasFunc) -template , typename = function_signature_t> -struct factory; - -// Specialization for py::init(Func) -template -struct factory { - remove_reference_t class_factory; - - factory(Func &&f) : class_factory(std::forward(f)) { } - - // The given class either has no alias or has no separate alias factory; - // this always constructs the class itself. If the class is registered with an alias - // type and an alias instance is needed (i.e. because the final type is a Python class - // inheriting from the C++ type) the returned value needs to either already be an alias - // instance, or the alias needs to be constructible from a `Class &&` argument. - template - void execute(Class &cl, const Extra &...extra) && { - #if defined(PYBIND11_CPP14) - cl.def("__init__", [func = std::move(class_factory)] - #else - auto &func = class_factory; - cl.def("__init__", [func] - #endif - (value_and_holder &v_h, Args... args) { - construct(v_h, func(std::forward(args)...), - Py_TYPE(v_h.inst) != v_h.type->type); - }, is_new_style_constructor(), extra...); - } -}; - -// Specialization for py::init(Func, AliasFunc) -template -struct factory { - static_assert(sizeof...(CArgs) == sizeof...(AArgs), - "pybind11::init(class_factory, alias_factory): class and alias factories " - "must have identical argument signatures"); - static_assert(all_of...>::value, - "pybind11::init(class_factory, alias_factory): class and alias factories " - "must have identical argument signatures"); - - remove_reference_t class_factory; - remove_reference_t alias_factory; - - factory(CFunc &&c, AFunc &&a) - : class_factory(std::forward(c)), alias_factory(std::forward(a)) { } - - // The class factory is called when the `self` type passed to `__init__` is the direct - // class (i.e. not inherited), the alias factory when `self` is a Python-side subtype. - template - void execute(Class &cl, const Extra&... extra) && { - static_assert(Class::has_alias, "The two-argument version of `py::init()` can " - "only be used if the class has an alias"); - #if defined(PYBIND11_CPP14) - cl.def("__init__", [class_func = std::move(class_factory), alias_func = std::move(alias_factory)] - #else - auto &class_func = class_factory; - auto &alias_func = alias_factory; - cl.def("__init__", [class_func, alias_func] - #endif - (value_and_holder &v_h, CArgs... args) { - if (Py_TYPE(v_h.inst) == v_h.type->type) - // If the instance type equals the registered type we don't have inheritance, so - // don't need the alias and can construct using the class function: - construct(v_h, class_func(std::forward(args)...), false); - else - construct(v_h, alias_func(std::forward(args)...), true); - }, is_new_style_constructor(), extra...); - } -}; - -/// Set just the C++ state. Same as `__init__`. 
-template -void setstate(value_and_holder &v_h, T &&result, bool need_alias) { - construct(v_h, std::forward(result), need_alias); -} - -/// Set both the C++ and Python states -template ::value, int> = 0> -void setstate(value_and_holder &v_h, std::pair &&result, bool need_alias) { - construct(v_h, std::move(result.first), need_alias); - setattr((PyObject *) v_h.inst, "__dict__", result.second); -} - -/// Implementation for py::pickle(GetState, SetState) -template , typename = function_signature_t> -struct pickle_factory; - -template -struct pickle_factory { - static_assert(std::is_same, intrinsic_t>::value, - "The type returned by `__getstate__` must be the same " - "as the argument accepted by `__setstate__`"); - - remove_reference_t get; - remove_reference_t set; - - pickle_factory(Get get, Set set) - : get(std::forward(get)), set(std::forward(set)) { } - - template - void execute(Class &cl, const Extra &...extra) && { - cl.def("__getstate__", std::move(get)); - -#if defined(PYBIND11_CPP14) - cl.def("__setstate__", [func = std::move(set)] -#else - auto &func = set; - cl.def("__setstate__", [func] -#endif - (value_and_holder &v_h, ArgState state) { - setstate(v_h, func(std::forward(state)), - Py_TYPE(v_h.inst) != v_h.type->type); - }, is_new_style_constructor(), extra...); - } -}; - -PYBIND11_NAMESPACE_END(initimpl) -PYBIND11_NAMESPACE_END(detail) -PYBIND11_NAMESPACE_END(pybind11) diff --git a/spaces/ma-xu/LIVE/thrust/thrust/iterator/detail/iterator_traversal_tags.h b/spaces/ma-xu/LIVE/thrust/thrust/iterator/detail/iterator_traversal_tags.h deleted file mode 100644 index 73cd1f76af298ab1e88aad2c91c9266be77d793f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/iterator/detail/iterator_traversal_tags.h +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -namespace thrust -{ - -// define Boost's traversal tags -struct no_traversal_tag {}; - -struct incrementable_traversal_tag - : no_traversal_tag {}; - -struct single_pass_traversal_tag - : incrementable_traversal_tag {}; - -struct forward_traversal_tag - : single_pass_traversal_tag {}; - -struct bidirectional_traversal_tag - : forward_traversal_tag {}; - -struct random_access_traversal_tag - : bidirectional_traversal_tag {}; - -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/sequence.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/sequence.h deleted file mode 100644 index c33b2d4333ce2ded0ffe73c23c20a80c5a35b928..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/sequence.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits sequence -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/tuple.h b/spaces/ma-xu/LIVE/thrust/thrust/tuple.h deleted file mode 100644 index 930f9032611d9f86caf9a50adb576f047eafd14d..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/tuple.h +++ /dev/null @@ -1,585 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file tuple.h - * \brief A type encapsulating a heterogeneous collection of elements - */ - -/* - * Copyright (C) 1999, 2000 Jaakko Järvi (jaakko.jarvi@cs.utu.fi) - * - * Distributed under the Boost Software License, Version 1.0. - * (See accompanying NOTICE file for the complete license) - * - * For more information, see http://www.boost.org - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ - -/*! \addtogroup utility - * \{ - */ - -/*! \addtogroup tuple - * \{ - */ - -/*! \cond - */ - -struct null_type; - -/*! \endcond - */ - -/*! This metafunction returns the type of a - * \p tuple's Nth element. - * - * \tparam N This parameter selects the element of interest. - * \tparam T A \c tuple type of interest. - * - * \see pair - * \see tuple - */ -template - struct tuple_element -{ - private: - typedef typename T::tail_type Next; - - public: - /*! The result of this metafunction is returned in \c type. - */ - typedef typename tuple_element::type type; -}; // end tuple_element - -/*! This metafunction returns the number of elements - * of a \p tuple type of interest. - * - * \tparam T A \c tuple type of interest. - * - * \see pair - * \see tuple - */ -template - struct tuple_size -{ - /*! The result of this metafunction is returned in \c value. - */ - static const int value = 1 + tuple_size::value; -}; // end tuple_size - -// get function for non-const cons-lists, returns a reference to the element - -/*! The \p get function returns a reference to a \p tuple element of - * interest. - * - * \param t A reference to a \p tuple of interest. - * \return A reference to \p t's Nth element. - * - * \tparam N The index of the element of interest. - * - * The following code snippet demonstrates how to use \p get to print - * the value of a \p tuple element. - * - * \code - * #include - * #include - * ... 
- * thrust::tuple t(13, "thrust"); - * - * std::cout << "The 1st value of t is " << thrust::get<0>(t) << std::endl; - * \endcode - * - * \see pair - * \see tuple - */ -template -__host__ __device__ -inline typename access_traits< - typename tuple_element >::type - >::non_const_type -get(detail::cons& t); - - -/*! The \p get function returns a \c const reference to a \p tuple element of - * interest. - * - * \param t A reference to a \p tuple of interest. - * \return A \c const reference to \p t's Nth element. - * - * \tparam N The index of the element of interest. - * - * The following code snippet demonstrates how to use \p get to print - * the value of a \p tuple element. - * - * \code - * #include - * #include - * ... - * thrust::tuple t(13, "thrust"); - * - * std::cout << "The 1st value of t is " << thrust::get<0>(t) << std::endl; - * \endcode - * - * \see pair - * \see tuple - */ -template -__host__ __device__ -inline typename access_traits< - typename tuple_element >::type - >::const_type -get(const detail::cons& t); - - - -/*! \p tuple is a class template that can be instantiated with up to ten arguments. - * Each template argument specifies the type of element in the \p tuple. - * Consequently, tuples are heterogeneous, fixed-size collections of values. An - * instantiation of \p tuple with two arguments is similar to an instantiation - * of \p pair with the same two arguments. Individual elements of a \p tuple may - * be accessed with the \p get function. - * - * \tparam TN The type of the N \c tuple element. Thrust's \p tuple - * type currently supports up to ten elements. - * - * The following code snippet demonstrates how to create a new \p tuple object - * and inspect and modify the value of its elements. - * - * \code - * #include - * #include - * ... - * // create a tuple containing an int, a float, and a string - * thrust::tuple t(13, 0.1f, "thrust"); - * - * // individual members are accessed with the free function get - * std::cout << "The first element's value is " << thrust::get<0>(t) << std::endl; - * - * // or the member function get - * std::cout << "The second element's value is " << t.get<1>() << std::endl; - * - * // we can also modify elements with the same function - * thrust::get<0>(t) += 10; - * \endcode - * - * \see pair - * \see get - * \see make_tuple - * \see tuple_element - * \see tuple_size - * \see tie - */ -template - class tuple : - public detail::map_tuple_to_cons::type -{ - /*! \cond - */ - - private: - typedef typename detail::map_tuple_to_cons::type inherited; - - /*! \endcond - */ - - public: - /*! \p tuple's no-argument constructor initializes each element. - */ - inline __host__ __device__ - tuple(void) {} - - /*! \p tuple's one-argument constructor copy constructs the first element from the given parameter - * and intializes all other elements. - * \param t0 The value to assign to this \p tuple's first element. - */ - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0) - : inherited(t0, - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type())) {} - - /*! \p tuple's one-argument constructor copy constructs the first two elements from the given parameters - * and intializes all other elements. - * \param t0 The value to assign to this \p tuple's first element. - * \param t1 The value to assign to this \p tuple's second element. 
- * \note \p tuple's constructor has ten variants of this form, the rest of which are ommitted here for brevity. - */ - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1) - : inherited(t0, t1, - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type())) {} - - /*! \cond - */ - - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1, - typename access_traits::parameter_type t2) - : inherited(t0, t1, t2, - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type())) {} - - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1, - typename access_traits::parameter_type t2, - typename access_traits::parameter_type t3) - : inherited(t0, t1, t2, t3, - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type())) {} - - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1, - typename access_traits::parameter_type t2, - typename access_traits::parameter_type t3, - typename access_traits::parameter_type t4) - : inherited(t0, t1, t2, t3, t4, - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type())) {} - - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1, - typename access_traits::parameter_type t2, - typename access_traits::parameter_type t3, - typename access_traits::parameter_type t4, - typename access_traits::parameter_type t5) - : inherited(t0, t1, t2, t3, t4, t5, - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type())) {} - - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1, - typename access_traits::parameter_type t2, - typename access_traits::parameter_type t3, - typename access_traits::parameter_type t4, - typename access_traits::parameter_type t5, - typename access_traits::parameter_type t6) - : inherited(t0, t1, t2, t3, t4, t5, t6, - static_cast(null_type()), - static_cast(null_type()), - static_cast(null_type())) {} - - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1, - typename access_traits::parameter_type t2, - typename access_traits::parameter_type t3, - typename access_traits::parameter_type t4, - typename access_traits::parameter_type t5, - typename access_traits::parameter_type t6, - typename access_traits::parameter_type t7) - : inherited(t0, t1, t2, t3, t4, t5, t6, t7, - static_cast(null_type()), - static_cast(null_type())) {} - - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1, - typename access_traits::parameter_type t2, - typename access_traits::parameter_type t3, - typename access_traits::parameter_type t4, - typename access_traits::parameter_type t5, - typename 
access_traits::parameter_type t6, - typename access_traits::parameter_type t7, - typename access_traits::parameter_type t8) - : inherited(t0, t1, t2, t3, t4, t5, t6, t7, t8, - static_cast(null_type())) {} - - inline __host__ __device__ - tuple(typename access_traits::parameter_type t0, - typename access_traits::parameter_type t1, - typename access_traits::parameter_type t2, - typename access_traits::parameter_type t3, - typename access_traits::parameter_type t4, - typename access_traits::parameter_type t5, - typename access_traits::parameter_type t6, - typename access_traits::parameter_type t7, - typename access_traits::parameter_type t8, - typename access_traits::parameter_type t9) - : inherited(t0, t1, t2, t3, t4, t5, t6, t7, t8, t9) {} - - - template - inline __host__ __device__ - tuple(const detail::cons& p) : inherited(p) {} - - __thrust_exec_check_disable__ - template - inline __host__ __device__ - tuple& operator=(const detail::cons& k) - { - inherited::operator=(k); - return *this; - } - - /*! \endcond - */ - - /*! This assignment operator allows assigning the first two elements of this \p tuple from a \p pair. - * \param k A \p pair to assign from. - */ - __thrust_exec_check_disable__ - template - __host__ __device__ inline - tuple& operator=(const thrust::pair& k) { - //BOOST_STATIC_ASSERT(length::value == 2);// check_length = 2 - this->head = k.first; - this->tail.head = k.second; - return *this; - } - - /*! \p swap swaps the elements of two tuples. - * - * \param t The other tuple with which to swap. - */ - inline __host__ __device__ - void swap(tuple &t) - { - inherited::swap(t); - } -}; - -/*! \cond - */ - -template <> -class tuple : - public null_type -{ -public: - typedef null_type inherited; -}; - -/*! \endcond - */ - - -/*! This version of \p make_tuple creates a new \c tuple object from a - * single object. - * - * \param t0 The object to copy from. - * \return A \p tuple object with a single member which is a copy of \p t0. - */ -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0); - -/*! This version of \p make_tuple creates a new \c tuple object from two - * objects. - * - * \param t0 The first object to copy from. - * \param t1 The second object to copy from. - * \return A \p tuple object with two members which are copies of \p t0 - * and \p t1. - * - * \note \p make_tuple has ten variants, the rest of which are omitted here - * for brevity. - */ -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1); - -/*! This version of \p tie creates a new \c tuple whose single element is - * a reference which refers to this function's argument. - * - * \param t0 The object to reference. - * \return A \p tuple object with one member which is a reference to \p t0. - */ -template -__host__ __device__ inline -tuple tie(T0& t0); - -/*! This version of \p tie creates a new \c tuple of references object which - * refers to this function's arguments. - * - * \param t0 The first object to reference. - * \param t1 The second object to reference. - * \return A \p tuple object with two members which are references to \p t0 - * and \p t1. - * - * \note \p tie has ten variants, the rest of which are omitted here for - * brevity. - */ -template -__host__ __device__ inline -tuple tie(T0& t0, T1& t1); - -/*! \p swap swaps the contents of two tuples. - * - * \param x The first \p tuple to swap. - * \param y The second \p tuple to swap. 
- */ -template< - typename T0, typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8, typename T9, - typename U0, typename U1, typename U2, typename U3, typename U4, typename U5, typename U6, typename U7, typename U8, typename U9 -> -inline __host__ __device__ -void swap(tuple &x, - tuple &y); - - - -/*! \cond - */ - -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1, const T2& t2); - -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3); - -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4); - -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5); - -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5, const T6& t6); - -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5, const T6& t6, const T7& t7); - -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5, const T6& t6, const T7& t7, const T8& t8); - -template -__host__ __device__ inline - typename detail::make_tuple_mapper::type - make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5, const T6& t6, const T7& t7, const T8& t8, const T9& t9); - -template -__host__ __device__ inline -tuple tie(T0 &t0, T1 &t1, T2 &t2); - -template -__host__ __device__ inline -tuple tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3); - -template -__host__ __device__ inline -tuple tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4); - -template -__host__ __device__ inline -tuple tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5); - -template -__host__ __device__ inline -tuple tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5, T6 &t6); - -template -__host__ __device__ inline -tuple tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5, T6 &t6, T7 &t7); - -template -__host__ __device__ inline -tuple tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5, T6 &t6, T7 &t7, T8 &t8); - -template -__host__ __device__ inline -tuple tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5, T6 &t6, T7 &t7, T8 &t8, T9 &t9); - - -__host__ __device__ inline -bool operator==(const null_type&, const null_type&); - -__host__ __device__ inline -bool operator>=(const null_type&, const null_type&); - -__host__ __device__ inline -bool operator<=(const null_type&, const null_type&); - -__host__ __device__ inline -bool operator!=(const null_type&, const null_type&); - -__host__ __device__ inline -bool operator<(const null_type&, const null_type&); - -__host__ __device__ inline -bool operator>(const null_type&, const null_type&); - -/*! \endcond - */ - -/*! \} // tuple - */ - -/*! 
\} // utility - */ - -} // end thrust - diff --git a/spaces/marcusj83/MusicGenbruh/tests/utils/__init__.py b/spaces/marcusj83/MusicGenbruh/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/marcusj83/MusicGenbruh/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/merve/dataset-worldviews/public/dataset-worldviews/index.html b/spaces/merve/dataset-worldviews/public/dataset-worldviews/index.html deleted file mode 100644 index 7cc91d84d612bf8097d9568c37b1382c1dbf686f..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/dataset-worldviews/index.html +++ /dev/null @@ -1,288 +0,0 @@ - - - - - - - - - - - - - - - - - - Datasets Have Worldviews - - - - - - - - - - - - - - - -
- -
- -

Datasets Have Worldviews

-
Every dataset communicates a different perspective. When you shift your perspective, your conclusions can shift, too.
-

Suppose you have a dataset of shapes. They can either be shaded or unshaded. They look something like this:

- -
- -

You built a supervised machine learning classifier that will automatically classify each shape as shaded or unshaded. You call it the “Is-Shaded Classifier”.

- -

Click “Run Classifier” to see how your model performs.

-

-
-
-
- -

It’s not perfect— some of the shapes are definitely misclassified. You want to improve your model!

- -

To do so, you want to know more about the kinds of mistakes your model is making.

- -

Thinking About Bias

- -

In training, you only gave your model the raw image of each shape and one ground truth label: shaded and unshaded. But maybe something about your model—the distribution of the training data you used, the architecture you chose, or how you set your hyperparameters—resulted in your model performing better on some shapes than others.

- -

In fact, you’ve seen a lot of papers and articles citing issues of biased model performance between circles, triangles, and rectangles in shape data. One paper finds that shape detection algorithms tend to do worse on triangles; another article says color accuracy is an issue with circles. So you wonder: are there biases in your model’s misclassifications?

- -
Three abstract drawings of papers or articles with headlines 'Shape detection: biased against triangles?', 'Geometry experts call for more accurate rectangle data, cite fairness concerns', and 'Increasing color accuracy in circles'
- -

You want to make sure that your model is performing equally well across circles, triangles, and rectangles, so you decide to do a fairness analysis.
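One way to picture this kind of fairness analysis is as a disaggregated accuracy report: the same predictions, broken out by shape group. The sketch below is purely illustrative; the shape groups, labels, and predictions are invented stand-ins, not the data behind the interactive figures in this piece.

```python
# Hypothetical disaggregated ("fairness") evaluation. All records are made up.
from collections import defaultdict

# Each record: (shape_group, true_is_shaded, predicted_is_shaded)
records = [
    ("circle",    True,  True),
    ("circle",    False, True),   # error
    ("triangle",  True,  False),  # error
    ("triangle",  False, False),
    ("rectangle", True,  True),
    ("rectangle", False, False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(total):
    acc = correct[group] / total[group]
    print(f"{group:<10} accuracy = {acc:.2f}  (n = {total[group]})")
```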

- -

There’s just one issue: you don’t have labels for which of your shapes are circles, triangles, or rectangles.

- -

So, you decide to send your data to data labelers.

- -
Different shapes with an arrow pointing to a group of abstract people.
- -

You receive feedback from your data labeling team that they’re not sure what to do with the shapes that aren’t exactly circles, triangles, or rectangles.

- -
An image of a computer interface and the instructions 'Please select the name of the shape below'. There is a lumpy, blob-like shape with three checkboxes that say 'circle', 'triangle', and 'rectangle'. There is a text box with a question mark next to the interface.
- -

For the shapes that are unclear, you can have them use their best guess or simply label them as “other”. Then, you can finally do some fairness analysis!

- -

Below is the interface they see:

- -
- -

These shapes should be labeled…

-
- -
- -
- -

If you go back and change the labelers’ instructions, which shapes do you perform worst on? Where do you find bias?

- -

You notice that your results hinge on how you choose to classify the shapes in your data.
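The same point can be made concretely: hold the model's mistakes fixed and change only the grouping scheme, and the "worst-performing group" can change. The snippet below is a minimal, made-up sketch of that effect; the fine-grained shape names and the two mappings are assumptions for illustration only.

```python
# Illustrative only: one fixed set of prediction outcomes, regrouped under two
# hypothetical labeling schemes, giving two different "worst" groups.
from collections import defaultdict

# (fine-grained shape, was the prediction correct?)
results = [
    ("small_circle", True), ("big_circle", False),
    ("pointy_triangle", False), ("rounded_triangle", True),
    ("rectangle", True), ("weird_blob", False),
]

schemes = {
    "strict (blobs are 'other')": {
        "small_circle": "circle", "big_circle": "circle",
        "pointy_triangle": "triangle", "rounded_triangle": "triangle",
        "rectangle": "rectangle", "weird_blob": "other",
    },
    "lenient (everything gets its closest shape)": {
        "small_circle": "circle", "big_circle": "circle",
        "pointy_triangle": "triangle", "rounded_triangle": "circle",
        "rectangle": "rectangle", "weird_blob": "circle",
    },
}

for name, mapping in schemes.items():
    correct, total = defaultdict(int), defaultdict(int)
    for shape, ok in results:
        group = mapping[shape]
        total[group] += 1
        correct[group] += int(ok)
    worst = min(total, key=lambda g: correct[g] / total[g])
    print(f"{name}: worst group = {worst}")
```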

- -

Because ultimately, this isn’t a world of only circles, triangles, and rectangles!

- -

Thinking About Classification

- -

What could we find out about our classifier’s performance if we used different categories altogether?

- -

All shapes are basically…

-

Everything else should be labeled…

- -

-

-

-

- -

With each of the different categories, which shapes do you perform worst on? Where do you find bias?

- -

Each way of categorizing your shapes takes a different stance about what’s important . Each one makes some features more important than others, it make some distinctions visible and other distinctions invisible, and make some things easy to classify while others become outliers.

- -

And each one tells you something different about what kind of bias your classifier has!

- -

Grouping and Regrouping

- -

Here’s another way to look at the same results. We can draw all the shapes that were correctly classified above the dashed line, and all the incorrectly classified shapes below it.

- -
- -

We’re still looking at the same model making the same classification on the same shapes, so the same shapes stay above and below the line. But each way of grouping the results distributes the errors differently— each way tells you something different.

- -

Labels Tell Stories

- -

The decisions you make about classification, however small…

- -

All shapes are basically…

- -

…begin to shape others’ decisions…

- -
- -

…they shape the analysis you can do…

- -
- -

…and they shape the kinds of conversations that happen.

- -

- -

It’s natural to want to find a way out of this problem by gathering more features or collecting more data. If we just have enough detail on enough data, surely we can avoid making these kinds of decisions, right?

- -

Unfortunately, that isn’t the case. Describing the world around us in any way—whether we’re telling a friend a story or telling a computer about shapes—requires us to choose what information is important to convey and what tools we want to use to convey it.

- -

Whether we think about it or not, we’re always making choices about classification. -

- -

All people are basically… men or women

-

All food is basically… sweet or savory

-

All content is basically… kid-friendly or adult

-

All speech is basically… hate speech or acceptable speech

- -

All results are basically… significant or insignificant

- -

And as we saw with shapes, all of these choices make some features more important than others, make some distinctions visible and other distinctions invisible, and make some things easy to classify while others become outliers.

- -

In Practice

- -

Let’s take a closer look at how this plays out in real machine learning applications. One straightforward example is in supervised object detection tasks.

- - -

For example, let’s imagine we want to train an object detection model on a dataset including this image:

- -

Image of the Seattle skyline
Source: Wikimedia Commons

- -

We could give it the following ground truth bounding boxes:

- -

Image of the Seattle skyline with boxes around several items in the picture with labels like 'building' and 'tree'.

- -

This looks objective, right? After all, a building is a building, a bush is a bush, and a mountain is a mountain!

-

But even labeling the same regions in the same image, you can communicate a very different perspective:

- -

Image of the Seattle skyline with boxes around several items in the picture, with labels like 'plant, non medicinal' and 'structure, nonreligious'.
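A minimal sketch of what this looks like in annotation data: the very same box coordinates can carry very different label vocabularies. The coordinates and label strings below are invented for illustration and are not the actual annotations behind these figures.

```python
# Illustrative only: one region of an image, annotated under two worldviews.
box = {"x": 120, "y": 40, "w": 200, "h": 300}  # same pixels in both cases

labels_v1 = {"region_1": (box, "building")}                   # one "ground truth"
labels_v2 = {"region_1": (box, "structure, nonreligious")}    # another worldview

for name, ann in [("v1", labels_v1), ("v2", labels_v2)]:
    b, label = ann["region_1"]
    print(f"{name}: box at ({b['x']}, {b['y']}), size {b['w']}x{b['h']}, labeled {label!r}")
```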

- -

Or consider the image below, with several sets of “ground truth” labels. Looking at each of these labels, consider:

- -

What features matter? What gets labeled? Whose worldview comes through? What might you learn from this set of labels that you wouldn’t learn from another?

- -
Source: Wikimedia Commons
- -

There is no “view from nowhere”, no universal way to organize every object, or word, or image. Datasets are always products of a particular time, place, and set of conditions; they are socially situated artifacts. They have histories; they have politics. And ignoring this fact has very real consequences.

- -

So what do we do with this information?

- -

A great place to start is to reflect on your own context and get curious about your data.

- -

If it’s hard to see a dataset’s values—if it feels “objective”, “universal”, or “neutral”—it may simply be reflecting a worldview you’re accustomed to. So, understanding the limitations of your own worldview can tell you about the limitations of “objective” data. What assumptions do you make about the world? What feels like common sense? What feels foreign?

- -

And do some sleuthing about your data! Who collected this data? Why was it collected? Who paid for it? Where did the “ground truth” come from?

- -

You might even find yourself questioning what kinds of assumptions underpin machine learning dataset development or even thinking more deeply about classification as a whole.

- -

If you find yourself with lots of questions, you’re already off to a good start.

- -

-

- -

Credits

- -

Dylan Baker // January 2022

-

Thanks to Adam Pearce, Alex Hanna, Emily Denton, Fernanda Viégas, Kevin Robinson, Nithum Thain, Razvan Amironesei, and Vinodkumar Prabhakaran for their help with this piece.

-

- - - - - -

More Explorables

-

-

- - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/public/anonymization/make-slides.js b/spaces/merve/fill-in-the-blank/public/anonymization/make-slides.js deleted file mode 100644 index 3feff55ba9248cee61cd7ec881fade8ef661e67c..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/anonymization/make-slides.js +++ /dev/null @@ -1,98 +0,0 @@ -window.makeSlides = function(){ - var slides = [ - { - xKey: 'grid', - circleDelayFn: d => axii.ageScale(d.age), - showFlipRect: 0, - populationTarget: 144, - headsProbTarget: .5, - }, - { - xKey: 'age', - showAgeAxis: 1, - }, - { - xKey: 'ageState', - showStateAxis: 1, - }, - { - showUniqueBox: 1 - }, - { - xKey: 'ageStateSeason', - showUniqueBox: 1, - showUniqueSeasonBox: 1, - showSeasonAxis: 1, - }, - { - xKey: 'heads', - showUniqueBox: 0, - showUniqueSeasonBox: 0, - showSeasonAxis: 0, - showAgeAxis: 0, - showStateAxis: 0, - showHeadAxis: 1, - }, - { - showFlipCircle: 1, - showHeadCaptionAxis: 1, - }, - - // Flip coin - { - xKey: 'plagerizedShifted', - showHeadAxis: 0, - showHeadCaptionAxis: 0, - showHistogramAxis: 1, - }, - - // Exactly how far off can these estimates be after adding noise? Flip more coins to see the distribution. - { - enterHistogram: 1, - showHistogram: 1, - // showPlagerizedAxis: 0, - showEstimate: 1, - }, - - // Reducing the random noise increases our point estimate, but risks leaking information about students. - { - animateHeadsProbSlider: 1, - animatePopulationSlider: 1, - enterHistogram: 0, - name: 'noise', - headsProbTarget: .35, - }, - - // If we collect information from lots of people, we can have high accuracy and protect everyone's privacy. - { - showEstimate: 0, - showAllStudents: 1, - name: 'population', - animateHeadsProbSlider: -1, - animatePopulationSlider: 1, - populationTarget: 400, - }, - - ] - - var keys = [] - slides.forEach((d, i) => { - keys = keys.concat(d3.keys(d)) - d.index = i - }) - _.uniq(keys).forEach(str => { - var prev = null - slides.forEach(d => { - if (typeof(d[str]) === 'undefined'){ - d[str] = prev - } - prev = d[str] - }) - }) - - return slides -} - - - -if (window.init) window.init() diff --git a/spaces/merve/fill-in-the-blank/source/third_party/simple-statistics.min.js b/spaces/merve/fill-in-the-blank/source/third_party/simple-statistics.min.js deleted file mode 100644 index 9191046b7dc959d771a904875817c2b9c26ff0e5..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/third_party/simple-statistics.min.js +++ /dev/null @@ -1,3 +0,0 @@ -// https://github.com/simple-statistics/simple-statistics Copyright (c) 2014, Tom MacWright - -!function(t,r){"object"==typeof exports&&"undefined"!=typeof module?r(exports):"function"==typeof define&&define.amd?define(["exports"],r):r(t.ss={})}(this,function(t){"use strict";function r(t){if(0===t.length)return 0;for(var r,n=t[0],e=0,a=1;a=Math.abs(t[a])?e+=n-r+t[a]:e+=t[a]-r+n,n=r;return n+e}function g(t){if(0===t.length)throw new Error("mean requires at least one data point");return r(t)/t.length}function n(t,r){var n,e,a=g(t),o=0;if(2===r)for(e=0;er&&(r=t[n]);return r}function i(t,r){var n=t.length*r;if(0===t.length)throw new Error("quantile requires at least one data point.");if(r<0||1f&&p(t,n,e);sf;)l--}t[n]===f?p(t,n,l):p(t,++l,e),l<=r&&(n=l+1),r<=l&&(e=l-1)}}function p(t,r,n){var e=t[r];t[r]=t[n],t[n]=e}function s(t,r){var n=t.slice();if(Array.isArray(r)){!function(t,r){for(var n=[0],e=0;et[t.length-1])return 1;var 
n=function(t,r){var n=0,e=0,a=t.length;for(;e>>1]?a=n:e=-~n;return e}(t,r);if(t[n]!==r)return n/t.length;n++;var e=function(t,r){var n=0,e=0,a=t.length;for(;e=t[n=e+a>>>1]?e=-~n:a=n;return e}(t,r);if(e===n)return n/t.length;var a=e-n+1;return a*(e+n)/2/a/t.length}function m(t){var r=s(t,.75),n=s(t,.25);if("number"==typeof r&&"number"==typeof n)return r-n}function d(t){return+s(t,.5)}function b(t){for(var r=d(t),n=[],e=0;e=e[n][u]);--g)(s=x(h,u,o,i)+e[n-1][h-1])n&&(n=t[e]),t[e]t.length)throw new Error("cannot generate more classes than there are data values");var n=f(t);if(1===y(n))return[n];var e=S(r,n.length),a=S(r,n.length);!function(t,r,n){for(var e,a=r[0].length,o=t[Math.floor(a/2)],i=[],u=[],h=0;h=Math.abs(a)&&(c+=1);else if("greater"===n)for(h=0;h<=e;h++)o[h]>=a&&(c+=1);else for(h=0;h<=e;h++)o[h]<=a&&(c+=1);return c/e},t.bisect=function(t,r,n,e,a){if("function"!=typeof t)throw new TypeError("func must be a function");for(var o=0;o { - if (err){ - console.log(err) - return check() - } - - if (nextStr == lastStr) return - lastStr = nextStr - - if (path.includes('.js')){ - console.log('js', new Date()) - Function(nextStr.replace('\n', ';').replace('\n', ';'))() - } - - if (path.includes('.css')){ - console.log('css', new Date()) - - Array.from(document.querySelectorAll('link')) - .filter(d => d.href.includes(path) || d.href.includes('__hs_placeholder')) - .forEach(d => d.href = path + '?' + Math.random()) - } - }) - - if (python_settings.isDev) setTimeout(check, 100) - } - check() - } - - ;[ - 'style.css', - 'init-scatter.js', - 'init-util.js', - 'init-pair.js', - 'init.js' - ].forEach(filename => { - var root = document.currentScript.src.replace('watch-files.js', '').split('?')[0] - var path = root + filename - - if (python_settings.isDev){ - watchFile(path) - } else { - if (path.includes('.js')){ - var node = document.createElement('script') - node.setAttribute('src', path) - document.body.appendChild(node) - } - - if (path.includes('.css')){ - Array.from(document.querySelectorAll('link')) - .filter(d => d.href.includes(path) || d.href.includes('__hs_placeholder')) - .forEach(d => d.href = path + '?' 
+ Math.random()) - } - } - }) -})() - - - diff --git a/spaces/michaelthwan/digest-everything-gpt/main.py b/spaces/michaelthwan/digest-everything-gpt/main.py deleted file mode 100644 index 530e12b1529caba231960a79e71f744e55c7aa5d..0000000000000000000000000000000000000000 --- a/spaces/michaelthwan/digest-everything-gpt/main.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -import threading -import time -import webbrowser - -from digester.gradio_ui_service import GradioUIService -from digester.util import get_config - -os.makedirs("analyzer_logs", exist_ok=True) - - -def opentab_with_delay(port): - def open(): - time.sleep(2) - webbrowser.open_new_tab(f"http://localhost:{port}/?__theme=dark") - - threading.Thread(target=open, name="open-browser", daemon=True).start() - - -if __name__ == '__main__': - config = get_config() - port = config["gradio"]["port"] - opentab_with_delay(port) - demo = GradioUIService.get_gradio_ui() - demo.queue(concurrency_count=config['gradio']['concurrent']).launch() diff --git a/spaces/miesnerjacob/Multi-task-NLP/keyword_extraction.py b/spaces/miesnerjacob/Multi-task-NLP/keyword_extraction.py deleted file mode 100644 index 6934768645db0e9628d5abdcc6c15d42800670c4..0000000000000000000000000000000000000000 --- a/spaces/miesnerjacob/Multi-task-NLP/keyword_extraction.py +++ /dev/null @@ -1,158 +0,0 @@ -import nltk -import pytextrank -import re -from operator import itemgetter -import en_core_web_sm - - -class KeywordExtractor: - """ - Keyword Extraction on text data - - Attributes: - nlp: An instance English pipeline optimized for CPU for spacy - """ - - def __init__(self): - self.nlp = en_core_web_sm.load() - self.nlp.add_pipe("textrank") - - def get_keywords(self, text, max_keywords): - """ - Extract keywords from text. - - Parameters: - text (str): The user input string to extract keywords from - - Returns: - kws (list): list of extracted keywords - """ - - doc = self.nlp(text) - - kws = [i.text for i in doc._.phrases[:max_keywords]] - - return kws - - def get_keyword_indices(self, kws, text): - """ - Extract keywords from text. - - Parameters: - kws (list): list of extracted keywords - text (str): The user input string to extract keywords from - - Returns: - keyword_indices (list): list of indices for keyword boundaries in text - """ - - keyword_indices = [] - for s in kws: - indices = [[m.start(), m.end()] for m in re.finditer(re.escape(s), text)] - keyword_indices.extend(indices) - - return keyword_indices - - def merge_overlapping_indices(self, keyword_indices): - """ - Merge overlapping keyword indices. - - Parameters: - keyword_indices (list): list of indices for keyword boundaries in text - - Returns: - keyword_indices (list): list of indices for keyword boundaries in with overlapping combined - """ - - # Sort the array on the basis of start values of intervals. - keyword_indices.sort() - - stack = [] - # insert first interval into stack - stack.append(keyword_indices[0]) - for i in keyword_indices[1:]: - # Check for overlapping interval, - # if interval overlap - if (stack[-1][0] <= i[0] <= stack[-1][-1]) or (stack[-1][-1] == i[0]-1): - stack[-1][-1] = max(stack[-1][-1], i[-1]) - else: - stack.append(i) - return stack - - def merge_until_finished(self, keyword_indices): - """ - Loop until no overlapping keyword indices left. 
- - Parameters: - keyword_indices (list): list of indices for keyword boundaries in text - - Returns: - keyword_indices (list): list of indices for keyword boundaries in with overlapping combined - """ - - len_indices = 0 - while True: - # Merge overlapping indices - merged = self.merge_overlapping_indices(keyword_indices) - # Check to see if merging reduced number of annotation indices - # If merging did not reduce list return final indicies - if len_indices == len(merged): - out_indices = sorted(merged, key=itemgetter(0)) - return out_indices - else: - len_indices = len(merged) - - def get_annotation(self, text, keyword_indices): - """ - Create text annotation for extracted keywords. - - Parameters: - keyword_indices (list): list of indices for keyword boundaries in text - - Returns: - annotation (list): list of tuples for generating html - """ - - # Turn list to numpy array - arr = list(text) - - # Loop through indices in list and insert delimeters - for idx in sorted(keyword_indices, reverse=True): - arr.insert(idx[0], "") - arr.insert(idx[1]+1, " ") - - # join array - joined_annotation = ''.join(arr) - - # split array on delimeter - split = joined_annotation.split('') - - # Create annotation for keywords in text - annotation = [(x.replace(' ', ''), "KEY", "#26aaef") if "" in x else x for x in split] - - return annotation - - def generate(self, text, max_keywords): - """ - Create text annotation for extracted keywords. - - Parameters: - text (str): The user input string to extract keywords from - max_keywords (int): Limit on number of keywords to generate - - Returns: - annotation (list): list of tuples for generating html - kws (list): list of extracted keywords - """ - - kws = self.get_keywords(text, max_keywords) - - indices = list(self.get_keyword_indices(kws, text)) - if indices: - indices_merged = self.merge_until_finished(indices) - annotation = self.get_annotation(text, indices_merged) - else: - annotation = None - - return annotation, kws - diff --git a/spaces/mikkoar/marco/src/components/button-scroll-to-bottom.tsx b/spaces/mikkoar/marco/src/components/button-scroll-to-bottom.tsx deleted file mode 100644 index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000 --- a/spaces/mikkoar/marco/src/components/button-scroll-to-bottom.tsx +++ /dev/null @@ -1,34 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' -import { useAtBottom } from '@/lib/hooks/use-at-bottom' -import { Button, type ButtonProps } from '@/components/ui/button' -import { IconArrowDown } from '@/components/ui/icons' - -export function ButtonScrollToBottom({ className, ...props }: ButtonProps) { - const isAtBottom = useAtBottom() - - return ( - - ) -} diff --git a/spaces/mikkoar/marco/src/components/ui/sheet.tsx b/spaces/mikkoar/marco/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/mikkoar/marco/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = 
SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/mikkoar/marco/src/pages/api/image.ts b/spaces/mikkoar/marco/src/pages/api/image.ts deleted file mode 100644 index 4b894bea86050c0f3888cc56f60c0cb7f8b57cfc..0000000000000000000000000000000000000000 --- a/spaces/mikkoar/marco/src/pages/api/image.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' -import { createImage } from '@/lib/bots/bing/utils' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const { prompt, id } = req.query - if (!prompt) { - return res.json({ - result: { - value: 'Image', - message: 'No Prompt' - } - }) - } - try { - const headers = createHeaders(req.cookies, { - IMAGE_BING_COOKIE: process.env.IMAGE_BING_COOKIE - }) - - debug('headers', headers) - const response = await createImage(String(prompt), String(id), { - ...headers, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - }) - res.writeHead(200, { - 'Content-Type': 'text/plain; charset=UTF-8', - }) - return res.end(response) - } catch (e) { - return res.json({ - result: { - value: 'Error', - message: `${e}` - } - }) - } -} diff --git a/spaces/mindspore-ai/Wuhan-LuoJiaNET/app.py b/spaces/mindspore-ai/Wuhan-LuoJiaNET/app.py deleted file mode 100644 index af810f2936faeefb2bfb3552748cadac54f0b3ca..0000000000000000000000000000000000000000 --- a/spaces/mindspore-ai/Wuhan-LuoJiaNET/app.py +++ /dev/null @@ -1,91 +0,0 @@ -import os -import requests -import gradio as gr - -url = os.environ["URL_NODE"] - - -def detect_image(image): - print("image: ", image) - files = {"picture": open(image, "rb")} - resp = requests.post(url, - files=files, - verify=False) - resp = resp.json() - gen_url = resp["data"]["answer"] - return gen_url - - -def read_content(file_path): - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - return content - - -example_images = [ - os.path.join(os.path.dirname(__file__), "examples/00.jpg"), - os.path.join(os.path.dirname(__file__), "examples/01.jpg"), - os.path.join(os.path.dirname(__file__), "examples/02.jpg"), - os.path.join(os.path.dirname(__file__), "examples/03.jpg"), - os.path.join(os.path.dirname(__file__), "examples/04.jpg"), - os.path.join(os.path.dirname(__file__), "examples/05.jpg"), - os.path.join(os.path.dirname(__file__), "examples/06.png") -] - -default_image = example_images[0] - -css = """ -.gradio-container {background-image: url('file=./background.jpg'); background-size:cover; background-repeat: no-repeat;} -""" - -# warm up -# detect_image() - -with gr.Blocks(css=css) as demo: - gr.HTML(read_content("./header.html")) - gr.Markdown("# MindSpore Wuhan.LuoJiaNET") - gr.Markdown( - "`Wuhan.LuoJiaNET` is the first domestic autonomous and controllable machine learning framework for remote sensing in the field of remote sensing," - " jointly 
developed by` Wuhan University` and `Huawei's Ascend AI team`, which has the characteristics of large image size," - " multiple data channels, and large scale variation of remote sensing data." - " It is compatible with existing deep learning frameworks and provides a user-friendly," - " drag-and-drop interactive network structure to build an interface." - " It can shield the differences between different hardware devices and manage a diversified remote sensing image sample library," - " LuoJiaSET, to achieve efficient storage and management of remote multi-source sensing image samples." - ) - - with gr.Tab("目标识别 (Object Detection)"): - with gr.Row(): - image_input = gr.Image( - type="filepath", - value=default_image - ) - image_output = gr.Image( - type="filepath", - interactive=False - ) - - gr.Examples( - examples=example_images, - inputs=image_input, - ) - image_button = gr.Button("Detect") - - with gr.Accordion("Open for More!"): - gr.Markdown( - "- If you want to know more about the foundation models of MindSpore, please visit " - "[The Foundation Models Platform for Mindspore](https://xihe.mindspore.cn/)" - ) - gr.Markdown( - "- If you want to know more about Wuhan.LuoJiaNET, please visit " - "[Wuhan.LuoJiaNET](https://github.com/WHULuoJiaTeam/luojianet)") - gr.Markdown( - "- Try [Wukong-LuojiaNET model on the Foundation Models Platform for Mindspore]" - "(https://xihe.mindspore.cn/modelzoo/luojia)") - - image_button.click(detect_image, - inputs=[image_input], - outputs=[image_output]) - -demo.queue(concurrency_count=5) -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/mizoru/Japanese_pitch/app.py b/spaces/mizoru/Japanese_pitch/app.py deleted file mode 100644 index b347b4e6b93867728b197b8d9cfcfcae723a208c..0000000000000000000000000000000000000000 --- a/spaces/mizoru/Japanese_pitch/app.py +++ /dev/null @@ -1,39 +0,0 @@ -import gradio as gr - -from fastai.vision.all import * - -from fastaudio.core.all import * - -matplotlib.rcParams['figure.dpi'] = 300 - -def get_x(df): - return df.path -def get_y(df): - return df.pattern - -learn = load_learner('xresnet50_pitch3_removeSilence.pkl') - -labels = learn.dls.vocab - -def predict(Record, Upload): - if Upload: path = Upload - else: path = Record - spec,pred,pred_idx,probs = learn.predict(str(path), with_input=True) - fig,ax = plt.subplots(figsize=(16,10)) - show_image(spec, ax=ax) - ax.invert_yaxis() - return [{labels[i]: float(probs[i]) for i in range(len(labels))}, fig] - - -title = "Japanese Pitch Accent Pattern Detector" - -description = "This model will predict the pitch accent pattern of a word based on the recording of its pronunciation." - -article="

How did I make this and what is it for?

" - -examples = [['代わる.mp3'],['大丈夫な.mp3'],['熱くない.mp3'], ['あめー雨.mp3'], ['あめー飴.mp3']] - -enable_queue=True - -gr.Interface(fn=predict,inputs=[gr.inputs.Audio(source='microphone', type='filepath', optional=True), gr.inputs.Audio(source='upload', type='filepath', optional=True)], outputs= [gr.outputs.Label(num_top_classes=3), gr.outputs.Image(type="plot", label='Spectrogram')], title=title,description=description,article=article,examples=examples).launch(debug=True, enable_queue=enable_queue) - \ No newline at end of file diff --git a/spaces/mkrzyzan/face-swap/app.py b/spaces/mkrzyzan/face-swap/app.py deleted file mode 100644 index 27371b27b3d3a9849438253fefe3441d4db6801c..0000000000000000000000000000000000000000 --- a/spaces/mkrzyzan/face-swap/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import gradio as gr -import insightface -from insightface.app import FaceAnalysis - -wellcomingMessage = """ -

Face Swapping

-

If you like this app, please take a look at my Meetup Group! There will be more interesting apps and events soon.

-

Happy coding!

-""" - -assert insightface.__version__>='0.7' - -value = 0 -app = FaceAnalysis(name='buffalo_l') -app.prepare(ctx_id=0, det_size=(640, 640)) -swapper = insightface.model_zoo.get_model('inswapper_128.onnx', download=True, download_zip=True) - -def swap_faces(faceSource, sourceFaceId, faceDestination, destFaceId): - faces = app.get(faceSource) - faces = sorted(faces, key = lambda x : x.bbox[0]) - if len(faces) < sourceFaceId or sourceFaceId < 1: - raise gr.Error(f"Source image only contains {len(faces)} faces, but you requested face {sourceFaceId}") - - source_face = faces[sourceFaceId-1] - - res_faces = app.get(faceDestination) - res_faces = sorted(res_faces, key = lambda x : x.bbox[0]) - if len(res_faces) < destFaceId or destFaceId < 1: - raise gr.Error(f"Destination image only contains {len(res_faces)} faces, but you requested face {destFaceId}") - res_face = res_faces[destFaceId-1] - - result = swapper.get(faceDestination, res_face, source_face, paste_back=True) - - global value - value = value + 1 - print(f"processed: {value}...") - - # for face in faces: - # res = swapper.get(res, face, source_face, paste_back=True) - # cv2.imwrite("./t1_swapped.jpg", res) - return result - -gr.Interface(swap_faces, - [ - gr.Image(), - gr.Number(precision=0, value=1, info='face position (from left, starting at 1)'), - gr.Image(), - gr.Number(precision=0, value=1, info='face position (from left, starting at 1)') - ], - gr.Image(), - description=wellcomingMessage, - examples=[ - ['./Images/kim.jpg', 1, './Images/marilyn.jpg', 1], - ['./Images/friends.jpg', 2, './Images/friends.jpg', 1], - ], -).launch() \ No newline at end of file diff --git a/spaces/ml6team/logo-generator/dalle/utils/__init__.py b/spaces/ml6team/logo-generator/dalle/utils/__init__.py deleted file mode 100644 index 776dd3a6ef93a2d905cbcaec159b6db320bdf3db..0000000000000000000000000000000000000000 --- a/spaces/ml6team/logo-generator/dalle/utils/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .utils import * -from .config import * -from .sampling import * \ No newline at end of file diff --git a/spaces/mohsenfayyaz/DecompX/DecompX/src/modeling_bert.py b/spaces/mohsenfayyaz/DecompX/DecompX/src/modeling_bert.py deleted file mode 100644 index 7c2899647df71693d133927ce48ec6c18b51d1cb..0000000000000000000000000000000000000000 --- a/spaces/mohsenfayyaz/DecompX/DecompX/src/modeling_bert.py +++ /dev/null @@ -1,2452 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""PyTorch BERT model.""" - -import math -import os -import warnings -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from packaging import version -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from .decompx_utils import DecompXConfig, DecompXOutput - -from transformers.activations import ACT2FN -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - NextSentencePredictorOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from transformers.models.bert.configuration_bert import BertConfig - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "bert-base-uncased" -_CONFIG_FOR_DOC = "BertConfig" -_TOKENIZER_FOR_DOC = "BertTokenizer" - -BERT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "bert-base-uncased", - "bert-large-uncased", - "bert-base-cased", - "bert-large-cased", - "bert-base-multilingual-uncased", - "bert-base-multilingual-cased", - "bert-base-chinese", - "bert-base-german-cased", - "bert-large-uncased-whole-word-masking", - "bert-large-cased-whole-word-masking", - "bert-large-uncased-whole-word-masking-finetuned-squad", - "bert-large-cased-whole-word-masking-finetuned-squad", - "bert-base-cased-finetuned-mrpc", - "bert-base-german-dbmdz-cased", - "bert-base-german-dbmdz-uncased", - "cl-tohoku/bert-base-japanese", - "cl-tohoku/bert-base-japanese-whole-word-masking", - "cl-tohoku/bert-base-japanese-char", - "cl-tohoku/bert-base-japanese-char-whole-word-masking", - "TurkuNLP/bert-base-finnish-cased-v1", - "TurkuNLP/bert-base-finnish-uncased-v1", - "wietsedv/bert-base-dutch-cased", - # See all BERT models at https://huggingface.co/models?filter=bert -] - - -def load_tf_weights_in_bert(model, config, tf_checkpoint_path): - """Load tf checkpoints in a pytorch model.""" - try: - import re - - import numpy as np - import tensorflow as tf - except ImportError: - logger.error( - "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " - "https://www.tensorflow.org/install/ for installation instructions." 
- ) - raise - tf_path = os.path.abspath(tf_checkpoint_path) - logger.info(f"Converting TensorFlow checkpoint from {tf_path}") - # Load weights from TF model - init_vars = tf.train.list_variables(tf_path) - names = [] - arrays = [] - for name, shape in init_vars: - logger.info(f"Loading TF weight {name} with shape {shape}") - array = tf.train.load_variable(tf_path, name) - names.append(name) - arrays.append(array) - - for name, array in zip(names, arrays): - name = name.split("/") - # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v - # which are not required for using pretrained model - if any( - n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"] - for n in name - ): - logger.info(f"Skipping {'/'.join(name)}") - continue - pointer = model - for m_name in name: - if re.fullmatch(r"[A-Za-z]+_\d+", m_name): - scope_names = re.split(r"_(\d+)", m_name) - else: - scope_names = [m_name] - if scope_names[0] == "kernel" or scope_names[0] == "gamma": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "output_bias" or scope_names[0] == "beta": - pointer = getattr(pointer, "bias") - elif scope_names[0] == "output_weights": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "squad": - pointer = getattr(pointer, "classifier") - else: - try: - pointer = getattr(pointer, scope_names[0]) - except AttributeError: - logger.info(f"Skipping {'/'.join(name)}") - continue - if len(scope_names) >= 2: - num = int(scope_names[1]) - pointer = pointer[num] - if m_name[-11:] == "_embeddings": - pointer = getattr(pointer, "weight") - elif m_name == "kernel": - array = np.transpose(array) - try: - if pointer.shape != array.shape: - raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") - except AssertionError as e: - e.args += (pointer.shape, array.shape) - raise - logger.info(f"Initialize PyTorch weight {name}") - pointer.data = torch.from_numpy(array) - return model - -def output_builder(input_vector, output_mode): - if output_mode is None: - return None - elif output_mode == "vector": - return (input_vector,) - elif output_mode == "norm": - return (torch.norm(input_vector, dim=-1),) - elif output_mode == "both": - return ((torch.norm(input_vector, dim=-1), input_vector),) - elif output_mode == "distance_based": - recomposed_vectors = torch.sum(input_vector, dim=-2, keepdim=True) - importance_matrix = -torch.nn.functional.pairwise_distance(input_vector, recomposed_vectors, p=1) - norm_y = torch.norm(recomposed_vectors, dim=-1, p=1) - maxed = torch.maximum(torch.zeros(1, device=norm_y.device), norm_y + importance_matrix) - return (maxed / (torch.sum(maxed, dim=-2, keepdim=True) + 1e-12),) - - -class BertEmbeddings(nn.Module): - """Construct the embeddings from word, position and token_type embeddings.""" - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) - self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - # position_ids (1, len position emb) is contiguous in 
memory and exported when serialized - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - if version.parse(torch.__version__) > version.parse("1.6.0"): - self.register_buffer( - "token_type_ids", - torch.zeros(self.position_ids.size(), dtype=torch.long), - persistent=False, - ) - - def forward( - self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0 - ): - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - if position_ids is None: - position_ids = self.position_ids[:, past_key_values_length: seq_length + past_key_values_length] - - # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs - # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves - # issue #5664 - if token_type_ids is None: - if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - token_type_embeddings = self.token_type_embeddings(token_type_ids) - - embeddings = inputs_embeds + token_type_embeddings - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class BertSelfAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads})" - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = position_embedding_type or getattr( - config, "position_embedding_type", "absolute" - ) - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - - self.is_decoder = config.is_decoder - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(new_x_shape) - return x.permute(0, 2, 1, 3) - - def transpose_for_scores_for_decomposed(self, x): - # x: (B, N, N, H*V) - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - # x: 
(B, N, N, H, V) - x = x.view(new_x_shape) - # x: (B, H, N, N, V) - return x.permute(0, 3, 1, 2, 4) - - def forward( - self, - hidden_states: torch.Tensor, - attribution_vectors: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - decompx_ready: Optional[bool] = None, # added by Fayyaz / Modarressi - ) -> Tuple[torch.Tensor]: - mixed_query_layer = self.query(hidden_states) - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. - is_cross_attention = encoder_hidden_states is not None - decomposed_value_layer = None - - if is_cross_attention and past_key_value is not None: - # reuse k,v, cross_attentions - key_layer = past_key_value[0] - value_layer = past_key_value[1] - attention_mask = encoder_attention_mask - elif is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - if attribution_vectors is not None: - decomposed_value_layer = torch.einsum("bijd,vd->bijv", attribution_vectors, self.value.weight) - decomposed_value_layer = self.transpose_for_scores_for_decomposed(decomposed_value_layer) - - - query_layer = self.transpose_for_scores(mixed_query_layer) - - if self.is_decoder: - # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. - # Further calls to cross_attention layer can then reuse all cross-attention - # key/value_states (first "if" case) - # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of - # all previous decoder key/value_states. Further calls to uni-directional self-attention - # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) - # if encoder bi-directional self-attention `past_key_value` is always `None` - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. 
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) - - # added by Fayyaz / Modarressi - # ------------------------------- - if decompx_ready: - outputs = (context_layer, attention_probs, value_layer, decomposed_value_layer) - return outputs - # ------------------------------- - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - if self.is_decoder: - outputs = outputs + (past_key_value,) - return outputs - - -class BertSelfOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor, - decompx_ready=False): # added by Fayyaz / Modarressi - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - # hidden_states = self.LayerNorm(hidden_states + input_tensor) - pre_ln_states = hidden_states + input_tensor # added by Fayyaz / Modarressi - post_ln_states = self.LayerNorm(pre_ln_states) # added by Fayyaz / Modarressi - # added by Fayyaz / Modarressi - if decompx_ready: - return post_ln_states, pre_ln_states - else: - return post_ln_states - - -class BertAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): 
- super().__init__() - self.self = BertSelfAttention(config, position_embedding_type=position_embedding_type) - self.output = BertSelfOutput(config) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states: torch.Tensor, - attribution_vectors: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - decompx_ready: Optional[bool] = None, # added by Fayyaz / Modarressi - ) -> Tuple[torch.Tensor]: - self_outputs = self.self( - hidden_states, - attribution_vectors, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - decompx_ready=decompx_ready, # added by Fayyaz / Modarressi - ) - attention_output = self.output( - self_outputs[0], - hidden_states, - decompx_ready=decompx_ready, # added by Goro Kobayashi (Edited by Fayyaz / Modarressi) - ) - - # Added by Fayyaz / Modarressi - # ------------------------------- - if decompx_ready: - _, attention_probs, value_layer, decomposed_value_layer = self_outputs - attention_output, pre_ln_states = attention_output - outputs = (attention_output, attention_probs,) + (value_layer, decomposed_value_layer, pre_ln_states) # add attentions and norms if we output them - return outputs - # ------------------------------- - - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -class BertIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states: torch.Tensor, decompx_ready: Optional[bool] = False) -> torch.Tensor: - pre_act_hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(pre_act_hidden_states) - if decompx_ready: - return hidden_states, pre_act_hidden_states - return hidden_states, None - - -class BertOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor, decompx_ready: Optional[bool] = False): - hidden_states = self.dense(hidden_states) - hidden_states = 
self.dropout(hidden_states) - # hidden_states = self.LayerNorm(hidden_states + input_tensor) - # return hidden_states - # Added by Fayyaz / Modarressi - # ------------------------------- - pre_ln_states = hidden_states + input_tensor - hidden_states = self.LayerNorm(pre_ln_states) - if decompx_ready: - return hidden_states, pre_ln_states - return hidden_states, None - # ------------------------------- - - -class BertLayer(nn.Module): - def __init__(self, config): - super().__init__() - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = BertAttention(config) - self.is_decoder = config.is_decoder - self.add_cross_attention = config.add_cross_attention - if self.add_cross_attention: - if not self.is_decoder: - raise ValueError(f"{self} should be used as a decoder model if cross attention is added") - self.crossattention = BertAttention(config, position_embedding_type="absolute") - self.intermediate = BertIntermediate(config) - self.output = BertOutput(config) - self.similarity_fn = torch.nn.CosineSimilarity(dim=-1) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - def bias_decomposer(self, bias, attribution_vectors, bias_decomp_type="absdot"): - # Decomposes the input bias based on similarity to the attribution vectors - # Args: - # bias: a bias vector (all_head_size) - # attribution_vectors: the attribution vectors from token j to i (b, i, j, all_head_size) :: (batch, seq_length, seq_length, all_head_size) - - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bskd,d->bsk", attribution_vectors, bias)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(attribution_vectors, bias, dim=-1)) - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(attribution_vectors, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(attribution_vectors.shape[:-1], device=attribution_vectors.device) - weights[:,:,0] = 1.0 - elif bias_decomp_type == "dot": - weights = torch.einsum("bskd,d->bsk", attribution_vectors, bias) - elif bias_decomp_type == "biastoken": - attrib_shape = attribution_vectors.shape - if attrib_shape[1] == attrib_shape[2]: - attribution_vectors = torch.concat([attribution_vectors, torch.zeros((attrib_shape[0], attrib_shape[1], 1, attrib_shape[3]), device=attribution_vectors.device)], dim=-2) - attribution_vectors[:,:,-1] = attribution_vectors[:,:,-1] + bias - return attribution_vectors - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.matmul(weights.unsqueeze(dim=-1), bias.unsqueeze(dim=0)) - return attribution_vectors + weighted_bias - - - def ln_decomposer(self, attribution_vectors, pre_ln_states, gamma, beta, eps, include_biases=True, bias_decomp_type="absdot"): - mean = pre_ln_states.mean(-1, keepdim=True) # (batch, seq_len, 1) m(y=Σy_j) - var = (pre_ln_states - mean).pow(2).mean(-1, keepdim=True).unsqueeze(dim=2) # (batch, seq_len, 1, 1) s(y) - - each_mean = attribution_vectors.mean(-1, keepdim=True) # (batch, seq_len, seq_len, 1) m(y_j) - - normalized_layer = torch.div(attribution_vectors - each_mean, - (var + eps) ** (1 / 2)) # (batch, seq_len, seq_len, all_head_size) - - 
post_ln_layer = torch.einsum('bskd,d->bskd', normalized_layer, - gamma) # (batch, seq_len, seq_len, all_head_size) - - if include_biases: - return self.bias_decomposer(beta, post_ln_layer, bias_decomp_type=bias_decomp_type) - else: - return post_ln_layer - - - def gelu_linear_approximation(self, intermediate_hidden_states, intermediate_output): - def phi(x): - return (1 + torch.erf(x / math.sqrt(2))) / 2. - - def normal_pdf(x): - return torch.exp(-(x**2) / 2) / math.sqrt(2. * math.pi) - - def gelu_deriv(x): - return phi(x)+x*normal_pdf(x) - - m = gelu_deriv(intermediate_hidden_states) - b = intermediate_output - m * intermediate_hidden_states - return m, b - - - def gelu_decomposition(self, attribution_vectors, intermediate_hidden_states, intermediate_output, bias_decomp_type): - m, b = self.gelu_linear_approximation(intermediate_hidden_states, intermediate_output) - mx = attribution_vectors * m.unsqueeze(dim=-2) - - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bskl,bsl->bsk", mx, b)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(mx, b)) - weights = (torch.norm(mx, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(mx, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(mx, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(mx.shape[:-1], device=mx.device) - weights[:,:,0] = 1.0 - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.einsum("bsl,bsk->bskl", b, weights) - return mx + weighted_bias - - def gelu_zo_decomposition(self, attribution_vectors, intermediate_hidden_states, intermediate_output): - m = intermediate_output / (intermediate_hidden_states + 1e-12) - mx = attribution_vectors * m.unsqueeze(dim=-2) - return mx - - def ffn_decomposer(self, attribution_vectors, intermediate_hidden_states, intermediate_output, include_biases=True, approximation_type="GeLU_LA", bias_decomp_type="absdot"): - post_first_layer = torch.einsum("ld,bskd->bskl", self.intermediate.dense.weight, attribution_vectors) - if include_biases: - post_first_layer = self.bias_decomposer(self.intermediate.dense.bias, post_first_layer, bias_decomp_type=bias_decomp_type) - - if approximation_type == "ReLU": - mask_for_gelu_approx = (intermediate_hidden_states > 0) - post_act_first_layer = torch.einsum("bskl, bsl->bskl", post_first_layer, mask_for_gelu_approx) - post_act_first_layer = post_first_layer * mask_for_gelu_approx.unsqueeze(dim=-2) - elif approximation_type == "GeLU_LA": - post_act_first_layer = self.gelu_decomposition(post_first_layer, intermediate_hidden_states, intermediate_output, bias_decomp_type=bias_decomp_type) - elif approximation_type == "GeLU_ZO": - post_act_first_layer = self.gelu_zo_decomposition(post_first_layer, intermediate_hidden_states, intermediate_output) - - post_second_layer = torch.einsum("bskl, dl->bskd", post_act_first_layer, self.output.dense.weight) - if include_biases: - post_second_layer = self.bias_decomposer(self.output.dense.bias, post_second_layer, bias_decomp_type=bias_decomp_type) - - return post_second_layer - - def ffn_decomposer_fast(self, attribution_vectors, intermediate_hidden_states, intermediate_output, include_biases=True, approximation_type="GeLU_LA", bias_decomp_type="absdot"): - if approximation_type == "ReLU": - theta = (intermediate_hidden_states > 0) - elif approximation_type == "GeLU_ZO": - theta = intermediate_output / (intermediate_hidden_states + 1e-12) - - scaled_W1 = 
torch.einsum("bsl,ld->bsld", theta, self.intermediate.dense.weight) - W_equiv = torch.einsum("bsld, zl->bszd", scaled_W1, self.output.dense.weight) - - post_ffn_layer = torch.einsum("bszd,bskd->bskz", W_equiv, attribution_vectors) - - if include_biases: - scaled_b1 = torch.einsum("bsl,l->bsl", theta, self.intermediate.dense.bias) - b_equiv = torch.einsum("bsl, dl->bsd", scaled_b1, self.output.dense.weight) - b_equiv = b_equiv + self.output.dense.bias - - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bskd,bsd->bsk", post_ffn_layer, b_equiv)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(post_ffn_layer, b_equiv)) - weights = (torch.norm(post_ffn_layer, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(post_ffn_layer, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(post_ffn_layer, dim=-1) != 0) * 1.0 - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.einsum("bsd,bsk->bskd", b_equiv, weights) - - post_ffn_layer = post_ffn_layer + weighted_bias - - return post_ffn_layer - - def forward( - self, - hidden_states: torch.Tensor, - attribution_vectors: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - decompx_config: Optional[DecompXConfig] = None, # added by Fayyaz / Modarressi - ) -> Tuple[torch.Tensor]: - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - # self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None - # self_attention_outputs = self.attention( - # hidden_states, - # attention_mask, - # head_mask, - # output_attentions=output_attentions, - # past_key_value=self_attn_past_key_value, - # ) - decompx_ready = decompx_config is not None - self_attention_outputs = self.attention( - hidden_states, - attribution_vectors, - attention_mask, - head_mask, - output_attentions=output_attentions, - decompx_ready=decompx_ready, - ) # changed by Goro Kobayashi - attention_output = self_attention_outputs[0] - - # if decoder, the last output is tuple of self-attn cache - if self.is_decoder: - outputs = self_attention_outputs[1:-1] - present_key_value = self_attention_outputs[-1] - else: - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - cross_attn_present_key_value = None - if self.is_decoder and encoder_hidden_states is not None: - if not hasattr(self, "crossattention"): - raise ValueError( - f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers by setting `config.add_cross_attention=True`" - ) - - # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple - cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - cross_attn_past_key_value, - output_attentions, - ) - attention_output = cross_attention_outputs[0] - outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights - - # add cross-attn cache to positions 3,4 of 
present_key_value tuple - cross_attn_present_key_value = cross_attention_outputs[-1] - present_key_value = present_key_value + cross_attn_present_key_value - - # layer_output = apply_chunking_to_forward( - # self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - # ) - - # Added by Fayyaz / Modarressi - # ------------------------------- - bias_decomp_type = "biastoken" if decompx_config.include_bias_token else decompx_config.bias_decomp_type - intermediate_output, pre_act_hidden_states = self.intermediate(attention_output, decompx_ready=decompx_ready) - layer_output, pre_ln2_states = self.output(intermediate_output, attention_output, decompx_ready=decompx_ready) - if decompx_ready: - attention_probs, value_layer, decomposed_value_layer, pre_ln_states = outputs - - headmixing_weight = self.attention.output.dense.weight.view(self.all_head_size, self.num_attention_heads, - self.attention_head_size) - - if decomposed_value_layer is None or decompx_config.aggregation != "vector": - transformed_layer = torch.einsum('bhsv,dhv->bhsd', value_layer, headmixing_weight) # V * W^o (z=(qk)v) - # Make weighted vectors αf(x) from transformed vectors (transformed_layer) - # and attention weights (attentions): - # (batch, num_heads, seq_length, seq_length, all_head_size) - weighted_layer = torch.einsum('bhks,bhsd->bhksd', attention_probs, - transformed_layer) # attention_probs(Q*K^t) * V * W^o - # Sum each weighted vectors αf(x) over all heads: - # (batch, seq_length, seq_length, all_head_size) - summed_weighted_layer = weighted_layer.sum(dim=1) # sum over heads - - # Make residual matrix (batch, seq_length, seq_length, all_head_size) - hidden_shape = hidden_states.size() # (batch, seq_length, all_head_size) - device = hidden_states.device - residual = torch.einsum('sk,bsd->bskd', torch.eye(hidden_shape[1]).to(device), - hidden_states) # diagonal representations (hidden states) - - # Make matrix of summed weighted vector + residual vectors - residual_weighted_layer = summed_weighted_layer + residual - accumulated_bias = self.attention.output.dense.bias - else: - transformed_layer = torch.einsum('bhsqv,dhv->bhsqd', decomposed_value_layer, headmixing_weight) - - weighted_layer = torch.einsum('bhks,bhsqd->bhkqd', attention_probs, - transformed_layer) # attention_probs(Q*K^t) * V * W^o - - summed_weighted_layer = weighted_layer.sum(dim=1) # sum over heads - - residual_weighted_layer = summed_weighted_layer + attribution_vectors - accumulated_bias = torch.matmul(self.attention.output.dense.weight, self.attention.self.value.bias) + self.attention.output.dense.bias - - if decompx_config.include_biases: - residual_weighted_layer = self.bias_decomposer(accumulated_bias, residual_weighted_layer, bias_decomp_type) - - if decompx_config.include_LN1: - post_ln_layer = self.ln_decomposer( - attribution_vectors=residual_weighted_layer, - pre_ln_states=pre_ln_states, - gamma=self.attention.output.LayerNorm.weight.data, - beta=self.attention.output.LayerNorm.bias.data, - eps=self.attention.output.LayerNorm.eps, - include_biases=decompx_config.include_biases, - bias_decomp_type=bias_decomp_type - ) - else: - post_ln_layer = residual_weighted_layer - - if decompx_config.include_FFN: - post_ffn_layer = self.ffn_decomposer_fast if decompx_config.FFN_fast_mode else self.ffn_decomposer( - attribution_vectors=post_ln_layer, - intermediate_hidden_states=pre_act_hidden_states, - intermediate_output=intermediate_output, - approximation_type=decompx_config.FFN_approx_type, - 
include_biases=decompx_config.include_biases, - bias_decomp_type=bias_decomp_type - ) - pre_ln2_layer = post_ln_layer + post_ffn_layer - else: - pre_ln2_layer = post_ln_layer - post_ffn_layer = None - - if decompx_config.include_LN2: - post_ln2_layer = self.ln_decomposer( - attribution_vectors=pre_ln2_layer, - pre_ln_states=pre_ln2_states, - gamma=self.output.LayerNorm.weight.data, - beta=self.output.LayerNorm.bias.data, - eps=self.output.LayerNorm.eps, - include_biases=decompx_config.include_biases, - bias_decomp_type=bias_decomp_type - ) - else: - post_ln2_layer = pre_ln2_layer - - new_outputs = DecompXOutput( - attention=output_builder(summed_weighted_layer, decompx_config.output_attention), - res1=output_builder(residual_weighted_layer, decompx_config.output_res1), - LN1=output_builder(post_ln_layer, decompx_config.output_res2), - FFN=output_builder(post_ffn_layer, decompx_config.output_FFN), - res2=output_builder(pre_ln2_layer, decompx_config.output_res2), - encoder=output_builder(post_ln2_layer, "both") - ) - return (layer_output,) + (new_outputs,) - # ------------------------------- - outputs = (layer_output,) + outputs - - # if decoder, return the attn key/values as the last output - if self.is_decoder: - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - -class BertEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = False, - output_hidden_states: Optional[bool] = False, - return_dict: Optional[bool] = True, - decompx_config: Optional[DecompXConfig] = None, # added by Fayyaz / Modarressi - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - - next_decoder_cache = () if use_cache else None - - aggregated_encoder_norms = None # added by Fayyaz / Modarressi - aggregated_encoder_vectors = None # added by Fayyaz / Modarressi - - # -- added by Fayyaz / Modarressi - if decompx_config and decompx_config.output_all_layers: - all_decompx_outputs = DecompXOutput( - attention=() if decompx_config.output_attention else None, - res1=() if decompx_config.output_res1 else None, - LN1=() if decompx_config.output_LN1 else None, - FFN=() if decompx_config.output_LN1 else None, - res2=() if decompx_config.output_res2 else None, - encoder=() if decompx_config.output_encoder else None, - aggregated=() if decompx_config.output_aggregated and decompx_config.aggregation else None, - ) - else: - all_decompx_outputs = None - # -- added by Fayyaz / Modarressi - - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + 
(hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - if use_cache: - logger.warning( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, past_key_value, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - aggregated_encoder_vectors, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - decompx_config # added by Fayyaz / Modarressi - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - if self.config.add_cross_attention: - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - # added by Fayyaz / Modarressi - if decompx_config: - decompx_output = layer_outputs[1] - if decompx_config.aggregation == "rollout": - if decompx_config.include_classifier_w_pooler: - raise Exception("Classifier and pooler could be included in vector aggregation mode") - - encoder_norms = decompx_output.encoder[0][0] - - if aggregated_encoder_norms is None: - aggregated_encoder_norms = encoder_norms * torch.exp(attention_mask).view((-1, attention_mask.shape[-1], 1)) - else: - aggregated_encoder_norms = torch.einsum("ijk,ikm->ijm", encoder_norms, aggregated_encoder_norms) - - if decompx_config.output_aggregated == "norm": - decompx_output.aggregated = (aggregated_encoder_norms,) - elif decompx_config.output_aggregated is not None: - raise Exception("Rollout aggregated values are only available in norms. 
Set output_aggregated to 'norm'.") - - - elif decompx_config.aggregation == "vector": - aggregated_encoder_vectors = decompx_output.encoder[0][1] - - if decompx_config.include_classifier_w_pooler: - decompx_output.aggregated = (aggregated_encoder_vectors,) - else: - decompx_output.aggregated = output_builder(aggregated_encoder_vectors, decompx_config.output_aggregated) - - decompx_output.encoder = output_builder(decompx_output.encoder[0][1], decompx_config.output_encoder) - - if decompx_config.output_all_layers: - all_decompx_outputs.attention = all_decompx_outputs.attention + decompx_output.attention if decompx_config.output_attention else None - all_decompx_outputs.res1 = all_decompx_outputs.res1 + decompx_output.res1 if decompx_config.output_res1 else None - all_decompx_outputs.LN1 = all_decompx_outputs.LN1 + decompx_output.LN1 if decompx_config.output_LN1 else None - all_decompx_outputs.FFN = all_decompx_outputs.FFN + decompx_output.FFN if decompx_config.output_FFN else None - all_decompx_outputs.res2 = all_decompx_outputs.res2 + decompx_output.res2 if decompx_config.output_res2 else None - all_decompx_outputs.encoder = all_decompx_outputs.encoder + decompx_output.encoder if decompx_config.output_encoder else None - - if decompx_config.include_classifier_w_pooler and decompx_config.aggregation == "vector": - all_decompx_outputs.aggregated = all_decompx_outputs.aggregated + output_builder(aggregated_encoder_vectors, decompx_config.output_aggregated) if decompx_config.output_aggregated else None - else: - all_decompx_outputs.aggregated = all_decompx_outputs.aggregated + decompx_output.aggregated if decompx_config.output_aggregated else None - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - decompx_output if decompx_config else None, - all_decompx_outputs - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -class BertPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states: torch.Tensor, decompx_ready=False) -> torch.Tensor: - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
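When `aggregation == "rollout"`, the encoder loop above composes the per-layer norm matrices with a batched matrix product, so token-to-token influence is tracked through the whole stack. A small sketch with assumed toy numbers (the `exp(attention_mask)` masking of the first layer is omitted here):

```python
import torch

batch, seq, num_layers = 1, 4, 3
# Each layer yields a (batch, seq, seq) matrix whose [i, j] entry is the norm of
# token j's contribution to token i in that layer (random stand-ins here).
per_layer = [torch.rand(batch, seq, seq) for _ in range(num_layers)]
per_layer = [m / m.sum(dim=-1, keepdim=True) for m in per_layer]  # row-normalize for readability

aggregated = None
for norms in per_layer:
    if aggregated is None:
        aggregated = norms
    else:
        # Same composition as in the loop above: apply layer l's map on top of
        # the aggregate of layers 1..l-1.
        aggregated = torch.einsum("ijk,ikm->ijm", norms, aggregated)

# aggregated[0, i, j] estimates how much input token j influences token i after all
# layers; rows still sum to 1 because every factor was row-stochastic.
print(aggregated[0].sum(dim=-1))
```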
- first_token_tensor = hidden_states[:, 0] - pre_pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pre_pooled_output) - if decompx_ready: - return pooled_output, pre_pooled_output - return pooled_output - - -class BertPredictionHeadTransform(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - if isinstance(config.hidden_act, str): - self.transform_act_fn = ACT2FN[config.hidden_act] - else: - self.transform_act_fn = config.hidden_act - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.transform_act_fn(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -class BertLMPredictionHead(nn.Module): - def __init__(self, config): - super().__init__() - self.transform = BertPredictionHeadTransform(config) - - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. - self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - - # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` - self.decoder.bias = self.bias - - def forward(self, hidden_states): - hidden_states = self.transform(hidden_states) - hidden_states = self.decoder(hidden_states) - return hidden_states - - -class BertOnlyMLMHead(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertLMPredictionHead(config) - - def forward(self, sequence_output: torch.Tensor) -> torch.Tensor: - prediction_scores = self.predictions(sequence_output) - return prediction_scores - - -class BertOnlyNSPHead(nn.Module): - def __init__(self, config): - super().__init__() - self.seq_relationship = nn.Linear(config.hidden_size, 2) - - def forward(self, pooled_output): - seq_relationship_score = self.seq_relationship(pooled_output) - return seq_relationship_score - - -class BertPreTrainingHeads(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertLMPredictionHead(config) - self.seq_relationship = nn.Linear(config.hidden_size, 2) - - def forward(self, sequence_output, pooled_output): - prediction_scores = self.predictions(sequence_output) - seq_relationship_score = self.seq_relationship(pooled_output) - return prediction_scores, seq_relationship_score - - -class BertPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
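As the comment in `BertLMPredictionHead` notes, the decoder's weight matrix is shared with the input embedding table while keeping its own output-only bias (in the library the tying itself is applied by `tie_weights` when the model is initialized). A minimal sketch of that relationship, with assumed toy sizes:

```python
import torch
from torch import nn

vocab_size, hidden_size = 100, 16

word_embeddings = nn.Embedding(vocab_size, hidden_size)
decoder = nn.Linear(hidden_size, vocab_size, bias=False)
decoder.weight = word_embeddings.weight           # tie: same Parameter object
bias = nn.Parameter(torch.zeros(vocab_size))      # separate output-only bias

hidden = torch.randn(2, 5, hidden_size)
logits = decoder(hidden) + bias                   # (2, 5, vocab_size)

assert decoder.weight.data_ptr() == word_embeddings.weight.data_ptr()
```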
- """ - - config_class = BertConfig - load_tf_weights = load_tf_weights_in_bert - base_model_prefix = "bert" - supports_gradient_checkpointing = True - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, nn.Linear): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, BertEncoder): - module.gradient_checkpointing = value - - -@dataclass -class BertForPreTrainingOutput(ModelOutput): - """ - Output type of [`BertForPreTraining`]. - - Args: - loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): - Total loss as the sum of the masked language modeling loss and the next sequence prediction - (classification) loss. - prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - seq_relationship_logits (`torch.FloatTensor` of shape `(batch_size, 2)`): - Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation - before SoftMax). - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[torch.FloatTensor] = None - prediction_logits: torch.FloatTensor = None - seq_relationship_logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -BERT_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`BertConfig`]): Model configuration class with all the parameters of the model. 
- Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -BERT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`BertTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare Bert Model transformer outputting raw hidden-states without any specific head on top.", - BERT_START_DOCSTRING, -) -class BertModel(BertPreTrainedModel): - """ - - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added between the self-attention layers, following the architecture described in [Attention is - all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, - Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. - - To behave as an decoder the model needs to be initialized with the `is_decoder` argument of the configuration set - to `True`. To be used in a Seq2Seq model, the model needs to initialized with both `is_decoder` argument and - `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. 
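The input conventions described in `BERT_INPUTS_DOCSTRING` come straight from the tokenizer. A short sketch, assuming the `transformers` tokenizer and the `bert-base-uncased` vocabulary are available:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(
    "How is the weather?", "It is sunny.",
    return_tensors="pt", padding="max_length", max_length=16,
)

print(enc["input_ids"].shape)    # (1, 16) token indices, padded with [PAD]
print(enc["token_type_ids"][0])  # 0 for sentence-A tokens, 1 for sentence-B tokens
print(enc["attention_mask"][0])  # 1 for real tokens, 0 for padding
```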
- """ - - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - self.embeddings = BertEmbeddings(config) - self.encoder = BertEncoder(config) - - self.pooler = BertPooler(config) if add_pooling_layer else None - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - def bias_decomposer(self, bias, attribution_vectors, bias_decomp_type="absdot"): - # Decomposes the input bias based on similarity to the attribution vectors - # Args: - # bias: a bias vector (all_head_size) - # attribution_vectors: the attribution vectors from token j to i (b, i, j, all_head_size) :: (batch, seq_length, seq_length, all_head_size) - - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bkd,d->bk", attribution_vectors, bias)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(attribution_vectors, bias, dim=-1)) - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(attribution_vectors, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(attribution_vectors.shape[:-1], device=attribution_vectors.device) - weights[:,0] = 1.0 - elif bias_decomp_type == "dot": - weights = torch.einsum("bkd,d->bk", attribution_vectors, bias) - elif bias_decomp_type == "biastoken": - attribution_vectors[:,-1] = attribution_vectors[:,-1] + bias - return attribution_vectors - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - - weighted_bias = torch.matmul(weights.unsqueeze(dim=-1), bias.unsqueeze(dim=0)) - return attribution_vectors + weighted_bias - - def tanh_linear_approximation(self, pre_act_pooled, post_act_pooled): - def tanh_deriv(x): - return 1 - torch.tanh(x)**2.0 - - m = tanh_deriv(pre_act_pooled) - b = post_act_pooled - m * pre_act_pooled - return m, b - - def tanh_la_decomposition(self, attribution_vectors, pre_act_pooled, post_act_pooled, bias_decomp_type): - m, b = self.tanh_linear_approximation(pre_act_pooled, post_act_pooled) - mx = attribution_vectors * m.unsqueeze(dim=-2) - - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bkd,bd->bk", mx, b)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(mx, b, dim=-1)) - weights = (torch.norm(mx, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(mx, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(mx, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(mx.shape[:-1], device=mx.device) - weights[:,0] = 1.0 - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.einsum("bd,bk->bkd", b, weights) - return mx + weighted_bias - - def tanh_zo_decomposition(self, attribution_vectors, pre_act_pooled, post_act_pooled): - m = post_act_pooled / (pre_act_pooled + 1e-12) - mx = attribution_vectors * m.unsqueeze(dim=-2) - return mx - - def 
ffn_decomposer(self, attribution_vectors, pre_act_pooled, post_act_pooled, include_biases=True, bias_decomp_type="absdot", tanh_approx_type="LA"): - post_pool = torch.einsum("ld,bsd->bsl", self.pooler.dense.weight, attribution_vectors) - if include_biases: - post_pool = self.bias_decomposer(self.pooler.dense.bias, post_pool, bias_decomp_type=bias_decomp_type) - - if tanh_approx_type == "LA": - post_act_pool = self.tanh_la_decomposition(post_pool, pre_act_pooled, post_act_pooled, bias_decomp_type=bias_decomp_type) - else: - post_act_pool = self.tanh_zo_decomposition(post_pool, pre_act_pooled, post_act_pooled) - - return post_act_pool - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPoolingAndCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - decompx_config: Optional[DecompXConfig] = None, # added by Fayyaz / Modarressi - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). 
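The pooler decomposition above linearizes the tanh with `tanh_linear_approximation`: the slope is `m = 1 - tanh(z)^2` and the intercept `b = tanh(z) - m*z`, both evaluated at the actual pre-activation `z`. Because the attribution vectors sum to `z`, the linearized parts plus the intercept reproduce `tanh(z)` exactly. A small check with assumed toy shapes:

```python
import torch

hidden = 8
parts = torch.randn(5, hidden)   # attribution vectors a_j for one position
z = parts.sum(dim=0)             # pre-activation of the pooler dense layer
post = torch.tanh(z)

m = 1.0 - torch.tanh(z) ** 2     # local slope at z
b = post - m * z                 # local intercept

linearized_parts = parts * m     # m * a_j, as in tanh_la_decomposition
recomposed = linearized_parts.sum(dim=0) + b
assert torch.allclose(recomposed, post, atol=1e-6)
```

In the code the intercept `b` is not added once but redistributed over the parts according to the chosen `bias_decomp_type`, which leaves the total unchanged.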
- """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if self.config.is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - batch_size, seq_length = input_shape - device = input_ids.device if input_ids is not None else inputs_embeds.device - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if attention_mask is None: - attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) - - if token_type_ids is None: - if hasattr(self.embeddings, "token_type_ids"): - buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. 
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - decompx_config=decompx_config, # added by Fayyaz / Modarressi - ) - sequence_output = encoder_outputs[0] - decompx_ready = decompx_config is not None - pooled_output = self.pooler(sequence_output, decompx_ready=decompx_ready) if self.pooler is not None else None - - if decompx_ready: - pre_act_pooled = pooled_output[1] - pooled_output = pooled_output[0] - - if decompx_config.include_classifier_w_pooler: - decompx_idx = -2 if decompx_config.output_all_layers else -1 - aggregated_attribution_vectors = encoder_outputs[decompx_idx].aggregated[0] - - encoder_outputs[decompx_idx].aggregated = output_builder(aggregated_attribution_vectors, decompx_config.output_aggregated) - - pooler_decomposed = self.ffn_decomposer( - attribution_vectors=aggregated_attribution_vectors[:, 0], - pre_act_pooled=pre_act_pooled, - post_act_pooled=pooled_output, - include_biases=decompx_config.include_biases, - bias_decomp_type="biastoken" if decompx_config.include_bias_token else decompx_config.bias_decomp_type, - tanh_approx_type=decompx_config.tanh_approx_type - ) - - encoder_outputs[decompx_idx].pooler = pooler_decomposed - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -@add_start_docstrings( - """ - Bert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `next - sentence prediction (classification)` head. 
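For completeness, a hedged usage sketch of the model defined in this module: without a `decompx_config` it behaves like stock BERT, while passing a `DecompXConfig` (whose constructor fields are assumed to match the attributes referenced above, e.g. `aggregation`, `include_classifier_w_pooler`) additionally returns the per-token attribution outputs. The checkpoint name is only an example.

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")   # BertModel as defined in this module

inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)                            # decompx_config=None -> standard behaviour

print(outputs.last_hidden_state.shape)                   # (1, seq_len, hidden_size)
print(outputs.pooler_output.shape)                       # (1, hidden_size)
```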
- """, - BERT_START_DOCSTRING, -) -class BertForPreTraining(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config) - self.cls = BertPreTrainingHeads(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=BertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - next_sentence_label: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], BertForPreTrainingOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), - the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - next_sentence_label (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the next sequence prediction (classification) loss. Input should be a sequence - pair (see `input_ids` docstring) Indices should be in `[0, 1]`: - - - 0 indicates sequence B is a continuation of sequence A, - - 1 indicates sequence B is a random sequence. - kwargs (`Dict[str, any]`, optional, defaults to *{}*): - Used to hide legacy arguments that have been deprecated. 
- - Returns: - - Example: - - ```python - >>> from transformers import BertTokenizer, BertForPreTraining - >>> import torch - - >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") - >>> model = BertForPreTraining.from_pretrained("bert-base-uncased") - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.prediction_logits - >>> seq_relationship_logits = outputs.seq_relationship_logits - ``` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output, pooled_output = outputs[:2] - prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output) - - total_loss = None - if labels is not None and next_sentence_label is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1)) - total_loss = masked_lm_loss + next_sentence_loss - - if not return_dict: - output = (prediction_scores, seq_relationship_score) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return BertForPreTrainingOutput( - loss=total_loss, - prediction_logits=prediction_scores, - seq_relationship_logits=seq_relationship_score, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """Bert Model with a `language modeling` head on top for CLM fine-tuning.""", BERT_START_DOCSTRING -) -class BertLMHeadModel(BertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, config): - super().__init__(config) - - if not config.is_decoder: - logger.warning("If you want to use `BertLMHeadModel` as a standalone, add `is_decoder=True.`") - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.Tensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], 
CausalLMOutputWithCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention - if the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used - in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be - in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` - are ignored (masked), the loss is only computed for the tokens with labels n `[0, ..., - config.vocab_size]` - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up - decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those - that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of - all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding - (see `past_key_values`). 
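`BertLMHeadModel` computes a next-token loss, so in its forward pass further below the scores are truncated on the right and the labels on the left before the cross-entropy. A toy sketch of that shift (sizes assumed):

```python
import torch
from torch.nn import CrossEntropyLoss

batch, seq, vocab = 2, 6, 11
prediction_scores = torch.randn(batch, seq, vocab)
labels = torch.randint(0, vocab, (batch, seq))

shifted_scores = prediction_scores[:, :-1, :].contiguous()  # predictions for positions 0..T-2
shifted_labels = labels[:, 1:].contiguous()                  # targets are the tokens at 1..T-1

loss = CrossEntropyLoss()(shifted_scores.view(-1, vocab), shifted_labels.view(-1))
print(loss.item())
```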
- - Returns: - - Example: - - ```python - >>> from transformers import BertTokenizer, BertLMHeadModel, BertConfig - >>> import torch - - >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased") - >>> config = BertConfig.from_pretrained("bert-base-cased") - >>> config.is_decoder = True - >>> model = BertLMHeadModel.from_pretrained("bert-base-cased", config=config) - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.logits - ``` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - if labels is not None: - use_cache = False - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - prediction_scores = self.cls(sequence_output) - - lm_loss = None - if labels is not None: - # we are doing next-token prediction; shift prediction scores and input ids by one - shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() - labels = labels[:, 1:].contiguous() - loss_fct = CrossEntropyLoss() - lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((lm_loss,) + output) if lm_loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=lm_loss, - logits=prediction_scores, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **model_kwargs): - input_shape = input_ids.shape - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_shape) - - # cut decoder_input_ids if past is used - if past is not None: - input_ids = input_ids[:, -1:] - - return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past} - - def _reorder_cache(self, past, beam_idx): - reordered_past = () - for layer_past in past: - reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),) - return reordered_past - - -@add_start_docstrings("""Bert Model with a `language modeling` head on top.""", BERT_START_DOCSTRING) -class BertForMaskedLM(BertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, config): - super().__init__(config) - - if config.is_decoder: - logger.warning( - "If you want to use `BertForMaskedLM` make sure `config.is_decoder=False` for " - "bi-directional self-attention." 
- ) - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MaskedLMOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], MaskedLMOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - """ - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - prediction_scores = self.cls(sequence_output) - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() # -100 index = padding token - masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs): - input_shape = input_ids.shape - effective_batch_size = input_shape[0] - - # add a dummy token - if self.config.pad_token_id is None: - raise ValueError("The PAD token should be defined for generation") - - attention_mask = torch.cat([attention_mask, attention_mask.new_zeros((attention_mask.shape[0], 1))], dim=-1) - dummy_token = torch.full( - (effective_batch_size, 1), self.config.pad_token_id, dtype=torch.long, device=input_ids.device - ) - input_ids = torch.cat([input_ids, dummy_token], dim=1) - - return {"input_ids": input_ids, "attention_mask": attention_mask} - - -@add_start_docstrings( - """Bert Model with a `next sentence prediction (classification)` head on top.""", - BERT_START_DOCSTRING, -) -class 
BertForNextSentencePrediction(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config) - self.cls = BertOnlyNSPHead(config) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=NextSentencePredictorOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - **kwargs, - ) -> Union[Tuple[torch.Tensor], NextSentencePredictorOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair - (see `input_ids` docstring). Indices should be in `[0, 1]`: - - - 0 indicates sequence B is a continuation of sequence A, - - 1 indicates sequence B is a random sequence. - - Returns: - - Example: - - ```python - >>> from transformers import BertTokenizer, BertForNextSentencePrediction - >>> import torch - - >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") - >>> model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased") - - >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." - >>> next_sentence = "The sky is blue due to the shorter wavelength of blue light." - >>> encoding = tokenizer(prompt, next_sentence, return_tensors="pt") - - >>> outputs = model(**encoding, labels=torch.LongTensor([1])) - >>> logits = outputs.logits - >>> assert logits[0, 0] < logits[0, 1] # next sentence was random - ``` - """ - - if "next_sentence_label" in kwargs: - warnings.warn( - "The `next_sentence_label` argument is deprecated and will be removed in a future version, use `labels` instead.", - FutureWarning, - ) - labels = kwargs.pop("next_sentence_label") - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - - seq_relationship_scores = self.cls(pooled_output) - - next_sentence_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - next_sentence_loss = loss_fct(seq_relationship_scores.view(-1, 2), labels.view(-1)) - - if not return_dict: - output = (seq_relationship_scores,) + outputs[2:] - return ((next_sentence_loss,) + output) if next_sentence_loss is not None else output - - return NextSentencePredictorOutput( - loss=next_sentence_loss, - logits=seq_relationship_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled - output) e.g. for GLUE tasks. 
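The `bias_decomposer` used in `BertModel` above (and declared again in the sequence-classification head below) spreads a bias vector over the attribution vectors; with the default "absdot" scheme each token's share is proportional to |a_k · b| and the shares are normalized, so the bias is counted exactly once in the total. A small check with assumed toy shapes:

```python
import torch

seq, hidden = 5, 8
attribution_vectors = torch.randn(1, seq, hidden)  # a_k for one example
bias = torch.randn(hidden)

weights = torch.abs(torch.einsum("bkd,d->bk", attribution_vectors, bias))
weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12)
weighted_bias = torch.matmul(weights.unsqueeze(-1), bias.unsqueeze(0))  # (1, seq, hidden)
decomposed = attribution_vectors + weighted_bias

# Total over tokens = original total + the bias, exactly once
assert torch.allclose(decomposed.sum(dim=1), attribution_vectors.sum(dim=1) + bias, atol=1e-5)
```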
- """, - BERT_START_DOCSTRING, -) -class BertForSequenceClassification(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - - self.bert = BertModel(config) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - def bias_decomposer(self, bias, attribution_vectors, bias_decomp_type="absdot"): - # Decomposes the input bias based on similarity to the attribution vectors - # Args: - # bias: a bias vector (all_head_size) - # attribution_vectors: the attribution vectors from token j to i (b, i, j, all_head_size) :: (batch, seq_length, seq_length, all_head_size) - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bkd,d->bk", attribution_vectors, bias)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(attribution_vectors, bias, dim=-1)) - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(attribution_vectors, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(attribution_vectors.shape[:-1], device=attribution_vectors.device) - weights[:,0] = 1.0 - elif bias_decomp_type == "dot": - weights = torch.einsum("bkd,d->bk", attribution_vectors, bias) - elif bias_decomp_type == "biastoken": - attribution_vectors[:,-1] = attribution_vectors[:,-1] + bias - return attribution_vectors - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.matmul(weights.unsqueeze(dim=-1), bias.unsqueeze(dim=0)) - return attribution_vectors + weighted_bias - - def biastoken_decomposer(self, biastoken, attribution_vectors, bias_decomp_type="absdot"): - # Decomposes the input bias based on similarity to the attribution vectors - # Args: - # bias: a bias vector (all_head_size) - # attribution_vectors: the attribution vectors from token j to i (b, i, j, all_head_size) :: (batch, seq_length, seq_length, all_head_size) - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bkd,bd->bk", attribution_vectors, biastoken)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(attribution_vectors, biastoken, dim=-1)) - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(attribution_vectors, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(attribution_vectors.shape[:-1], device=attribution_vectors.device) - weights[:,0] = 1.0 - elif bias_decomp_type == "dot": - weights = torch.einsum("bkd,d->bk", attribution_vectors, biastoken) - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.matmul(weights.unsqueeze(dim=-1), biastoken.unsqueeze(dim=1)) - return attribution_vectors + weighted_bias - - def ffn_decomposer(self, attribution_vectors, include_biases=True, bias_decomp_type="absdot"): - post_classifier = torch.einsum("ld,bkd->bkl", self.classifier.weight, attribution_vectors) - if include_biases: - post_classifier = 
self.bias_decomposer(self.classifier.bias, post_classifier, bias_decomp_type=bias_decomp_type) - - return post_classifier - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=SequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - decompx_config: Optional[DecompXConfig] = None, # added by Fayyaz / Modarressi - ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - decompx_config=decompx_config - ) - - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - - if decompx_config and decompx_config.include_classifier_w_pooler: - decompx_idx = -2 if decompx_config.output_all_layers else -1 - aggregated_attribution_vectors = outputs[decompx_idx].pooler - - outputs[decompx_idx].pooler = output_builder(aggregated_attribution_vectors, decompx_config.output_pooler) - - classifier_decomposed = self.ffn_decomposer( - attribution_vectors=aggregated_attribution_vectors, - include_biases=decompx_config.include_biases, - bias_decomp_type="biastoken" if decompx_config.include_bias_token else decompx_config.bias_decomp_type - ) - - if decompx_config.include_bias_token and decompx_config.bias_decomp_type is not None: - bias_token = classifier_decomposed[:,-1,:].detach().clone() - classifier_decomposed = classifier_decomposed[:,:-1,:] - classifier_decomposed = self.biastoken_decomposer( - bias_token, - classifier_decomposed, - bias_decomp_type=decompx_config.bias_decomp_type - ) - - - outputs[decompx_idx].classifier = classifier_decomposed if decompx_config.output_classifier else None - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == 
"single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output # (loss), logits, (hidden_states), (attentions) - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. - """, - BERT_START_DOCSTRING, -) -class BertForMultipleChoice(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, 1) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], MultipleChoiceModelOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., - num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. 
(See - `input_ids` above) - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1] - - input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None - attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None - token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None - position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None - inputs_embeds = ( - inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1)) - if inputs_embeds is not None - else None - ) - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - reshaped_logits = logits.view(-1, num_choices) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(reshaped_logits, labels) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return MultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. - """, - BERT_START_DOCSTRING, -) -class BertForTokenClassification(BertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.bert = BertModel(config, add_pooling_layer=False) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. 
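Back in `BertForSequenceClassification` above, `ffn_decomposer` projects each token's attribution vector through the classifier weights, yielding a per-token contribution to every class logit. A sketch with assumed toy sizes, checking that those contributions (plus the classifier bias) add up to the logits of the recomposed vector:

```python
import torch
from torch import nn

hidden, num_labels, seq = 8, 3, 5
classifier = nn.Linear(hidden, num_labels)
attribution_vectors = torch.randn(1, seq, hidden)  # per-token pieces of the aggregated [CLS] vector

per_token_logits = torch.einsum("ld,bkd->bkl", classifier.weight, attribution_vectors)
logits_from_sum = classifier(attribution_vectors.sum(dim=1))  # classify the recomposed vector

assert torch.allclose(per_token_logits.sum(dim=1) + classifier.bias, logits_from_sum, atol=1e-5)
```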
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - sequence_output = self.dropout(sequence_output) - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - BERT_START_DOCSTRING, -) -class BertForQuestionAnswering(BertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.bert = BertModel(config, add_pooling_layer=False) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=QuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - start_positions: Optional[torch.Tensor] = None, - end_positions: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/mrmocciai/rvc-genshin-v2/app.py b/spaces/mrmocciai/rvc-genshin-v2/app.py deleted file mode 100644 index 54a7a271879330f2b0dc1f254c2044e46b9a8fcf..0000000000000000000000000000000000000000 --- a/spaces/mrmocciai/rvc-genshin-v2/app.py +++ /dev/null @@ -1,680 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -spaces = os.getenv("SYSTEM") == "spaces" -force_support = None -if config.unsupported is False: - if config.device == "mps" or config.device == "cpu": - force_support = False -else: - force_support = True - -audio_mode = [] -f0method_mode = [] -f0method_info = "" - -if force_support is False or spaces is True: - if spaces is True: - audio_mode = ["Upload audio", "TTS Audio"] - else: - audio_mode = ["Input path", "Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow, Rvmpe is alternative to harvest (might be better). 
(Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, Rvmpe is alternative to harvest (might be better), and Crepe effect is good but requires GPU (Default: PM)" - -if os.path.isfile("rmvpe.pt"): - f0method_mode.insert(2, "rmvpe") - -def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - logs = [] - print(f"Converting using {model_name}...") - logs.append(f"Converting using {model_name}...") - yield "\n".join(logs), None - if vc_audio_mode == "Input path" or "Youtube" and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 20 and spaces: - return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and spaces: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_name} | {info}") - logs.append(f"Successfully Convert {model_name}\n{info}") - yield "\n".join(logs), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - yield info, None - return vc_fn - -def load_model(): - categories = [] - if os.path.isfile("weights/folder_info.json"): - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - description = category_info['description'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = 
torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, description, models]) - else: - categories = [] - return categories - -def download_audio(url, audio_provider): - logs = [] - if url == "": - raise gr.Error("URL Required!") - return "URL Required" - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") - if audio_provider == "Youtube": - logs.append("Downloading the audio...") - yield None, "\n".join(logs) - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/audio', - } - audio_path = "dl_audio/audio.wav" - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - logs.append("Download Complete.") - yield audio_path, "\n".join(logs) - -def cut_vocal_and_inst(split_model): - logs = [] - logs.append("Starting the audio splitting process...") - yield "\n".join(logs), None, None, None, None - command = f"demucs --two-stems=vocals -n {split_model} dl_audio/audio.wav -o output" - result = subprocess.Popen(command.split(), stdout=subprocess.PIPE, text=True) - for line in result.stdout: - logs.append(line) - yield "\n".join(logs), None, None, None, None - print(result.stdout) - vocal = f"output/{split_model}/audio/vocals.wav" - inst = f"output/{split_model}/audio/no_vocals.wav" - logs.append("Audio splitting complete.") - yield "\n".join(logs), vocal, inst, vocal - -def combine_vocal_and_inst(audio_data, vocal_volume, inst_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - inst_path = f"output/{split_model}/audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [0:a]volume={inst_volume}[i];[1:a]volume={vocal_volume}[v];[i][v]amix=inputs=2:duration=longest[a] -map [a] -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = 
models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Textbox.update(visible=True), - gr.Button.update(visible=True), - # Splitter - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -if __name__ == '__main__': - load_hubert() - categories = load_model() - 
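    # load_hubert() fills the global `hubert_model` that every conversion passes to
    # vc.pipeline(), and load_model() reads weights/folder_info.json to build the
    # per-character model list; the edge-tts voice catalogue below is fetched once at
    # startup so the TTS speaker dropdown can be populated before the UI is built.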
tts_voice_list = asyncio.new_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "
\n\n"+ - "# RVC V2 MODELS GENSHIN IMPACT\n\n"+ - "### Recommended to use Google Colab to use other character and feature.\n\n"+ - "#### All of this voice samples are taken from the game Genshin Impact, and all voice credits belong to hoyoverse.\n\n"+ - "##### NO COLAB! IM DONE WITH THAT SH*T!. \n\n"+ - "
\n\n"+ - "[![Repository](https://img.shields.io/badge/Github-Multi%20Model%20RVC%20Inference-blue?style=for-the-badge&logo=github)](https://github.com/ArkanDash/Multi-Model-RVC-Inference)\n\n"+ - "
" - ) - if categories == []: - gr.Markdown( - "
\n\n"+ - "## No model found, please add the model into weights folder\n\n"+ - "
" - ) - for (folder_title, folder, description, models) in categories: - with gr.TabItem(folder_title): - if description: - gr.Markdown(f"###
{description}") - with gr.Tabs(): - if not models: - gr.Markdown("#
No Model Loaded.") - gr.Markdown("##
Please add the model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
' - f'
{title}
\n'+ - f'
RVC {model_version} Model
\n'+ - (f'
Model author: {author}
' if author else "")+ - (f'' if cover else "")+ - '
' - ) - with gr.Row(): - if spaces is False: - with gr.TabItem("Input"): - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_log_yt = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_download_button = gr.Button("Download Audio", variant="primary", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(label="TTS text", info="Text to speech input", visible=False) - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["hdemucs_mmi", "htdemucs", "htdemucs_ft", "mdx", "mdx_q", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split_log = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - with gr.TabItem("Convert"): - with gr.Row(): - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_vocal_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=1, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 1}", - visible=False - ) - vc_inst_volume = gr.Slider( - minimum=0, - maximum=10, - label="Instrument volume", - value=1, - interactive=True, - step=1, - info="Adjust instrument volume (Default: 1}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - else: - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_log_yt = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_download_button = gr.Button("Download Audio", variant="primary", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # Splitter - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["hdemucs_mmi", "htdemucs", "htdemucs_ft", "mdx", "mdx_q", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split_log = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - # TTS - tts_text = gr.Textbox(label="TTS text", info="Text to speech input", visible=False) - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. 
Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_vocal_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=1, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 1}", - visible=False - ) - vc_inst_volume = gr.Slider( - minimum=0, - maximum=10, - label="Instrument volume", - value=1, - interactive=True, - step=1, - info="Adjust instrument volume (Default: 1}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_download_button.click( - fn=download_audio, - inputs=[vc_link, vc_download_audio], - outputs=[vc_audio_preview, vc_log_yt] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_split_model], - outputs=[vc_split_log, vc_vocal_preview, vc_inst_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_vocal_volume, vc_inst_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_microphone_mode, - vc_upload, - vc_download_audio, - vc_link, - vc_log_yt, - vc_download_button, - vc_split_model, - vc_split_log, - vc_split, - vc_audio_preview, - vc_vocal_preview, - vc_inst_preview, - vc_vocal_volume, - vc_inst_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git 
a/spaces/mrmocciai/rvc-genshin-v2/lib/infer_pack/transforms.py b/spaces/mrmocciai/rvc-genshin-v2/lib/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/mrmocciai/rvc-genshin-v2/lib/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - 
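    # What follows builds the monotonic rational-quadratic spline: bin widths and
    # heights are softmax-normalized, offset by the minimum bin size, and accumulated
    # into knot positions over [left, right] and [bottom, top]; the knot derivatives
    # come from a softplus (plus min_derivative) so they stay strictly positive.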
- if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op_gpu/fused_act.py b/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op_gpu/fused_act.py deleted 
file mode 100644 index 815eca1905b7962a2314f6af3b3ab5daeb74a009..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op_gpu/fused_act.py +++ /dev/null @@ -1,119 +0,0 @@ -import os - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -fused = load( - "fused", - sources=[ - os.path.join(module_path, "fused_bias_act.cpp"), - os.path.join(module_path, "fused_bias_act_kernel.cu"), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, bias, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output.contiguous(), empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - if bias: - grad_bias = grad_input.sum(dim).detach() - - else: - grad_bias = empty - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input.contiguous(), gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - - ctx.bias = bias is not None - - if bias is None: - bias = empty - - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.bias, ctx.negative_slope, ctx.scale - ) - - if not ctx.bias: - grad_bias = None - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, bias=True, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - if bias: - self.bias = nn.Parameter(torch.zeros(channel)) - - else: - self.bias = None - - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias=None, negative_slope=0.2, scale=2 ** 0.5): - if input.device.type == "cpu": - if bias is not None: - rest_dim = [1] * (input.ndim - bias.ndim - 1) - return ( - F.leaky_relu( - input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=0.2 - ) - * scale - ) - - else: - return F.leaky_relu(input, negative_slope=0.2) * scale - - else: - return FusedLeakyReLUFunction.apply(input.contiguous(), bias, negative_slope, scale) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/criss/download_and_preprocess_tatoeba.sh b/spaces/mshukor/UnIVAL/fairseq/examples/criss/download_and_preprocess_tatoeba.sh deleted file mode 100644 index 7ed64f017d5e62695ba73745c840507b994abc0f..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/criss/download_and_preprocess_tatoeba.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -SPM_ENCODE=flores/scripts/spm_encode.py -DATA=data_tmp -SPM_MODEL=criss_checkpoints/sentence.bpe.model -DICT=criss_checkpoints/dict.txt - -if [[ -f flores ]]; then - echo "flores already cloned" -else - git clone https://github.com/facebookresearch/flores -fi -if [[ -f LASER ]]; then - echo "LASER already cloned" -else - git clone https://github.com/facebookresearch/LASER -fi -mkdir -p data_tmp -declare -A lang_tatoeba_map=( ["ar_AR"]="ara" ["de_DE"]="deu" ["es_XX"]="spa" ["et_EE"]="est" ["fi_FI"]="fin" ["fr_XX"]="fra" ["hi_IN"]="hin" ["it_IT"]="ita" ["ja_XX"]="jpn" ["ko_KR"]="kor" ["kk_KZ"]="kaz" ["nl_XX"]="nld" ["ru_RU"]="rus" ["tr_TR"]="tur" ["vi_VN"]="vie" ["zh_CN"]="cmn") -for lang in ar_AR de_DE es_XX et_EE fi_FI fr_XX hi_IN it_IT ja_XX kk_KZ ko_KR nl_XX ru_RU tr_TR vi_VN zh_CN; do - lang_tatoeba=${lang_tatoeba_map[$lang]} - echo $lang_tatoeba - datadir=$DATA/${lang}-en_XX-tatoeba - rm -rf $datadir - mkdir -p $datadir - TEST_PREFIX=LASER/data/tatoeba/v1/tatoeba - python $SPM_ENCODE \ - --model ${SPM_MODEL} \ - --output_format=piece \ - --inputs ${TEST_PREFIX}.${lang_tatoeba}-eng.${lang_tatoeba} ${TEST_PREFIX}.${lang_tatoeba}-eng.eng \ - --outputs $datadir/test.bpe.${lang}-en_XX.${lang} $datadir/test.bpe.${lang}-en_XX.en_XX - - # binarize data - fairseq-preprocess \ - --source-lang ${lang} --target-lang en_XX \ - --testpref $datadir/test.bpe.${lang}-en_XX \ - --destdir $datadir \ - --srcdict ${DICT} \ - --joined-dictionary \ - --workers 4 -done diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/roberta/README.pretraining.md b/spaces/mshukor/UnIVAL/fairseq/examples/roberta/README.pretraining.md deleted file mode 100644 index a4e7453529111fdd198be637d911d1764cb96c0e..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/roberta/README.pretraining.md +++ /dev/null @@ -1,84 +0,0 @@ -# Pretraining RoBERTa using your own data - -This tutorial will walk you through pretraining RoBERTa over your own data. - -### 1) Preprocess the data - -Data should be preprocessed following the [language modeling format](/examples/language_model), i.e. each document should be separated by an empty line (only useful with `--sample-break-mode complete_doc`). Lines will be concatenated as a 1D text stream during training. - -We'll use the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/) -to demonstrate how to preprocess raw text data with the GPT-2 BPE. Of course -this dataset is quite small, so the resulting pretrained model will perform -poorly, but it gives the general idea. 
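For example (the text below is made up, only the layout matters), a raw training file in this format would look like:

```text
First document. It can run over
several lines of plain text.

Second document starts after the empty line.
```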
- -First download the dataset: -```bash -wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip -unzip wikitext-103-raw-v1.zip -``` - -Next encode it with the GPT-2 BPE: -```bash -mkdir -p gpt2_bpe -wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json -wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe -for SPLIT in train valid test; do \ - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json gpt2_bpe/encoder.json \ - --vocab-bpe gpt2_bpe/vocab.bpe \ - --inputs wikitext-103-raw/wiki.${SPLIT}.raw \ - --outputs wikitext-103-raw/wiki.${SPLIT}.bpe \ - --keep-empty \ - --workers 60; \ -done -``` - -Finally preprocess/binarize the data using the GPT-2 fairseq dictionary: -```bash -wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt -fairseq-preprocess \ - --only-source \ - --srcdict gpt2_bpe/dict.txt \ - --trainpref wikitext-103-raw/wiki.train.bpe \ - --validpref wikitext-103-raw/wiki.valid.bpe \ - --testpref wikitext-103-raw/wiki.test.bpe \ - --destdir data-bin/wikitext-103 \ - --workers 60 -``` - -### 2) Train RoBERTa base -```bash -DATA_DIR=data-bin/wikitext-103 - -fairseq-hydra-train -m --config-dir examples/roberta/config/pretraining \ ---config-name base task.data=$DATA_DIR -``` - -**Note:** You can optionally resume training the released RoBERTa base model by -adding `checkpoint.restore_file=/path/to/roberta.base/model.pt`. - -**Note:** The above command assumes training on 8x32GB V100 GPUs. Each GPU uses -a batch size of 16 sequences (`dataset.batch_size`) and accumulates gradients to -further increase the batch size by 16x (`optimization.update_freq`), for a total batch size -of 2048 sequences. If you have fewer GPUs or GPUs with less memory you may need -to reduce `dataset.batch_size` and increase dataset.update_freq to compensate. -Alternatively if you have more GPUs you can decrease `dataset.update_freq` accordingly -to increase training speed. - -**Note:** The learning rate and batch size are tightly connected and need to be -adjusted together. We generally recommend increasing the learning rate as you -increase the batch size according to the following table (although it's also -dataset dependent, so don't rely on the following values too closely): - -batch size | peak learning rate ----|--- -256 | 0.0001 -2048 | 0.0005 -8192 | 0.0007 - -### 3) Load your pretrained model -```python -from fairseq.models.roberta import RobertaModel -roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'path/to/data') -assert isinstance(roberta.model, torch.nn.Module) -``` diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/roberta/preprocess_RACE.py b/spaces/mshukor/UnIVAL/fairseq/examples/roberta/preprocess_RACE.py deleted file mode 100644 index cdd66072718ccb6033304c97926271909a17f9d6..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/roberta/preprocess_RACE.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
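# Example invocation (paths are placeholders; the only options are defined below):
#   python preprocess_RACE.py --input-dir <downloaded RACE dir> --output-dir <extracted dir>
# For each split (train, dev, test-middle, test-high) it writes <split>.input0 (the
# paragraph), <split>.input1 ... <split>.input4 (the question combined with each answer
# option) and <split>.label files.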
- -import argparse -import json -import os -import re - - -class InputExample: - def __init__(self, paragraph, qa_list, label): - self.paragraph = paragraph - self.qa_list = qa_list - self.label = label - - -def get_examples(data_dir, set_type): - """ - Extract paragraph and question-answer list from each json file - """ - examples = [] - - levels = ["middle", "high"] - set_type_c = set_type.split("-") - if len(set_type_c) == 2: - levels = [set_type_c[1]] - set_type = set_type_c[0] - for level in levels: - cur_dir = os.path.join(data_dir, set_type, level) - for filename in os.listdir(cur_dir): - cur_path = os.path.join(cur_dir, filename) - with open(cur_path, "r") as f: - cur_data = json.load(f) - answers = cur_data["answers"] - options = cur_data["options"] - questions = cur_data["questions"] - context = cur_data["article"].replace("\n", " ") - context = re.sub(r"\s+", " ", context) - for i in range(len(answers)): - label = ord(answers[i]) - ord("A") - qa_list = [] - question = questions[i] - for j in range(4): - option = options[i][j] - if "_" in question: - qa_cat = question.replace("_", option) - else: - qa_cat = " ".join([question, option]) - qa_cat = re.sub(r"\s+", " ", qa_cat) - qa_list.append(qa_cat) - examples.append(InputExample(context, qa_list, label)) - - return examples - - -def main(): - """ - Helper script to extract paragraphs questions and answers from RACE datasets. - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--input-dir", - help="input directory for downloaded RACE dataset", - ) - parser.add_argument( - "--output-dir", - help="output directory for extracted data", - ) - args = parser.parse_args() - - if not os.path.exists(args.output_dir): - os.makedirs(args.output_dir, exist_ok=True) - - for set_type in ["train", "dev", "test-middle", "test-high"]: - examples = get_examples(args.input_dir, set_type) - qa_file_paths = [ - os.path.join(args.output_dir, set_type + ".input" + str(i + 1)) - for i in range(4) - ] - qa_files = [open(qa_file_path, "w") for qa_file_path in qa_file_paths] - outf_context_path = os.path.join(args.output_dir, set_type + ".input0") - outf_label_path = os.path.join(args.output_dir, set_type + ".label") - outf_context = open(outf_context_path, "w") - outf_label = open(outf_label_path, "w") - for example in examples: - outf_context.write(example.paragraph + "\n") - for i in range(4): - qa_files[i].write(example.qa_list[i] + "\n") - outf_label.write(str(example.label) + "\n") - - for f in qa_files: - f.close() - outf_label.close() - outf_context.close() - - -if __name__ == "__main__": - main() diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/transformer_lm.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/transformer_lm.py deleted file mode 100644 index eedd5151ba5b1a7050b37639023cf8a158fae8d4..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/transformer_lm.py +++ /dev/null @@ -1,545 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
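# Decoder-only transformer language model for fairseq: TransformerLanguageModelConfig
# exposes the hyperparameters, TransformerLanguageModel wraps a TransformerDecoder built
# with no_encoder_attn=True, and the registered *_architecture functions further down
# only fill in named presets of these defaults.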
- - -from dataclasses import dataclass, field -from typing import Optional - -from fairseq import options, utils -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import ( - DEFAULT_MIN_PARAMS_TO_WRAP, Embedding, TransformerDecoder -) -from fairseq.modules import AdaptiveInput, CharacterTokenEmbedder -from fairseq.utils import safe_getattr, safe_hasattr -from omegaconf import II - - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - - -@dataclass -class TransformerLanguageModelConfig(FairseqDataclass): - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="relu", metadata={"help": "activation function to use"} - ) - dropout: float = field(default=0.1, metadata={"help": "dropout probability"}) - attention_dropout: float = field( - default=0.0, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN."} - ) - relu_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN."} - ) - decoder_embed_dim: int = field( - default=512, metadata={"help": "decoder embedding dimension"} - ) - decoder_output_dim: int = field( - default=512, metadata={"help": "decoder output dimension"} - ) - decoder_input_dim: int = field( - default=512, metadata={"help": "decoder input dimension"} - ) - decoder_ffn_embed_dim: int = field( - default=2048, metadata={"help": "decoder embedding dimension for FFN"} - ) - decoder_layers: int = field(default=6, metadata={"help": "num decoder layers"}) - decoder_attention_heads: int = field( - default=8, metadata={"help": "num decoder attention heads"} - ) - decoder_normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each decoder block"} - ) - no_decoder_final_norm: bool = field( - default=False, - metadata={"help": "don't add an extra layernorm after the last decoder block"}, - ) - adaptive_softmax_cutoff: Optional[str] = field( - default=None, - metadata={ - "help": "comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion" - }, - ) - adaptive_softmax_dropout: float = field( - default=0, - metadata={"help": "sets adaptive softmax dropout for the tail projections"}, - ) - adaptive_softmax_factor: float = field( - default=4, metadata={"help": "adaptive input factor"} - ) - no_token_positional_embeddings: bool = field( - default=False, - metadata={ - "help": "if set, disables positional embeddings (outside self attention)" - }, - ) - share_decoder_input_output_embed: bool = field( - default=False, metadata={"help": "share decoder input and output embeddings"} - ) - character_embeddings: bool = field( - default=False, - metadata={ - "help": "if set, uses character embedding convolutions to produce token embeddings" - }, - ) - character_filters: str = field( - default="[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]", - metadata={"help": "size of character embeddings"}, - ) - character_embedding_dim: int = field( - default=4, metadata={"help": "size of character embeddings"} - ) - char_embedder_highway_layers: int = field( - default=2, - metadata={"help": "number of highway layers for character token embeddder"}, - ) - adaptive_input: bool = field( - default=False, metadata={"help": "if set, uses adaptive input"} - ) - adaptive_input_factor: float = field( - default=4, metadata={"help": "adaptive input factor"} - ) - adaptive_input_cutoff: Optional[str] = field( - default=None, - metadata={"help": "comma separated list of adaptive input cutoff points."}, - ) - tie_adaptive_weights: bool = field( - default=False, - metadata={ - "help": "if set, ties the weights of adaptive softmax and adaptive input" - }, - ) - tie_adaptive_proj: bool = field( - default=False, - metadata={ - "help": "if set, ties the projection weights of adaptive softmax and adaptive input" - }, - ) - decoder_learned_pos: bool = field( - default=False, - metadata={"help": "use learned positional embeddings in the decoder"}, - ) - layernorm_embedding: bool = field( - default=False, metadata={"help": "add layernorm to embedding"} - ) - no_scale_embedding: bool = field( - default=False, metadata={"help": "if True, dont scale embeddings"} - ) - checkpoint_activations: bool = field( - default=False, metadata={"help": "checkpoint activations at each layer"} - ) - offload_activations: bool = field( - default=False, - metadata={"help": "move checkpointed activations to CPU after they are used."}, - ) - # config for "Reducing Transformer Depth on Demand with Structured Dropout" (Fan et al., 2019) - decoder_layerdrop: float = field( - default=0.0, metadata={"help": "LayerDrop probability for decoder"} - ) - decoder_layers_to_keep: Optional[str] = field( - default=None, - metadata={ - "help": "which layers to *keep* when pruning as a comma-separated list" - }, - ) - # config for Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020) - quant_noise_pq: float = field( - default=0.0, - metadata={"help": "iterative PQ quantization noise at training time"}, - ) - quant_noise_pq_block_size: int = field( - default=8, - metadata={"help": "block size of quantization noise at training time"}, - ) - quant_noise_scalar: float = field( - default=0.0, - metadata={ - "help": "scalar quantization noise and scalar quantization at training time" - }, - ) - # config for Fully Sharded Data Parallel (FSDP) training - min_params_to_wrap: int = field( - default=DEFAULT_MIN_PARAMS_TO_WRAP, - metadata={ - "help": ( - "minimum number of params for a layer to be wrapped with 
FSDP() when " - "training with --ddp-backend=fully_sharded. Smaller values will " - "improve memory efficiency, but may make torch.distributed " - "communication less efficient due to smaller input sizes. This option " - "is set to 0 (i.e., always wrap) when --checkpoint-activations or " - "--offload-activations are passed." - ) - } - ) - # config for "BASE Layers: Simplifying Training of Large, Sparse Models" - base_layers: Optional[int] = field( - default=0, metadata={"help": "number of BASE layers in total"} - ) - base_sublayers: Optional[int] = field( - default=1, metadata={"help": "number of sublayers in each BASE layer"} - ) - base_shuffle: Optional[int] = field( - default=1, metadata={"help": "shuffle tokens between workers before computing assignment"} - ) - # options from other parts of the config - add_bos_token: bool = II("task.add_bos_token") - tokens_per_sample: int = II("task.tokens_per_sample") - max_target_positions: Optional[int] = II("task.max_target_positions") - tpu: bool = II("common.tpu") - - -@register_model("transformer_lm", dataclass=TransformerLanguageModelConfig) -class TransformerLanguageModel(FairseqLanguageModel): - @classmethod - def hub_models(cls): - def moses_fastbpe(path): - return {"path": path, "tokenizer": "moses", "bpe": "fastbpe"} - - def spm(path): - return {"path": path, "tokenizer": "space", "bpe": "sentencepiece"} - - return { - "transformer_lm.gbw.adaptive_huge": "https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2", - "transformer_lm.wiki103.adaptive": "https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.v2.tar.bz2", - "transformer_lm.wmt19.en": moses_fastbpe( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.bz2" - ), - "transformer_lm.wmt19.de": moses_fastbpe( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.bz2" - ), - "transformer_lm.wmt19.ru": moses_fastbpe( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.bz2" - ), - "transformer_lm.wmt20.en": spm( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt20.en.tar.gz" - ), - "transformer_lm.wmt20.ta": spm( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt20.ta.tar.gz" - ), - "transformer_lm.wmt20.iu.news": spm( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt20.iu.news.tar.gz" - ), - "transformer_lm.wmt20.iu.nh": spm( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt20.iu.nh.tar.gz" - ), - } - - def __init__(self, decoder): - super().__init__(decoder) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - if args.decoder_layers_to_keep: - args.decoder_layers = len(args.decoder_layers_to_keep.split(",")) - - if safe_getattr(args, "max_target_positions", None) is None: - args.max_target_positions = safe_getattr( - args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS - ) - - if args.character_embeddings: - embed_tokens = CharacterTokenEmbedder( - task.source_dictionary, - eval(args.character_filters), - args.character_embedding_dim, - args.decoder_embed_dim, - args.char_embedder_highway_layers, - ) - elif args.adaptive_input: - embed_tokens = AdaptiveInput( - len(task.source_dictionary), - task.source_dictionary.pad(), - args.decoder_input_dim, - args.adaptive_input_factor, - args.decoder_embed_dim, - options.eval_str_list(args.adaptive_input_cutoff, type=int), - args.quant_noise_pq, - args.quant_noise_pq_block_size, - ) - else: - embed_tokens = cls.build_embedding( - args, task.source_dictionary, args.decoder_input_dim - ) - - if 
args.tie_adaptive_weights: - assert args.adaptive_input - assert args.adaptive_input_factor == args.adaptive_softmax_factor - assert ( - args.adaptive_softmax_cutoff == args.adaptive_input_cutoff - ), "{} != {}".format( - args.adaptive_softmax_cutoff, args.adaptive_input_cutoff - ) - assert args.decoder_input_dim == args.decoder_output_dim - - decoder = TransformerDecoder( - args, task.target_dictionary, embed_tokens, no_encoder_attn=True - ) - return cls(decoder) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - embed_tokens = Embedding(len(dictionary), embed_dim, dictionary.pad()) - return embed_tokens - - -def base_lm_architecture(args): - # backward compatibility for older model checkpoints - if safe_hasattr(args, "no_tie_adaptive_proj"): - # previous models defined --no-tie-adaptive-proj, so use the existence of - # that option to determine if this is an "old" model checkpoint - args.no_decoder_final_norm = True # old models always set this to True - if args.no_tie_adaptive_proj is False: - args.tie_adaptive_proj = True - if safe_hasattr(args, "decoder_final_norm"): - args.no_decoder_final_norm = not args.decoder_final_norm - - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.0) - - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 2048) - args.decoder_layers = safe_getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 8) - args.adaptive_softmax_cutoff = safe_getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = safe_getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = safe_getattr(args, "adaptive_softmax_factor", 4) - args.decoder_learned_pos = safe_getattr(args, "decoder_learned_pos", False) - args.activation_fn = safe_getattr(args, "activation_fn", "relu") - - args.decoder_layerdrop = safe_getattr(args, "decoder_layerdrop", 0) - args.decoder_layers_to_keep = safe_getattr(args, "decoder_layers_to_keep", None) - args.quant_noise_pq = safe_getattr(args, "quant_noise_pq", 0) - args.quant_noise_pq_block_size = safe_getattr(args, "quant_noise_pq_block_size", 8) - args.quant_noise_scalar = safe_getattr(args, "quant_noise_scalar", 0) - - args.base_layers = safe_getattr(args, "base_layers", 0) - args.base_sublayers = safe_getattr(args, "base_sublayers", 1) - args.base_shuffle = safe_getattr(args, "base_shuffle", False) - - args.add_bos_token = safe_getattr(args, "add_bos_token", False) - args.no_token_positional_embeddings = safe_getattr( - args, "no_token_positional_embeddings", False - ) - args.share_decoder_input_output_embed = safe_getattr( - args, "share_decoder_input_output_embed", False - ) - args.character_embeddings = safe_getattr(args, "character_embeddings", False) - - args.decoder_output_dim = safe_getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = safe_getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - # Model training is not stable without this - args.decoder_normalize_before = True - args.no_decoder_final_norm = safe_getattr(args, "no_decoder_final_norm", False) - - args.adaptive_input = safe_getattr(args, "adaptive_input", False) - args.adaptive_input_factor = safe_getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = safe_getattr(args, "adaptive_input_cutoff", None) - - 
args.tie_adaptive_weights = safe_getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = safe_getattr(args, "tie_adaptive_proj", False) - - args.no_scale_embedding = safe_getattr(args, "no_scale_embedding", False) - args.layernorm_embedding = safe_getattr(args, "layernorm_embedding", False) - args.checkpoint_activations = safe_getattr(args, "checkpoint_activations", False) - args.offload_activations = safe_getattr(args, "offload_activations", False) - if args.offload_activations: - args.checkpoint_activations = True - - -@register_model_architecture("transformer_lm", "transformer_lm_big") -def transformer_lm_big(args): - args.decoder_layers = safe_getattr(args, "decoder_layers", 12) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_wiki103") -@register_model_architecture("transformer_lm", "transformer_lm_baevski_wiki103") -def transformer_lm_baevski_wiki103(args): - args.decoder_layers = safe_getattr(args, "decoder_layers", 16) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 8) - args.dropout = safe_getattr(args, "dropout", 0.3) - args.adaptive_input = safe_getattr(args, "adaptive_input", True) - args.tie_adaptive_weights = safe_getattr(args, "tie_adaptive_weights", True) - args.adaptive_input_cutoff = safe_getattr(args, "adaptive_input_cutoff", "20000,60000") - args.adaptive_softmax_cutoff = safe_getattr( - args, "adaptive_softmax_cutoff", "20000,60000" - ) - args.adaptive_softmax_dropout = safe_getattr(args, "adaptive_softmax_dropout", 0.2) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_dropout = safe_getattr(args, "activation_dropout", 0.1) - args.no_decoder_final_norm = safe_getattr(args, "no_decoder_final_norm", True) - args.tie_adaptive_proj = safe_getattr(args, "tie_adaptive_proj", True) - transformer_lm_big(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gbw") -@register_model_architecture("transformer_lm", "transformer_lm_baevski_gbw") -def transformer_lm_baevski_gbw(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 512) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.no_decoder_final_norm = safe_getattr(args, "no_decoder_final_norm", True) - transformer_lm_big(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt") -def transformer_lm_gpt(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 768) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 3072) - args.decoder_layers = safe_getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 12) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt2_small") -def transformer_lm_gpt2_small(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_layers = 
safe_getattr(args, "decoder_layers", 24) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt2_tiny") -def transformer_lm_gpt2_tiny(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 64) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 64) - args.decoder_layers = safe_getattr(args, "decoder_layers", 2) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 1) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt2_medium") -def transformer_lm_gpt2_medium(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1280) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 5120) - args.decoder_layers = safe_getattr(args, "decoder_layers", 36) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 20) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt2_big") -def transformer_lm_gpt2_big(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1600) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 6400) - args.decoder_layers = safe_getattr(args, "decoder_layers", 48) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 25) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -def base_gpt3_architecture(args): - args.decoder_input_dim = args.decoder_embed_dim - args.decoder_output_dim = args.decoder_embed_dim - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", args.decoder_embed_dim * 4) - # GPT-3 used learned positional embeddings, rather than sinusoidal - args.decoder_learned_pos = safe_getattr(args, "decoder_learned_pos", True) - args.dropout = safe_getattr(args, "dropout", 0.0) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.0) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - args.share_decoder_input_output_embed = True - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_small") -def transformer_lm_gpt3_small(args): - # 125M params - args.decoder_layers = safe_getattr(args, "decoder_layers", 12) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 768) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 12) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_medium") -def transformer_lm_gpt3_medium(args): - # 350M params - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_embed_dim = safe_getattr(args, 
"decoder_embed_dim", 1024) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_large") -def transformer_lm_gpt3_large(args): - # 760M params - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1536) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_xl") -def transformer_lm_gpt3_xl(args): - # 1.3B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 2048) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 32) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_2_7") -def transformer_lm_gpt3_2_7(args): - # 2.7B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 32) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 2560) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 32) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_6_7") -def transformer_lm_gpt3_6_7(args): - # 6.7B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 32) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 4096) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 32) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_13") -def transformer_lm_gpt3_13(args): - # 13B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 40) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 5120) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 40) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_175") -def transformer_lm_gpt3_175(args): - # 175B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 96) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 12288) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 96) - base_gpt3_architecture(args) diff --git a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/google_app_engine/Dockerfile b/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/google_app_engine/Dockerfile deleted file mode 100644 index 0155618f475104e9858b81470339558156c94e13..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/google_app_engine/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM gcr.io/google-appengine/python - -# Create a virtualenv for dependencies. This isolates these packages from -# system-level packages. -# Use -p python3 or -p python3.7 to select python version. Default is version 2. -RUN virtualenv /env -p python3 - -# Setting these environment variables are the same as running -# source /env/bin/activate. -ENV VIRTUAL_ENV /env -ENV PATH /env/bin:$PATH - -RUN apt-get update && apt-get install -y python-opencv - -# Copy the application's requirements.txt and run pip to install all -# dependencies into the virtualenv. 
-ADD requirements.txt /app/requirements.txt -RUN pip install -r /app/requirements.txt - -# Add the application source code. -ADD . /app - -# Run a WSGI server to serve the application. gunicorn must be declared as -# a dependency in requirements.txt. -CMD gunicorn -b :$PORT main:app diff --git a/spaces/nathanTQ/ChatDev/camel/prompts/task_prompt_template.py b/spaces/nathanTQ/ChatDev/camel/prompts/task_prompt_template.py deleted file mode 100644 index b383b5b0df912febe858c673e2f7b0c582c63112..0000000000000000000000000000000000000000 --- a/spaces/nathanTQ/ChatDev/camel/prompts/task_prompt_template.py +++ /dev/null @@ -1,48 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from typing import Any, Dict - -from camel.prompts import ( - AISocietyPromptTemplateDict, - CodePromptTemplateDict, - EvaluationPromptTemplateDict, - MisalignmentPromptTemplateDict, - SolutionExtractionPromptTemplateDict, - TextPromptDict, - TranslationPromptTemplateDict, -) -from camel.typing import TaskType - - -class TaskPromptTemplateDict(Dict[Any, TextPromptDict]): - r"""A dictionary (:obj:`Dict[Any, TextPromptDict]`) of task prompt - templates keyed by task type. This dictionary is used to map from - a task type to its corresponding prompt template dictionary. - - Args: - *args: Positional arguments passed to the :obj:`dict` constructor. - **kwargs: Keyword arguments passed to the :obj:`dict` constructor. 
- """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.update({ - TaskType.AI_SOCIETY: AISocietyPromptTemplateDict(), - TaskType.CODE: CodePromptTemplateDict(), - TaskType.MISALIGNMENT: MisalignmentPromptTemplateDict(), - TaskType.TRANSLATION: TranslationPromptTemplateDict(), - TaskType.EVALUATION: EvaluationPromptTemplateDict(), - TaskType.SOLUTION_EXTRACTION: SolutionExtractionPromptTemplateDict(), - # TaskType.CHATDEV: ChatDevPromptTemplateDict(), - }) diff --git a/spaces/nathanTQ/ChatDev/chatdev/chat_env.py b/spaces/nathanTQ/ChatDev/chatdev/chat_env.py deleted file mode 100644 index fe518813987db8b939bd4daaac55ea12330e72f2..0000000000000000000000000000000000000000 --- a/spaces/nathanTQ/ChatDev/chatdev/chat_env.py +++ /dev/null @@ -1,245 +0,0 @@ -import os -import re -import shutil -import signal -import subprocess -import time -from typing import Dict - -import openai -import requests - -from chatdev.codes import Codes -from chatdev.documents import Documents -from chatdev.roster import Roster -from chatdev.utils import log_and_print_online - - -class ChatEnvConfig: - def __init__(self, clear_structure, - brainstorming, - gui_design, - git_management): - self.clear_structure = clear_structure - self.brainstorming = brainstorming - self.gui_design = gui_design - self.git_management = git_management - - def __str__(self): - string = "" - string += "ChatEnvConfig.clear_structure: {}\n".format(self.clear_structure) - string += "ChatEnvConfig.brainstorming: {}\n".format(self.brainstorming) - return string - - -class ChatEnv: - def __init__(self, chat_env_config: ChatEnvConfig): - self.config = chat_env_config - self.roster: Roster = Roster() - self.codes: Codes = Codes() - self.proposed_images: Dict[str, str] = {} - self.incorporated_images: Dict[str, str] = {} - self.requirements: Documents = Documents() - self.manuals: Documents = Documents() - self.env_dict = { - "directory": "", - "task_prompt": "", - "modality": "", - "ideas": "", - "language": "", - "review_comments": "", - "error_summary": "", - "test_reports": "" - } - - @staticmethod - def fix_module_not_found_error(test_reports): - if "ModuleNotFoundError" in test_reports: - for match in re.finditer(r"No module named '(\S+)'", test_reports, re.DOTALL): - module = match.group(1) - subprocess.Popen("pip install {}".format(module), shell=True).wait() - log_and_print_online("**[CMD Execute]**\n\n[CMD] pip install {}".format(module)) - - def set_directory(self, directory): - assert len(self.env_dict['directory']) == 0 - self.env_dict['directory'] = directory - self.codes.directory = directory - self.requirements.directory = directory - self.manuals.directory = directory - - if os.path.exists(self.env_dict['directory']) and len(os.listdir(directory)) > 0: - new_directory = "{}.{}".format(directory, time.strftime("%Y%m%d%H%M%S", time.localtime())) - shutil.copytree(directory, new_directory) - print("{} Copied to {}".format(directory, new_directory)) - if self.config.clear_structure: - if os.path.exists(self.env_dict['directory']): - shutil.rmtree(self.env_dict['directory']) - os.mkdir(self.env_dict['directory']) - print("{} Created".format(directory)) - else: - os.mkdir(self.env_dict['directory']) - - def exist_bugs(self) -> tuple[bool, str]: - directory = self.env_dict['directory'] - - success_info = "The software run successfully without errors." 
- try: - command = "cd {}; ls -l; python3 main.py;".format(directory) - process = subprocess.Popen(command, shell=True, preexec_fn=os.setsid, - stdout=subprocess.PIPE, stderr=subprocess.PIPE) - time.sleep(3) - return_code = process.returncode - # Check if the software is still running - if process.poll() is None: - os.killpg(os.getpgid(process.pid), signal.SIGTERM) - if return_code == 0: - return False, success_info - else: - error_output = process.stderr.read().decode('utf-8') - if error_output: - if "Traceback".lower() in error_output.lower(): - errs = error_output.replace(directory + "/", "") - return True, errs - else: - return False, success_info - except subprocess.CalledProcessError as e: - return True, f"Error: {e}" - except Exception as ex: - return True, f"An error occurred: {ex}" - - return False, success_info - - def recruit(self, agent_name: str): - self.roster._recruit(agent_name) - - def exist_employee(self, agent_name: str) -> bool: - return self.roster._exist_employee(agent_name) - - def print_employees(self): - self.roster._print_employees() - - def update_codes(self, generated_content): - self.codes._update_codes(generated_content) - - def rewrite_codes(self) -> None: - self.codes._rewrite_codes(self.config.git_management) - - def get_codes(self) -> str: - return self.codes._get_codes() - - def _load_from_hardware(self, directory) -> None: - self.codes._load_from_hardware(directory) - - def _update_requirements(self, generated_content): - self.requirements._update_docs(generated_content) - - def rewrite_requirements(self): - self.requirements._rewrite_docs() - - def get_requirements(self) -> str: - return self.requirements._get_docs() - - def _update_manuals(self, generated_content): - self.manuals._update_docs(generated_content, parse=False, predifined_filename="manual.md") - - def rewrite_manuals(self): - self.manuals._rewrite_docs() - - def write_meta(self) -> None: - directory = self.env_dict['directory'] - - if not os.path.exists(directory): - os.mkdir(directory) - print("{} Created.".format(directory)) - - meta_filename = "meta.txt" - with open(os.path.join(directory, meta_filename), "w", encoding="utf-8") as writer: - writer.write("{}:\n{}\n\n".format("Task", self.env_dict['task_prompt'])) - writer.write("{}:\n{}\n\n".format("Config", self.config.__str__())) - writer.write("{}:\n{}\n\n".format("Roster", ", ".join(self.roster.agents))) - writer.write("{}:\n{}\n\n".format("Modality", self.env_dict['modality'])) - writer.write("{}:\n{}\n\n".format("Ideas", self.env_dict['ideas'])) - writer.write("{}:\n{}\n\n".format("Language", self.env_dict['language'])) - writer.write("{}:\n{}\n\n".format("Code_Version", self.codes.version)) - writer.write("{}:\n{}\n\n".format("Proposed_images", len(self.proposed_images.keys()))) - writer.write("{}:\n{}\n\n".format("Incorporated_images", len(self.incorporated_images.keys()))) - print(os.path.join(directory, meta_filename), "Wrote") - - def generate_images_from_codes(self): - def download(img_url, file_name): - r = requests.get(img_url) - filepath = os.path.join(self.env_dict['directory'], file_name) - if os.path.exists(filepath): - os.remove(filepath) - with open(filepath, "wb") as f: - f.write(r.content) - print("{} Downloaded".format(filepath)) - - regex = r"(\w+.png)" - joined_codes = self.get_codes() - matches = re.finditer(regex, joined_codes, re.DOTALL) - # matched_images = {} - for match in matches: - filename = match.group(1).strip() - if filename in self.proposed_images.keys(): - self.incorporated_images[filename] = 
self.proposed_images[filename] - else: - self.incorporated_images[filename] = filename.replace("_", " ") - - for filename in self.incorporated_images.keys(): - if not os.path.exists(os.path.join(self.env_dict['directory'], filename)): - desc = self.incorporated_images[filename] - if desc.endswith(".png"): - desc = desc.replace(".png", "") - print("{}: {}".format(filename, desc)) - response = openai.Image.create( - prompt=desc, - n=1, - size="256x256" - ) - image_url = response['data'][0]['url'] - download(image_url, filename) - - def get_proposed_images_from_message(self, messages): - def download(img_url, file_name): - r = requests.get(img_url) - filepath = os.path.join(self.env_dict['directory'], file_name) - if os.path.exists(filepath): - os.remove(filepath) - with open(filepath, "wb") as f: - f.write(r.content) - print("{} Downloaded".format(filepath)) - - regex = r"(\w+.png):(.*?)\n" - matches = re.finditer(regex, messages, re.DOTALL) - images = {} - for match in matches: - filename = match.group(1).strip() - desc = match.group(2).strip() - images[filename] = desc - - if len(images.keys()) == 0: - regex = r"(\w+.png)" - matches = re.finditer(regex, messages, re.DOTALL) - images = {} - for match in matches: - filename = match.group(1).strip() - desc = " ".join(filename.replace(".png", "").split("_")) - images[filename] = desc - print("{}: {}".format(filename, images[filename])) - - for filename in images.keys(): - if not os.path.exists(os.path.join(self.env_dict['directory'], filename)): - desc = images[filename] - if desc.endswith(".png"): - desc = desc.replace(".png", "") - print("{}: {}".format(filename, desc)) - response = openai.Image.create( - prompt=desc, - n=1, - size="256x256" - ) - image_url = response['data'][0]['url'] - download(image_url, filename) - - return images diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hollow Knight Mac Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hollow Knight Mac Download.md deleted file mode 100644 index 9774cdc439b3ca42f2e67acc2f2508d3a95ed244..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hollow Knight Mac Download.md +++ /dev/null @@ -1,72 +0,0 @@ - -

How to Download and Play Hollow Knight on Mac

-

Hollow Knight is a critically acclaimed 2D action-adventure game that takes you to a vast and mysterious kingdom of insects and heroes. You can explore twisting caverns, battle tainted creatures, befriend bizarre bugs, and uncover ancient secrets, all in a hand-drawn 2D style.

-

Hollow Knight Mac Download


Downloadhttps://urlcod.com/2uIctF



-

If you are a Mac user, you may be wondering how to get Hollow Knight running on your machine. There are a few options: the Steam and GOG.com versions of the game, or running the Windows version through Wine. Each is covered below.

-

Option 1: Steam

-

One of the easiest ways to download and play Hollow Knight on Mac is through Steam, a popular digital distribution platform for games. Steam offers a cross-platform feature that allows you to buy the game once and play it on any supported device, including Mac.

-

To download and play Hollow Knight on Mac through Steam, you need to follow these steps:

-

-
    -
  1. Create a Steam account or log in to your existing one.
  2. Download and install the Steam app on your Mac.
  3. Search for Hollow Knight on the Steam store and purchase it.
  4. Go to your library and click on Hollow Knight to start the download.
  5. Once the download is complete, click on Hollow Knight again to launch the game.
-

Note that you need to have at least 9 GB of free space on your Mac and meet the minimum system requirements for Hollow Knight, which are:

-
    -
  • OS: macOS 10.9 or later
  • Processor: Intel Core i5 or later
  • Memory: 4 GB RAM
  • Graphics: Intel HD Graphics 4000 or later
  • Storage: 9 GB available space
-

Option 2: GOG.com

-

Another option to download and play Hollow Knight on Mac is through GOG.com, a DRM-free digital distribution platform for games. GOG.com also offers a cross-platform feature that allows you to buy the game once and play it on any supported device, including Mac.

-

To download and play Hollow Knight on Mac through GOG.com, you need to follow these steps:

-
    -
  1. Create a GOG.com account or log in to your existing one.
  2. Download and install the GOG Galaxy app on your Mac.
  3. Search for Hollow Knight on the GOG.com store and purchase it.
  4. Go to your library and click on Hollow Knight to start the download.
  5. Once the download is complete, click on Hollow Knight again to launch the game.
-

Note that you need to have at least 9 GB of free space on your Mac and meet the minimum system requirements for Hollow Knight, which are:

-
    -
  • OS: macOS 10.9 or later
  • Processor: Intel Core i5 or later
  • Memory: 4 GB RAM
  • Graphics: Intel HD Graphics 4000 or later
  • Storage: 9 GB available space
- -

Option 3: Wine

- -

A third option to download and play Hollow Knight on Mac is through Wine, a compatibility layer that allows you to run Windows applications on other operating systems such as macOS. Wine is not an emulator: instead of simulating Windows, it translates Windows API calls into native macOS calls while the program runs.

- -

To download and play Hollow Knight on Mac through Wine, follow these steps (a consolidated command-line sketch is given after the list):

- -
    - -
  1. Download and install Wine from its official website.
  2. Download the Windows version of Hollow Knight from any source (such as Steam or GOG.com).
  3. Create a new Wine prefix (a virtual Windows environment) by running this command in Terminal: winecfg
  4. Select Windows 10 as the Windows version in the Wine configuration window.
  5. Navigate to the folder where you downloaded Hollow Knight and run this command in Terminal: wine setup_hollow_knight.exe
  6. Follow the installation wizard to install Hollow Knight in your Wine prefix.
  7. Navigate to the folder where you installed Hollow Knight and run this command in Terminal: w
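For reference, the steps above can be condensed into the Terminal sketch below. This is only an illustration: the installer name setup_hollow_knight.exe is taken from step 5, the final launch command is cut off in the source text, and the install location and executable name used here are assumptions you may need to adjust.

    # open the Wine configuration dialog; this creates the default prefix, then pick Windows 10
    winecfg

    # run the Windows installer that was downloaded earlier (file name assumed from the steps above)
    cd ~/Downloads
    wine setup_hollow_knight.exe

    # launch the installed game through Wine
    # (the source truncates this step; the path and executable name below are assumed)
    cd ~/".wine/drive_c/Games/Hollow Knight"
    wine hollow_knight.exe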

    -
    -
    \ No newline at end of file diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/FGT/models/utils/flow_losses.py b/spaces/oguzakif/video-object-remover/FGT_codes/FGT/models/utils/flow_losses.py deleted file mode 100644 index d1a266bdd6d85fcd0aeb6574ee62bda6b6a242b5..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/FGT/models/utils/flow_losses.py +++ /dev/null @@ -1,517 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models -import numpy as np -from .fbConsistencyCheck import image_warp - - -class FlowWarpingLoss(nn.Module): - def __init__(self, metric): - super(FlowWarpingLoss, self).__init__() - self.metric = metric - - def warp(self, x, flow): - """ - - Args: - x: torch tensor with shape [b, c, h, w], the x can be 3 (for rgb frame) or 2 (for optical flow) - flow: torch tensor with shape [b, 2, h, w] - - Returns: the warped x (can be an image or an optical flow) - - """ - h, w = x.shape[2:] - device = x.device - # normalize the flow to [-1~1] - flow = torch.cat([flow[:, 0:1, :, :] / ((w - 1) / 2), flow[:, 1:2, :, :] / ((h - 1) / 2)], dim=1) - flow = flow.permute(0, 2, 3, 1) # change to [b, h, w, c] - # generate meshgrid - x_idx = np.linspace(-1, 1, w) - y_idx = np.linspace(-1, 1, h) - X_idx, Y_idx = np.meshgrid(x_idx, y_idx) - grid = torch.cat((torch.from_numpy(X_idx.astype('float32')).unsqueeze(0).unsqueeze(3), - torch.from_numpy(Y_idx.astype('float32')).unsqueeze(0).unsqueeze(3)), 3).to(device) - output = torch.nn.functional.grid_sample(x, grid + flow, mode='bilinear', padding_mode='zeros') - return output - - def __call__(self, x, y, flow, mask): - """ - image/flow warping, only support the single image/flow warping - Args: - x: Can be optical flow or image with shape [b, c, h, w], c can be 2 or 3 - y: The ground truth of x (can be the extracted optical flow or image) - flow: The flow used to warp x, whose shape is [b, 2, h, w] - mask: The mask which indicates the hole of x, which must be [b, 1, h, w] - - Returns: the warped image/optical flow - - """ - warped_x = self.warp(x, flow) - loss = self.metric(warped_x * mask, y * mask) - return loss - - -class TVLoss(): - # shift one pixel to get difference ( for both x and y direction) - def __init__(self): - super(TVLoss, self).__init__() - - def __call__(self, x): - loss = torch.mean(torch.abs(x[:, :, :, :-1] - x[:, :, :, 1:])) + torch.mean( - torch.abs(x[:, :, :-1, :] - x[:, :, 1:, :])) - return loss - - -class WarpLoss(nn.Module): - def __init__(self): - super(WarpLoss, self).__init__() - self.metric = nn.L1Loss() - - def forward(self, flow, mask, img1, img2): - """ - - Args: - flow: flow indicates the motion from img1 to img2 - mask: mask corresponds to img1 - img1: frame 1 - img2: frame t+1 - - Returns: warp loss from img2 to img1 - - """ - img2_warped = image_warp(img2, flow) - loss = self.metric(img2_warped * mask, img1 * mask) - return loss - - -class AdversarialLoss(nn.Module): - r""" - Adversarial loss - https://arxiv.org/abs/1711.10337 - """ - - def __init__(self, type='nsgan', target_real_label=1.0, target_fake_label=0.0): - r""" - type = nsgan | lsgan | hinge - """ - super(AdversarialLoss, self).__init__() - - self.type = type - self.register_buffer('real_label', torch.tensor(target_real_label)) - self.register_buffer('fake_label', torch.tensor(target_fake_label)) - - if type == 'nsgan': - self.criterion = nn.BCELoss() - - elif type == 'lsgan': - self.criterion = nn.MSELoss() - - elif type == 'hinge': - 
self.criterion = nn.ReLU() - - def __call__(self, outputs, is_real, is_disc=None): - if self.type == 'hinge': - if is_disc: - if is_real: - outputs = -outputs - return self.criterion(1 + outputs).mean() - else: - return (-outputs).mean() - - else: - labels = (self.real_label if is_real else self.fake_label).expand_as(outputs) - loss = self.criterion(outputs, labels) - return loss - - -class StyleLoss(nn.Module): - r""" - Perceptual loss, VGG-based - https://arxiv.org/abs/1603.08155 - https://github.com/dxyang/StyleTransfer/blob/master/utils.py - """ - - def __init__(self): - super(StyleLoss, self).__init__() - self.add_module('vgg', VGG19()) - self.criterion = torch.nn.L1Loss() - - def compute_gram(self, x): - b, ch, h, w = x.size() - f = x.view(b, ch, w * h) - f_T = f.transpose(1, 2) - G = f.bmm(f_T) / (h * w * ch) - - return G - - def __call__(self, x, y): - # Compute features - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - - # Compute loss - style_loss = 0.0 - style_loss += self.criterion(self.compute_gram(x_vgg['relu2_2']), self.compute_gram(y_vgg['relu2_2'])) - style_loss += self.criterion(self.compute_gram(x_vgg['relu3_4']), self.compute_gram(y_vgg['relu3_4'])) - style_loss += self.criterion(self.compute_gram(x_vgg['relu4_4']), self.compute_gram(y_vgg['relu4_4'])) - style_loss += self.criterion(self.compute_gram(x_vgg['relu5_2']), self.compute_gram(y_vgg['relu5_2'])) - - return style_loss - - -class PerceptualLoss(nn.Module): - r""" - Perceptual loss, VGG-based - https://arxiv.org/abs/1603.08155 - https://github.com/dxyang/StyleTransfer/blob/master/utils.py - """ - - def __init__(self, weights=[1.0, 1.0, 1.0, 1.0, 1.0]): - super(PerceptualLoss, self).__init__() - self.add_module('vgg', VGG19()) - self.criterion = torch.nn.L1Loss() - self.weights = weights - - def __call__(self, x, y): - # Compute features - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - - content_loss = 0.0 - content_loss += self.weights[0] * self.criterion(x_vgg['relu1_1'], y_vgg['relu1_1']) - content_loss += self.weights[1] * self.criterion(x_vgg['relu2_1'], y_vgg['relu2_1']) - content_loss += self.weights[2] * self.criterion(x_vgg['relu3_1'], y_vgg['relu3_1']) - content_loss += self.weights[3] * self.criterion(x_vgg['relu4_1'], y_vgg['relu4_1']) - content_loss += self.weights[4] * self.criterion(x_vgg['relu5_1'], y_vgg['relu5_1']) - - return content_loss - - -class VGG19(torch.nn.Module): - def __init__(self): - super(VGG19, self).__init__() - features = models.vgg19(pretrained=True).features - self.relu1_1 = torch.nn.Sequential() - self.relu1_2 = torch.nn.Sequential() - - self.relu2_1 = torch.nn.Sequential() - self.relu2_2 = torch.nn.Sequential() - - self.relu3_1 = torch.nn.Sequential() - self.relu3_2 = torch.nn.Sequential() - self.relu3_3 = torch.nn.Sequential() - self.relu3_4 = torch.nn.Sequential() - - self.relu4_1 = torch.nn.Sequential() - self.relu4_2 = torch.nn.Sequential() - self.relu4_3 = torch.nn.Sequential() - self.relu4_4 = torch.nn.Sequential() - - self.relu5_1 = torch.nn.Sequential() - self.relu5_2 = torch.nn.Sequential() - self.relu5_3 = torch.nn.Sequential() - self.relu5_4 = torch.nn.Sequential() - - for x in range(2): - self.relu1_1.add_module(str(x), features[x]) - - for x in range(2, 4): - self.relu1_2.add_module(str(x), features[x]) - - for x in range(4, 7): - self.relu2_1.add_module(str(x), features[x]) - - for x in range(7, 9): - self.relu2_2.add_module(str(x), features[x]) - - for x in range(9, 12): - self.relu3_1.add_module(str(x), features[x]) - - for x in range(12, 14): - 
self.relu3_2.add_module(str(x), features[x]) - - for x in range(14, 16): - self.relu3_3.add_module(str(x), features[x]) - - for x in range(16, 18): - self.relu3_4.add_module(str(x), features[x]) - - for x in range(18, 21): - self.relu4_1.add_module(str(x), features[x]) - - for x in range(21, 23): - self.relu4_2.add_module(str(x), features[x]) - - for x in range(23, 25): - self.relu4_3.add_module(str(x), features[x]) - - for x in range(25, 27): - self.relu4_4.add_module(str(x), features[x]) - - for x in range(27, 30): - self.relu5_1.add_module(str(x), features[x]) - - for x in range(30, 32): - self.relu5_2.add_module(str(x), features[x]) - - for x in range(32, 34): - self.relu5_3.add_module(str(x), features[x]) - - for x in range(34, 36): - self.relu5_4.add_module(str(x), features[x]) - - # don't need the gradients, just want the features - for param in self.parameters(): - param.requires_grad = False - - def forward(self, x): - relu1_1 = self.relu1_1(x) - relu1_2 = self.relu1_2(relu1_1) - - relu2_1 = self.relu2_1(relu1_2) - relu2_2 = self.relu2_2(relu2_1) - - relu3_1 = self.relu3_1(relu2_2) - relu3_2 = self.relu3_2(relu3_1) - relu3_3 = self.relu3_3(relu3_2) - relu3_4 = self.relu3_4(relu3_3) - - relu4_1 = self.relu4_1(relu3_4) - relu4_2 = self.relu4_2(relu4_1) - relu4_3 = self.relu4_3(relu4_2) - relu4_4 = self.relu4_4(relu4_3) - - relu5_1 = self.relu5_1(relu4_4) - relu5_2 = self.relu5_2(relu5_1) - relu5_3 = self.relu5_3(relu5_2) - relu5_4 = self.relu5_4(relu5_3) - - out = { - 'relu1_1': relu1_1, - 'relu1_2': relu1_2, - - 'relu2_1': relu2_1, - 'relu2_2': relu2_2, - - 'relu3_1': relu3_1, - 'relu3_2': relu3_2, - 'relu3_3': relu3_3, - 'relu3_4': relu3_4, - - 'relu4_1': relu4_1, - 'relu4_2': relu4_2, - 'relu4_3': relu4_3, - 'relu4_4': relu4_4, - - 'relu5_1': relu5_1, - 'relu5_2': relu5_2, - 'relu5_3': relu5_3, - 'relu5_4': relu5_4, - } - return out - - -# Some losses related to optical flows -# From Unflow: https://github.com/simonmeister/UnFlow -def fbLoss(forward_flow, backward_flow, forward_gt_flow, backward_gt_flow, fb_loss_weight, image_warp_loss_weight=0, - occ_weight=0, beta=255, first_image=None, second_image=None): - """ - calculate the forward-backward consistency loss and the related image warp loss - Args: - forward_flow: torch tensor, with shape [b, c, h, w] - backward_flow: torch tensor, with shape [b, c, h, w] - forward_gt_flow: the ground truth of the forward flow (used for occlusion calculation) - backward_gt_flow: the ground truth of the backward flow (used for occlusion calculation) - fb_loss_weight: loss weight for forward-backward consistency check between two frames - image_warp_loss_weight: loss weight for image warping - occ_weight: loss weight for occlusion area (serve as a punishment for image warp loss) - beta: 255 by default, according to the original loss codes in unflow - first_image: the previous image (extraction for the optical flows) - second_image: the later image (extraction for the optical flows) - Note: forward and backward flow should be extracted from the same image pair - Returns: forward backward consistency loss between forward and backward flow - - """ - mask_fw = create_outgoing_mask(forward_flow).float() - mask_bw = create_outgoing_mask(backward_flow).float() - - # forward warp backward flow and backward forward flow to calculate the cycle consistency - forward_flow_warped = image_warp(forward_flow, backward_gt_flow) - forward_flow_warped_gt = image_warp(forward_gt_flow, backward_gt_flow) - backward_flow_warped = image_warp(backward_flow, 
forward_gt_flow) - backward_flow_warped_gt = image_warp(backward_gt_flow, forward_gt_flow) - flow_diff_fw = backward_flow_warped + forward_flow - flow_diff_fw_gt = backward_flow_warped_gt + forward_gt_flow - flow_diff_bw = backward_flow + forward_flow_warped - flow_diff_bw_gt = backward_gt_flow + forward_flow_warped_gt - - # occlusion calculation - mag_sq_fw = length_sq(forward_gt_flow) + length_sq(backward_flow_warped_gt) - mag_sq_bw = length_sq(backward_gt_flow) + length_sq(forward_flow_warped_gt) - occ_thresh_fw = 0.01 * mag_sq_fw + 0.5 - occ_thresh_bw = 0.01 * mag_sq_bw + 0.5 - - fb_occ_fw = (length_sq(flow_diff_fw_gt) > occ_thresh_fw).float() - fb_occ_bw = (length_sq(flow_diff_bw_gt) > occ_thresh_bw).float() - - mask_fw *= (1 - fb_occ_fw) - mask_bw *= (1 - fb_occ_bw) - - occ_fw = 1 - mask_fw - occ_bw = 1 - mask_bw - - if image_warp_loss_weight != 0: - # warp images - second_image_warped = image_warp(second_image, forward_flow) # frame 2 -> 1 - first_image_warped = image_warp(first_image, backward_flow) # frame 1 -> 2 - im_diff_fw = first_image - second_image_warped - im_diff_bw = second_image - first_image_warped - # calculate the image warp loss based on the occlusion regions calculated by forward and backward flows (gt) - occ_loss = occ_weight * (charbonnier_loss(occ_fw) + charbonnier_loss(occ_bw)) - image_warp_loss = image_warp_loss_weight * ( - charbonnier_loss(im_diff_fw, mask_fw, beta=beta) + charbonnier_loss(im_diff_bw, mask_bw, - beta=beta)) + occ_loss - else: - image_warp_loss = 0 - fb_loss = fb_loss_weight * (charbonnier_loss(flow_diff_fw, mask_fw) + charbonnier_loss(flow_diff_bw, mask_bw)) - return fb_loss + image_warp_loss - - -def length_sq(x): - return torch.sum(torch.square(x), 1, keepdim=True) - - -def smoothness_loss(flow, cmask): - delta_u, delta_v, mask = smoothness_deltas(flow) - loss_u = charbonnier_loss(delta_u, cmask) - loss_v = charbonnier_loss(delta_v, cmask) - return loss_u + loss_v - - -def smoothness_deltas(flow): - """ - flow: [b, c, h, w] - """ - mask_x = create_mask(flow, [[0, 0], [0, 1]]) - mask_y = create_mask(flow, [[0, 1], [0, 0]]) - mask = torch.cat((mask_x, mask_y), dim=1) - mask = mask.to(flow.device) - filter_x = torch.tensor([[0, 0, 0.], [0, 1, -1], [0, 0, 0]]) - filter_y = torch.tensor([[0, 0, 0.], [0, 1, 0], [0, -1, 0]]) - weights = torch.ones([2, 1, 3, 3]) - weights[0, 0] = filter_x - weights[1, 0] = filter_y - weights = weights.to(flow.device) - - flow_u, flow_v = torch.split(flow, split_size_or_sections=1, dim=1) - delta_u = F.conv2d(flow_u, weights, stride=1, padding=1) - delta_v = F.conv2d(flow_v, weights, stride=1, padding=1) - return delta_u, delta_v, mask - - -def second_order_loss(flow, cmask): - delta_u, delta_v, mask = second_order_deltas(flow) - loss_u = charbonnier_loss(delta_u, cmask) - loss_v = charbonnier_loss(delta_v, cmask) - return loss_u + loss_v - - -def charbonnier_loss(x, mask=None, truncate=None, alpha=0.45, beta=1.0, epsilon=0.001): - """ - Compute the generalized charbonnier loss of the difference tensor x - All positions where mask == 0 are not taken into account - x: a tensor of shape [b, c, h, w] - mask: a mask of shape [b, mc, h, w], where mask channels must be either 1 or the same as - the number of channels of x. 
Entries should be 0 or 1 - return: loss - """ - b, c, h, w = x.shape - norm = b * c * h * w - error = torch.pow(torch.square(x * beta) + torch.square(torch.tensor(epsilon)), alpha) - if mask is not None: - error = mask * error - if truncate is not None: - error = torch.min(error, truncate) - return torch.sum(error) / norm - - -def second_order_deltas(flow): - """ - consider the single flow first - flow shape: [b, c, h, w] - """ - # create mask - mask_x = create_mask(flow, [[0, 0], [1, 1]]) - mask_y = create_mask(flow, [[1, 1], [0, 0]]) - mask_diag = create_mask(flow, [[1, 1], [1, 1]]) - mask = torch.cat((mask_x, mask_y, mask_diag, mask_diag), dim=1) - mask = mask.to(flow.device) - - filter_x = torch.tensor([[0, 0, 0.], [1, -2, 1], [0, 0, 0]]) - filter_y = torch.tensor([[0, 1, 0.], [0, -2, 0], [0, 1, 0]]) - filter_diag1 = torch.tensor([[1, 0, 0.], [0, -2, 0], [0, 0, 1]]) - filter_diag2 = torch.tensor([[0, 0, 1.], [0, -2, 0], [1, 0, 0]]) - weights = torch.ones([4, 1, 3, 3]) - weights[0] = filter_x - weights[1] = filter_y - weights[2] = filter_diag1 - weights[3] = filter_diag2 - weights = weights.to(flow.device) - - # split the flow into flow_u and flow_v, conv them with the weights - flow_u, flow_v = torch.split(flow, split_size_or_sections=1, dim=1) - delta_u = F.conv2d(flow_u, weights, stride=1, padding=1) - delta_v = F.conv2d(flow_v, weights, stride=1, padding=1) - return delta_u, delta_v, mask - - -def create_mask(tensor, paddings): - """ - tensor shape: [b, c, h, w] - paddings: [2 x 2] shape list, the first row indicates up and down paddings - the second row indicates left and right paddings - | | - | x | - | x * x | - | x | - | | - """ - shape = tensor.shape - inner_height = shape[2] - (paddings[0][0] + paddings[0][1]) - inner_width = shape[3] - (paddings[1][0] + paddings[1][1]) - inner = torch.ones([inner_height, inner_width]) - torch_paddings = [paddings[1][0], paddings[1][1], paddings[0][0], paddings[0][1]] # left, right, up and down - mask2d = F.pad(inner, pad=torch_paddings) - mask3d = mask2d.unsqueeze(0).repeat(shape[0], 1, 1) - mask4d = mask3d.unsqueeze(1) - return mask4d.detach() - - -def create_outgoing_mask(flow): - """ - Computes a mask that is zero at all positions where the flow would carry a pixel over the image boundary - For such pixels, it's invalid to calculate the flow losses - Args: - flow: torch tensor: with shape [b, 2, h, w] - - Returns: a mask, 1 indicates in-boundary pixels, with shape [b, 1, h, w] - - """ - b, c, h, w = flow.shape - - grid_x = torch.reshape(torch.arange(0, w), [1, 1, w]) - grid_x = grid_x.repeat(b, h, 1).float() - grid_y = torch.reshape(torch.arange(0, h), [1, h, 1]) - grid_y = grid_y.repeat(b, 1, w).float() - - grid_x = grid_x.to(flow.device) - grid_y = grid_y.to(flow.device) - - flow_u, flow_v = torch.split(flow, split_size_or_sections=1, dim=1) # [b, h, w] - pos_x = grid_x + flow_u - pos_y = grid_y + flow_v - inside_x = torch.logical_and(pos_x <= w - 1, pos_x >= 0) - inside_y = torch.logical_and(pos_y <= h - 1, pos_y >= 0) - inside = torch.logical_and(inside_x, inside_y) - if len(inside.shape) == 3: - inside = inside.unsqueeze(1) - return inside diff --git a/spaces/oguzakif/video-object-remover/SiamMask/utils/pyvotkit/src/buffer.h b/spaces/oguzakif/video-object-remover/SiamMask/utils/pyvotkit/src/buffer.h deleted file mode 100644 index 99986afb7c0c2d66dd4d3341d9446725975f6e8f..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/utils/pyvotkit/src/buffer.h +++ /dev/null @@ -1,190 +0,0 @@ - -#ifndef 
__STRING_BUFFER_H -#define __STRING_BUFFER_H - -// Enable MinGW secure API for _snprintf_s -#define MINGW_HAS_SECURE_API 1 - -#ifdef _MSC_VER -#define __INLINE __inline -#else -#define __INLINE inline -#endif - -#include -#include -#include - -typedef struct string_buffer { - char* buffer; - int position; - int size; -} string_buffer; - -typedef struct string_list { - char** buffer; - int position; - int size; -} string_list; - -#define BUFFER_INCREMENT_STEP 4096 - -static __INLINE string_buffer* buffer_create(int L) { - string_buffer* B = (string_buffer*) malloc(sizeof(string_buffer)); - B->size = L; - B->buffer = (char*) malloc(sizeof(char) * B->size); - B->position = 0; - return B; -} - -static __INLINE void buffer_reset(string_buffer* B) { - B->position = 0; -} - -static __INLINE void buffer_destroy(string_buffer** B) { - if (!(*B)) return; - if ((*B)->buffer) { - free((*B)->buffer); - (*B)->buffer = NULL; - } - free((*B)); - (*B) = NULL; -} - -static __INLINE char* buffer_extract(const string_buffer* B) { - char *S = (char*) malloc(sizeof(char) * (B->position + 1)); - memcpy(S, B->buffer, B->position); - S[B->position] = '\0'; - return S; -} - -static __INLINE int buffer_size(const string_buffer* B) { - return B->position; -} - -static __INLINE void buffer_push(string_buffer* B, char C) { - int required = 1; - if (required > B->size - B->position) { - B->size = B->position + BUFFER_INCREMENT_STEP; - B->buffer = (char*) realloc(B->buffer, sizeof(char) * B->size); - } - B->buffer[B->position] = C; - B->position += required; -} - -static __INLINE void buffer_append(string_buffer* B, const char *format, ...) { - - int required; - va_list args; - -#if defined(__OS2__) || defined(__WINDOWS__) || defined(WIN32) || defined(_MSC_VER) - - va_start(args, format); - required = _vscprintf(format, args) + 1; - va_end(args); - if (required >= B->size - B->position) { - B->size = B->position + required + 1; - B->buffer = (char*) realloc(B->buffer, sizeof(char) * B->size); - } - va_start(args, format); - required = _vsnprintf_s(&(B->buffer[B->position]), B->size - B->position, _TRUNCATE, format, args); - va_end(args); - B->position += required; - -#else - va_start(args, format); - required = vsnprintf(&(B->buffer[B->position]), B->size - B->position, format, args); - va_end(args); - if (required >= B->size - B->position) { - B->size = B->position + required + 1; - B->buffer = (char*) realloc(B->buffer, sizeof(char) * B->size); - va_start(args, format); - required = vsnprintf(&(B->buffer[B->position]), B->size - B->position, format, args); - va_end(args); - } - B->position += required; -#endif - -} - -static __INLINE string_list* list_create(int L) { - string_list* B = (string_list*) malloc(sizeof(string_list)); - B->size = L; - B->buffer = (char**) malloc(sizeof(char*) * B->size); - memset(B->buffer, 0, sizeof(char*) * B->size); - B->position = 0; - return B; -} - -static __INLINE void list_reset(string_list* B) { - int i; - for (i = 0; i < B->position; i++) { - if (B->buffer[i]) free(B->buffer[i]); - B->buffer[i] = NULL; - } - B->position = 0; -} - -static __INLINE void list_destroy(string_list **B) { - int i; - - if (!(*B)) return; - - for (i = 0; i < (*B)->position; i++) { - if ((*B)->buffer[i]) free((*B)->buffer[i]); (*B)->buffer[i] = NULL; - } - - if ((*B)->buffer) { - free((*B)->buffer); (*B)->buffer = NULL; - } - - free((*B)); - (*B) = NULL; -} - -static __INLINE char* list_get(const string_list *B, int I) { - if (I < 0 || I >= B->position) { - return NULL; - } else { - if (!B->buffer[I]) { - 
return NULL; - } else { - char *S; - int length = strlen(B->buffer[I]); - S = (char*) malloc(sizeof(char) * (length + 1)); - memcpy(S, B->buffer[I], length + 1); - return S; - } - } -} - -static __INLINE int list_size(const string_list *B) { - return B->position; -} - -static __INLINE void list_append(string_list *B, char* S) { - int required = 1; - int length = strlen(S); - if (required > B->size - B->position) { - B->size = B->position + 16; - B->buffer = (char**) realloc(B->buffer, sizeof(char*) * B->size); - } - B->buffer[B->position] = (char*) malloc(sizeof(char) * (length + 1)); - memcpy(B->buffer[B->position], S, length + 1); - B->position += required; -} - -// This version of the append does not copy the string but simply takes the control of its allocation -static __INLINE void list_append_direct(string_list *B, char* S) { - int required = 1; - // int length = strlen(S); - if (required > B->size - B->position) { - B->size = B->position + 16; - B->buffer = (char**) realloc(B->buffer, sizeof(char*) * B->size); - } - B->buffer[B->position] = S; - B->position += required; -} - - -#endif diff --git a/spaces/paulokewunmi/omowe.ai/app.py b/spaces/paulokewunmi/omowe.ai/app.py deleted file mode 100644 index bc0bb93822eced77b87e15047b01b28b6a25be2b..0000000000000000000000000000000000000000 --- a/spaces/paulokewunmi/omowe.ai/app.py +++ /dev/null @@ -1,383 +0,0 @@ -import gradio as gr -from src.document_utils import ( - summarize, - question_answer, - generate_questions, - load_history, - load_science, - paraphrase -) -from src.wiki_search import cross_lingual_document_search, translate_text -from src.theme import CustomTheme - - -max_search_results = 3 - - -def reset_chatbot(): - return gr.update(value="") - - -def get_user_input(input_question, history): - return "", history + [[input_question, None]] - - -def study_doc_qa_bot(input_document, history): - bot_message = question_answer(input_document, history) - history[-1][1] = bot_message - return history - - -custom_theme = CustomTheme() - - -with gr.Blocks(theme=custom_theme) as demo: - gr.HTML( - """
    omowe.ai logo

    """ - ) - - qa_bot_state = gr.State(value=[]) - - with gr.Tabs(): - - with gr.TabItem("Document Search"): - gr.HTML( - """

    Search across a library of study materials in your own native language or even a mix of languages.

    """ - ) - gr.HTML( - """

Get started with a pre-indexed set of study materials spanning various subjects (History, Literature, Philosophy, Government, etc.) in 4 different languages.

    """ - ) - - with gr.Row(): - text_match = gr.CheckboxGroup( - ["Full Text Search"], label="find exact text in documents", visible=False - ) - - with gr.Row(): - lang_choices = gr.CheckboxGroup( - [ - "English", - "Yoruba", - "Igbo", - "Hausa", - ], - label="Filter results based on language", - value = "Yoruba" - ) - - with gr.Row(): - with gr.Column(): - user_query = gr.Text( - label="Enter query here", - placeholder="Search through study materials (e.g The Nigerian Civil War, What is Literature)", - ) - - num_search_results = gr.Slider( - 1, - max_search_results, - visible=False, - value=max_search_results, - step=1, - interactive=True, - label="How many search results to show:", - ) - - with gr.Row(): - - with gr.Column(): - query_match_out_1 = gr.Textbox( - label= f"Search Result 1" - ) - - with gr.Column(): - with gr.Accordion("Click to View Translation/Source", open=False): - translate_btn_1 = gr.Button( - label="Translate Text", - value="Translate Text", - variant="primary", - ) - translate_res_1 = gr.Textbox( - label=f"Translation in English", - ) - - source_res_1 = gr.Textbox( - label=f"Source Url", - ) - - with gr.Row(): - with gr.Column(): - query_match_out_2 = gr.Textbox(label=f"Search Result 2") - - with gr.Column(): - with gr.Accordion("Click to View Translation/Source", open=False): - - translate_btn_2 = gr.Button( - label="Translate Text", - value="Translate Text", - variant="primary", - ) - translate_res_2 = gr.Textbox( - label=f"Translation in English", - - ) - - source_res_2 = gr.Textbox( - label=f"Source Url" - ) - - - with gr.Row(): - with gr.Column(): - query_match_out_3 = gr.Textbox(label=f"Search Result 3") - - with gr.Column(): - with gr.Accordion("Click to View Translation/Source", open=False): - - translate_btn_3 = gr.Button( - label="Translate Text", - value="Translate Text", - variant="primary", - ) - translate_res_3= gr.Textbox( - label=f"Translation in English", - ) - source_res_3 = gr.Textbox( - label=f"Source Url" - ) - - with gr.TabItem("Q&A"): - gr.HTML( - """

Looking to breeze through your study materials effortlessly? Simply upload your documents and fire away with any questions you have!

    """ - ) - with gr.Row(): - with gr.Accordion("Click to use preloaded examples", open=False): - - example_2 = gr.Button( - "Load History of Nigeria", variant="primary" - ) - example_1 = gr.Button( - "Load Science of Photosynthesis", variant="primary" - ) - - with gr.Row(): - with gr.Column(): - input_document = gr.Text(label="Copy your document here", lines=2) - input_document_pdf = gr.inputs.File(label="Uplaod file") - - - with gr.Column(): - chatbot = gr.Chatbot(label="Chat History") - input_question = gr.Text( - label="Ask a question", - placeholder="Type a question here and hit enter.", - ) - clear = gr.Button("Clear", variant="primary") - - with gr.TabItem("Summarize"): - gr.HTML( - """

    Get the most out of your study materials!

    """ - ) - gr.HTML( - """

    You can easily upload your documents and generate quick summaries and practice questions in a flash.

    """ - ) - - with gr.Row(): - with gr.Accordion("Click to use preloaded examples", open=False): - example_4 = gr.Button( - "Load History of Nigeria", variant="primary" - ) - example_3 = gr.Button( - "Load Science of Photosynthesis", variant="primary" - ) - - with gr.Row(): - with gr.Column(): - summary_input = gr.Text(label="Document", lines=5) - with gr.Column(): - summary_output = gr.Text(label="Generated Summary", lines=5) - invisible_comp = gr.Text(label="Dummy Component", visible=False) - - with gr.Row(): - with gr.Column(): - with gr.Accordion("Summary Settings", open=False): - summary_length = gr.Radio( - ["short", "medium", "long"], - label="Summary Length", - value="long", - ) - - summary_format = gr.Radio( - ["paragraph", "bullets"], - label="Summary Format", - value="bullets", - ) - extractiveness = gr.Radio( - ["low", "medium", "high"], - label="Extractiveness", - info="Controls how close to the original text the summary is.", - visible=False, - value="high", - ) - temperature = gr.Slider( - minimum=0, - maximum=5.0, - value=0.64, - step=0.1, - interactive=True, - visible=False, - label="Temperature", - info="Controls the randomness of the output. Lower values tend to generate more “predictable” output, while higher values tend to generate more “creative” output.", - ) - - - with gr.Row(): - generate_summary = gr.Button("Generate Summary", variant="primary") - - with gr.Row(): - generate_questions_btn = gr.Button("Generate practice questions", variant="primary") - with gr.Row(): - generate_output = gr.Text(label="Generated questions", lines=5) - - with gr.TabItem("Paraphrase"): - gr.HTML( - """

    Provide the text you'd like to accurately rephrase.

    """ - ) - - with gr.Row(): - with gr.Column(): - paraphrase_input = gr.Text(label="Document", lines=10) - generate_paraphrase = gr.Button("Paraphrase", variant="primary") - - with gr.Column(): - paraphrase_output = gr.HTML(label="Paraphrase", lines=10) - invisible_comp = gr.Text(label="Dummy Component", visible=False) - - with gr.Row(): - with gr.Accordion("Advanced Settings:", open=False): - paraphrase_length = gr.Radio( - ["short", "medium", "long"], - label="Paraphrase Length", - value="long", - ) - paraphrase_format = gr.Radio( - ["paragraph", "bullets"], - label="Paraphrase Format", - value="bullets", - ) - extractiveness = gr.Radio( - ["low", "medium", "high"], - label="Extractiveness", - info="Controls how close to the original text the paraphrase is.", - visible=False, - value="high", - ) - temperature = gr.Slider( - minimum=0, - maximum=5.0, - value=0.64, - step=0.1, - interactive=True, - visible=False, - label="Temperature", - info="Controls the randomness of the output. Lower values tend to generate more “predictable” output, while higher values tend to generate more “creative” output.", - ) - - # fetch answer for submitted question corresponding to input document - input_question.submit( - get_user_input, - [input_question, chatbot], - [input_question, chatbot], - queue=False, - ).then(study_doc_qa_bot, [input_document, chatbot], chatbot) - - # reset the chatbot Q&A history when input document changes - input_document.change(fn=reset_chatbot, inputs=[], outputs=chatbot) - - # Loading examples on click for Q&A module - example_1.click( - load_history, - [], - [input_document, input_question], - queue=False, - ) - - example_2.click( - load_science, - [], - [input_document, input_question], - queue=False, - ) - - # Loading examples on click for Q&A module - example_3.click( - load_history, - [], - [summary_input, invisible_comp], - queue=False, - ) - - example_4.click( - load_science, - [], - [summary_input, invisible_comp], - queue=False, - ) - - # generate summary corresponding to document submitted by the user. 
- generate_summary.click( - summarize, - [summary_input, summary_length, summary_format, extractiveness, temperature], - [summary_output], - queue=False, - ) - - generate_questions_btn.click( - generate_questions, - [summary_input], - [generate_output], - queue=False, - ) - - generate_paraphrase.click( - paraphrase, - [paraphrase_input], - [paraphrase_output], - queue=False, - ) - - # clear the chatbot Q&A history when this button is clicked by the user - clear.click(lambda: None, None, chatbot, queue=False) - - # run search if user submits query - user_query.submit( - cross_lingual_document_search, - [user_query, num_search_results, lang_choices, text_match], - [query_match_out_1, query_match_out_2, query_match_out_3, \ - source_res_1,source_res_2,source_res_3], - queue=False, - ) - - - # translate results corresponding to 1st search result obtained if user clicks 'Translate' - translate_btn_1.click( - translate_text, - [query_match_out_1], - [translate_res_1], - queue=False, - ) - translate_btn_2.click( - translate_text, - [query_match_out_2], - [translate_res_2], - queue=False, - ) - translate_btn_3.click( - translate_text, - [query_match_out_3], - [translate_res_3], - queue=False, - ) - - -if __name__ == "__main__": - demo.launch(debug=True) diff --git a/spaces/pierreguillou/document-layout-detection-dit-image-instances/app.py b/spaces/pierreguillou/document-layout-detection-dit-image-instances/app.py deleted file mode 100644 index 935e8caf67de50d1fb2bc2bbd2c145f37d5a58d9..0000000000000000000000000000000000000000 --- a/spaces/pierreguillou/document-layout-detection-dit-image-instances/app.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -os.system('pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html') -os.system("git clone https://github.com/microsoft/unilm.git") - -import sys -sys.path.append("unilm") - -import cv2 - -from unilm.dit.object_detection.ditod import add_vit_config - -import torch -import numpy as np - -from detectron2.config import CfgNode as CN -from detectron2.config import get_cfg -from detectron2.utils.visualizer import ColorMode, Visualizer -from detectron2.data import MetadataCatalog -from detectron2.engine import DefaultPredictor - -import gradio as gr - - -# Step 1: instantiate config -cfg = get_cfg() -add_vit_config(cfg) -cfg.merge_from_file("cascade_dit_base.yml") - -# Step 2: add model weights URL to config -cfg.MODEL.WEIGHTS = "https://layoutlm.blob.core.windows.net/dit/dit-fts/publaynet_dit-b_cascade.pth" - -# Step 3: set device -cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu" - -# Step 4: define model -predictor = DefaultPredictor(cfg) - -def get_bytes_shape_dtype(t): - """ - input: tensor - output: 3 strings - """ - t_numpy = t.cpu().numpy() - t_bytes = str(t_numpy.tobytes()) - t_numpy_shape = str(t_numpy.shape) - t_numpy_dtype = str(t_numpy.dtype) - return t_bytes, t_numpy_shape, t_numpy_dtype - -def analyze_image(img): - md = MetadataCatalog.get(cfg.DATASETS.TEST[0]) - if cfg.DATASETS.TEST[0]=='icdar2019_test': - md.set(thing_classes=["table"]) - else: - md.set(thing_classes=["text","title","list","table","figure"]) - - output = predictor(img)["instances"] - v = Visualizer(img[:, :, ::-1], - md, - scale=1.0, - instance_mode=ColorMode.SEGMENTATION) - result = v.draw_instance_predictions(output.to("cpu")) - result_image = result.get_image()[:, :, ::-1] - - num_instances = len(output) - image_size = output._image_size - fields = list(output.get_fields().keys()) - for field in fields: - if field 
== 'pred_boxes': - boxes = output.get_fields()[field] - boxes = boxes.tensor - boxes_bytes, boxes_numpy_shape, boxes_numpy_dtype = get_bytes_shape_dtype(boxes) - # boxes_recover = torch.from_numpy(np.frombuffer(boxes_bytes, dtype=boxes_numpy_dtype).reshape(boxes_numpy_shape)) - elif field == 'scores': - scores = output.get_fields()[field] - scores_bytes, scores_numpy_shape, scores_numpy_dtype = get_bytes_shape_dtype(scores) - # scores_recover = torch.from_numpy(np.frombuffer(scores_bytes, dtype=scores_numpy_dtype).reshape(scores_numpy_shape)) - elif field == 'pred_classes': - pred_classes = output.get_fields()[field] - pred_classes_bytes, pred_classes_numpy_shape, pred_classes_numpy_dtype = get_bytes_shape_dtype(pred_classes) - # pred_classes_recover = torch.from_numpy(np.frombuffer(pred_classes_bytes, dtype=pred_classes_numpy_dtype).reshape(pred_classes_numpy_shape)) - - return result_image, num_instances, image_size, boxes_bytes, boxes_numpy_shape, boxes_numpy_dtype, scores_bytes, scores_numpy_shape, scores_numpy_dtype, pred_classes_bytes, pred_classes_numpy_shape, pred_classes_numpy_dtype - -title = "Interactive demo: Document Layout Analysis with DiT" -description = "Demo for Microsoft's DiT, the Document Image Transformer for state-of-the-art document understanding tasks. This particular model is fine-tuned on PubLayNet, a large dataset for document layout analysis (read more at the links below). To use it, simply upload an image or use the example image below and click 'Submit'. Results will show up in a few seconds. If you want to make the output bigger, right-click on it and select 'Open image in new tab'." -article = "

    Paper | Github Repo | HuggingFace doc

    " -examples =[['publaynet_example.jpeg']] -css = ".output-image, .input-image, .image-preview {height: 600px !important}" - -iface = gr.Interface(fn=analyze_image, - inputs=gr.inputs.Image(type="numpy", label="document image"), - outputs=[ - gr.outputs.Image(type="numpy", label="annotated document"), - gr.outputs.Textbox(label="num instances"), - gr.outputs.Textbox(label="image size (h,w in pixels)"), - gr.outputs.Textbox(label="boxes bytes"), - gr.outputs.Textbox(label="boxes numpy shape"), - gr.outputs.Textbox(label="boxes numpy dtype"), - gr.outputs.Textbox(label="scores bytes"), - gr.outputs.Textbox(label="scores numpy shape"), - gr.outputs.Textbox(label="scores numpy dtype"), - gr.outputs.Textbox(label="pred_classes bytes"), - gr.outputs.Textbox(label="pred_classes numpy shape"), - gr.outputs.Textbox(label="pred_classes numpy dtype") - ], - title=title, - description=description, - examples=examples, - article=article, - css=css - ) -iface.launch(debug=True, cache_examples=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/pinkq/Newbing/src/components/toaster.tsx b/spaces/pinkq/Newbing/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/pinkq/Newbing/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/link.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/link.py deleted file mode 100644 index 4453519ad0202281cfa53b3ca2a0282a9b0a1799..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/link.py +++ /dev/null @@ -1,581 +0,0 @@ -import functools -import itertools -import logging -import os -import posixpath -import re -import urllib.parse -from dataclasses import dataclass -from typing import ( - TYPE_CHECKING, - Any, - Dict, - List, - Mapping, - NamedTuple, - Optional, - Tuple, - Union, -) - -from pip._internal.utils.deprecation import deprecated -from pip._internal.utils.filetypes import WHEEL_EXTENSION -from pip._internal.utils.hashes import Hashes -from pip._internal.utils.misc import ( - pairwise, - redact_auth_from_url, - split_auth_from_netloc, - splitext, -) -from pip._internal.utils.models import KeyBasedCompareMixin -from pip._internal.utils.urls import path_to_url, url_to_path - -if TYPE_CHECKING: - from pip._internal.index.collector import IndexContent - -logger = logging.getLogger(__name__) - - -# Order matters, earlier hashes have a precedence over later hashes for what -# we will pick to use. -_SUPPORTED_HASHES = ("sha512", "sha384", "sha256", "sha224", "sha1", "md5") - - -@dataclass(frozen=True) -class LinkHash: - """Links to content may have embedded hash values. This class parses those. - - `name` must be any member of `_SUPPORTED_HASHES`. - - This class can be converted to and from `ArchiveInfo`. While ArchiveInfo intends to - be JSON-serializable to conform to PEP 610, this class contains the logic for - parsing a hash name and value for correctness, and then checking whether that hash - conforms to a schema with `.is_hash_allowed()`.""" - - name: str - value: str - - _hash_url_fragment_re = re.compile( - # NB: we do not validate that the second group (.*) is a valid hex - # digest. 
Instead, we simply keep that string in this class, and then check it - # against Hashes when hash-checking is needed. This is easier to debug than - # proactively discarding an invalid hex digest, as we handle incorrect hashes - # and malformed hashes in the same place. - r"[#&]({choices})=([^&]*)".format( - choices="|".join(re.escape(hash_name) for hash_name in _SUPPORTED_HASHES) - ), - ) - - def __post_init__(self) -> None: - assert self.name in _SUPPORTED_HASHES - - @classmethod - @functools.lru_cache(maxsize=None) - def find_hash_url_fragment(cls, url: str) -> Optional["LinkHash"]: - """Search a string for a checksum algorithm name and encoded output value.""" - match = cls._hash_url_fragment_re.search(url) - if match is None: - return None - name, value = match.groups() - return cls(name=name, value=value) - - def as_dict(self) -> Dict[str, str]: - return {self.name: self.value} - - def as_hashes(self) -> Hashes: - """Return a Hashes instance which checks only for the current hash.""" - return Hashes({self.name: [self.value]}) - - def is_hash_allowed(self, hashes: Optional[Hashes]) -> bool: - """ - Return True if the current hash is allowed by `hashes`. - """ - if hashes is None: - return False - return hashes.is_hash_allowed(self.name, hex_digest=self.value) - - -@dataclass(frozen=True) -class MetadataFile: - """Information about a core metadata file associated with a distribution.""" - - hashes: Optional[Dict[str, str]] - - def __post_init__(self) -> None: - if self.hashes is not None: - assert all(name in _SUPPORTED_HASHES for name in self.hashes) - - -def supported_hashes(hashes: Optional[Dict[str, str]]) -> Optional[Dict[str, str]]: - # Remove any unsupported hash types from the mapping. If this leaves no - # supported hashes, return None - if hashes is None: - return None - hashes = {n: v for n, v in hashes.items() if n in _SUPPORTED_HASHES} - if not hashes: - return None - return hashes - - -def _clean_url_path_part(part: str) -> str: - """ - Clean a "part" of a URL path (i.e. after splitting on "@" characters). - """ - # We unquote prior to quoting to make sure nothing is double quoted. - return urllib.parse.quote(urllib.parse.unquote(part)) - - -def _clean_file_url_path(part: str) -> str: - """ - Clean the first part of a URL path that corresponds to a local - filesystem path (i.e. the first part after splitting on "@" characters). - """ - # We unquote prior to quoting to make sure nothing is double quoted. - # Also, on Windows the path part might contain a drive letter which - # should not be quoted. On Linux where drive letters do not - # exist, the colon should be quoted. We rely on urllib.request - # to do the right thing here. - return urllib.request.pathname2url(urllib.request.url2pathname(part)) - - -# percent-encoded: / -_reserved_chars_re = re.compile("(@|%2F)", re.IGNORECASE) - - -def _clean_url_path(path: str, is_local_path: bool) -> str: - """ - Clean the path portion of a URL. - """ - if is_local_path: - clean_func = _clean_file_url_path - else: - clean_func = _clean_url_path_part - - # Split on the reserved characters prior to cleaning so that - # revision strings in VCS URLs are properly preserved. - parts = _reserved_chars_re.split(path) - - cleaned_parts = [] - for to_clean, reserved in pairwise(itertools.chain(parts, [""])): - cleaned_parts.append(clean_func(to_clean)) - # Normalize %xx escapes (e.g. 
%2f -> %2F) - cleaned_parts.append(reserved.upper()) - - return "".join(cleaned_parts) - - -def _ensure_quoted_url(url: str) -> str: - """ - Make sure a link is fully quoted. - For example, if ' ' occurs in the URL, it will be replaced with "%20", - and without double-quoting other characters. - """ - # Split the URL into parts according to the general structure - # `scheme://netloc/path;parameters?query#fragment`. - result = urllib.parse.urlparse(url) - # If the netloc is empty, then the URL refers to a local filesystem path. - is_local_path = not result.netloc - path = _clean_url_path(result.path, is_local_path=is_local_path) - return urllib.parse.urlunparse(result._replace(path=path)) - - -class Link(KeyBasedCompareMixin): - """Represents a parsed link from a Package Index's simple URL""" - - __slots__ = [ - "_parsed_url", - "_url", - "_hashes", - "comes_from", - "requires_python", - "yanked_reason", - "metadata_file_data", - "cache_link_parsing", - "egg_fragment", - ] - - def __init__( - self, - url: str, - comes_from: Optional[Union[str, "IndexContent"]] = None, - requires_python: Optional[str] = None, - yanked_reason: Optional[str] = None, - metadata_file_data: Optional[MetadataFile] = None, - cache_link_parsing: bool = True, - hashes: Optional[Mapping[str, str]] = None, - ) -> None: - """ - :param url: url of the resource pointed to (href of the link) - :param comes_from: instance of IndexContent where the link was found, - or string. - :param requires_python: String containing the `Requires-Python` - metadata field, specified in PEP 345. This may be specified by - a data-requires-python attribute in the HTML link tag, as - described in PEP 503. - :param yanked_reason: the reason the file has been yanked, if the - file has been yanked, or None if the file hasn't been yanked. - This is the value of the "data-yanked" attribute, if present, in - a simple repository HTML link. If the file has been yanked but - no reason was provided, this should be the empty string. See - PEP 592 for more information and the specification. - :param metadata_file_data: the metadata attached to the file, or None if - no such metadata is provided. This argument, if not None, indicates - that a separate metadata file exists, and also optionally supplies - hashes for that file. - :param cache_link_parsing: A flag that is used elsewhere to determine - whether resources retrieved from this link should be cached. PyPI - URLs should generally have this set to False, for example. - :param hashes: A mapping of hash names to digests to allow us to - determine the validity of a download. - """ - - # The comes_from, requires_python, and metadata_file_data arguments are - # only used by classmethods of this class, and are not used in client - # code directly. - - # url can be a UNC windows share - if url.startswith("\\\\"): - url = path_to_url(url) - - self._parsed_url = urllib.parse.urlsplit(url) - # Store the url as a private attribute to prevent accidentally - # trying to set a new value. 
- self._url = url - - link_hash = LinkHash.find_hash_url_fragment(url) - hashes_from_link = {} if link_hash is None else link_hash.as_dict() - if hashes is None: - self._hashes = hashes_from_link - else: - self._hashes = {**hashes, **hashes_from_link} - - self.comes_from = comes_from - self.requires_python = requires_python if requires_python else None - self.yanked_reason = yanked_reason - self.metadata_file_data = metadata_file_data - - super().__init__(key=url, defining_class=Link) - - self.cache_link_parsing = cache_link_parsing - self.egg_fragment = self._egg_fragment() - - @classmethod - def from_json( - cls, - file_data: Dict[str, Any], - page_url: str, - ) -> Optional["Link"]: - """ - Convert an pypi json document from a simple repository page into a Link. - """ - file_url = file_data.get("url") - if file_url is None: - return None - - url = _ensure_quoted_url(urllib.parse.urljoin(page_url, file_url)) - pyrequire = file_data.get("requires-python") - yanked_reason = file_data.get("yanked") - hashes = file_data.get("hashes", {}) - - # PEP 714: Indexes must use the name core-metadata, but - # clients should support the old name as a fallback for compatibility. - metadata_info = file_data.get("core-metadata") - if metadata_info is None: - metadata_info = file_data.get("dist-info-metadata") - - # The metadata info value may be a boolean, or a dict of hashes. - if isinstance(metadata_info, dict): - # The file exists, and hashes have been supplied - metadata_file_data = MetadataFile(supported_hashes(metadata_info)) - elif metadata_info: - # The file exists, but there are no hashes - metadata_file_data = MetadataFile(None) - else: - # False or not present: the file does not exist - metadata_file_data = None - - # The Link.yanked_reason expects an empty string instead of a boolean. - if yanked_reason and not isinstance(yanked_reason, str): - yanked_reason = "" - # The Link.yanked_reason expects None instead of False. - elif not yanked_reason: - yanked_reason = None - - return cls( - url, - comes_from=page_url, - requires_python=pyrequire, - yanked_reason=yanked_reason, - hashes=hashes, - metadata_file_data=metadata_file_data, - ) - - @classmethod - def from_element( - cls, - anchor_attribs: Dict[str, Optional[str]], - page_url: str, - base_url: str, - ) -> Optional["Link"]: - """ - Convert an anchor element's attributes in a simple repository page to a Link. - """ - href = anchor_attribs.get("href") - if not href: - return None - - url = _ensure_quoted_url(urllib.parse.urljoin(base_url, href)) - pyrequire = anchor_attribs.get("data-requires-python") - yanked_reason = anchor_attribs.get("data-yanked") - - # PEP 714: Indexes must use the name data-core-metadata, but - # clients should support the old name as a fallback for compatibility. - metadata_info = anchor_attribs.get("data-core-metadata") - if metadata_info is None: - metadata_info = anchor_attribs.get("data-dist-info-metadata") - # The metadata info value may be the string "true", or a string of - # the form "hashname=hashval" - if metadata_info == "true": - # The file exists, but there are no hashes - metadata_file_data = MetadataFile(None) - elif metadata_info is None: - # The file does not exist - metadata_file_data = None - else: - # The file exists, and hashes have been supplied - hashname, sep, hashval = metadata_info.partition("=") - if sep == "=": - metadata_file_data = MetadataFile(supported_hashes({hashname: hashval})) - else: - # Error - data is wrong. Treat as no hashes supplied. 
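- # e.g. a bare "sha256" value with no "=" separator falls into this branch and is
- # recorded as a metadata file with no usable hashes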
- logger.debug( - "Index returned invalid data-dist-info-metadata value: %s", - metadata_info, - ) - metadata_file_data = MetadataFile(None) - - return cls( - url, - comes_from=page_url, - requires_python=pyrequire, - yanked_reason=yanked_reason, - metadata_file_data=metadata_file_data, - ) - - def __str__(self) -> str: - if self.requires_python: - rp = f" (requires-python:{self.requires_python})" - else: - rp = "" - if self.comes_from: - return "{} (from {}){}".format( - redact_auth_from_url(self._url), self.comes_from, rp - ) - else: - return redact_auth_from_url(str(self._url)) - - def __repr__(self) -> str: - return f"" - - @property - def url(self) -> str: - return self._url - - @property - def filename(self) -> str: - path = self.path.rstrip("/") - name = posixpath.basename(path) - if not name: - # Make sure we don't leak auth information if the netloc - # includes a username and password. - netloc, user_pass = split_auth_from_netloc(self.netloc) - return netloc - - name = urllib.parse.unquote(name) - assert name, f"URL {self._url!r} produced no filename" - return name - - @property - def file_path(self) -> str: - return url_to_path(self.url) - - @property - def scheme(self) -> str: - return self._parsed_url.scheme - - @property - def netloc(self) -> str: - """ - This can contain auth information. - """ - return self._parsed_url.netloc - - @property - def path(self) -> str: - return urllib.parse.unquote(self._parsed_url.path) - - def splitext(self) -> Tuple[str, str]: - return splitext(posixpath.basename(self.path.rstrip("/"))) - - @property - def ext(self) -> str: - return self.splitext()[1] - - @property - def url_without_fragment(self) -> str: - scheme, netloc, path, query, fragment = self._parsed_url - return urllib.parse.urlunsplit((scheme, netloc, path, query, "")) - - _egg_fragment_re = re.compile(r"[#&]egg=([^&]*)") - - # Per PEP 508. - _project_name_re = re.compile( - r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", re.IGNORECASE - ) - - def _egg_fragment(self) -> Optional[str]: - match = self._egg_fragment_re.search(self._url) - if not match: - return None - - # An egg fragment looks like a PEP 508 project name, along with - # an optional extras specifier. Anything else is invalid. 
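- # e.g. "#egg=requests" passes the name check below, while a fragment such as
- # "#egg=requests==2.0" does not match and triggers the deprecation warning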
- project_name = match.group(1) - if not self._project_name_re.match(project_name): - deprecated( - reason=f"{self} contains an egg fragment with a non-PEP 508 name", - replacement="to use the req @ url syntax, and remove the egg fragment", - gone_in="25.0", - issue=11617, - ) - - return project_name - - _subdirectory_fragment_re = re.compile(r"[#&]subdirectory=([^&]*)") - - @property - def subdirectory_fragment(self) -> Optional[str]: - match = self._subdirectory_fragment_re.search(self._url) - if not match: - return None - return match.group(1) - - def metadata_link(self) -> Optional["Link"]: - """Return a link to the associated core metadata file (if any).""" - if self.metadata_file_data is None: - return None - metadata_url = f"{self.url_without_fragment}.metadata" - if self.metadata_file_data.hashes is None: - return Link(metadata_url) - return Link(metadata_url, hashes=self.metadata_file_data.hashes) - - def as_hashes(self) -> Hashes: - return Hashes({k: [v] for k, v in self._hashes.items()}) - - @property - def hash(self) -> Optional[str]: - return next(iter(self._hashes.values()), None) - - @property - def hash_name(self) -> Optional[str]: - return next(iter(self._hashes), None) - - @property - def show_url(self) -> str: - return posixpath.basename(self._url.split("#", 1)[0].split("?", 1)[0]) - - @property - def is_file(self) -> bool: - return self.scheme == "file" - - def is_existing_dir(self) -> bool: - return self.is_file and os.path.isdir(self.file_path) - - @property - def is_wheel(self) -> bool: - return self.ext == WHEEL_EXTENSION - - @property - def is_vcs(self) -> bool: - from pip._internal.vcs import vcs - - return self.scheme in vcs.all_schemes - - @property - def is_yanked(self) -> bool: - return self.yanked_reason is not None - - @property - def has_hash(self) -> bool: - return bool(self._hashes) - - def is_hash_allowed(self, hashes: Optional[Hashes]) -> bool: - """ - Return True if the link has a hash and it is allowed by `hashes`. - """ - if hashes is None: - return False - return any(hashes.is_hash_allowed(k, v) for k, v in self._hashes.items()) - - -class _CleanResult(NamedTuple): - """Convert link for equivalency check. - - This is used in the resolver to check whether two URL-specified requirements - likely point to the same distribution and can be considered equivalent. This - equivalency logic avoids comparing URLs literally, which can be too strict - (e.g. "a=1&b=2" vs "b=2&a=1") and produce conflicts unexpecting to users. - - Currently this does three things: - - 1. Drop the basic auth part. This is technically wrong since a server can - serve different content based on auth, but if it does that, it is even - impossible to guarantee two URLs without auth are equivalent, since - the user can input different auth information when prompted. So the - practical solution is to assume the auth doesn't affect the response. - 2. Parse the query to avoid the ordering issue. Note that ordering under the - same key in the query are NOT cleaned; i.e. "a=1&a=2" and "a=2&a=1" are - still considered different. - 3. Explicitly drop most of the fragment part, except ``subdirectory=`` and - hash values, since it should have no impact the downloaded content. Note - that this drops the "egg=" part historically used to denote the requested - project (and extras), which is wrong in the strictest sense, but too many - people are supplying it inconsistently to cause superfluous resolution - conflicts, so we choose to also ignore them. 
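- For example, "https://host/pkg.zip?a=1&b=2#egg=pkg" and "https://host/pkg.zip?b=2&a=1"
- (hypothetical URLs) clean to the same result and are therefore treated as equivalent.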
- """ - - parsed: urllib.parse.SplitResult - query: Dict[str, List[str]] - subdirectory: str - hashes: Dict[str, str] - - -def _clean_link(link: Link) -> _CleanResult: - parsed = link._parsed_url - netloc = parsed.netloc.rsplit("@", 1)[-1] - # According to RFC 8089, an empty host in file: means localhost. - if parsed.scheme == "file" and not netloc: - netloc = "localhost" - fragment = urllib.parse.parse_qs(parsed.fragment) - if "egg" in fragment: - logger.debug("Ignoring egg= fragment in %s", link) - try: - # If there are multiple subdirectory values, use the first one. - # This matches the behavior of Link.subdirectory_fragment. - subdirectory = fragment["subdirectory"][0] - except (IndexError, KeyError): - subdirectory = "" - # If there are multiple hash values under the same algorithm, use the - # first one. This matches the behavior of Link.hash_value. - hashes = {k: fragment[k][0] for k in _SUPPORTED_HASHES if k in fragment} - return _CleanResult( - parsed=parsed._replace(netloc=netloc, query="", fragment=""), - query=urllib.parse.parse_qs(parsed.query), - subdirectory=subdirectory, - hashes=hashes, - ) - - -@functools.lru_cache(maxsize=None) -def links_equivalent(link1: Link, link2: Link) -> bool: - return _clean_link(link1) == _clean_link(link2) diff --git a/spaces/presidio/presidio_demo/presidio_helpers.py b/spaces/presidio/presidio_demo/presidio_helpers.py deleted file mode 100644 index a64fe84aebefe3e8738594ee57426ded1c9eeb95..0000000000000000000000000000000000000000 --- a/spaces/presidio/presidio_demo/presidio_helpers.py +++ /dev/null @@ -1,260 +0,0 @@ -""" -Helper methods for the Presidio Streamlit app -""" -from typing import List, Optional, Tuple -import logging -import streamlit as st -from presidio_analyzer import ( - AnalyzerEngine, - RecognizerResult, - RecognizerRegistry, - PatternRecognizer, - Pattern, -) -from presidio_analyzer.nlp_engine import NlpEngine -from presidio_anonymizer import AnonymizerEngine -from presidio_anonymizer.entities import OperatorConfig - -from openai_fake_data_generator import ( - set_openai_params, - call_completion_model, - create_prompt, - OpenAIParams, -) -from presidio_nlp_engine_config import ( - create_nlp_engine_with_spacy, - create_nlp_engine_with_flair, - create_nlp_engine_with_transformers, - create_nlp_engine_with_azure_text_analytics, -) - -logger = logging.getLogger("presidio-streamlit") - - -@st.cache_resource -def nlp_engine_and_registry( - model_family: str, - model_path: str, - ta_key: Optional[str] = None, - ta_endpoint: Optional[str] = None, -) -> Tuple[NlpEngine, RecognizerRegistry]: - """Create the NLP Engine instance based on the requested model. - :param model_family: Which model package to use for NER. - :param model_path: Which model to use for NER. 
E.g., - "StanfordAIMI/stanford-deidentifier-base", - "obi/deid_roberta_i2b2", - "en_core_web_lg" - :param ta_key: Key to the Text Analytics endpoint (only if model_path = "Azure Text Analytics") - :param ta_endpoint: Endpoint of the Text Analytics instance (only if model_path = "Azure Text Analytics") - """ - - # Set up NLP Engine according to the model of choice - if "spaCy" in model_family: - return create_nlp_engine_with_spacy(model_path) - elif "flair" in model_family: - return create_nlp_engine_with_flair(model_path) - elif "HuggingFace" in model_family: - return create_nlp_engine_with_transformers(model_path) - elif "Azure Text Analytics" in model_family: - return create_nlp_engine_with_azure_text_analytics(ta_key, ta_endpoint) - else: - raise ValueError(f"Model family {model_family} not supported") - - -@st.cache_resource -def analyzer_engine( - model_family: str, - model_path: str, - ta_key: Optional[str] = None, - ta_endpoint: Optional[str] = None, -) -> AnalyzerEngine: - """Create the NLP Engine instance based on the requested model. - :param model_family: Which model package to use for NER. - :param model_path: Which model to use for NER: - "StanfordAIMI/stanford-deidentifier-base", - "obi/deid_roberta_i2b2", - "en_core_web_lg" - :param ta_key: Key to the Text Analytics endpoint (only if model_path = "Azure Text Analytics") - :param ta_endpoint: Endpoint of the Text Analytics instance (only if model_path = "Azure Text Analytics") - """ - nlp_engine, registry = nlp_engine_and_registry( - model_family, model_path, ta_key, ta_endpoint - ) - analyzer = AnalyzerEngine(nlp_engine=nlp_engine, registry=registry) - return analyzer - - -@st.cache_resource -def anonymizer_engine(): - """Return AnonymizerEngine.""" - return AnonymizerEngine() - - -@st.cache_data -def get_supported_entities( - model_family: str, model_path: str, ta_key: str, ta_endpoint: str -): - """Return supported entities from the Analyzer Engine.""" - return analyzer_engine( - model_family, model_path, ta_key, ta_endpoint - ).get_supported_entities() + ["GENERIC_PII"] - - -@st.cache_data -def analyze( - model_family: str, model_path: str, ta_key: str, ta_endpoint: str, **kwargs -): - """Analyze input using Analyzer engine and input arguments (kwargs).""" - if "entities" not in kwargs or "All" in kwargs["entities"]: - kwargs["entities"] = None - - if "deny_list" in kwargs and kwargs["deny_list"] is not None: - ad_hoc_recognizer = create_ad_hoc_deny_list_recognizer(kwargs["deny_list"]) - kwargs["ad_hoc_recognizers"] = [ad_hoc_recognizer] if ad_hoc_recognizer else [] - del kwargs["deny_list"] - - if "regex_params" in kwargs and len(kwargs["regex_params"]) > 0: - ad_hoc_recognizer = create_ad_hoc_regex_recognizer(*kwargs["regex_params"]) - kwargs["ad_hoc_recognizers"] = [ad_hoc_recognizer] if ad_hoc_recognizer else [] - del kwargs["regex_params"] - - return analyzer_engine(model_family, model_path, ta_key, ta_endpoint).analyze( - **kwargs - ) - - -def anonymize( - text: str, - operator: str, - analyze_results: List[RecognizerResult], - mask_char: Optional[str] = None, - number_of_chars: Optional[str] = None, - encrypt_key: Optional[str] = None, -): - """Anonymize identified input using Presidio Anonymizer. 
- - :param text: Full text - :param operator: Operator name - :param mask_char: Mask char (for mask operator) - :param number_of_chars: Number of characters to mask (for mask operator) - :param encrypt_key: Encryption key (for encrypt operator) - :param analyze_results: list of results from presidio analyzer engine - """ - - if operator == "mask": - operator_config = { - "type": "mask", - "masking_char": mask_char, - "chars_to_mask": number_of_chars, - "from_end": False, - } - - # Define operator config - elif operator == "encrypt": - operator_config = {"key": encrypt_key} - elif operator == "highlight": - operator_config = {"lambda": lambda x: x} - else: - operator_config = None - - # Change operator if needed as intermediate step - if operator == "highlight": - operator = "custom" - elif operator == "synthesize": - operator = "replace" - else: - operator = operator - - res = anonymizer_engine().anonymize( - text, - analyze_results, - operators={"DEFAULT": OperatorConfig(operator, operator_config)}, - ) - return res - - -def annotate(text: str, analyze_results: List[RecognizerResult]): - """Highlight the identified PII entities on the original text - - :param text: Full text - :param analyze_results: list of results from presidio analyzer engine - """ - tokens = [] - - # Use the anonymizer to resolve overlaps - results = anonymize( - text=text, - operator="highlight", - analyze_results=analyze_results, - ) - - # sort by start index - results = sorted(results.items, key=lambda x: x.start) - for i, res in enumerate(results): - if i == 0: - tokens.append(text[: res.start]) - - # append entity text and entity type - tokens.append((text[res.start : res.end], res.entity_type)) - - # if another entity coming i.e. we're not at the last results element, add text up to next entity - if i != len(results) - 1: - tokens.append(text[res.end : results[i + 1].start]) - # if no more entities coming, add all remaining text - else: - tokens.append(text[res.end :]) - return tokens - - -def create_fake_data( - text: str, - analyze_results: List[RecognizerResult], - openai_params: OpenAIParams, -): - """Creates a synthetic version of the text using OpenAI APIs""" - if not openai_params.openai_key: - return "Please provide your OpenAI key" - results = anonymize(text=text, operator="replace", analyze_results=analyze_results) - set_openai_params(openai_params) - prompt = create_prompt(results.text) - print(f"Prompt: {prompt}") - fake = call_openai_api( - prompt=prompt, - openai_model_name=openai_params.model, - openai_deployment_name=openai_params.deployment_name, - ) - return fake - - -@st.cache_data -def call_openai_api( - prompt: str, openai_model_name: str, openai_deployment_name: Optional[str] = None -) -> str: - fake_data = call_completion_model( - prompt, model=openai_model_name, deployment_id=openai_deployment_name - ) - return fake_data - - -def create_ad_hoc_deny_list_recognizer( - deny_list=Optional[List[str]], -) -> Optional[PatternRecognizer]: - if not deny_list: - return None - - deny_list_recognizer = PatternRecognizer( - supported_entity="GENERIC_PII", deny_list=deny_list - ) - return deny_list_recognizer - - -def create_ad_hoc_regex_recognizer( - regex: str, entity_type: str, score: float, context: Optional[List[str]] = None -) -> Optional[PatternRecognizer]: - if not regex: - return None - pattern = Pattern(name="Regex pattern", regex=regex, score=score) - regex_recognizer = PatternRecognizer( - supported_entity=entity_type, patterns=[pattern], context=context - ) - return regex_recognizer diff 
--git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-1cda6415.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-1cda6415.css deleted file mode 100644 index a850057793c6fd9950184e6415bc4b1bdffd4416..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-1cda6415.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-rgtszb{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.gallery.svelte-rgtszb{display:flex;align-items:center;cursor:pointer;padding:var(--size-1) var(--size-2);text-align:left} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Tabs-014dc45f.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Tabs-014dc45f.js deleted file mode 100644 index a655c18132c56e6000ff92e3d7d2cd956e3145f6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Tabs-014dc45f.js +++ /dev/null @@ -1,2 +0,0 @@ -import{w as T}from"./Index-c74a8b7c.js";const{SvelteComponent:F,append:v,attr:h,component_subscribe:C,create_slot:G,destroy_block:H,detach:g,element:w,empty:I,ensure_array_like:S,get_all_dirty_from_scope:J,get_slot_changes:K,init:L,insert:p,listen:O,safe_not_equal:P,set_data:M,set_store_value:A,space:y,text:N,toggle_class:B,transition_in:Q,transition_out:R,update_keyed_each:U,update_slot_base:V}=window.__gradio__svelte__internal,{setContext:W,createEventDispatcher:X,tick:le}=window.__gradio__svelte__internal;function D(l,e,i){const n=l.slice();return n[14]=e[i],n[16]=i,n}function Y(l){let e,i=l[14].name+"",n,c,u,_,s;function f(){return l[12](l[14],l[16])}return{c(){e=w("button"),n=N(i),c=y(),h(e,"id",u=l[14].elem_id?l[14].elem_id+"-button":null),h(e,"class","svelte-kqij2n")},m(m,r){p(m,e,r),v(e,n),v(e,c),_||(s=O(e,"click",f),_=!0)},p(m,r){l=m,r&8&&i!==(i=l[14].name+"")&&M(n,i),r&8&&u!==(u=l[14].elem_id?l[14].elem_id+"-button":null)&&h(e,"id",u)},d(m){m&&g(e),_=!1,s()}}}function Z(l){let e,i=l[14].name+"",n,c,u;return{c(){e=w("button"),n=N(i),c=y(),h(e,"class","selected svelte-kqij2n"),h(e,"id",u=l[14].elem_id?l[14].elem_id+"-button":null)},m(_,s){p(_,e,s),v(e,n),v(e,c)},p(_,s){s&8&&i!==(i=_[14].name+"")&&M(n,i),s&8&&u!==(u=_[14].elem_id?_[14].elem_id+"-button":null)&&h(e,"id",u)},d(_){_&&g(e)}}}function E(l,e){let i,n;function c(s,f){return s[14].id===s[4]?Z:Y}let u=c(e),_=u(e);return{key:l,first:null,c(){i=I(),_.c(),n=I(),this.first=i},m(s,f){p(s,i,f),_.m(s,f),p(s,n,f)},p(s,f){e=s,u===(u=c(e))&&_?_.p(e,f):(_.d(1),_=u(e),_&&(_.c(),_.m(n.parentNode,n)))},d(s){s&&(g(i),g(n)),_.d(s)}}}function x(l){let e,i,n=[],c=new Map,u,_,s,f=S(l[3]);const m=t=>t[14].id;for(let t=0;ti(4,c=d));const o=T(0);C(l,o,d=>i(13,n=d));const b=X();W($,{register_tab:d=>(a.push({name:d.name,id:d.id,elem_id:d.elem_id}),t.update(k=>k??d.id),i(3,a),a.length-1),unregister_tab:d=>{const k=a.findIndex(j=>j.id===d.id);a.splice(k,1),t.update(j=>j===d.id?a[k]?.id||a[a.length-1]?.id:j)},selected_tab:t,selected_tab_index:o});function q(d){i(9,r=d),A(t,c=d,c),A(o,n=a.findIndex(k=>k.id===d),n),b("change")}const z=(d,k)=>{q(d.id),b("select",{value:d.name,index:k})};return l.$$set=d=>{"visible"in d&&i(0,s=d.visible),"elem_id"in d&&i(1,f=d.elem_id),"elem_classes"in d&&i(2,m=d.elem_classes),"selected"in d&&i(9,r=d.selected),"$$scope"in 
d&&i(10,_=d.$$scope)},l.$$.update=()=>{l.$$.dirty&512&&r!==null&&q(r)},[s,f,m,a,c,t,o,b,q,r,_,u,z]}class ie extends F{constructor(e){super(),L(this,e,ee,x,P,{visible:0,elem_id:1,elem_classes:2,selected:9})}}export{ie as T,$ as a}; -//# sourceMappingURL=Tabs-014dc45f.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/conftest.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/conftest.py deleted file mode 100644 index f1a3eda989057713f3576b60580f2d06b664873c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/conftest.py +++ /dev/null @@ -1,138 +0,0 @@ -""" -Pytest configuration and fixtures for the Numpy test suite. -""" -import os -import tempfile - -import hypothesis -import pytest -import numpy - -from numpy.core._multiarray_tests import get_fpu_mode - - -_old_fpu_mode = None -_collect_results = {} - -# Use a known and persistent tmpdir for hypothesis' caches, which -# can be automatically cleared by the OS or user. -hypothesis.configuration.set_hypothesis_home_dir( - os.path.join(tempfile.gettempdir(), ".hypothesis") -) - -# We register two custom profiles for Numpy - for details see -# https://hypothesis.readthedocs.io/en/latest/settings.html -# The first is designed for our own CI runs; the latter also -# forces determinism and is designed for use via np.test() -hypothesis.settings.register_profile( - name="numpy-profile", deadline=None, print_blob=True, -) -hypothesis.settings.register_profile( - name="np.test() profile", - deadline=None, print_blob=True, database=None, derandomize=True, - suppress_health_check=list(hypothesis.HealthCheck), -) -# Note that the default profile is chosen based on the presence -# of pytest.ini, but can be overridden by passing the -# --hypothesis-profile=NAME argument to pytest. -_pytest_ini = os.path.join(os.path.dirname(__file__), "..", "pytest.ini") -hypothesis.settings.load_profile( - "numpy-profile" if os.path.isfile(_pytest_ini) else "np.test() profile" -) - -# The experimentalAPI is used in _umath_tests -os.environ["NUMPY_EXPERIMENTAL_DTYPE_API"] = "1" - -def pytest_configure(config): - config.addinivalue_line("markers", - "valgrind_error: Tests that are known to error under valgrind.") - config.addinivalue_line("markers", - "leaks_references: Tests that are known to leak references.") - config.addinivalue_line("markers", - "slow: Tests that are very slow.") - config.addinivalue_line("markers", - "slow_pypy: Tests that are very slow on pypy.") - - -def pytest_addoption(parser): - parser.addoption("--available-memory", action="store", default=None, - help=("Set amount of memory available for running the " - "test suite. This can result to tests requiring " - "especially large amounts of memory to be skipped. " - "Equivalent to setting environment variable " - "NPY_AVAILABLE_MEM. Default: determined" - "automatically.")) - - -def pytest_sessionstart(session): - available_mem = session.config.getoption('available_memory') - if available_mem is not None: - os.environ['NPY_AVAILABLE_MEM'] = available_mem - - -#FIXME when yield tests are gone. -@pytest.hookimpl() -def pytest_itemcollected(item): - """ - Check FPU precision mode was not changed during test collection. - - The clumsy way we do it here is mainly necessary because numpy - still uses yield tests, which can execute code at test collection - time. 
- """ - global _old_fpu_mode - - mode = get_fpu_mode() - - if _old_fpu_mode is None: - _old_fpu_mode = mode - elif mode != _old_fpu_mode: - _collect_results[item] = (_old_fpu_mode, mode) - _old_fpu_mode = mode - - -@pytest.fixture(scope="function", autouse=True) -def check_fpu_mode(request): - """ - Check FPU precision mode was not changed during the test. - """ - old_mode = get_fpu_mode() - yield - new_mode = get_fpu_mode() - - if old_mode != new_mode: - raise AssertionError("FPU precision mode changed from {0:#x} to {1:#x}" - " during the test".format(old_mode, new_mode)) - - collect_result = _collect_results.get(request.node) - if collect_result is not None: - old_mode, new_mode = collect_result - raise AssertionError("FPU precision mode changed from {0:#x} to {1:#x}" - " when collecting the test".format(old_mode, - new_mode)) - - -@pytest.fixture(autouse=True) -def add_np(doctest_namespace): - doctest_namespace['np'] = numpy - -@pytest.fixture(autouse=True) -def env_setup(monkeypatch): - monkeypatch.setenv('PYTHONHASHSEED', '0') - - -@pytest.fixture(params=[True, False]) -def weak_promotion(request): - """ - Fixture to ensure "legacy" promotion state or change it to use the new - weak promotion (plus warning). `old_promotion` should be used as a - parameter in the function. - """ - state = numpy._get_promotion_state() - if request.param: - numpy._set_promotion_state("weak_and_warn") - else: - numpy._set_promotion_state("legacy") - - yield request.param - numpy._set_promotion_state(state) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_exceptions.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_exceptions.py deleted file mode 100644 index 87d4213a6d42cf090f8db75571244840dd68cd5a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_exceptions.py +++ /dev/null @@ -1,172 +0,0 @@ -""" -Various richly-typed exceptions, that also help us deal with string formatting -in python where it's easier. - -By putting the formatting in `__str__`, we also avoid paying the cost for -users who silence the exceptions. -""" -from .._utils import set_module - -def _unpack_tuple(tup): - if len(tup) == 1: - return tup[0] - else: - return tup - - -def _display_as_base(cls): - """ - A decorator that makes an exception class look like its base. - - We use this to hide subclasses that are implementation details - the user - should catch the base type, which is what the traceback will show them. - - Classes decorated with this decorator are subject to removal without a - deprecation warning. 
- """ - assert issubclass(cls, Exception) - cls.__name__ = cls.__base__.__name__ - return cls - - -class UFuncTypeError(TypeError): - """ Base class for all ufunc exceptions """ - def __init__(self, ufunc): - self.ufunc = ufunc - - -@_display_as_base -class _UFuncNoLoopError(UFuncTypeError): - """ Thrown when a ufunc loop cannot be found """ - def __init__(self, ufunc, dtypes): - super().__init__(ufunc) - self.dtypes = tuple(dtypes) - - def __str__(self): - return ( - "ufunc {!r} did not contain a loop with signature matching types " - "{!r} -> {!r}" - ).format( - self.ufunc.__name__, - _unpack_tuple(self.dtypes[:self.ufunc.nin]), - _unpack_tuple(self.dtypes[self.ufunc.nin:]) - ) - - -@_display_as_base -class _UFuncBinaryResolutionError(_UFuncNoLoopError): - """ Thrown when a binary resolution fails """ - def __init__(self, ufunc, dtypes): - super().__init__(ufunc, dtypes) - assert len(self.dtypes) == 2 - - def __str__(self): - return ( - "ufunc {!r} cannot use operands with types {!r} and {!r}" - ).format( - self.ufunc.__name__, *self.dtypes - ) - - -@_display_as_base -class _UFuncCastingError(UFuncTypeError): - def __init__(self, ufunc, casting, from_, to): - super().__init__(ufunc) - self.casting = casting - self.from_ = from_ - self.to = to - - -@_display_as_base -class _UFuncInputCastingError(_UFuncCastingError): - """ Thrown when a ufunc input cannot be casted """ - def __init__(self, ufunc, casting, from_, to, i): - super().__init__(ufunc, casting, from_, to) - self.in_i = i - - def __str__(self): - # only show the number if more than one input exists - i_str = "{} ".format(self.in_i) if self.ufunc.nin != 1 else "" - return ( - "Cannot cast ufunc {!r} input {}from {!r} to {!r} with casting " - "rule {!r}" - ).format( - self.ufunc.__name__, i_str, self.from_, self.to, self.casting - ) - - -@_display_as_base -class _UFuncOutputCastingError(_UFuncCastingError): - """ Thrown when a ufunc output cannot be casted """ - def __init__(self, ufunc, casting, from_, to, i): - super().__init__(ufunc, casting, from_, to) - self.out_i = i - - def __str__(self): - # only show the number if more than one output exists - i_str = "{} ".format(self.out_i) if self.ufunc.nout != 1 else "" - return ( - "Cannot cast ufunc {!r} output {}from {!r} to {!r} with casting " - "rule {!r}" - ).format( - self.ufunc.__name__, i_str, self.from_, self.to, self.casting - ) - - -@_display_as_base -class _ArrayMemoryError(MemoryError): - """ Thrown when an array cannot be allocated""" - def __init__(self, shape, dtype): - self.shape = shape - self.dtype = dtype - - @property - def _total_size(self): - num_bytes = self.dtype.itemsize - for dim in self.shape: - num_bytes *= dim - return num_bytes - - @staticmethod - def _size_to_string(num_bytes): - """ Convert a number of bytes into a binary size string """ - - # https://en.wikipedia.org/wiki/Binary_prefix - LOG2_STEP = 10 - STEP = 1024 - units = ['bytes', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB'] - - unit_i = max(num_bytes.bit_length() - 1, 1) // LOG2_STEP - unit_val = 1 << (unit_i * LOG2_STEP) - n_units = num_bytes / unit_val - del unit_val - - # ensure we pick a unit that is correct after rounding - if round(n_units) == STEP: - unit_i += 1 - n_units /= STEP - - # deal with sizes so large that we don't have units for them - if unit_i >= len(units): - new_unit_i = len(units) - 1 - n_units *= 1 << ((unit_i - new_unit_i) * LOG2_STEP) - unit_i = new_unit_i - - unit_name = units[unit_i] - # format with a sensible number of digits - if unit_i == 0: - # no decimal point on 
bytes - return '{:.0f} {}'.format(n_units, unit_name) - elif round(n_units) < 1000: - # 3 significant figures, if none are dropped to the left of the . - return '{:#.3g} {}'.format(n_units, unit_name) - else: - # just give all the digits otherwise - return '{:#.0f} {}'.format(n_units, unit_name) - - def __str__(self): - size_str = self._size_to_string(self._total_size) - return ( - "Unable to allocate {} for an array with shape {} and data type {}" - .format(size_str, self.shape, self.dtype) - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/pivot.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/pivot.py deleted file mode 100644 index 71e3ea5b2588ee99f967ebc08defc64a43583aa1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/pivot.py +++ /dev/null @@ -1,881 +0,0 @@ -from __future__ import annotations - -from collections.abc import ( - Hashable, - Sequence, -) -from typing import ( - TYPE_CHECKING, - Callable, - cast, -) - -import numpy as np - -from pandas._libs import lib -from pandas.util._decorators import ( - Appender, - Substitution, -) - -from pandas.core.dtypes.cast import maybe_downcast_to_dtype -from pandas.core.dtypes.common import ( - is_list_like, - is_nested_list_like, - is_scalar, -) -from pandas.core.dtypes.dtypes import ExtensionDtype -from pandas.core.dtypes.generic import ( - ABCDataFrame, - ABCSeries, -) - -import pandas.core.common as com -from pandas.core.frame import _shared_docs -from pandas.core.groupby import Grouper -from pandas.core.indexes.api import ( - Index, - MultiIndex, - get_objs_combined_axis, -) -from pandas.core.reshape.concat import concat -from pandas.core.reshape.util import cartesian_product -from pandas.core.series import Series - -if TYPE_CHECKING: - from pandas._typing import ( - AggFuncType, - AggFuncTypeBase, - AggFuncTypeDict, - IndexLabel, - ) - - from pandas import DataFrame - - -# Note: We need to make sure `frame` is imported before `pivot`, otherwise -# _shared_docs['pivot_table'] will not yet exist. 
TODO: Fix this dependency -@Substitution("\ndata : DataFrame") -@Appender(_shared_docs["pivot_table"], indents=1) -def pivot_table( - data: DataFrame, - values=None, - index=None, - columns=None, - aggfunc: AggFuncType = "mean", - fill_value=None, - margins: bool = False, - dropna: bool = True, - margins_name: Hashable = "All", - observed: bool = False, - sort: bool = True, -) -> DataFrame: - index = _convert_by(index) - columns = _convert_by(columns) - - if isinstance(aggfunc, list): - pieces: list[DataFrame] = [] - keys = [] - for func in aggfunc: - _table = __internal_pivot_table( - data, - values=values, - index=index, - columns=columns, - fill_value=fill_value, - aggfunc=func, - margins=margins, - dropna=dropna, - margins_name=margins_name, - observed=observed, - sort=sort, - ) - pieces.append(_table) - keys.append(getattr(func, "__name__", func)) - - table = concat(pieces, keys=keys, axis=1) - return table.__finalize__(data, method="pivot_table") - - table = __internal_pivot_table( - data, - values, - index, - columns, - aggfunc, - fill_value, - margins, - dropna, - margins_name, - observed, - sort, - ) - return table.__finalize__(data, method="pivot_table") - - -def __internal_pivot_table( - data: DataFrame, - values, - index, - columns, - aggfunc: AggFuncTypeBase | AggFuncTypeDict, - fill_value, - margins: bool, - dropna: bool, - margins_name: Hashable, - observed: bool, - sort: bool, -) -> DataFrame: - """ - Helper of :func:`pandas.pivot_table` for any non-list ``aggfunc``. - """ - keys = index + columns - - values_passed = values is not None - if values_passed: - if is_list_like(values): - values_multi = True - values = list(values) - else: - values_multi = False - values = [values] - - # GH14938 Make sure value labels are in data - for i in values: - if i not in data: - raise KeyError(i) - - to_filter = [] - for x in keys + values: - if isinstance(x, Grouper): - x = x.key - try: - if x in data: - to_filter.append(x) - except TypeError: - pass - if len(to_filter) < len(data.columns): - data = data[to_filter] - - else: - values = data.columns - for key in keys: - try: - values = values.drop(key) - except (TypeError, ValueError, KeyError): - pass - values = list(values) - - grouped = data.groupby(keys, observed=observed, sort=sort, dropna=dropna) - agged = grouped.agg(aggfunc) - - if dropna and isinstance(agged, ABCDataFrame) and len(agged.columns): - agged = agged.dropna(how="all") - - table = agged - - # GH17038, this check should only happen if index is defined (not None) - if table.index.nlevels > 1 and index: - # Related GH #17123 - # If index_names are integers, determine whether the integers refer - # to the level position or name. 
- index_names = agged.index.names[: len(index)] - to_unstack = [] - for i in range(len(index), len(keys)): - name = agged.index.names[i] - if name is None or name in index_names: - to_unstack.append(i) - else: - to_unstack.append(name) - table = agged.unstack(to_unstack, fill_value=fill_value) - - if not dropna: - if isinstance(table.index, MultiIndex): - m = MultiIndex.from_arrays( - cartesian_product(table.index.levels), names=table.index.names - ) - table = table.reindex(m, axis=0, fill_value=fill_value) - - if isinstance(table.columns, MultiIndex): - m = MultiIndex.from_arrays( - cartesian_product(table.columns.levels), names=table.columns.names - ) - table = table.reindex(m, axis=1, fill_value=fill_value) - - if sort is True and isinstance(table, ABCDataFrame): - table = table.sort_index(axis=1) - - if fill_value is not None: - table = table.fillna(fill_value) - if aggfunc is len and not observed and lib.is_integer(fill_value): - # TODO: can we avoid this? this used to be handled by - # downcast="infer" in fillna - table = table.astype(np.int64) - - if margins: - if dropna: - data = data[data.notna().all(axis=1)] - table = _add_margins( - table, - data, - values, - rows=index, - cols=columns, - aggfunc=aggfunc, - observed=dropna, - margins_name=margins_name, - fill_value=fill_value, - ) - - # discard the top level - if values_passed and not values_multi and table.columns.nlevels > 1: - table.columns = table.columns.droplevel(0) - if len(index) == 0 and len(columns) > 0: - table = table.T - - # GH 15193 Make sure empty columns are removed if dropna=True - if isinstance(table, ABCDataFrame) and dropna: - table = table.dropna(how="all", axis=1) - - return table - - -def _add_margins( - table: DataFrame | Series, - data: DataFrame, - values, - rows, - cols, - aggfunc, - observed: bool, - margins_name: Hashable = "All", - fill_value=None, -): - if not isinstance(margins_name, str): - raise ValueError("margins_name argument must be a string") - - msg = f'Conflicting name "{margins_name}" in margins' - for level in table.index.names: - if margins_name in table.index.get_level_values(level): - raise ValueError(msg) - - grand_margin = _compute_grand_margin(data, values, aggfunc, margins_name) - - if table.ndim == 2: - # i.e. DataFrame - for level in table.columns.names[1:]: - if margins_name in table.columns.get_level_values(level): - raise ValueError(msg) - - key: str | tuple[str, ...] - if len(rows) > 1: - key = (margins_name,) + ("",) * (len(rows) - 1) - else: - key = margins_name - - if not values and isinstance(table, ABCSeries): - # If there are no values and the table is a series, then there is only - # one column in the data. Compute grand margin and return it. 
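- # e.g. with the default margins_name="All", this appends a single row labelled "All" that
- # holds aggfunc applied over the whole index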
- return table._append(table._constructor({key: grand_margin[margins_name]})) - - elif values: - marginal_result_set = _generate_marginal_results( - table, data, values, rows, cols, aggfunc, observed, margins_name - ) - if not isinstance(marginal_result_set, tuple): - return marginal_result_set - result, margin_keys, row_margin = marginal_result_set - else: - # no values, and table is a DataFrame - assert isinstance(table, ABCDataFrame) - marginal_result_set = _generate_marginal_results_without_values( - table, data, rows, cols, aggfunc, observed, margins_name - ) - if not isinstance(marginal_result_set, tuple): - return marginal_result_set - result, margin_keys, row_margin = marginal_result_set - - row_margin = row_margin.reindex(result.columns, fill_value=fill_value) - # populate grand margin - for k in margin_keys: - if isinstance(k, str): - row_margin[k] = grand_margin[k] - else: - row_margin[k] = grand_margin[k[0]] - - from pandas import DataFrame - - margin_dummy = DataFrame(row_margin, columns=Index([key])).T - - row_names = result.index.names - # check the result column and leave floats - for dtype in set(result.dtypes): - if isinstance(dtype, ExtensionDtype): - # Can hold NA already - continue - - cols = result.select_dtypes([dtype]).columns - margin_dummy[cols] = margin_dummy[cols].apply( - maybe_downcast_to_dtype, args=(dtype,) - ) - result = result._append(margin_dummy) - result.index.names = row_names - - return result - - -def _compute_grand_margin( - data: DataFrame, values, aggfunc, margins_name: Hashable = "All" -): - if values: - grand_margin = {} - for k, v in data[values].items(): - try: - if isinstance(aggfunc, str): - grand_margin[k] = getattr(v, aggfunc)() - elif isinstance(aggfunc, dict): - if isinstance(aggfunc[k], str): - grand_margin[k] = getattr(v, aggfunc[k])() - else: - grand_margin[k] = aggfunc[k](v) - else: - grand_margin[k] = aggfunc(v) - except TypeError: - pass - return grand_margin - else: - return {margins_name: aggfunc(data.index)} - - -def _generate_marginal_results( - table, - data: DataFrame, - values, - rows, - cols, - aggfunc, - observed: bool, - margins_name: Hashable = "All", -): - margin_keys: list | Index - if len(cols) > 0: - # need to "interleave" the margins - table_pieces = [] - margin_keys = [] - - def _all_key(key): - return (key, margins_name) + ("",) * (len(cols) - 1) - - if len(rows) > 0: - margin = data[rows + values].groupby(rows, observed=observed).agg(aggfunc) - cat_axis = 1 - - for key, piece in table.T.groupby(level=0, observed=observed): - piece = piece.T - all_key = _all_key(key) - - # we are going to mutate this, so need to copy! 
- piece = piece.copy() - piece[all_key] = margin[key] - - table_pieces.append(piece) - margin_keys.append(all_key) - else: - from pandas import DataFrame - - cat_axis = 0 - for key, piece in table.groupby(level=0, observed=observed): - if len(cols) > 1: - all_key = _all_key(key) - else: - all_key = margins_name - table_pieces.append(piece) - # GH31016 this is to calculate margin for each group, and assign - # corresponded key as index - transformed_piece = DataFrame(piece.apply(aggfunc)).T - if isinstance(piece.index, MultiIndex): - # We are adding an empty level - transformed_piece.index = MultiIndex.from_tuples( - [all_key], names=piece.index.names + [None] - ) - else: - transformed_piece.index = Index([all_key], name=piece.index.name) - - # append piece for margin into table_piece - table_pieces.append(transformed_piece) - margin_keys.append(all_key) - - if not table_pieces: - # GH 49240 - return table - else: - result = concat(table_pieces, axis=cat_axis) - - if len(rows) == 0: - return result - else: - result = table - margin_keys = table.columns - - if len(cols) > 0: - row_margin = data[cols + values].groupby(cols, observed=observed).agg(aggfunc) - row_margin = row_margin.stack(future_stack=True) - - # slight hack - new_order = [len(cols)] + list(range(len(cols))) - row_margin.index = row_margin.index.reorder_levels(new_order) - else: - row_margin = data._constructor_sliced(np.nan, index=result.columns) - - return result, margin_keys, row_margin - - -def _generate_marginal_results_without_values( - table: DataFrame, - data: DataFrame, - rows, - cols, - aggfunc, - observed: bool, - margins_name: Hashable = "All", -): - margin_keys: list | Index - if len(cols) > 0: - # need to "interleave" the margins - margin_keys = [] - - def _all_key(): - if len(cols) == 1: - return margins_name - return (margins_name,) + ("",) * (len(cols) - 1) - - if len(rows) > 0: - margin = data[rows].groupby(rows, observed=observed).apply(aggfunc) - all_key = _all_key() - table[all_key] = margin - result = table - margin_keys.append(all_key) - - else: - margin = data.groupby(level=0, axis=0, observed=observed).apply(aggfunc) - all_key = _all_key() - table[all_key] = margin - result = table - margin_keys.append(all_key) - return result - else: - result = table - margin_keys = table.columns - - if len(cols): - row_margin = data[cols].groupby(cols, observed=observed).apply(aggfunc) - else: - row_margin = Series(np.nan, index=result.columns) - - return result, margin_keys, row_margin - - -def _convert_by(by): - if by is None: - by = [] - elif ( - is_scalar(by) - or isinstance(by, (np.ndarray, Index, ABCSeries, Grouper)) - or callable(by) - ): - by = [by] - else: - by = list(by) - return by - - -@Substitution("\ndata : DataFrame") -@Appender(_shared_docs["pivot"], indents=1) -def pivot( - data: DataFrame, - *, - columns: IndexLabel, - index: IndexLabel | lib.NoDefault = lib.no_default, - values: IndexLabel | lib.NoDefault = lib.no_default, -) -> DataFrame: - columns_listlike = com.convert_to_list_like(columns) - - # If columns is None we will create a MultiIndex level with None as name - # which might cause duplicated names because None is the default for - # level names - data = data.copy(deep=False) - data.index = data.index.copy() - data.index.names = [ - name if name is not None else lib.no_default for name in data.index.names - ] - - indexed: DataFrame | Series - if values is lib.no_default: - if index is not lib.no_default: - cols = com.convert_to_list_like(index) - else: - cols = [] - - append = index is 
lib.no_default - # error: Unsupported operand types for + ("List[Any]" and "ExtensionArray") - # error: Unsupported left operand type for + ("ExtensionArray") - indexed = data.set_index( - cols + columns_listlike, append=append # type: ignore[operator] - ) - else: - if index is lib.no_default: - if isinstance(data.index, MultiIndex): - # GH 23955 - index_list = [ - data.index.get_level_values(i) for i in range(data.index.nlevels) - ] - else: - index_list = [ - data._constructor_sliced(data.index, name=data.index.name) - ] - else: - index_list = [data[idx] for idx in com.convert_to_list_like(index)] - - data_columns = [data[col] for col in columns_listlike] - index_list.extend(data_columns) - multiindex = MultiIndex.from_arrays(index_list) - - if is_list_like(values) and not isinstance(values, tuple): - # Exclude tuple because it is seen as a single column name - values = cast(Sequence[Hashable], values) - indexed = data._constructor( - data[values]._values, index=multiindex, columns=values - ) - else: - indexed = data._constructor_sliced(data[values]._values, index=multiindex) - # error: Argument 1 to "unstack" of "DataFrame" has incompatible type "Union - # [List[Any], ExtensionArray, ndarray[Any, Any], Index, Series]"; expected - # "Hashable" - result = indexed.unstack(columns_listlike) # type: ignore[arg-type] - result.index.names = [ - name if name is not lib.no_default else None for name in result.index.names - ] - - return result - - -def crosstab( - index, - columns, - values=None, - rownames=None, - colnames=None, - aggfunc=None, - margins: bool = False, - margins_name: Hashable = "All", - dropna: bool = True, - normalize: bool = False, -) -> DataFrame: - """ - Compute a simple cross tabulation of two (or more) factors. - - By default, computes a frequency table of the factors unless an - array of values and an aggregation function are passed. - - Parameters - ---------- - index : array-like, Series, or list of arrays/Series - Values to group by in the rows. - columns : array-like, Series, or list of arrays/Series - Values to group by in the columns. - values : array-like, optional - Array of values to aggregate according to the factors. - Requires `aggfunc` be specified. - rownames : sequence, default None - If passed, must match number of row arrays passed. - colnames : sequence, default None - If passed, must match number of column arrays passed. - aggfunc : function, optional - If specified, requires `values` be specified as well. - margins : bool, default False - Add row/column margins (subtotals). - margins_name : str, default 'All' - Name of the row/column that will contain the totals - when margins is True. - dropna : bool, default True - Do not include columns whose entries are all NaN. - normalize : bool, {'all', 'index', 'columns'}, or {0,1}, default False - Normalize by dividing all values by the sum of values. - - - If passed 'all' or `True`, will normalize over all values. - - If passed 'index' will normalize over each row. - - If passed 'columns' will normalize over each column. - - If margins is `True`, will also normalize margin values. - - Returns - ------- - DataFrame - Cross tabulation of the data. - - See Also - -------- - DataFrame.pivot : Reshape data based on column values. - pivot_table : Create a pivot table as a DataFrame. - - Notes - ----- - Any Series passed will have their name attributes used unless row or column - names for the cross-tabulation are specified. 
- - Any input passed containing Categorical data will have **all** of its - categories included in the cross-tabulation, even if the actual data does - not contain any instances of a particular category. - - In the event that there aren't overlapping indexes an empty DataFrame will - be returned. - - Reference :ref:`the user guide ` for more examples. - - Examples - -------- - >>> a = np.array(["foo", "foo", "foo", "foo", "bar", "bar", - ... "bar", "bar", "foo", "foo", "foo"], dtype=object) - >>> b = np.array(["one", "one", "one", "two", "one", "one", - ... "one", "two", "two", "two", "one"], dtype=object) - >>> c = np.array(["dull", "dull", "shiny", "dull", "dull", "shiny", - ... "shiny", "dull", "shiny", "shiny", "shiny"], - ... dtype=object) - >>> pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c']) - b one two - c dull shiny dull shiny - a - bar 1 2 1 0 - foo 2 2 1 2 - - Here 'c' and 'f' are not represented in the data and will not be - shown in the output because dropna is True by default. Set - dropna=False to preserve categories with no data. - - >>> foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c']) - >>> bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f']) - >>> pd.crosstab(foo, bar) - col_0 d e - row_0 - a 1 0 - b 0 1 - >>> pd.crosstab(foo, bar, dropna=False) - col_0 d e f - row_0 - a 1 0 0 - b 0 1 0 - c 0 0 0 - """ - if values is None and aggfunc is not None: - raise ValueError("aggfunc cannot be used without values.") - - if values is not None and aggfunc is None: - raise ValueError("values cannot be used without an aggfunc.") - - if not is_nested_list_like(index): - index = [index] - if not is_nested_list_like(columns): - columns = [columns] - - common_idx = None - pass_objs = [x for x in index + columns if isinstance(x, (ABCSeries, ABCDataFrame))] - if pass_objs: - common_idx = get_objs_combined_axis(pass_objs, intersect=True, sort=False) - - rownames = _get_names(index, rownames, prefix="row") - colnames = _get_names(columns, colnames, prefix="col") - - # duplicate names mapped to unique names for pivot op - ( - rownames_mapper, - unique_rownames, - colnames_mapper, - unique_colnames, - ) = _build_names_mapper(rownames, colnames) - - from pandas import DataFrame - - data = { - **dict(zip(unique_rownames, index)), - **dict(zip(unique_colnames, columns)), - } - df = DataFrame(data, index=common_idx) - - if values is None: - df["__dummy__"] = 0 - kwargs = {"aggfunc": len, "fill_value": 0} - else: - df["__dummy__"] = values - kwargs = {"aggfunc": aggfunc} - - # error: Argument 7 to "pivot_table" of "DataFrame" has incompatible type - # "**Dict[str, object]"; expected "Union[...]" - table = df.pivot_table( - "__dummy__", - index=unique_rownames, - columns=unique_colnames, - margins=margins, - margins_name=margins_name, - dropna=dropna, - **kwargs, # type: ignore[arg-type] - ) - - # Post-process - if normalize is not False: - table = _normalize( - table, normalize=normalize, margins=margins, margins_name=margins_name - ) - - table = table.rename_axis(index=rownames_mapper, axis=0) - table = table.rename_axis(columns=colnames_mapper, axis=1) - - return table - - -def _normalize( - table: DataFrame, normalize, margins: bool, margins_name: Hashable = "All" -) -> DataFrame: - if not isinstance(normalize, (bool, str)): - axis_subs = {0: "index", 1: "columns"} - try: - normalize = axis_subs[normalize] - except KeyError as err: - raise ValueError("Not a valid normalize argument") from err - - if margins is False: - # Actual Normalizations - normalizers: dict[bool 
| str, Callable] = { - "all": lambda x: x / x.sum(axis=1).sum(axis=0), - "columns": lambda x: x / x.sum(), - "index": lambda x: x.div(x.sum(axis=1), axis=0), - } - - normalizers[True] = normalizers["all"] - - try: - f = normalizers[normalize] - except KeyError as err: - raise ValueError("Not a valid normalize argument") from err - - table = f(table) - table = table.fillna(0) - - elif margins is True: - # keep index and column of pivoted table - table_index = table.index - table_columns = table.columns - last_ind_or_col = table.iloc[-1, :].name - - # check if margin name is not in (for MI cases) and not equal to last - # index/column and save the column and index margin - if (margins_name not in last_ind_or_col) & (margins_name != last_ind_or_col): - raise ValueError(f"{margins_name} not in pivoted DataFrame") - column_margin = table.iloc[:-1, -1] - index_margin = table.iloc[-1, :-1] - - # keep the core table - table = table.iloc[:-1, :-1] - - # Normalize core - table = _normalize(table, normalize=normalize, margins=False) - - # Fix Margins - if normalize == "columns": - column_margin = column_margin / column_margin.sum() - table = concat([table, column_margin], axis=1) - table = table.fillna(0) - table.columns = table_columns - - elif normalize == "index": - index_margin = index_margin / index_margin.sum() - table = table._append(index_margin) - table = table.fillna(0) - table.index = table_index - - elif normalize == "all" or normalize is True: - column_margin = column_margin / column_margin.sum() - index_margin = index_margin / index_margin.sum() - index_margin.loc[margins_name] = 1 - table = concat([table, column_margin], axis=1) - table = table._append(index_margin) - - table = table.fillna(0) - table.index = table_index - table.columns = table_columns - - else: - raise ValueError("Not a valid normalize argument") - - else: - raise ValueError("Not a valid margins argument") - - return table - - -def _get_names(arrs, names, prefix: str = "row"): - if names is None: - names = [] - for i, arr in enumerate(arrs): - if isinstance(arr, ABCSeries) and arr.name is not None: - names.append(arr.name) - else: - names.append(f"{prefix}_{i}") - else: - if len(names) != len(arrs): - raise AssertionError("arrays and names must have the same length") - if not isinstance(names, list): - names = list(names) - - return names - - -def _build_names_mapper( - rownames: list[str], colnames: list[str] -) -> tuple[dict[str, str], list[str], dict[str, str], list[str]]: - """ - Given the names of a DataFrame's rows and columns, returns a set of unique row - and column names and mappers that convert to original names. - - A row or column name is replaced if it is duplicate among the rows of the inputs, - among the columns of the inputs or between the rows and the columns. 
- - Parameters - ---------- - rownames: list[str] - colnames: list[str] - - Returns - ------- - Tuple(Dict[str, str], List[str], Dict[str, str], List[str]) - - rownames_mapper: dict[str, str] - a dictionary with new row names as keys and original rownames as values - unique_rownames: list[str] - a list of rownames with duplicate names replaced by dummy names - colnames_mapper: dict[str, str] - a dictionary with new column names as keys and original column names as values - unique_colnames: list[str] - a list of column names with duplicate names replaced by dummy names - - """ - - def get_duplicates(names): - seen: set = set() - return {name for name in names if name not in seen} - - shared_names = set(rownames).intersection(set(colnames)) - dup_names = get_duplicates(rownames) | get_duplicates(colnames) | shared_names - - rownames_mapper = { - f"row_{i}": name for i, name in enumerate(rownames) if name in dup_names - } - unique_rownames = [ - f"row_{i}" if name in dup_names else name for i, name in enumerate(rownames) - ] - - colnames_mapper = { - f"col_{i}": name for i, name in enumerate(colnames) if name in dup_names - } - unique_colnames = [ - f"col_{i}" if name in dup_names else name for i, name in enumerate(colnames) - ] - - return rownames_mapper, unique_rownames, colnames_mapper, unique_colnames diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/__init__.py deleted file mode 100644 index 857e12e5467a6a7d2263d9add33e65b9499778fa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -from pandas.core.window.ewm import ( - ExponentialMovingWindow, - ExponentialMovingWindowGroupby, -) -from pandas.core.window.expanding import ( - Expanding, - ExpandingGroupby, -) -from pandas.core.window.rolling import ( - Rolling, - RollingGroupby, - Window, -) - -__all__ = [ - "Expanding", - "ExpandingGroupby", - "ExponentialMovingWindow", - "ExponentialMovingWindowGroupby", - "Rolling", - "RollingGroupby", - "Window", -] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/integer/conftest.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/integer/conftest.py deleted file mode 100644 index f73400dfe689e91c4c2b457c4be1a0a41380fd6a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/integer/conftest.py +++ /dev/null @@ -1,68 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas.core.arrays.integer import ( - Int8Dtype, - Int16Dtype, - Int32Dtype, - Int64Dtype, - UInt8Dtype, - UInt16Dtype, - UInt32Dtype, - UInt64Dtype, -) - - -@pytest.fixture( - params=[ - Int8Dtype, - Int16Dtype, - Int32Dtype, - Int64Dtype, - UInt8Dtype, - UInt16Dtype, - UInt32Dtype, - UInt64Dtype, - ] -) -def dtype(request): - """Parametrized fixture returning integer 'dtype'""" - return request.param() - - -@pytest.fixture -def data(dtype): - """ - Fixture returning 'data' array with valid and missing values according to - parametrized integer 'dtype'. - - Used to test dtype conversion with and without missing values. 
- """ - return pd.array( - list(range(8)) + [np.nan] + list(range(10, 98)) + [np.nan] + [99, 100], - dtype=dtype, - ) - - -@pytest.fixture -def data_missing(dtype): - """ - Fixture returning array with exactly one NaN and one valid integer, - according to parametrized integer 'dtype'. - - Used to test dtype conversion with and without missing values. - """ - return pd.array([np.nan, 1], dtype=dtype) - - -@pytest.fixture(params=["data", "data_missing"]) -def all_data(request, data, data_missing): - """Parametrized fixture returning 'data' or 'data_missing' integer arrays. - - Used to test dtype conversion with and without missing values. - """ - if request.param == "data": - return data - elif request.param == "data_missing": - return data_missing diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/configuration.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/configuration.py deleted file mode 100644 index a8092d1ae069c4095901e7f5cb8e6fa49ef63033..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/configuration.py +++ /dev/null @@ -1,366 +0,0 @@ -"""Configuration management setup - -Some terminology: -- name - As written in config files. -- value - Value associated with a name -- key - Name combined with it's section (section.name) -- variant - A single word describing where the configuration key-value pair came from -""" - -import configparser -import locale -import os -import sys -from typing import Any, Dict, Iterable, List, NewType, Optional, Tuple - -from pip._internal.exceptions import ( - ConfigurationError, - ConfigurationFileCouldNotBeLoaded, -) -from pip._internal.utils import appdirs -from pip._internal.utils.compat import WINDOWS -from pip._internal.utils.logging import getLogger -from pip._internal.utils.misc import ensure_dir, enum - -RawConfigParser = configparser.RawConfigParser # Shorthand -Kind = NewType("Kind", str) - -CONFIG_BASENAME = "pip.ini" if WINDOWS else "pip.conf" -ENV_NAMES_IGNORED = "version", "help" - -# The kinds of configurations there are. -kinds = enum( - USER="user", # User Specific - GLOBAL="global", # System Wide - SITE="site", # [Virtual] Environment Specific - ENV="env", # from PIP_CONFIG_FILE - ENV_VAR="env-var", # from Environment Variables -) -OVERRIDE_ORDER = kinds.GLOBAL, kinds.USER, kinds.SITE, kinds.ENV, kinds.ENV_VAR -VALID_LOAD_ONLY = kinds.USER, kinds.GLOBAL, kinds.SITE - -logger = getLogger(__name__) - - -# NOTE: Maybe use the optionx attribute to normalize keynames. -def _normalize_name(name: str) -> str: - """Make a name consistent regardless of source (environment or file)""" - name = name.lower().replace("_", "-") - if name.startswith("--"): - name = name[2:] # only prefer long opts - return name - - -def _disassemble_key(name: str) -> List[str]: - if "." not in name: - error_message = ( - "Key does not contain dot separated section and key. " - "Perhaps you wanted to use 'global.{}' instead?" 
- ).format(name) - raise ConfigurationError(error_message) - return name.split(".", 1) - - -def get_configuration_files() -> Dict[Kind, List[str]]: - global_config_files = [ - os.path.join(path, CONFIG_BASENAME) for path in appdirs.site_config_dirs("pip") - ] - - site_config_file = os.path.join(sys.prefix, CONFIG_BASENAME) - legacy_config_file = os.path.join( - os.path.expanduser("~"), - "pip" if WINDOWS else ".pip", - CONFIG_BASENAME, - ) - new_config_file = os.path.join(appdirs.user_config_dir("pip"), CONFIG_BASENAME) - return { - kinds.GLOBAL: global_config_files, - kinds.SITE: [site_config_file], - kinds.USER: [legacy_config_file, new_config_file], - } - - -class Configuration: - """Handles management of configuration. - - Provides an interface to accessing and managing configuration files. - - This class converts provides an API that takes "section.key-name" style - keys and stores the value associated with it as "key-name" under the - section "section". - - This allows for a clean interface wherein the both the section and the - key-name are preserved in an easy to manage form in the configuration files - and the data stored is also nice. - """ - - def __init__(self, isolated: bool, load_only: Optional[Kind] = None) -> None: - super().__init__() - - if load_only is not None and load_only not in VALID_LOAD_ONLY: - raise ConfigurationError( - "Got invalid value for load_only - should be one of {}".format( - ", ".join(map(repr, VALID_LOAD_ONLY)) - ) - ) - self.isolated = isolated - self.load_only = load_only - - # Because we keep track of where we got the data from - self._parsers: Dict[Kind, List[Tuple[str, RawConfigParser]]] = { - variant: [] for variant in OVERRIDE_ORDER - } - self._config: Dict[Kind, Dict[str, Any]] = { - variant: {} for variant in OVERRIDE_ORDER - } - self._modified_parsers: List[Tuple[str, RawConfigParser]] = [] - - def load(self) -> None: - """Loads configuration from configuration files and environment""" - self._load_config_files() - if not self.isolated: - self._load_environment_vars() - - def get_file_to_edit(self) -> Optional[str]: - """Returns the file with highest priority in configuration""" - assert self.load_only is not None, "Need to be specified a file to be editing" - - try: - return self._get_parser_to_modify()[0] - except IndexError: - return None - - def items(self) -> Iterable[Tuple[str, Any]]: - """Returns key-value pairs like dict.items() representing the loaded - configuration - """ - return self._dictionary.items() - - def get_value(self, key: str) -> Any: - """Get a value from the configuration.""" - try: - return self._dictionary[key] - except KeyError: - raise ConfigurationError(f"No such key - {key}") - - def set_value(self, key: str, value: Any) -> None: - """Modify a value in the configuration.""" - self._ensure_have_load_only() - - assert self.load_only - fname, parser = self._get_parser_to_modify() - - if parser is not None: - section, name = _disassemble_key(key) - - # Modify the parser and the configuration - if not parser.has_section(section): - parser.add_section(section) - parser.set(section, name, value) - - self._config[self.load_only][key] = value - self._mark_as_modified(fname, parser) - - def unset_value(self, key: str) -> None: - """Unset a value in the configuration.""" - self._ensure_have_load_only() - - assert self.load_only - if key not in self._config[self.load_only]: - raise ConfigurationError(f"No such key - {key}") - - fname, parser = self._get_parser_to_modify() - - if parser is not None: - section, name = 
_disassemble_key(key) - if not ( - parser.has_section(section) and parser.remove_option(section, name) - ): - # The option was not removed. - raise ConfigurationError( - "Fatal Internal error [id=1]. Please report as a bug." - ) - - # The section may be empty after the option was removed. - if not parser.items(section): - parser.remove_section(section) - self._mark_as_modified(fname, parser) - - del self._config[self.load_only][key] - - def save(self) -> None: - """Save the current in-memory state.""" - self._ensure_have_load_only() - - for fname, parser in self._modified_parsers: - logger.info("Writing to %s", fname) - - # Ensure directory exists. - ensure_dir(os.path.dirname(fname)) - - with open(fname, "w") as f: - parser.write(f) - - # - # Private routines - # - - def _ensure_have_load_only(self) -> None: - if self.load_only is None: - raise ConfigurationError("Needed a specific file to be modifying.") - logger.debug("Will be working with %s variant only", self.load_only) - - @property - def _dictionary(self) -> Dict[str, Any]: - """A dictionary representing the loaded configuration.""" - # NOTE: Dictionaries are not populated if not loaded. So, conditionals - # are not needed here. - retval = {} - - for variant in OVERRIDE_ORDER: - retval.update(self._config[variant]) - - return retval - - def _load_config_files(self) -> None: - """Loads configuration from configuration files""" - config_files = dict(self.iter_config_files()) - if config_files[kinds.ENV][0:1] == [os.devnull]: - logger.debug( - "Skipping loading configuration files due to " - "environment's PIP_CONFIG_FILE being os.devnull" - ) - return - - for variant, files in config_files.items(): - for fname in files: - # If there's specific variant set in `load_only`, load only - # that variant, not the others. - if self.load_only is not None and variant != self.load_only: - logger.debug("Skipping file '%s' (variant: %s)", fname, variant) - continue - - parser = self._load_file(variant, fname) - - # Keeping track of the parsers used - self._parsers[variant].append((fname, parser)) - - def _load_file(self, variant: Kind, fname: str) -> RawConfigParser: - logger.verbose("For variant '%s', will try loading '%s'", variant, fname) - parser = self._construct_parser(fname) - - for section in parser.sections(): - items = parser.items(section) - self._config[variant].update(self._normalized_keys(section, items)) - - return parser - - def _construct_parser(self, fname: str) -> RawConfigParser: - parser = configparser.RawConfigParser() - # If there is no such file, don't bother reading it but create the - # parser anyway, to hold the data. - # Doing this is useful when modifying and saving files, where we don't - # need to construct a parser. 
- if os.path.exists(fname): - locale_encoding = locale.getpreferredencoding(False) - try: - parser.read(fname, encoding=locale_encoding) - except UnicodeDecodeError: - # See https://github.com/pypa/pip/issues/4963 - raise ConfigurationFileCouldNotBeLoaded( - reason=f"contains invalid {locale_encoding} characters", - fname=fname, - ) - except configparser.Error as error: - # See https://github.com/pypa/pip/issues/4893 - raise ConfigurationFileCouldNotBeLoaded(error=error) - return parser - - def _load_environment_vars(self) -> None: - """Loads configuration from environment variables""" - self._config[kinds.ENV_VAR].update( - self._normalized_keys(":env:", self.get_environ_vars()) - ) - - def _normalized_keys( - self, section: str, items: Iterable[Tuple[str, Any]] - ) -> Dict[str, Any]: - """Normalizes items to construct a dictionary with normalized keys. - - This routine is where the names become keys and are made the same - regardless of source - configuration files or environment. - """ - normalized = {} - for name, val in items: - key = section + "." + _normalize_name(name) - normalized[key] = val - return normalized - - def get_environ_vars(self) -> Iterable[Tuple[str, str]]: - """Returns a generator with all environmental vars with prefix PIP_""" - for key, val in os.environ.items(): - if key.startswith("PIP_"): - name = key[4:].lower() - if name not in ENV_NAMES_IGNORED: - yield name, val - - # XXX: This is patched in the tests. - def iter_config_files(self) -> Iterable[Tuple[Kind, List[str]]]: - """Yields variant and configuration files associated with it. - - This should be treated like items of a dictionary. - """ - # SMELL: Move the conditions out of this function - - # environment variables have the lowest priority - config_file = os.environ.get("PIP_CONFIG_FILE", None) - if config_file is not None: - yield kinds.ENV, [config_file] - else: - yield kinds.ENV, [] - - config_files = get_configuration_files() - - # at the base we have any global configuration - yield kinds.GLOBAL, config_files[kinds.GLOBAL] - - # per-user configuration next - should_load_user_config = not self.isolated and not ( - config_file and os.path.exists(config_file) - ) - if should_load_user_config: - # The legacy config file is overridden by the new config file - yield kinds.USER, config_files[kinds.USER] - - # finally virtualenv configuration first trumping others - yield kinds.SITE, config_files[kinds.SITE] - - def get_values_in_config(self, variant: Kind) -> Dict[str, Any]: - """Get values present in a config file""" - return self._config[variant] - - def _get_parser_to_modify(self) -> Tuple[str, RawConfigParser]: - # Determine which parser to modify - assert self.load_only - parsers = self._parsers[self.load_only] - if not parsers: - # This should not happen if everything works correctly. - raise ConfigurationError( - "Fatal Internal error [id=2]. Please report as a bug." - ) - - # Use the highest priority parser. - return parsers[-1] - - # XXX: This is patched in the tests. 
- def _mark_as_modified(self, fname: str, parser: RawConfigParser) -> None: - file_parser_tuple = (fname, parser) - if file_parser_tuple not in self._modified_parsers: - self._modified_parsers.append(file_parser_tuple) - - def __repr__(self) -> str: - return f"{self.__class__.__name__}({self._dictionary!r})" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/haxe.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/haxe.py deleted file mode 100644 index 6e99b10bc9903452f23f7a8e2b59a46758a8d6e6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/haxe.py +++ /dev/null @@ -1,937 +0,0 @@ -""" - pygments.lexers.haxe - ~~~~~~~~~~~~~~~~~~~~ - - Lexers for Haxe and related stuff. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -from pygments.lexer import ExtendedRegexLexer, RegexLexer, include, bygroups, \ - default -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Generic, Whitespace - -__all__ = ['HaxeLexer', 'HxmlLexer'] - - -class HaxeLexer(ExtendedRegexLexer): - """ - For Haxe source code. - - .. versionadded:: 1.3 - """ - - name = 'Haxe' - url = 'http://haxe.org/' - aliases = ['haxe', 'hxsl', 'hx'] - filenames = ['*.hx', '*.hxsl'] - mimetypes = ['text/haxe', 'text/x-haxe', 'text/x-hx'] - - # keywords extracted from lexer.mll in the haxe compiler source - keyword = (r'(?:function|class|static|var|if|else|while|do|for|' - r'break|return|continue|extends|implements|import|' - r'switch|case|default|public|private|try|untyped|' - r'catch|new|this|throw|extern|enum|in|interface|' - r'cast|override|dynamic|typedef|package|' - r'inline|using|null|true|false|abstract)\b') - - # idtype in lexer.mll - typeid = r'_*[A-Z]\w*' - - # combined ident and dollar and idtype - ident = r'(?:_*[a-z]\w*|_+[0-9]\w*|' + typeid + r'|_+|\$\w+)' - - binop = (r'(?:%=|&=|\|=|\^=|\+=|\-=|\*=|/=|<<=|>\s*>\s*=|>\s*>\s*>\s*=|==|' - r'!=|<=|>\s*=|&&|\|\||<<|>>>|>\s*>|\.\.\.|<|>|%|&|\||\^|\+|\*|' - r'/|\-|=>|=)') - - # ident except keywords - ident_no_keyword = r'(?!' 
+ keyword + ')' + ident - - flags = re.DOTALL | re.MULTILINE - - preproc_stack = [] - - def preproc_callback(self, match, ctx): - proc = match.group(2) - - if proc == 'if': - # store the current stack - self.preproc_stack.append(ctx.stack[:]) - elif proc in ['else', 'elseif']: - # restore the stack back to right before #if - if self.preproc_stack: - ctx.stack = self.preproc_stack[-1][:] - elif proc == 'end': - # remove the saved stack of previous #if - if self.preproc_stack: - self.preproc_stack.pop() - - # #if and #elseif should follow by an expr - if proc in ['if', 'elseif']: - ctx.stack.append('preproc-expr') - - # #error can be optionally follow by the error msg - if proc in ['error']: - ctx.stack.append('preproc-error') - - yield match.start(), Comment.Preproc, '#' + proc - ctx.pos = match.end() - - tokens = { - 'root': [ - include('spaces'), - include('meta'), - (r'(?:package)\b', Keyword.Namespace, ('semicolon', 'package')), - (r'(?:import)\b', Keyword.Namespace, ('semicolon', 'import')), - (r'(?:using)\b', Keyword.Namespace, ('semicolon', 'using')), - (r'(?:extern|private)\b', Keyword.Declaration), - (r'(?:abstract)\b', Keyword.Declaration, 'abstract'), - (r'(?:class|interface)\b', Keyword.Declaration, 'class'), - (r'(?:enum)\b', Keyword.Declaration, 'enum'), - (r'(?:typedef)\b', Keyword.Declaration, 'typedef'), - - # top-level expression - # although it is not supported in haxe, but it is common to write - # expression in web pages the positive lookahead here is to prevent - # an infinite loop at the EOF - (r'(?=.)', Text, 'expr-statement'), - ], - - # space/tab/comment/preproc - 'spaces': [ - (r'\s+', Whitespace), - (r'//[^\n\r]*', Comment.Single), - (r'/\*.*?\*/', Comment.Multiline), - (r'(#)(if|elseif|else|end|error)\b', preproc_callback), - ], - - 'string-single-interpol': [ - (r'\$\{', String.Interpol, ('string-interpol-close', 'expr')), - (r'\$\$', String.Escape), - (r'\$(?=' + ident + ')', String.Interpol, 'ident'), - include('string-single'), - ], - - 'string-single': [ - (r"'", String.Single, '#pop'), - (r'\\.', String.Escape), - (r'.', String.Single), - ], - - 'string-double': [ - (r'"', String.Double, '#pop'), - (r'\\.', String.Escape), - (r'.', String.Double), - ], - - 'string-interpol-close': [ - (r'\$'+ident, String.Interpol), - (r'\}', String.Interpol, '#pop'), - ], - - 'package': [ - include('spaces'), - (ident, Name.Namespace), - (r'\.', Punctuation, 'import-ident'), - default('#pop'), - ], - - 'import': [ - include('spaces'), - (ident, Name.Namespace), - (r'\*', Keyword), # wildcard import - (r'\.', Punctuation, 'import-ident'), - (r'in', Keyword.Namespace, 'ident'), - default('#pop'), - ], - - 'import-ident': [ - include('spaces'), - (r'\*', Keyword, '#pop'), # wildcard import - (ident, Name.Namespace, '#pop'), - ], - - 'using': [ - include('spaces'), - (ident, Name.Namespace), - (r'\.', Punctuation, 'import-ident'), - default('#pop'), - ], - - 'preproc-error': [ - (r'\s+', Whitespace), - (r"'", String.Single, ('#pop', 'string-single')), - (r'"', String.Double, ('#pop', 'string-double')), - default('#pop'), - ], - - 'preproc-expr': [ - (r'\s+', Whitespace), - (r'\!', Comment.Preproc), - (r'\(', Comment.Preproc, ('#pop', 'preproc-parenthesis')), - - (ident, Comment.Preproc, '#pop'), - - # Float - (r'\.[0-9]+', Number.Float), - (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float), - (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float), - (r'[0-9]+\.[0-9]+', Number.Float), - (r'[0-9]+\.(?!' 
+ ident + r'|\.\.)', Number.Float), - - # Int - (r'0x[0-9a-fA-F]+', Number.Hex), - (r'[0-9]+', Number.Integer), - - # String - (r"'", String.Single, ('#pop', 'string-single')), - (r'"', String.Double, ('#pop', 'string-double')), - ], - - 'preproc-parenthesis': [ - (r'\s+', Whitespace), - (r'\)', Comment.Preproc, '#pop'), - default('preproc-expr-in-parenthesis'), - ], - - 'preproc-expr-chain': [ - (r'\s+', Whitespace), - (binop, Comment.Preproc, ('#pop', 'preproc-expr-in-parenthesis')), - default('#pop'), - ], - - # same as 'preproc-expr' but able to chain 'preproc-expr-chain' - 'preproc-expr-in-parenthesis': [ - (r'\s+', Whitespace), - (r'\!', Comment.Preproc), - (r'\(', Comment.Preproc, - ('#pop', 'preproc-expr-chain', 'preproc-parenthesis')), - - (ident, Comment.Preproc, ('#pop', 'preproc-expr-chain')), - - # Float - (r'\.[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), - (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), - (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), - (r'[0-9]+\.[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), - (r'[0-9]+\.(?!' + ident + r'|\.\.)', Number.Float, ('#pop', 'preproc-expr-chain')), - - # Int - (r'0x[0-9a-fA-F]+', Number.Hex, ('#pop', 'preproc-expr-chain')), - (r'[0-9]+', Number.Integer, ('#pop', 'preproc-expr-chain')), - - # String - (r"'", String.Single, - ('#pop', 'preproc-expr-chain', 'string-single')), - (r'"', String.Double, - ('#pop', 'preproc-expr-chain', 'string-double')), - ], - - 'abstract': [ - include('spaces'), - default(('#pop', 'abstract-body', 'abstract-relation', - 'abstract-opaque', 'type-param-constraint', 'type-name')), - ], - - 'abstract-body': [ - include('spaces'), - (r'\{', Punctuation, ('#pop', 'class-body')), - ], - - 'abstract-opaque': [ - include('spaces'), - (r'\(', Punctuation, ('#pop', 'parenthesis-close', 'type')), - default('#pop'), - ], - - 'abstract-relation': [ - include('spaces'), - (r'(?:to|from)', Keyword.Declaration, 'type'), - (r',', Punctuation), - default('#pop'), - ], - - 'meta': [ - include('spaces'), - (r'@', Name.Decorator, ('meta-body', 'meta-ident', 'meta-colon')), - ], - - # optional colon - 'meta-colon': [ - include('spaces'), - (r':', Name.Decorator, '#pop'), - default('#pop'), - ], - - # same as 'ident' but set token as Name.Decorator instead of Name - 'meta-ident': [ - include('spaces'), - (ident, Name.Decorator, '#pop'), - ], - - 'meta-body': [ - include('spaces'), - (r'\(', Name.Decorator, ('#pop', 'meta-call')), - default('#pop'), - ], - - 'meta-call': [ - include('spaces'), - (r'\)', Name.Decorator, '#pop'), - default(('#pop', 'meta-call-sep', 'expr')), - ], - - 'meta-call-sep': [ - include('spaces'), - (r'\)', Name.Decorator, '#pop'), - (r',', Punctuation, ('#pop', 'meta-call')), - ], - - 'typedef': [ - include('spaces'), - default(('#pop', 'typedef-body', 'type-param-constraint', - 'type-name')), - ], - - 'typedef-body': [ - include('spaces'), - (r'=', Operator, ('#pop', 'optional-semicolon', 'type')), - ], - - 'enum': [ - include('spaces'), - default(('#pop', 'enum-body', 'bracket-open', - 'type-param-constraint', 'type-name')), - ], - - 'enum-body': [ - include('spaces'), - include('meta'), - (r'\}', Punctuation, '#pop'), - (ident_no_keyword, Name, ('enum-member', 'type-param-constraint')), - ], - - 'enum-member': [ - include('spaces'), - (r'\(', Punctuation, - ('#pop', 'semicolon', 'flag', 'function-param')), - default(('#pop', 'semicolon', 'flag')), - ], - - 'class': [ - include('spaces'), - default(('#pop', 
'class-body', 'bracket-open', 'extends', - 'type-param-constraint', 'type-name')), - ], - - 'extends': [ - include('spaces'), - (r'(?:extends|implements)\b', Keyword.Declaration, 'type'), - (r',', Punctuation), # the comma is made optional here, since haxe2 - # requires the comma but haxe3 does not allow it - default('#pop'), - ], - - 'bracket-open': [ - include('spaces'), - (r'\{', Punctuation, '#pop'), - ], - - 'bracket-close': [ - include('spaces'), - (r'\}', Punctuation, '#pop'), - ], - - 'class-body': [ - include('spaces'), - include('meta'), - (r'\}', Punctuation, '#pop'), - (r'(?:static|public|private|override|dynamic|inline|macro)\b', - Keyword.Declaration), - default('class-member'), - ], - - 'class-member': [ - include('spaces'), - (r'(var)\b', Keyword.Declaration, - ('#pop', 'optional-semicolon', 'var')), - (r'(function)\b', Keyword.Declaration, - ('#pop', 'optional-semicolon', 'class-method')), - ], - - # local function, anonymous or not - 'function-local': [ - include('spaces'), - (ident_no_keyword, Name.Function, - ('#pop', 'optional-expr', 'flag', 'function-param', - 'parenthesis-open', 'type-param-constraint')), - default(('#pop', 'optional-expr', 'flag', 'function-param', - 'parenthesis-open', 'type-param-constraint')), - ], - - 'optional-expr': [ - include('spaces'), - include('expr'), - default('#pop'), - ], - - 'class-method': [ - include('spaces'), - (ident, Name.Function, ('#pop', 'optional-expr', 'flag', - 'function-param', 'parenthesis-open', - 'type-param-constraint')), - ], - - # function arguments - 'function-param': [ - include('spaces'), - (r'\)', Punctuation, '#pop'), - (r'\?', Punctuation), - (ident_no_keyword, Name, - ('#pop', 'function-param-sep', 'assign', 'flag')), - ], - - 'function-param-sep': [ - include('spaces'), - (r'\)', Punctuation, '#pop'), - (r',', Punctuation, ('#pop', 'function-param')), - ], - - 'prop-get-set': [ - include('spaces'), - (r'\(', Punctuation, ('#pop', 'parenthesis-close', - 'prop-get-set-opt', 'comma', 'prop-get-set-opt')), - default('#pop'), - ], - - 'prop-get-set-opt': [ - include('spaces'), - (r'(?:default|null|never|dynamic|get|set)\b', Keyword, '#pop'), - (ident_no_keyword, Text, '#pop'), # custom getter/setter - ], - - 'expr-statement': [ - include('spaces'), - # makes semicolon optional here, just to avoid checking the last - # one is bracket or not. 
- default(('#pop', 'optional-semicolon', 'expr')), - ], - - 'expr': [ - include('spaces'), - (r'@', Name.Decorator, ('#pop', 'optional-expr', 'meta-body', - 'meta-ident', 'meta-colon')), - (r'(?:\+\+|\-\-|~(?!/)|!|\-)', Operator), - (r'\(', Punctuation, ('#pop', 'expr-chain', 'parenthesis')), - (r'(?:static|public|private|override|dynamic|inline)\b', - Keyword.Declaration), - (r'(?:function)\b', Keyword.Declaration, ('#pop', 'expr-chain', - 'function-local')), - (r'\{', Punctuation, ('#pop', 'expr-chain', 'bracket')), - (r'(?:true|false|null)\b', Keyword.Constant, ('#pop', 'expr-chain')), - (r'(?:this)\b', Keyword, ('#pop', 'expr-chain')), - (r'(?:cast)\b', Keyword, ('#pop', 'expr-chain', 'cast')), - (r'(?:try)\b', Keyword, ('#pop', 'catch', 'expr')), - (r'(?:var)\b', Keyword.Declaration, ('#pop', 'var')), - (r'(?:new)\b', Keyword, ('#pop', 'expr-chain', 'new')), - (r'(?:switch)\b', Keyword, ('#pop', 'switch')), - (r'(?:if)\b', Keyword, ('#pop', 'if')), - (r'(?:do)\b', Keyword, ('#pop', 'do')), - (r'(?:while)\b', Keyword, ('#pop', 'while')), - (r'(?:for)\b', Keyword, ('#pop', 'for')), - (r'(?:untyped|throw)\b', Keyword), - (r'(?:return)\b', Keyword, ('#pop', 'optional-expr')), - (r'(?:macro)\b', Keyword, ('#pop', 'macro')), - (r'(?:continue|break)\b', Keyword, '#pop'), - (r'(?:\$\s*[a-z]\b|\$(?!'+ident+'))', Name, ('#pop', 'dollar')), - (ident_no_keyword, Name, ('#pop', 'expr-chain')), - - # Float - (r'\.[0-9]+', Number.Float, ('#pop', 'expr-chain')), - (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'expr-chain')), - (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'expr-chain')), - (r'[0-9]+\.[0-9]+', Number.Float, ('#pop', 'expr-chain')), - (r'[0-9]+\.(?!' + ident + r'|\.\.)', Number.Float, ('#pop', 'expr-chain')), - - # Int - (r'0x[0-9a-fA-F]+', Number.Hex, ('#pop', 'expr-chain')), - (r'[0-9]+', Number.Integer, ('#pop', 'expr-chain')), - - # String - (r"'", String.Single, ('#pop', 'expr-chain', 'string-single-interpol')), - (r'"', String.Double, ('#pop', 'expr-chain', 'string-double')), - - # EReg - (r'~/(\\\\|\\[^\\]|[^/\\\n])*/[gimsu]*', String.Regex, ('#pop', 'expr-chain')), - - # Array - (r'\[', Punctuation, ('#pop', 'expr-chain', 'array-decl')), - ], - - 'expr-chain': [ - include('spaces'), - (r'(?:\+\+|\-\-)', Operator), - (binop, Operator, ('#pop', 'expr')), - (r'(?:in)\b', Keyword, ('#pop', 'expr')), - (r'\?', Operator, ('#pop', 'expr', 'ternary', 'expr')), - (r'(\.)(' + ident_no_keyword + ')', bygroups(Punctuation, Name)), - (r'\[', Punctuation, 'array-access'), - (r'\(', Punctuation, 'call'), - default('#pop'), - ], - - # macro reification - 'macro': [ - include('spaces'), - include('meta'), - (r':', Punctuation, ('#pop', 'type')), - - (r'(?:extern|private)\b', Keyword.Declaration), - (r'(?:abstract)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'abstract')), - (r'(?:class|interface)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'macro-class')), - (r'(?:enum)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'enum')), - (r'(?:typedef)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'typedef')), - - default(('#pop', 'expr')), - ], - - 'macro-class': [ - (r'\{', Punctuation, ('#pop', 'class-body')), - include('class') - ], - - # cast can be written as "cast expr" or "cast(expr, type)" - 'cast': [ - include('spaces'), - (r'\(', Punctuation, ('#pop', 'parenthesis-close', - 'cast-type', 'expr')), - default(('#pop', 'expr')), - ], - - # optionally give a type as the 2nd argument of cast() - 'cast-type': [ - include('spaces'), - (r',', 
Punctuation, ('#pop', 'type')), - default('#pop'), - ], - - 'catch': [ - include('spaces'), - (r'(?:catch)\b', Keyword, ('expr', 'function-param', - 'parenthesis-open')), - default('#pop'), - ], - - # do-while loop - 'do': [ - include('spaces'), - default(('#pop', 'do-while', 'expr')), - ], - - # the while after do - 'do-while': [ - include('spaces'), - (r'(?:while)\b', Keyword, ('#pop', 'parenthesis', - 'parenthesis-open')), - ], - - 'while': [ - include('spaces'), - (r'\(', Punctuation, ('#pop', 'expr', 'parenthesis')), - ], - - 'for': [ - include('spaces'), - (r'\(', Punctuation, ('#pop', 'expr', 'parenthesis')), - ], - - 'if': [ - include('spaces'), - (r'\(', Punctuation, ('#pop', 'else', 'optional-semicolon', 'expr', - 'parenthesis')), - ], - - 'else': [ - include('spaces'), - (r'(?:else)\b', Keyword, ('#pop', 'expr')), - default('#pop'), - ], - - 'switch': [ - include('spaces'), - default(('#pop', 'switch-body', 'bracket-open', 'expr')), - ], - - 'switch-body': [ - include('spaces'), - (r'(?:case|default)\b', Keyword, ('case-block', 'case')), - (r'\}', Punctuation, '#pop'), - ], - - 'case': [ - include('spaces'), - (r':', Punctuation, '#pop'), - default(('#pop', 'case-sep', 'case-guard', 'expr')), - ], - - 'case-sep': [ - include('spaces'), - (r':', Punctuation, '#pop'), - (r',', Punctuation, ('#pop', 'case')), - ], - - 'case-guard': [ - include('spaces'), - (r'(?:if)\b', Keyword, ('#pop', 'parenthesis', 'parenthesis-open')), - default('#pop'), - ], - - # optional multiple expr under a case - 'case-block': [ - include('spaces'), - (r'(?!(?:case|default)\b|\})', Keyword, 'expr-statement'), - default('#pop'), - ], - - 'new': [ - include('spaces'), - default(('#pop', 'call', 'parenthesis-open', 'type')), - ], - - 'array-decl': [ - include('spaces'), - (r'\]', Punctuation, '#pop'), - default(('#pop', 'array-decl-sep', 'expr')), - ], - - 'array-decl-sep': [ - include('spaces'), - (r'\]', Punctuation, '#pop'), - (r',', Punctuation, ('#pop', 'array-decl')), - ], - - 'array-access': [ - include('spaces'), - default(('#pop', 'array-access-close', 'expr')), - ], - - 'array-access-close': [ - include('spaces'), - (r'\]', Punctuation, '#pop'), - ], - - 'comma': [ - include('spaces'), - (r',', Punctuation, '#pop'), - ], - - 'colon': [ - include('spaces'), - (r':', Punctuation, '#pop'), - ], - - 'semicolon': [ - include('spaces'), - (r';', Punctuation, '#pop'), - ], - - 'optional-semicolon': [ - include('spaces'), - (r';', Punctuation, '#pop'), - default('#pop'), - ], - - # identity that CAN be a Haxe keyword - 'ident': [ - include('spaces'), - (ident, Name, '#pop'), - ], - - 'dollar': [ - include('spaces'), - (r'\{', Punctuation, ('#pop', 'expr-chain', 'bracket-close', 'expr')), - default(('#pop', 'expr-chain')), - ], - - 'type-name': [ - include('spaces'), - (typeid, Name, '#pop'), - ], - - 'type-full-name': [ - include('spaces'), - (r'\.', Punctuation, 'ident'), - default('#pop'), - ], - - 'type': [ - include('spaces'), - (r'\?', Punctuation), - (ident, Name, ('#pop', 'type-check', 'type-full-name')), - (r'\{', Punctuation, ('#pop', 'type-check', 'type-struct')), - (r'\(', Punctuation, ('#pop', 'type-check', 'type-parenthesis')), - ], - - 'type-parenthesis': [ - include('spaces'), - default(('#pop', 'parenthesis-close', 'type')), - ], - - 'type-check': [ - include('spaces'), - (r'->', Punctuation, ('#pop', 'type')), - (r'<(?!=)', Punctuation, 'type-param'), - default('#pop'), - ], - - 'type-struct': [ - include('spaces'), - (r'\}', Punctuation, '#pop'), - (r'\?', Punctuation), - (r'>', 
Punctuation, ('comma', 'type')), - (ident_no_keyword, Name, ('#pop', 'type-struct-sep', 'type', 'colon')), - include('class-body'), - ], - - 'type-struct-sep': [ - include('spaces'), - (r'\}', Punctuation, '#pop'), - (r',', Punctuation, ('#pop', 'type-struct')), - ], - - # type-param can be a normal type or a constant literal... - 'type-param-type': [ - # Float - (r'\.[0-9]+', Number.Float, '#pop'), - (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, '#pop'), - (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, '#pop'), - (r'[0-9]+\.[0-9]+', Number.Float, '#pop'), - (r'[0-9]+\.(?!' + ident + r'|\.\.)', Number.Float, '#pop'), - - # Int - (r'0x[0-9a-fA-F]+', Number.Hex, '#pop'), - (r'[0-9]+', Number.Integer, '#pop'), - - # String - (r"'", String.Single, ('#pop', 'string-single')), - (r'"', String.Double, ('#pop', 'string-double')), - - # EReg - (r'~/(\\\\|\\[^\\]|[^/\\\n])*/[gim]*', String.Regex, '#pop'), - - # Array - (r'\[', Operator, ('#pop', 'array-decl')), - - include('type'), - ], - - # type-param part of a type - # ie. the path in Map - 'type-param': [ - include('spaces'), - default(('#pop', 'type-param-sep', 'type-param-type')), - ], - - 'type-param-sep': [ - include('spaces'), - (r'>', Punctuation, '#pop'), - (r',', Punctuation, ('#pop', 'type-param')), - ], - - # optional type-param that may include constraint - # ie. - 'type-param-constraint': [ - include('spaces'), - (r'<(?!=)', Punctuation, ('#pop', 'type-param-constraint-sep', - 'type-param-constraint-flag', 'type-name')), - default('#pop'), - ], - - 'type-param-constraint-sep': [ - include('spaces'), - (r'>', Punctuation, '#pop'), - (r',', Punctuation, ('#pop', 'type-param-constraint-sep', - 'type-param-constraint-flag', 'type-name')), - ], - - # the optional constraint inside type-param - 'type-param-constraint-flag': [ - include('spaces'), - (r':', Punctuation, ('#pop', 'type-param-constraint-flag-type')), - default('#pop'), - ], - - 'type-param-constraint-flag-type': [ - include('spaces'), - (r'\(', Punctuation, ('#pop', 'type-param-constraint-flag-type-sep', - 'type')), - default(('#pop', 'type')), - ], - - 'type-param-constraint-flag-type-sep': [ - include('spaces'), - (r'\)', Punctuation, '#pop'), - (r',', Punctuation, 'type'), - ], - - # a parenthesis expr that contain exactly one expr - 'parenthesis': [ - include('spaces'), - default(('#pop', 'parenthesis-close', 'flag', 'expr')), - ], - - 'parenthesis-open': [ - include('spaces'), - (r'\(', Punctuation, '#pop'), - ], - - 'parenthesis-close': [ - include('spaces'), - (r'\)', Punctuation, '#pop'), - ], - - 'var': [ - include('spaces'), - (ident_no_keyword, Text, ('#pop', 'var-sep', 'assign', 'flag', 'prop-get-set')), - ], - - # optional more var decl. 
- 'var-sep': [ - include('spaces'), - (r',', Punctuation, ('#pop', 'var')), - default('#pop'), - ], - - # optional assignment - 'assign': [ - include('spaces'), - (r'=', Operator, ('#pop', 'expr')), - default('#pop'), - ], - - # optional type flag - 'flag': [ - include('spaces'), - (r':', Punctuation, ('#pop', 'type')), - default('#pop'), - ], - - # colon as part of a ternary operator (?:) - 'ternary': [ - include('spaces'), - (r':', Operator, '#pop'), - ], - - # function call - 'call': [ - include('spaces'), - (r'\)', Punctuation, '#pop'), - default(('#pop', 'call-sep', 'expr')), - ], - - # after a call param - 'call-sep': [ - include('spaces'), - (r'\)', Punctuation, '#pop'), - (r',', Punctuation, ('#pop', 'call')), - ], - - # bracket can be block or object - 'bracket': [ - include('spaces'), - (r'(?!(?:\$\s*[a-z]\b|\$(?!'+ident+')))' + ident_no_keyword, Name, - ('#pop', 'bracket-check')), - (r"'", String.Single, ('#pop', 'bracket-check', 'string-single')), - (r'"', String.Double, ('#pop', 'bracket-check', 'string-double')), - default(('#pop', 'block')), - ], - - 'bracket-check': [ - include('spaces'), - (r':', Punctuation, ('#pop', 'object-sep', 'expr')), # is object - default(('#pop', 'block', 'optional-semicolon', 'expr-chain')), # is block - ], - - # code block - 'block': [ - include('spaces'), - (r'\}', Punctuation, '#pop'), - default('expr-statement'), - ], - - # object in key-value pairs - 'object': [ - include('spaces'), - (r'\}', Punctuation, '#pop'), - default(('#pop', 'object-sep', 'expr', 'colon', 'ident-or-string')) - ], - - # a key of an object - 'ident-or-string': [ - include('spaces'), - (ident_no_keyword, Name, '#pop'), - (r"'", String.Single, ('#pop', 'string-single')), - (r'"', String.Double, ('#pop', 'string-double')), - ], - - # after a key-value pair in object - 'object-sep': [ - include('spaces'), - (r'\}', Punctuation, '#pop'), - (r',', Punctuation, ('#pop', 'object')), - ], - - - - } - - def analyse_text(text): - if re.match(r'\w+\s*:\s*\w', text): - return 0.3 - - -class HxmlLexer(RegexLexer): - """ - Lexer for haXe build files. - - .. versionadded:: 1.6 - """ - name = 'Hxml' - url = 'https://haxe.org/manual/compiler-usage-hxml.html' - aliases = ['haxeml', 'hxml'] - filenames = ['*.hxml'] - - tokens = { - 'root': [ - # Separator - (r'(--)(next)', bygroups(Punctuation, Generic.Heading)), - # Compiler switches with one dash - (r'(-)(prompt|debug|v)', bygroups(Punctuation, Keyword.Keyword)), - # Compilerswitches with two dashes - (r'(--)(neko-source|flash-strict|flash-use-stage|no-opt|no-traces|' - r'no-inline|times|no-output)', bygroups(Punctuation, Keyword)), - # Targets and other options that take an argument - (r'(-)(cpp|js|neko|x|as3|swf9?|swf-lib|php|xml|main|lib|D|resource|' - r'cp|cmd)( +)(.+)', - bygroups(Punctuation, Keyword, Whitespace, String)), - # Options that take only numerical arguments - (r'(-)(swf-version)( +)(\d+)', - bygroups(Punctuation, Keyword, Whitespace, Number.Integer)), - # An Option that defines the size, the fps and the background - # color of an flash movie - (r'(-)(swf-header)( +)(\d+)(:)(\d+)(:)(\d+)(:)([A-Fa-f0-9]{6})', - bygroups(Punctuation, Keyword, Whitespace, Number.Integer, - Punctuation, Number.Integer, Punctuation, Number.Integer, - Punctuation, Number.Hex)), - # options with two dashes that takes arguments - (r'(--)(js-namespace|php-front|php-lib|remap|gen-hx-classes)( +)' - r'(.+)', bygroups(Punctuation, Keyword, Whitespace, String)), - # Single line comment, multiline ones are not allowed. 
- (r'#.*', Comment.Single) - ] - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/_monitor.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/_monitor.py deleted file mode 100644 index f71aa56817ca77eba5df4a2dd11cb0c4a9a7ea1c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/_monitor.py +++ /dev/null @@ -1,95 +0,0 @@ -import atexit -from threading import Event, Thread, current_thread -from time import time -from warnings import warn - -__all__ = ["TMonitor", "TqdmSynchronisationWarning"] - - -class TqdmSynchronisationWarning(RuntimeWarning): - """tqdm multi-thread/-process errors which may cause incorrect nesting - but otherwise no adverse effects""" - pass - - -class TMonitor(Thread): - """ - Monitoring thread for tqdm bars. - Monitors if tqdm bars are taking too much time to display - and readjusts miniters automatically if necessary. - - Parameters - ---------- - tqdm_cls : class - tqdm class to use (can be core tqdm or a submodule). - sleep_interval : float - Time to sleep between monitoring checks. - """ - _test = {} # internal vars for unit testing - - def __init__(self, tqdm_cls, sleep_interval): - Thread.__init__(self) - self.daemon = True # kill thread when main killed (KeyboardInterrupt) - self.woken = 0 # last time woken up, to sync with monitor - self.tqdm_cls = tqdm_cls - self.sleep_interval = sleep_interval - self._time = self._test.get("time", time) - self.was_killed = self._test.get("Event", Event)() - atexit.register(self.exit) - self.start() - - def exit(self): - self.was_killed.set() - if self is not current_thread(): - self.join() - return self.report() - - def get_instances(self): - # returns a copy of started `tqdm_cls` instances - return [i for i in self.tqdm_cls._instances.copy() - # Avoid race by checking that the instance started - if hasattr(i, 'start_t')] - - def run(self): - cur_t = self._time() - while True: - # After processing and before sleeping, notify that we woke - # Need to be done just before sleeping - self.woken = cur_t - # Sleep some time... - self.was_killed.wait(self.sleep_interval) - # Quit if killed - if self.was_killed.is_set(): - return - # Then monitor! - # Acquire lock (to access _instances) - with self.tqdm_cls.get_lock(): - cur_t = self._time() - # Check tqdm instances are waiting too long to print - instances = self.get_instances() - for instance in instances: - # Check event in loop to reduce blocking time on exit - if self.was_killed.is_set(): - return - # Only if mininterval > 1 (else iterations are just slow) - # and last refresh exceeded maxinterval - if ( - instance.miniters > 1 - and (cur_t - instance.last_print_t) >= instance.maxinterval - ): - # force bypassing miniters on next iteration - # (dynamic_miniters adjusts mininterval automatically) - instance.miniters = 1 - # Refresh now! 
(works only for manual tqdm) - instance.refresh(nolock=True) - # Remove accidental long-lived strong reference - del instance - if instances != self.get_instances(): # pragma: nocover - warn("Set changed size during iteration" + - " (see https://github.com/tqdm/tqdm/issues/481)", - TqdmSynchronisationWarning, stacklevel=2) - # Remove accidental long-lived strong references - del instances - - def report(self): - return not self.was_killed.is_set() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/util/util.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/util/util.py deleted file mode 100644 index 35c77e4025842f548565334a3c04cba90f9283d6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/util/util.py +++ /dev/null @@ -1,42 +0,0 @@ -from __future__ import annotations - -import typing -from types import TracebackType - - -def to_bytes( - x: str | bytes, encoding: str | None = None, errors: str | None = None -) -> bytes: - if isinstance(x, bytes): - return x - elif not isinstance(x, str): - raise TypeError(f"not expecting type {type(x).__name__}") - if encoding or errors: - return x.encode(encoding or "utf-8", errors=errors or "strict") - return x.encode() - - -def to_str( - x: str | bytes, encoding: str | None = None, errors: str | None = None -) -> str: - if isinstance(x, str): - return x - elif not isinstance(x, bytes): - raise TypeError(f"not expecting type {type(x).__name__}") - if encoding or errors: - return x.decode(encoding or "utf-8", errors=errors or "strict") - return x.decode() - - -def reraise( - tp: type[BaseException] | None, - value: BaseException, - tb: TracebackType | None = None, -) -> typing.NoReturn: - try: - if value.__traceback__ is not tb: - raise value.with_traceback(tb) - raise value - finally: - value = None # type: ignore[assignment] - tb = None diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py deleted file mode 100644 index 28277e1d60187948fb05e269f94fe259339a6bf6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py +++ /dev/null @@ -1,84 +0,0 @@ -""" -This middleware can be used when a known proxy is fronting the application, -and is trusted to be properly setting the `X-Forwarded-Proto` and -`X-Forwarded-For` headers with the connecting client information. - -Modifies the `client` and `scheme` information so that they reference -the connecting client, rather that the connecting proxy. 
- -https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers#Proxies -""" -from typing import List, Optional, Tuple, Union, cast - -from uvicorn._types import ( - ASGI3Application, - ASGIReceiveCallable, - ASGISendCallable, - HTTPScope, - Scope, - WebSocketScope, -) - - -class ProxyHeadersMiddleware: - def __init__( - self, - app: "ASGI3Application", - trusted_hosts: Union[List[str], str] = "127.0.0.1", - ) -> None: - self.app = app - if isinstance(trusted_hosts, str): - self.trusted_hosts = {item.strip() for item in trusted_hosts.split(",")} - else: - self.trusted_hosts = set(trusted_hosts) - self.always_trust = "*" in self.trusted_hosts - - def get_trusted_client_host( - self, x_forwarded_for_hosts: List[str] - ) -> Optional[str]: - if self.always_trust: - return x_forwarded_for_hosts[0] - - for host in reversed(x_forwarded_for_hosts): - if host not in self.trusted_hosts: - return host - - return None - - async def __call__( - self, scope: "Scope", receive: "ASGIReceiveCallable", send: "ASGISendCallable" - ) -> None: - if scope["type"] in ("http", "websocket"): - scope = cast(Union["HTTPScope", "WebSocketScope"], scope) - client_addr: Optional[Tuple[str, int]] = scope.get("client") - client_host = client_addr[0] if client_addr else None - - if self.always_trust or client_host in self.trusted_hosts: - headers = dict(scope["headers"]) - - if b"x-forwarded-proto" in headers: - # Determine if the incoming request was http or https based on - # the X-Forwarded-Proto header. - x_forwarded_proto = ( - headers[b"x-forwarded-proto"].decode("latin1").strip() - ) - if scope["type"] == "websocket": - scope["scheme"] = ( - "wss" if x_forwarded_proto == "https" else "ws" - ) - else: - scope["scheme"] = x_forwarded_proto - - if b"x-forwarded-for" in headers: - # Determine the client address from the last trusted IP in the - # X-Forwarded-For header. We've lost the connecting client's port - # information by now, so only include the host. - x_forwarded_for = headers[b"x-forwarded-for"].decode("latin1") - x_forwarded_for_hosts = [ - item.strip() for item in x_forwarded_for.split(",") - ] - host = self.get_trusted_client_host(x_forwarded_for_hosts) - port = 0 - scope["client"] = (host, port) # type: ignore[arg-type] - - return await self.app(scope, receive, send) diff --git a/spaces/pycoming/bingo/Dockerfile b/spaces/pycoming/bingo/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . 
$HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/pyimagesearch/nmt-bahdanau/app.py b/spaces/pyimagesearch/nmt-bahdanau/app.py deleted file mode 100644 index ddda081a7cce3b62ef032ce3744d73434a6f6536..0000000000000000000000000000000000000000 --- a/spaces/pyimagesearch/nmt-bahdanau/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -import tensorflow as tf -import tensorflow_text as tf_text -from huggingface_hub import Repository - -repo = Repository( - local_dir="nmt-bahdanau-attention", - clone_from="pyimagesearch/nmt-bahdanau-attention", - use_auth_token=os.environ.get("token") -) -reloaded = tf.saved_model.load("nmt-bahdanau-attention/translator") - -title="Neural Machine Translation with Bahdanau's Attention" -description="The model used here is a POC and not SOTA on NMT." - -examples=["how are you?", "good morning.", "how is your health?"] - -def get_translation(sentence): - result = reloaded.translate( - sourceText=tf.constant([sentence]) - )["text"].numpy()[0].decode() - return result - -nmt_space = gr.Interface( - fn=get_translation, - inputs=gr.Textbox(label="English Sentence"), - outputs=gr.Textbox(label="French Sentence"), - title=title, - description=description, - examples=examples, -) - -nmt_space.launch() \ No newline at end of file diff --git a/spaces/pytholic/streamlit-image-classification-demo/config/args.py b/spaces/pytholic/streamlit-image-classification-demo/config/args.py deleted file mode 100644 index b5e99ff901032166af2c322c3ce5bba86c2f17c3..0000000000000000000000000000000000000000 --- a/spaces/pytholic/streamlit-image-classification-demo/config/args.py +++ /dev/null @@ -1,23 +0,0 @@ -from dataclasses import dataclass - - -@dataclass -class Args: - """ - Training arguments. - """ - - # Learning rate for the optimizer - learning_rate: float = 1e-3 - # Training batch size - batch_size: int = 64 - # Total numebr of classes - num_classes: int = 10 - # Maximum number of training epochs - max_epochs: int = 100 - # Input shape - input_shape: tuple = (3, 224, 224) - # Use pretrained weights - # Can be "IMAGENET1K_V1", "IMAGENET1K_V2", "DEFAULT" - # CHec more at https://pytorch.org/vision/stable/models.html - weights: str = None diff --git a/spaces/q896656681/xiaoxiannv/README.md b/spaces/q896656681/xiaoxiannv/README.md deleted file mode 100644 index 7724641ab9019286fc7771ef49c9d68381b25e81..0000000000000000000000000000000000000000 --- a/spaces/q896656681/xiaoxiannv/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Xiaoxiannv -emoji: ⚡ -colorFrom: green -colorTo: indigo -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/qingxu98/academic-chatgpt-beta/request_llm/bridge_all.py b/spaces/qingxu98/academic-chatgpt-beta/request_llm/bridge_all.py deleted file mode 100644 index f1f4ee1aa889c9484856943c6dac5398ba2607f9..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/academic-chatgpt-beta/request_llm/bridge_all.py +++ /dev/null @@ -1,210 +0,0 @@ - -""" - 该文件中主要包含2个函数 - - 不具备多线程能力的函数: - 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程 - - 具备多线程调用能力的函数 - 2. 
predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程 -""" -import tiktoken -from functools import wraps, lru_cache -from concurrent.futures import ThreadPoolExecutor - -from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui -from .bridge_chatgpt import predict as chatgpt_ui - -from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui -from .bridge_chatglm import predict as chatglm_ui - -# from .bridge_tgui import predict_no_ui_long_connection as tgui_noui -# from .bridge_tgui import predict as tgui_ui - -colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044'] - -class LazyloadTiktoken(object): - def __init__(self, model): - self.model = model - - @staticmethod - @lru_cache(maxsize=128) - def get_encoder(model): - print('正在加载tokenizer,如果是第一次运行,可能需要一点时间下载参数') - tmp = tiktoken.encoding_for_model(model) - print('加载tokenizer完毕') - return tmp - - def encode(self, *args, **kwargs): - encoder = self.get_encoder(self.model) - return encoder.encode(*args, **kwargs) - - def decode(self, *args, **kwargs): - encoder = self.get_encoder(self.model) - return encoder.decode(*args, **kwargs) - -tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo") -tokenizer_gpt4 = LazyloadTiktoken("gpt-4") -get_token_num_gpt35 = lambda txt: len(tokenizer_gpt35.encode(txt, disallowed_special=())) -get_token_num_gpt4 = lambda txt: len(tokenizer_gpt4.encode(txt, disallowed_special=())) - -model_info = { - # openai - "gpt-3.5-turbo": { - "fn_with_ui": chatgpt_ui, - "fn_without_ui": chatgpt_noui, - "endpoint": "https://api.openai.com/v1/chat/completions", - "max_token": 4096, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - - "gpt-4": { - "fn_with_ui": chatgpt_ui, - "fn_without_ui": chatgpt_noui, - "endpoint": "https://api.openai.com/v1/chat/completions", - "max_token": 8192, - "tokenizer": tokenizer_gpt4, - "token_cnt": get_token_num_gpt4, - }, - - # api_2d - "api2d-gpt-3.5-turbo": { - "fn_with_ui": chatgpt_ui, - "fn_without_ui": chatgpt_noui, - "endpoint": "https://openai.api2d.net/v1/chat/completions", - "max_token": 4096, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - - "api2d-gpt-4": { - "fn_with_ui": chatgpt_ui, - "fn_without_ui": chatgpt_noui, - "endpoint": "https://openai.api2d.net/v1/chat/completions", - "max_token": 8192, - "tokenizer": tokenizer_gpt4, - "token_cnt": get_token_num_gpt4, - }, - - # chatglm - "chatglm": { - "fn_with_ui": chatglm_ui, - "fn_without_ui": chatglm_noui, - "endpoint": None, - "max_token": 1024, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - -} - - -def LLM_CATCH_EXCEPTION(f): - """ - 装饰器函数,将错误显示出来 - """ - def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience): - try: - return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience) - except Exception as e: - from toolbox import get_conf - import traceback - proxies, = get_conf('proxies') - tb_str = '\n```\n' + traceback.format_exc() + '\n```\n' - observe_window[0] = tb_str - return tb_str - return decorated - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False): - """ - 发送至LLM,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 - inputs: - 是本次问询的输入 - sys_prompt: - 系统静默prompt - llm_kwargs: - LLM的内部调优参数 - history: - 是之前的对话列表 - observe_window = None: - 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗 - """ - import 
threading, time, copy - - model = llm_kwargs['llm_model'] - n_model = 1 - if '&' not in model: - assert not model.startswith("tgui"), "TGUI不支持函数插件的实现" - - # 如果只询问1个大语言模型: - method = model_info[model]["fn_without_ui"] - return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience) - else: - # 如果同时询问多个大语言模型: - executor = ThreadPoolExecutor(max_workers=4) - models = model.split('&') - n_model = len(models) - - window_len = len(observe_window) - assert window_len==3 - window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True] - - futures = [] - for i in range(n_model): - model = models[i] - method = model_info[model]["fn_without_ui"] - llm_kwargs_feedin = copy.deepcopy(llm_kwargs) - llm_kwargs_feedin['llm_model'] = model - future = executor.submit(LLM_CATCH_EXCEPTION(method), inputs, llm_kwargs_feedin, history, sys_prompt, window_mutex[i], console_slience) - futures.append(future) - - def mutex_manager(window_mutex, observe_window): - while True: - time.sleep(0.5) - if not window_mutex[-1]: break - # 看门狗(watchdog) - for i in range(n_model): - window_mutex[i][1] = observe_window[1] - # 观察窗(window) - chat_string = [] - for i in range(n_model): - chat_string.append( f"【{str(models[i])} 说】: {window_mutex[i][0]} " ) - res = '

    \n\n---\n\n'.join(chat_string) - # # # # # # # # # # # - observe_window[0] = res - - t_model = threading.Thread(target=mutex_manager, args=(window_mutex, observe_window), daemon=True) - t_model.start() - - return_string_collect = [] - while True: - worker_done = [h.done() for h in futures] - if all(worker_done): - executor.shutdown() - break - time.sleep(1) - - for i, future in enumerate(futures): # wait and get - return_string_collect.append( f"【{str(models[i])} 说】: {future.result()} " ) - - window_mutex[-1] = False # stop mutex thread - res = '
    \n\n---\n\n'.join(return_string_collect) - return res - - -def predict(inputs, llm_kwargs, *args, **kwargs): - """ - 发送至LLM,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是LLM的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - - method = model_info[llm_kwargs['llm_model']]["fn_with_ui"] - yield from method(inputs, llm_kwargs, *args, **kwargs) - diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Emagicone Store Manager Keygen Generator VERIFIED.md b/spaces/quidiaMuxgu/Expedit-SAM/Emagicone Store Manager Keygen Generator VERIFIED.md deleted file mode 100644 index dae267209433d54cfe98301ce62eb015fd0af1e2..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Emagicone Store Manager Keygen Generator VERIFIED.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

usually, woocommerce works in a similar fashion. it has a back end for orders and comments, a front end for easily managing products, and a section for product information. under the hood, it all connects to the core woocommerce api. you can organize your products easily and manage all the information connected to them. when orders come in, it sends them to your email inbox, and if someone leaves a comment, that lands in your inbox as well.
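as a rough illustration of what "connecting to the core woocommerce api" means in practice, here is a minimal python sketch that pulls recent orders and products through the woocommerce rest api. the store url and the consumer key/secret are placeholders invented for the example, not anything from the article -- real keys are generated under WooCommerce > Settings > Advanced > REST API.

```python
import requests

STORE = "https://example-store.com"      # placeholder store URL
AUTH = ("ck_xxxxxxxx", "cs_xxxxxxxx")    # placeholder WooCommerce REST API key/secret

def latest_orders(limit=5):
    """Fetch the most recent orders, newest first."""
    resp = requests.get(
        f"{STORE}/wp-json/wc/v3/orders",
        params={"per_page": limit, "orderby": "date", "order": "desc"},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def list_products(limit=10):
    """Fetch one page of products and keep only a few fields."""
    resp = requests.get(
        f"{STORE}/wp-json/wc/v3/products",
        params={"per_page": limit},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return [(p["id"], p["name"], p["status"]) for p in resp.json()]

if __name__ == "__main__":
    for order in latest_orders():
        print(order["id"], order["status"], order["total"])
    for pid, name, status in list_products():
        print(pid, name, status)
```

any http client works the same way; the only requirements are https and the key/secret pair for basic auth.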

    -

there's a search feature to search all of your products, manage orders, and cancel pending orders. you can also use the manager to change a product, add a new category, promote the product, or delete the product. you can tag the products to keep them organized. it's pretty simple and to the point.

    -

    emagicone store manager keygen generator


Download File: https://geags.com/2uCscD



    -

this extension allows you to add products directly to your store manager. it will create all the products for you along with the corresponding images and descriptions. it's amazing how easy it is to manage your inventory this way. and if you are an experienced woocommerce user, you can see all the options available to you.
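for readers who would rather script that kind of bulk product creation, here is a hedged sketch against the woocommerce rest api. the csv layout, store url and api keys are assumptions made for the example, not something the article or the store manager tool defines.

```python
import csv
import requests

STORE = "https://example-store.com"      # placeholder store URL
AUTH = ("ck_xxxxxxxx", "cs_xxxxxxxx")    # placeholder WooCommerce REST API key/secret

def create_product(name, price, description, image_url):
    """Create one simple product and return its new id."""
    payload = {
        "name": name,
        "type": "simple",
        "regular_price": str(price),     # WooCommerce expects the price as a string
        "description": description,
        "images": [{"src": image_url}],  # the image is sideloaded from this URL
    }
    resp = requests.post(f"{STORE}/wp-json/wc/v3/products", json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]

# assumed CSV columns: name, price, description, image_url
with open("products.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        new_id = create_product(row["name"], row["price"], row["description"], row["image_url"])
        print("created product", new_id)
```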

    -

this is a really helpful and basic extension that can help you organize and manage your woocommerce store. this extension works like the woocommerce store manager, but it doesn't have all the advanced features that come with the core plugin.

    -

if you're an experienced wordpress user, you know how easy it is to install and use woocommerce on your site. as an independent wp theme developer, it can be challenging to incorporate the theme into your existing website. in addition to this, you also have to figure out how to set up and sell products using woocommerce. because woocommerce has so many add-ons and extensions, it can be difficult to get the best value out of this plugin.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (vintha Prapancham Telugu Dubbed Movie Free Download).md b/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (vintha Prapancham Telugu Dubbed Movie Free Download).md deleted file mode 100644 index 5ff5090c5fb24edcd61c9ae07c6701bdbaea4004..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (vintha Prapancham Telugu Dubbed Movie Free Download).md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (vintha prapancham telugu dubbed movie free download)


DOWNLOAD: https://geags.com/2uCs57



    - -Bhaskar The Rascal Malayalam Full Movie Download Utorrent 14 ... HD Online Player (vintha Prapancham Telugu Dubbed Movie Free Download). 29 juillet ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Dameware Mini Remote Control 8.0.0.102 Crack Tips and Tricks for Using It Effectively.md b/spaces/raedeXanto/academic-chatgpt-beta/Dameware Mini Remote Control 8.0.0.102 Crack Tips and Tricks for Using It Effectively.md deleted file mode 100644 index 4362eb538e9dfbd6b4cbbac7774f40f829250f3c..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Dameware Mini Remote Control 8.0.0.102 Crack Tips and Tricks for Using It Effectively.md +++ /dev/null @@ -1,171 +0,0 @@ -
    -

    DameWare Mini Remote Control 8.0.0.102 Crack: A Powerful and Secure Remote Control Software

    -

If you are looking for reliable and easy-to-use remote control software that can help you manage and troubleshoot remote computers, you might want to check out DameWare Mini Remote Control 8.0.0.102 Crack.

    -

    DameWare Mini Remote Control is a popular remote control program that uses Microsoft Windows API calls to communicate with local and remote machines. It allows you to quickly and easily deploy the client agent service to remote computers without requiring any machine reboots.

    -

    dameware mini remote control 8.0.0.102 crack


    Downloadhttps://tinourl.com/2uL4on



    -

    With DameWare Mini Remote Control, you can securely connect to remote computers using various authentication methods, including Smart Card authentication and two-factor authentication. You can also chat with end-users during remote support sessions, access out-of-band computers with Intel AMT using KVM, and switch between standalone and centralized modes with one click.

    -

    In this article, we will explain what DameWare Mini Remote Control is, how to install and activate it using the crack file, why you should use it, what are its advantages and disadvantages, and what are some alternatives to it.

    -

    What is DameWare Mini Remote Control?

    -

    DameWare Mini Remote Control is a powerful remote control software that enables you to remotely access and control Windows, Mac OS X, and Linux computers from your Windows desktop or laptop.

    -

    It is designed for IT professionals who need to provide technical support and assistance to end-users, as well as perform administrative tasks on remote machines.

    -

    dameware mini remote control 12.2.4.11 free download
    -dameware mini remote control full version offline installer
    -dameware mini remote control for windows mac and linux
    -dameware mini remote control license key
    -dameware mini remote control with intel amt support
    -dameware mini remote control chat with end-users feature
    -dameware mini remote control secure smart card authentication
    -dameware mini remote control 12.2.4.11 + key - kolompc
    -dameware mini remote control internet managers
    -dameware mini remote control cross-platform software
    -dameware mini remote control simplified system of connection
    -dameware mini remote control intuitive interface
    -dameware mini remote control new features and improvements
    -dameware mini remote control https strict transport security
    -dameware mini remote control windows server 2019 support
    -dameware mini remote control bitbucket issue
    -dameware mini remote control high power software
    -dameware mini remote control excellent solution for remote problems
    -dameware mini remote control uses microsoft windows api calls
    -dameware mini remote control soundcloud stream
    -dameware mini remote control 12.2.3.15 windows free download
    -dameware mini remote control 12.2.3.15 version history
    -dameware mini remote control 12.2.3.15 file name and size
    -dameware mini remote control 12.2.2.12 direct download link
    -dameware mini remote control 12.2.2.12 release date and languages
    -dameware mini remote control 12.2.1.27 technical details and system requirements
    -dameware mini remote control 12.2.1.27 product information and created by
    -dameware mini remote control 12.2.0.1206 effective use in corporate environment
    -dameware mini remote control 12.2.0.1206 screenshots and password for archive
    -dameware mini remote control 12.1.2.584 license type and shareware
    -dameware mini remote control 12.1.2.584 removed dependency on ms xml 6.0 sp1
    -dameware mini remote control one of the best values in remote software
    -dameware mini remote control licensed by number of help desk technicians
    -dameware mini remote control used by thousands of it admins for more than ten years
    -dameware mini remote control connect to servers desktops and notebooks seamlessly
    -dameware mini remote control remotely manage computers without end-user intervention
    -dameware mini remote control customize and deploy agents automatically or on demand
    -dameware mini remote control access computers outside the network via internet proxy server
    -dameware mini remote control leverage built-in tools for troubleshooting and system diagnostics
    -dameware mini remote control view multiple monitors on a single screen or multiple screens
    -dameware mini remote control lock keyboard and mouse during maintenance sessions
    -dameware mini remote control take screenshots of the end-user's desktop
    -dameware mini remote control copy files between local and remote systems
    -dameware mini remote control reboot crashed computers remotely
    -dameware mini remote control wake on lan feature to power on sleeping machines
    -dameware mini remote control flexible licensing options for organizations of any size
    -dameware mini remote control integrate with solarwinds web help desk software
    -dameware mini remote control get support from solarwinds customer portal
    -dameware mini remote control try it free for 14 days with no obligation

    -

    DameWare Mini Remote Control is part of the DameWare Remote Support suite, which also includes DameWare Mobile, DameWare Exporter, and DameWare NT Utilities.

    -

    Features and benefits of DameWare Mini Remote Control

    -

    Some of the main features and benefits of DameWare Mini Remote Control are:

    -
      -
    • It supports multiple protocols, such as RDP, VNC, SSH, Telnet, HTTP, HTTPS, etc.
    • -
    • It allows you to remotely control any Windows computer that has the TCP/IP protocol enabled.
    • -
    • It can connect to Mac OS X and Linux computers using the VNC protocol.
    • -
    • It can remotely install, start, stop, remove, or upgrade the client agent service on remote computers without requiring any machine reboots or user intervention.
    • -
    • It can securely connect to remote computers using various authentication methods, such as Windows NT Challenge/Response (NTLM), Kerberos (Active Directory), Smart Card authentication (PIV/CAC), two-factor authentication (RSA SecurID), etc.
    • -
    • It can encrypt all data transmitted between the local and remote machines using AES or FIPS encryption algorithms.
    • -
    • It can chat with end-users during remote support sessions using text or voice messages.
    • -
    • It can access out-of-band computers with Intel AMT using KVM (Keyboard-Video-Mouse).
    • -
    • It can switch between standalone mode and centralized mode with one click.
    • -
    • It can customize and automatically deploy remote control agents using MSI packages or batch files.
    • -
    • It can manage remote access privileges based on roles in your organization.
    • -
    • It can capture screenshots and record videos of remote sessions for documentation purposes.
    • -
    • It can transfer files between local and remote machines using drag-and-drop or copy-and-paste functions.
    • -
    • It can reboot or shutdown remote computers with or without user consent.
    • -
    • It can lock or unlock remote keyboards and mice.
    • -
    • It can blank out remote monitors to prevent unauthorized viewing.
    • -
    -

    How to install and activate DameWare Mini Remote Control 8.0.0.102 Crack?

    -

    To install and activate DameWare Mini Remote Control 8.0.0.102 Crack, you need to follow these steps:

    -
      -
    1. Download the setup file from the official website or from one of the links provided in this article.
    2. -
    3. Run the setup file and follow the instructions to install the program on your computer.
    4. -
    5. Download the crack file from one of the links provided in this article.
    6. -
    7. Copy the crack file and paste it into the installation folder of the program (usually C:\Program Files\DameWare Development\DameWare Mini Remote Control).
    8. -
    9. Run the crack file as administrator and click on the Patch button.
    10. -
    11. Wait for the patching process to complete and close the crack file.
    12. -
    13. Launch the program and enjoy its full features without any limitations or restrictions.
    14. -
    -

    Why use DameWare Mini Remote Control 8.0.0.102 Crack?

    -

    DameWare Mini Remote Control 8.0.0.102 Crack is a useful tool for IT professionals who need to remotely access and control multiple computers across different platforms and networks.

    -

    By using this software, you can save time and money by reducing travel costs, increasing productivity, improving customer satisfaction, and resolving issues faster.

    -

    You can also enhance security by using encryption algorithms, authentication methods, access permissions, etc., to protect your data and prevent unauthorized access.

    -

    Advantages of using DameWare Mini Remote Control 8.0.0.102 Crack

    -

    Some of the advantages of using DameWare Mini Remote Control 8.0.0.102 Crack are:

    -
      -
    • You can use it for free without paying any fees or subscriptions.
    • -
    • You can use it without any limitations or restrictions on its features or functions.
    • -
    • You can use it without any watermarks or advertisements on its interface or output.
    • -
    • You can use it without any risk of viruses or malware infections on your computer or network.
    • -
    -

    Disadvantages of using DameWare Mini Remote Control 8.0.0.102 Crack

    -

    Some of the disadvantages of using DameWare Mini Remote Control 8.0.0.102 Crack are:

    -
      -
    • You may violate the terms and conditions of the software license agreement by using an unauthorized version of the program.
    • -
    • You may face legal consequences or penalties for infringing the intellectual property rights of the software developer or owner.
    • -
    • You may not receive any updates or technical support from the software developer or owner in case of any issues or problems with the program.
    • -
    • You may compromise the security and integrity of your computer or network by downloading or installing files from untrusted sources that may contain viruses or malware infections.
    • -
    -

    Alternatives to DameWare Mini Remote Control 8.0.0.102 Crack

    -

    If you are looking for some alternatives to DameWare Mini Remote Control 8.0.0.102 Crack, you may want to consider these options:

    -

    TeamViewer

    -

    TeamViewer is a popular remote control software that allows you to remotely access and control any device over the internet from anywhere in the world.

    -

    You can use it for various purposes such as online meetings, web conferencing, file sharing, screen sharing, etc., with up to 300 participants at a time.

    -

    You can also use it for personal or commercial use with different plans and pricing options available depending on your needs and preferences.

    -

    AnyDesk

    -

    AnyDesk is a fast and secure remote control software that enables you to remotely access and control any computer from your own device over a low-latency network connection.

    -

    You can use it for various applications such as IT support, collaboration, presentation, education, gaming, etc., with high-quality video and audio transmission.

    -

    TeamViewer

    -

    TeamViewer is a popular remote control software that allows you to remotely access and control any device over the internet from anywhere in the world.

    -

    You can use it for various purposes such as online meetings, web conferencing, file sharing, screen sharing, etc., with up to 300 participants at a time.

    -

    You can also use it for personal or commercial use with different plans and pricing options available depending on your needs and preferences.

    -

    AnyDesk

    -

    AnyDesk is a fast and secure remote control software that enables you to remotely access and control any computer from your own device over a low-latency network connection.

    -

    You can use it for various applications such as IT support, collaboration, presentation, education, gaming, etc., with high-quality video and audio transmission.

    -

    You can also use it for free for personal use or choose from different plans and pricing options available for professional or business use.

    -

    Remote Desktop Manager

    -

    Remote Desktop Manager is a powerful remote control software that allows you to manage and access multiple remote connections from a single interface.

    -

    You can use it to connect to various types of remote servers, such as RDP, VNC, SSH, Telnet, FTP, WebDAV, etc., as well as cloud services, such as AWS, Azure, Google Cloud, etc.

    -

    You can also use it to store and organize your credentials, passwords, certificates, etc., in a secure vault with encryption and two-factor authentication.

    -

    Conclusion

    -

    DameWare Mini Remote Control 8.0.0.102 Crack is a powerful and secure remote control software that can help you remotely access and control Windows, Mac OS X, and Linux computers from your Windows desktop or laptop.

    -

    It offers various features and benefits that can enhance your productivity, efficiency, security, and customer satisfaction.

    -

    However, using the crack version of the software may also have some disadvantages and risks that you should be aware of before deciding to use it.

    -

    If you are looking for some alternatives to DameWare Mini Remote Control 8.0.0.102 Crack, you may want to consider TeamViewer, AnyDesk, or Remote Desktop Manager as some of the best remote desktop software available today.

    -

    FAQs

    -

    Here are some frequently asked questions about DameWare Mini Remote Control 8.0.0.102 Crack:

    -
      -
    1. What are the system requirements for DameWare Mini Remote Control 8.0.0.102 Crack?
    2. -

      The system requirements for DameWare Mini Remote Control 8.0.0.102 Crack are:

      -
        -
      • Operating system: Windows XP/Vista/7/8/10 (32-bit or 64-bit)
      • -
      • Processor: 1 GHz or faster
      • -
      • Memory: 512 MB RAM or more
      • -
      • Disk space: 150 MB free space or more
      • -
      • Internet connection: Required for activation and updates
      • -
      -
    3. Is DameWare Mini Remote Control 8.0.0.102 Crack safe to use?
    4. -

      DameWare Mini Remote Control 8.0.0.102 Crack is not safe to use because it is an unauthorized version of the software that may contain viruses or malware infections that can harm your computer or network.

      -

      It may also violate the software license agreement and the intellectual property rights of the software developer or owner, which may result in legal consequences or penalties.

      -
    5. How can I get support for DameWare Mini Remote Control 8.0.0.102 Crack?
    6. -

      You cannot get any support for DameWare Mini Remote Control 8.0.0.102 Crack because it is an unauthorized version of the software that does not receive any updates or technical support from the software developer or owner.

      -

      If you encounter any issues or problems with the program, you will have to rely on your own troubleshooting skills or seek help from other sources online. -

    7. Where can I download DameWare Mini Remote Control 8.0.0.102 Crack?
    8. -

      You can download DameWare Mini Remote Control 8.0.0.102 Crack from one of the links provided in this article or from other websites that offer cracked software downloads.

      -

      However, we do not recommend downloading or using cracked software because it is illegal and unsafe.

      -
    9. How can I uninstall DameWare Mini Remote Control 8.0.0.102 Crack?
    10. -

      You can uninstall DameWare Mini Remote Control 8.0.0.102 Crack by following these steps:

      -
        -
      1. Go to Start > Control Panel > Programs and Features (or Add or Remove Programs).
      2. -
      3. Select DameWare Mini Remote Control from the list of programs and click on Uninstall (or Change/Remove).
      4. -
      5. Follow the instructions to complete the uninstallation process.
      6. -
      7. Delete the crack file from the installation folder of the program (usually C:\Program Files\DameWare Development\DameWare Mini Remote Control).
      8. -
      9. Delete any leftover files or folders related to the program from your computer.
      10. -
      -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Forscan Versin 223beta 223beta Keygen The Best Tool for Ford Vehicles.md b/spaces/raedeXanto/academic-chatgpt-beta/Forscan Versin 223beta 223beta Keygen The Best Tool for Ford Vehicles.md deleted file mode 100644 index f21440db097a6159183e7e477d6d70b18820692d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Forscan Versin 223beta 223beta Keygen The Best Tool for Ford Vehicles.md +++ /dev/null @@ -1,107 +0,0 @@ - -

    Forscan Version 2.3.23 Beta Keygen: How to Download, Install and Activate License of Ford Forscan

    -

    If you own a Ford, Mazda, Lincoln or Mercury vehicle, you may have heard of Forscan. But what is it exactly and why do you need it? In this article, we will explain everything you need to know about Forscan version 2.3.23 beta keygen, how to download, install and activate it, and how to use it to diagnose and configure your vehicle at official machine level.

    -

    Forscan Versin 223beta 223beta Keygen


    Downloadhttps://tinourl.com/2uL5G1



    -

    What is Forscan and why do you need it?

    -

    Forscan is a software scanner for Ford, Mazda, Lincoln and Mercury vehicles

    -

    Forscan is a software scanner that allows you to access various modules and functions of your vehicle that are normally hidden or inaccessible by standard OBD2 scanners. It can read and clear diagnostic trouble codes (DTCs), display live data, run tests, perform service procedures, program keys, change settings, activate features and more.

    -

    Forscan allows you to diagnose and configure your vehicle at official machine level

    -

    Unlike generic OBD2 scanners that only work with standard protocols and parameters, Forscan can communicate with your vehicle using manufacturer-specific protocols and commands. This means that you can access more information and functions than a regular mechanic or dealer can. You can also modify your vehicle's behavior according to your preferences or needs.

    -

    Forscan requires a compatible hardware adapter and a license to work

    -

    To use Forscan, you need a compatible hardware adapter that can connect your vehicle's OBD port with your computer. There are many options available on the market, such as ELS27, SVCI J2534, VXDIAG VCX NANO, ELM327 etc. You also need a license to activate the software. You can either buy a one-year extended license or get a free two-month trial license.

    -

    How to download Forscan version 2.3.23 beta keygen?

    -

    Visit forscan.org and download Forscan version for Windows

    -

    The first step is to visit forscan.org/download.html and download Forscan version for Windows (i.e FORScan version beta for Windows). This is the latest version of the software that supports Ford vehicles up to 2023 model year.

    -

    Run Forscan setup and install the software on your computer

    -

    The next step is to run Forscan setup on your computer and follow the instructions on the screen. You need to select your language, accept the license agreement, choose whether to create a desktop shortcut or not, and finish the installation.

    -

    Copy the hardware ID from the About icon

    -

    The last step is to copy the hardware ID from the About icon on the software's main screen. You will need this ID later to obtain the license key.

    -

    How to obtain the free two-month license of Forscan?

    -

    Go to forscan.org and choose Get Free Extended License

    -

    To get the free two-month license of Forscan, you need to go back to forscan.org/download.html and choose Get Free Extended License (2 month trial). This will take you to a page where you can generate a license key based on your hardware ID.

    -

    Register and login Forscan forum and enter your details and hardware ID

    -

    To generate a license key, you need to register and login Forscan forum (forscan.org/forum) first. Then you need to enter your first name, last name or company name (i.e Linux), contact phone number (optional) and hardware ID that you copied earlier. Then press Generate License button.

    -

    Download and save the license key file

    -

    After generating the license key, you need to download it by clicking on Download License button. Save it in a location that you can easily find later.

    -

    Forscan Activation Keygen
    -Forscan Extended License Keygen
    -Forscan Version 2.2.8.beta serial key gen
    -Forscan software scanner for Ford, Mazda, Lincoln and Mercury
    -Forscan OBD2 scanner
    -Forscan PATS programming feature
    -Forscan vehicle database update
    -Forscan for iOS
    -Forscan for Windows
    -Forscan Extended License key generator
    -OBDLink EX adapter for FORScan
    -FORScan trouble codes
    -FORScan BdyCM Local Interconnect Network
    -FORScan Slave Boy of Pompeii Pearson Always Learning.rar
    -FORScan Siberian Mouse Custom Tonya Real Bj Avi
    -FORScan version 2.3.33 beta for Windows
    -FORScan version 2.3.12 core
    -FORScan version 2.3.25 beta for Windows
    -FORScan version 2.3.28 beta for Windows
    -FORScan version 2.3.29 beta for Windows
    -FORScan version 2.3.30 beta for Windows
    -FORScan version 2.3.31 beta for Windows
    -FORScan version 2.3.32 beta for Windows
    -FORScan version 2.3.34 beta for Windows
    -FORScan version 2.3.35 beta for Windows
    -FORScan version 2.4.0 beta for Windows
    -FORScan version 2.4.1 beta for Windows
    -FORScan version 2.4.2 beta for Windows
    -FORScan version 2.4.3 beta for Windows
    -FORScan version 2.4.4 beta for Windows
    -FORScan version 2.4.5 beta for Windows
    -FORScan version 2.4.6 beta for Windows
    -FORScan version 2.4.7 beta for Windows
    -FORScan version 2.4.8 beta for Windows
    -FORScan version 2.4.9 beta for Windows
    -FORScan version 2.5.0 beta for Windows
    -FORScan version 2.5.1 beta for Windows
    -FORScan version 2.5.2 beta for Windows
    -FORScan version 2.5.3 beta for Windows
    -FORScan version 2.5.4 beta for Windows
    -FORScan version 2.5.5 beta for Windows
    -FORScan version 2.5.6 beta for Windows
    -FORScan version 2.5.7 beta for Windows
    -FORScan version 2.5.8 beta for Windows
    -FORScan version 2.5.9 beta for Windows
    -Forscan Crack Full Version Direct Download
    -Forscan Serials Generator
    -Forscan Keygens Torrent
    -Forscan ZippyShare Download
    -Forscan Uploaded Download

    -

    Upload the license file to Forscan software and activate it

    -

    The final step is to upload the license file to Forscan software by clicking on Load License Key button on the main screen. Then browse to the location where you saved the file and select it. Press YES to continue. You should see a message that says Activate License Success. You've got 2 month free trial.

    -

    How to connect Forscan with your vehicle?

    -

    Choose a compatible hardware adapter such as ELS27, SVCI J2534, VXDIAG VCX NANO, ELM327 etc.

    -

    To connect Forscan with your vehicle, you need a compatible hardware adapter that can bridge the communication between your vehicle's OBD port and your computer's USB or Bluetooth port. There are many options available on the market, such as ELS27 original or clone cable with green board (recommended), SVCI J2534 (also works with IDS), VXDIAG VCX NANO for Ford/Mazda (also works with IDS), ELM327 code reader (cheap but limited), UCDS (expensive but powerful), OBDLink SX/MX (STN11xx), CANtieCAR (in “FORScan” mode), Tactrix OpenPort J2534 Pass-Thru etc.

    -

    Connect the adapter with your vehicle via OBD socket and your computer via USB or Bluetooth

    -

    The next step is to connect one end of the adapter with your vehicle's OBD socket (usually located under the dashboard) and another end with your computer's USB or Bluetooth port (depending on what type of adapter you have). Make sure both your vehicle's ignition switch is ON (but engine not running) and your computer's power supply is stable.

    -

    Select Auto connection type in Forscan settings and save it

    -

    The last step is to select Auto connection type in Forscan settings by clicking on Setting button-> Connection tab -> Connection Type -> Auto -> Save Setting button. This will allow Forscan software to automatically detect what type of adapter you are using.

    -

    How to use Forscan to diagnose and configure your vehicle?

    -

    Run Forscan software and select your vehicle from the list

    -

    To use Forscan software, you need to run it on your computer after connecting it with your vehicle via adapter. You should see a list of available modules or functions on the left side of the screen. You can select your vehicle from this list by clicking on its icon.

    -

    Choose the module or function you want to access from the menu

    -

    After selecting your vehicle, you can choose what module or function you want to access from the menu on top of screen such as: - Vehicle Information: shows basic information about your vehicle such as VIN number, model year etc. - DTC: shows I have continued writing the article based on the outline and the search results. Here is the rest of the article: - DTC: shows and clears diagnostic trouble codes and freeze frame data - Live Data: shows various parameters and sensors of your vehicle in real time - Tests: runs various tests and procedures such as KOEO, KOER, cylinder balance, fuel pump etc. - Service: performs various service functions such as oil reset, DPF regeneration, ABS bleeding etc. - Configuration and Programming: changes various settings and features of your vehicle such as tire size, speed limit, daytime running lights etc. You can also use the search function to find a specific module or function by typing its name or code.

    -

    Follow the instructions on the screen and perform diagnosis or programming as needed

    -

    After choosing the module or function you want to access, you need to follow the instructions on the screen and perform diagnosis or programming as needed. For example, if you want to read DTCs, you need to press the Play button and wait for the software to scan your vehicle. Then you can see the list of DTCs and their descriptions. You can also clear them by pressing the Erase button. If you want to change a setting or feature, you need to select it from the list and press Edit button. Then you can change its value or state according to your preference. You can also save or restore your original configuration by using Backup and Restore buttons.

    -

    Conclusion

    -

    In conclusion, Forscan version 2.3.23 beta keygen is a powerful software scanner that allows you to diagnose and configure your Ford, Mazda, Lincoln or Mercury vehicle at official machine level. It requires a compatible hardware adapter and a license to work. You can download it from forscan.org and obtain a free two-month license from there. You can also buy a one-year extended license if you want to use it longer. You can connect it with your vehicle via OBD socket and your computer via USB or Bluetooth. You can access various modules and functions of your vehicle such as DTCs, live data, tests, service, configuration and programming. You can also use the search function to find a specific module or function by typing its name or code. Forscan is a must-have tool for any Ford, Mazda, Lincoln or Mercury owner who wants to get more out of their vehicle.

    -

    FAQs

    -

    What are the benefits of using Forscan?

    -

    Some of the benefits of using Forscan are: - You can access more information and functions than a regular OBD2 scanner - You can diagnose and fix problems yourself without going to a mechanic or dealer - You can customize your vehicle's behavior according to your preferences or needs - You can save money and time by avoiding unnecessary repairs or services - You can learn more about your vehicle's systems and how they work

    -

    What are the risks of using Forscan?

    -

    Some of the risks of using Forscan are: - You may damage your vehicle's systems or components if you use it incorrectly or irresponsibly - You may void your vehicle's warranty if you change some settings or features that are not allowed by the manufacturer - You may cause legal issues if you change some settings or features that are not compliant with local laws or regulations - You may lose some functionality or compatibility if you update your vehicle's software with incompatible versions

    -

    How to update Forscan software?

    -

    To update Forscan software, you need to visit forscan.org/download.html and download the latest version of Forscan for Windows. Then you need to run Forscan setup on your computer and follow the instructions on the screen. You need to accept the license agreement, choose whether to create a desktop shortcut or not, and finish the installation. Your previous settings and license will be preserved.

    -

    How to contact Forscan support?

    -

    To contact Forscan support, you need to visit forscan.org/forum/ and register an account if you don't have one already. Then you need to login and post your question or issue in the appropriate section of the forum. You can also search for existing topics that may have similar questions or issues as yours. The Forscan team and other users will try to help you as soon as possible.

    -

    How to donate to Forscan project?

    -

    To donate to Forscan project, you need to visit forscan.org/donate.html and choose one of the donation methods available such as PayPal, WebMoney, Yandex.Money etc. You can also buy a one-year extended license for $10 USD which will support the project financially and give you access to more features and functions.

    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/scripts/hifi/train_hifi.sh b/spaces/rahul999r/Rahul_Kannada_TTS/scripts/hifi/train_hifi.sh deleted file mode 100644 index 287ca1159b5bf8f779d66885197fadbcd23b911e..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/scripts/hifi/train_hifi.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -gender='male' - -config='../../config/hifi/config_v1.json' -modeldir='../../checkpoints/hifi/'$gender -logdir='../../logs/hifi/'$gender - - -#################################################### - - - -python ../../src/hifi_gan/train.py \ - --config $config \ - --input_training_file '../../data/hifi/'$gender'/train.txt' \ - --input_validation_file '../../data/hifi/'$gender'/valid.txt' \ - --checkpoint_path $modeldir \ - --logs_path $logdir \ - --checkpoint_interval 10000 \ - --stdout_interval 50 diff --git a/spaces/rainy3/chatgpt_academic/crazy_functions/test_project/cpp/cppipc/waiter.h b/spaces/rainy3/chatgpt_academic/crazy_functions/test_project/cpp/cppipc/waiter.h deleted file mode 100644 index ee45fe3517be95ac1688a3e3540189edeb0d860c..0000000000000000000000000000000000000000 --- a/spaces/rainy3/chatgpt_academic/crazy_functions/test_project/cpp/cppipc/waiter.h +++ /dev/null @@ -1,83 +0,0 @@ -#pragma once - -#include -#include -#include -#include - -#include "libipc/def.h" -#include "libipc/mutex.h" -#include "libipc/condition.h" -#include "libipc/platform/detail.h" - -namespace ipc { -namespace detail { - -class waiter { - ipc::sync::condition cond_; - ipc::sync::mutex lock_; - std::atomic quit_ {false}; - -public: - static void init(); - - waiter() = default; - waiter(char const *name) { - open(name); - } - - ~waiter() { - close(); - } - - bool valid() const noexcept { - return cond_.valid() && lock_.valid(); - } - - bool open(char const *name) noexcept { - quit_.store(false, std::memory_order_relaxed); - if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) { - return false; - } - if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) { - cond_.close(); - return false; - } - return valid(); - } - - void close() noexcept { - cond_.close(); - lock_.close(); - } - - template - bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept { - IPC_UNUSED_ std::lock_guard guard {lock_}; - while ([this, &pred] { - return !quit_.load(std::memory_order_relaxed) - && std::forward(pred)(); - }()) { - if (!cond_.wait(lock_, tm)) return false; - } - return true; - } - - bool notify() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.notify(lock_); - } - - bool broadcast() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.broadcast(lock_); - } - - bool quit_waiting() { - quit_.store(true, std::memory_order_release); - return broadcast(); - } -}; - -} // namespace detail -} // namespace ipc diff --git a/spaces/ramiin2/AutoGPT/autogpt/commands/__init__.py b/spaces/ramiin2/AutoGPT/autogpt/commands/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ramiin2/AutoGPT/autogpt/memory/milvus.py b/spaces/ramiin2/AutoGPT/autogpt/memory/milvus.py deleted file mode 100644 index 44aa72b956224fa4c2a16d5f40b0eaeb35e98581..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/autogpt/memory/milvus.py +++ /dev/null @@ -1,115 +0,0 @@ -""" Milvus memory storage provider.""" -from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections - 
-from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding - - -class MilvusMemory(MemoryProviderSingleton): - """Milvus memory storage provider.""" - - def __init__(self, cfg) -> None: - """Construct a milvus memory storage connection. - - Args: - cfg (Config): Auto-GPT global config. - """ - # connect to milvus server. - connections.connect(address=cfg.milvus_addr) - fields = [ - FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=True), - FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=1536), - FieldSchema(name="raw_text", dtype=DataType.VARCHAR, max_length=65535), - ] - - # create collection if not exist and load it. - self.milvus_collection = cfg.milvus_collection - self.schema = CollectionSchema(fields, "auto-gpt memory storage") - self.collection = Collection(self.milvus_collection, self.schema) - # create index if not exist. - if not self.collection.has_index(): - self.collection.release() - self.collection.create_index( - "embeddings", - { - "metric_type": "IP", - "index_type": "HNSW", - "params": {"M": 8, "efConstruction": 64}, - }, - index_name="embeddings", - ) - self.collection.load() - - def add(self, data) -> str: - """Add an embedding of data into memory. - - Args: - data (str): The raw text to construct embedding index. - - Returns: - str: log. - """ - embedding = get_ada_embedding(data) - result = self.collection.insert([[embedding], [data]]) - _text = ( - "Inserting data into memory at primary key: " - f"{result.primary_keys[0]}:\n data: {data}" - ) - return _text - - def get(self, data): - """Return the most relevant data in memory. - Args: - data: The data to compare to. - """ - return self.get_relevant(data, 1) - - def clear(self) -> str: - """Drop the index in memory. - - Returns: - str: log. - """ - self.collection.drop() - self.collection = Collection(self.milvus_collection, self.schema) - self.collection.create_index( - "embeddings", - { - "metric_type": "IP", - "index_type": "HNSW", - "params": {"M": 8, "efConstruction": 64}, - }, - index_name="embeddings", - ) - self.collection.load() - return "Obliviated" - - def get_relevant(self, data: str, num_relevant: int = 5): - """Return the top-k relevant data in memory. - Args: - data: The data to compare to. - num_relevant (int, optional): The max number of relevant data. - Defaults to 5. - - Returns: - list: The top-k relevant data. - """ - # search the embedding and return the most relevant text. - embedding = get_ada_embedding(data) - search_params = { - "metrics_type": "IP", - "params": {"nprobe": 8}, - } - result = self.collection.search( - [embedding], - "embeddings", - search_params, - num_relevant, - output_fields=["raw_text"], - ) - return [item.entity.value_of_field("raw_text") for item in result[0]] - - def get_stats(self) -> str: - """ - Returns: The stats of the milvus cache. - """ - return f"Entities num: {self.collection.num_entities}" diff --git a/spaces/raseel-zymr/Document-QandA/README.md b/spaces/raseel-zymr/Document-QandA/README.md deleted file mode 100644 index 9acdcb39ef5831dd96d8c2efe80e92a28d632738..0000000000000000000000000000000000000000 --- a/spaces/raseel-zymr/Document-QandA/README.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: Document QandA -emoji: 🏆 -colorFrom: red -colorTo: pink -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -# Document Question & Answer -A Langchain-based application to upload any text or PDF document, ask relevant Questions to it and expect summarised answers. 
- - -### Pre-requisites - - $ pip install langchain huggingface_hub sentence_transformers faiss-cpu unstructured chromadb Cython tiktoken unstructured[local-inference] - -Or - - $ pip install -r requirements.txt - -* Install the above Python packages -### Reference: -* Vectorstore: https://python.langchain.com/en/latest/modules/indexes/vectorstores.html \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bsr 2013 Sri Lanka Pdf Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bsr 2013 Sri Lanka Pdf Download.md deleted file mode 100644 index 56532d69c144681a9ccd4a3bd4c15b73a84614fb..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bsr 2013 Sri Lanka Pdf Download.md +++ /dev/null @@ -1,22 +0,0 @@ -

    Bsr 2013 Sri Lanka Pdf Download


    DOWNLOAD ===== https://urlgoal.com/2uCK3H



    -
    -This method is required for compliance with the Standard and is for the information of the user. In particular the following points are to be noted. - -2\. Tree Ring dating (TRD) is based on the measurement of tree rings on trunks and on branches of living trees and also on the measurement of radial and tangential growth rate of trees in standing and felled condition. This is a much more accurate method to date and determine the age of trees than using the tree-ring width chronology, the tree diameter or the height. - -3\. All TRD sites in Sri Lanka, of either the long or the short chronologies should be based on trees, the wood from which being extracted before the measurements are made. This stipulation is required because the tree-ring measurements of the wood that is obtained from the TRD sites were made after the measurements were completed, and this gives rise to the possibility of errors in the result. - -4\. In TRD, the date that is obtained is the oldest date for which the age of the tree and its individual rings, calculated from the mean growth increments of the rings, is known with a probability of greater than or equal to 0.95. This means that the age of the tree and the ring-width of the individual tree rings are calculated independently from the same ring. - -5\. TRD is only suitable to date trees that are at least 100 years old. - -6\. The most important result is that the quality of a chronology should always be assessed in relation to the quality of the conventional age-measurement method. If tree-ring width chronology is used to obtain age estimates for trees less than 100 years old, these age estimates cannot be used for any other purposes other than to find the maximum age of trees which are less than 100 years old. - -In the case of Sri Lanka, the measurements of tree-ring width are based on trees. The trees that are included in the measurements should be intact trees and these should be native trees. The site should be part of an area where the climate is similar to the climate for which the tree-ring chronology has been established. Most importantly, the method should have been applied in the area where the tree-ring chronology has been established. - -The TRD method consists of the following steps: - -1\. A cross-section of the trunk of a tree is taken using an axe to obtain samples of wood of suitable length. The diameter and the thickness of the 4fefd39f24
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Filme Indiene Complete Traduse In Romana.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Filme Indiene Complete Traduse In Romana.md deleted file mode 100644 index 4d4d1cf7091677e1e9b14d4e90f37d9a41765fe7..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Filme Indiene Complete Traduse In Romana.md +++ /dev/null @@ -1,8 +0,0 @@ -
    -

    this is a list of the most popular films in india. by box office revenue. the top 20 films of all time based on domestic box office revenue. the top 10 highest-grossing films of india by domestic box office revenue. note that the films listed below are not necessarily in the top 10 highest-grossing. filme in ruajã cu marius constantin puiu de la filme românești. filme indiene consemnate in. filme indiene muzicant tradus in romana. filme indiene romane gratis tradus online. cine știe traduce..

    -

    farsi films - fully subtitled - subtitled in Romanian - Farsi - third release of the concert film about the conference in the capital of northern St. Petersburg. The conference is not as popular as expected, and it was never really discussed. We learned about this ourselves, without stressing that it is propaganda. We should not have tapped into this information. It may be meant for foreign colleagues, but not for Russians. The real Farsi film is called, as it should be, "the video shoe conference". The video shoe conference is a film whose image is identified by means of the buvila itself. The buvila itself. It is identified through itself. Everyone, after death, wants to find out what they were and who they were.

    -

    Filme Indiene Complete Traduse In Romana


    Downloadhttps://urlgoal.com/2uCJFq



    -

    bollywood 8 filme in 2 ore. bollywood remarcati si traduse in romana, tradus in romana, subtitrate in romana, inainte de a pleca cu asteptarea primului film in germania. . filme bollywood - bollywood current and upcoming films online subtitrat in romana, bollywood latest and old films online subtitrat in romana.

    -

    best acest film bollywood - bollywood full length movie. . filme bollywood subtitrate in romana, filme bollywood 2017 full length movie, filme bollywood 2017 subtitrate in romana. filme bollywood 2016 subtitrate in romana, filme bollywood 2016 full length movie, filme bollywood 2016 subtitrate in romana. filme bollywood 2017 full length movie, filme bollywood 2017 subtitrate in romana, filme bollywood 2017 subtitrate in romana.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/apps/crop_img.py b/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/apps/crop_img.py deleted file mode 100644 index 4854d1f5a6361963659a9d79f41c404d801e9193..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/apps/crop_img.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import cv2 -import numpy as np - -from pathlib import Path -import argparse - -def get_bbox(msk): - rows = np.any(msk, axis=1) - cols = np.any(msk, axis=0) - rmin, rmax = np.where(rows)[0][[0,-1]] - cmin, cmax = np.where(cols)[0][[0,-1]] - - return rmin, rmax, cmin, cmax - -def process_img(img, msk, bbox=None): - if bbox is None: - bbox = get_bbox(msk > 100) - cx = (bbox[3] + bbox[2])//2 - cy = (bbox[1] + bbox[0])//2 - - w = img.shape[1] - h = img.shape[0] - height = int(1.138*(bbox[1] - bbox[0])) - hh = height//2 - - # crop - dw = min(cx, w-cx, hh) - if cy-hh < 0: - img = cv2.copyMakeBorder(img,hh-cy,0,0,0,cv2.BORDER_CONSTANT,value=[0,0,0]) - msk = cv2.copyMakeBorder(msk,hh-cy,0,0,0,cv2.BORDER_CONSTANT,value=0) - cy = hh - if cy+hh > h: - img = cv2.copyMakeBorder(img,0,cy+hh-h,0,0,cv2.BORDER_CONSTANT,value=[0,0,0]) - msk = cv2.copyMakeBorder(msk,0,cy+hh-h,0,0,cv2.BORDER_CONSTANT,value=0) - img = img[cy-hh:(cy+hh),cx-dw:cx+dw,:] - msk = msk[cy-hh:(cy+hh),cx-dw:cx+dw] - dw = img.shape[0] - img.shape[1] - if dw != 0: - img = cv2.copyMakeBorder(img,0,0,dw//2,dw//2,cv2.BORDER_CONSTANT,value=[0,0,0]) - msk = cv2.copyMakeBorder(msk,0,0,dw//2,dw//2,cv2.BORDER_CONSTANT,value=0) - img = cv2.resize(img, (512, 512)) - msk = cv2.resize(msk, (512, 512)) - - kernel = np.ones((3,3),np.uint8) - msk = cv2.erode((255*(msk > 100)).astype(np.uint8), kernel, iterations = 1) - - return img, msk - -def main(): - ''' - given foreground mask, this script crops and resizes an input image and mask for processing. - ''' - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input_image', type=str, help='if the image has alpha channel, it will be used as mask') - parser.add_argument('-m', '--input_mask', type=str) - parser.add_argument('-o', '--out_path', type=str, default='./sample_images') - args = parser.parse_args() - - img = cv2.imread(args.input_image, cv2.IMREAD_UNCHANGED) - if img.shape[2] == 4: - msk = img[:,:,3:] - img = img[:,:,:3] - else: - msk = cv2.imread(args.input_mask, cv2.IMREAD_GRAYSCALE) - - img_new, msk_new = process_img(img, msk) - - img_name = Path(args.input_image).stem - - cv2.imwrite(os.path.join(args.out_path, img_name + '.png'), img_new) - cv2.imwrite(os.path.join(args.out_path, img_name + '_mask.png'), msk_new) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/riccardogiorato/playground_diffusion/README.md b/spaces/riccardogiorato/playground_diffusion/README.md deleted file mode 100644 index 9f16f8219f76147735266ffd32efe3997e65b65b..0000000000000000000000000000000000000000 --- a/spaces/riccardogiorato/playground_diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Playground Diffusion -emoji: 🎮 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: mit ---- - -This Space is based on [anzorq/finetuned_diffusion](https://huggingface.co/spaces/anzorq/finetuned_diffusion), go and support them and thank them for their open source work! 
\ No newline at end of file
diff --git a/spaces/rkrstacic/Chatbot-integration-built-on-processes/api_call_module.py b/spaces/rkrstacic/Chatbot-integration-built-on-processes/api_call_module.py
deleted file mode 100644
index 5053ba376062f47fb494d99457b186108bbd59c8..0000000000000000000000000000000000000000
--- a/spaces/rkrstacic/Chatbot-integration-built-on-processes/api_call_module.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import requests
-import json
-
-url = 'https://hf.space/embed/rkrstacic/Software-module-for-answering-questions-on-processes/+/api/predict'
-
-
-def _query(payload):
-    data = json.dumps(payload)
-    response = requests.request("POST", url, data=data)
-    return json.loads(response.content.decode("utf-8"))
-
-
-def get_answer(question, process):
-    return _query({"data": [question, process]})["data"][0]
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Badmashiyaan - Fun Never Ends Tamil Movie Hd Free grafiktreiber stark in High Quality and Fast Speed.md b/spaces/rorallitri/biomedical-language-models/logs/Download Badmashiyaan - Fun Never Ends Tamil Movie Hd Free grafiktreiber stark in High Quality and Fast Speed.md
deleted file mode 100644
index c58814aba0b4bf840fbbcce5c89242b2e24ad662..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Download Badmashiyaan - Fun Never Ends Tamil Movie Hd Free grafiktreiber stark in High Quality and Fast Speed.md
+++ /dev/null
@@ -1,6 +0,0 @@
-

    Badmashiyaan - Fun Never Ends Tamil Movie Hd Free grafiktreiber stark


    DOWNLOADhttps://tinurll.com/2uzmiH



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/safetensors/convert/convert.py b/spaces/safetensors/convert/convert.py deleted file mode 100644 index b26de3ffefc01a23ca34848faaa1bd4e447d7141..0000000000000000000000000000000000000000 --- a/spaces/safetensors/convert/convert.py +++ /dev/null @@ -1,375 +0,0 @@ -import argparse -import json -import os -import shutil -from collections import defaultdict -from inspect import signature -from tempfile import TemporaryDirectory -from typing import Dict, List, Optional, Set, Tuple - -import torch - -from huggingface_hub import CommitInfo, CommitOperationAdd, Discussion, HfApi, hf_hub_download -from huggingface_hub.file_download import repo_folder_name -from safetensors.torch import load_file, save_file -from transformers import AutoConfig - - -COMMIT_DESCRIPTION = """ -This is an automated PR created with https://huggingface.co/spaces/safetensors/convert - -This new file is equivalent to `pytorch_model.bin` but safe in the sense that -no arbitrary code can be put into it. - -These files also happen to load much faster than their pytorch counterpart: -https://colab.research.google.com/github/huggingface/notebooks/blob/main/safetensors_doc/en/speed.ipynb - -The widgets on your model page will run using this model even if this is not merged -making sure the file actually works. - -If you find any issues: please report here: https://huggingface.co/spaces/safetensors/convert/discussions - -Feel free to ignore this PR. -""" - -ConversionResult = Tuple[List["CommitOperationAdd"], List[Tuple[str, "Exception"]]] - - -class AlreadyExists(Exception): - pass - - -def shared_pointers(tensors): - ptrs = defaultdict(list) - for k, v in tensors.items(): - ptrs[v.data_ptr()].append(k) - failing = [] - for ptr, names in ptrs.items(): - if len(names) > 1: - failing.append(names) - return failing - - -def check_file_size(sf_filename: str, pt_filename: str): - sf_size = os.stat(sf_filename).st_size - pt_size = os.stat(pt_filename).st_size - - if (sf_size - pt_size) / pt_size > 0.01: - raise RuntimeError( - f"""The file size different is more than 1%: - - {sf_filename}: {sf_size} - - {pt_filename}: {pt_size} - """ - ) - - -def rename(pt_filename: str) -> str: - filename, ext = os.path.splitext(pt_filename) - local = f"{filename}.safetensors" - local = local.replace("pytorch_model", "model") - return local - - -def convert_multi(model_id: str, folder: str, token: Optional[str]) -> ConversionResult: - filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin.index.json", token=token, cache_dir=folder) - with open(filename, "r") as f: - data = json.load(f) - - filenames = set(data["weight_map"].values()) - local_filenames = [] - for filename in filenames: - pt_filename = hf_hub_download(repo_id=model_id, filename=filename, token=token, cache_dir=folder) - - sf_filename = rename(pt_filename) - sf_filename = os.path.join(folder, sf_filename) - convert_file(pt_filename, sf_filename) - local_filenames.append(sf_filename) - - index = os.path.join(folder, "model.safetensors.index.json") - with open(index, "w") as f: - newdata = {k: v for k, v in data.items()} - newmap = {k: rename(v) for k, v in data["weight_map"].items()} - newdata["weight_map"] = newmap - json.dump(newdata, f, indent=4) - local_filenames.append(index) - - operations = [ - CommitOperationAdd(path_in_repo=local.split("/")[-1], path_or_fileobj=local) for local in local_filenames - ] - errors: List[Tuple[str, "Exception"]] = [] - - return operations, errors - - -def convert_single(model_id: str, folder: str, token: 
Optional[str]) -> ConversionResult: - pt_filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin", token=token, cache_dir=folder) - - sf_name = "model.safetensors" - sf_filename = os.path.join(folder, sf_name) - convert_file(pt_filename, sf_filename) - operations = [CommitOperationAdd(path_in_repo=sf_name, path_or_fileobj=sf_filename)] - errors: List[Tuple[str, "Exception"]] = [] - return operations, errors - - -def convert_file( - pt_filename: str, - sf_filename: str, -): - loaded = torch.load(pt_filename, map_location="cpu") - if "state_dict" in loaded: - loaded = loaded["state_dict"] - shared = shared_pointers(loaded) - for shared_weights in shared: - for name in shared_weights[1:]: - loaded.pop(name) - - # For tensors to be contiguous - loaded = {k: v.contiguous() for k, v in loaded.items()} - - dirname = os.path.dirname(sf_filename) - os.makedirs(dirname, exist_ok=True) - save_file(loaded, sf_filename, metadata={"format": "pt"}) - check_file_size(sf_filename, pt_filename) - reloaded = load_file(sf_filename) - for k in loaded: - pt_tensor = loaded[k] - sf_tensor = reloaded[k] - if not torch.equal(pt_tensor, sf_tensor): - raise RuntimeError(f"The output tensors do not match for key {k}") - - -def create_diff(pt_infos: Dict[str, List[str]], sf_infos: Dict[str, List[str]]) -> str: - errors = [] - for key in ["missing_keys", "mismatched_keys", "unexpected_keys"]: - pt_set = set(pt_infos[key]) - sf_set = set(sf_infos[key]) - - pt_only = pt_set - sf_set - sf_only = sf_set - pt_set - - if pt_only: - errors.append(f"{key} : PT warnings contain {pt_only} which are not present in SF warnings") - if sf_only: - errors.append(f"{key} : SF warnings contain {sf_only} which are not present in PT warnings") - return "\n".join(errors) - - -def check_final_model(model_id: str, folder: str, token: Optional[str]): - config = hf_hub_download(repo_id=model_id, filename="config.json", token=token, cache_dir=folder) - shutil.copy(config, os.path.join(folder, "config.json")) - config = AutoConfig.from_pretrained(folder) - - import transformers - - class_ = getattr(transformers, config.architectures[0]) - with torch.device("meta"): - (pt_model, pt_infos) = class_.from_pretrained(folder, output_loading_info=True) - (sf_model, sf_infos) = class_.from_pretrained(folder, output_loading_info=True) - - if pt_infos != sf_infos: - error_string = create_diff(pt_infos, sf_infos) - raise ValueError(f"Different infos when reloading the model: {error_string}") - - #### XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - #### SKIPPING THE REST OF THE test to save RAM - return - pt_params = pt_model.state_dict() - sf_params = sf_model.state_dict() - - pt_shared = shared_pointers(pt_params) - sf_shared = shared_pointers(sf_params) - if pt_shared != sf_shared: - raise RuntimeError("The reconstructed model is wrong, shared tensors are different {shared_pt} != {shared_tf}") - - sig = signature(pt_model.forward) - input_ids = torch.arange(10).unsqueeze(0) - pixel_values = torch.randn(1, 3, 224, 224) - input_values = torch.arange(1000).float().unsqueeze(0) - # Hardcoded for whisper basically - input_features = torch.zeros((1, 80, 3000)) - kwargs = {} - if "input_ids" in sig.parameters: - kwargs["input_ids"] = input_ids - if "input_features" in sig.parameters: - kwargs["input_features"] = input_features - if "decoder_input_ids" in sig.parameters: - kwargs["decoder_input_ids"] = input_ids - if "pixel_values" in sig.parameters: - kwargs["pixel_values"] = pixel_values - if "input_values" in sig.parameters: - kwargs["input_values"] = 
input_values - if "bbox" in sig.parameters: - kwargs["bbox"] = torch.zeros((1, 10, 4)).long() - if "image" in sig.parameters: - kwargs["image"] = pixel_values - - if torch.cuda.is_available(): - pt_model = pt_model.cuda() - sf_model = sf_model.cuda() - kwargs = {k: v.cuda() for k, v in kwargs.items()} - - try: - pt_logits = pt_model(**kwargs)[0] - except Exception as e: - try: - # Musicgen special exception. - decoder_input_ids = torch.ones((input_ids.shape[0] * pt_model.decoder.num_codebooks, 1), dtype=torch.long) - if torch.cuda.is_available(): - decoder_input_ids = decoder_input_ids.cuda() - - kwargs["decoder_input_ids"] = decoder_input_ids - pt_logits = pt_model(**kwargs)[0] - except Exception: - raise e - sf_logits = sf_model(**kwargs)[0] - - torch.testing.assert_close(sf_logits, pt_logits) - print(f"Model {model_id} is ok !") - - -def previous_pr(api: "HfApi", model_id: str, pr_title: str) -> Optional["Discussion"]: - try: - main_commit = api.list_repo_commits(model_id)[0].commit_id - discussions = api.get_repo_discussions(repo_id=model_id) - except Exception: - return None - for discussion in discussions: - if discussion.is_pull_request and discussion.title == pr_title: - commits = api.list_repo_commits(model_id, revision=discussion.git_reference) - - if main_commit == commits[1].commit_id: - return discussion - return None - - -def convert_generic(model_id: str, folder: str, filenames: Set[str], token: Optional[str]) -> ConversionResult: - operations = [] - errors = [] - - extensions = set([".bin", ".ckpt"]) - for filename in filenames: - prefix, ext = os.path.splitext(filename) - if ext in extensions: - pt_filename = hf_hub_download(model_id, filename=filename, token=token, cache_dir=folder) - dirname, raw_filename = os.path.split(filename) - if raw_filename == "pytorch_model.bin": - # XXX: This is a special case to handle `transformers` and the - # `transformers` part of the model which is actually loaded by `transformers`. 
- sf_in_repo = os.path.join(dirname, "model.safetensors") - else: - sf_in_repo = f"{prefix}.safetensors" - sf_filename = os.path.join(folder, sf_in_repo) - try: - convert_file(pt_filename, sf_filename) - operations.append(CommitOperationAdd(path_in_repo=sf_in_repo, path_or_fileobj=sf_filename)) - except Exception as e: - errors.append((pt_filename, e)) - return operations, errors - - -def convert(api: "HfApi", model_id: str, force: bool = False) -> Tuple["CommitInfo", List[Tuple[str, "Exception"]]]: - pr_title = "Adding `safetensors` variant of this model" - info = api.model_info(model_id) - filenames = set(s.rfilename for s in info.siblings) - - with TemporaryDirectory() as d: - folder = os.path.join(d, repo_folder_name(repo_id=model_id, repo_type="models")) - os.makedirs(folder) - new_pr = None - try: - operations = None - pr = previous_pr(api, model_id, pr_title) - - library_name = getattr(info, "library_name", None) - if any(filename.endswith(".safetensors") for filename in filenames) and not force: - raise AlreadyExists(f"Model {model_id} is already converted, skipping..") - elif pr is not None and not force: - url = f"https://huggingface.co/{model_id}/discussions/{pr.num}" - new_pr = pr - raise AlreadyExists(f"Model {model_id} already has an open PR check out {url}") - elif library_name == "transformers": - if "pytorch_model.bin" in filenames: - operations, errors = convert_single(model_id, folder, token=api.token) - elif "pytorch_model.bin.index.json" in filenames: - operations, errors = convert_multi(model_id, folder, token=api.token) - else: - raise RuntimeError(f"Model {model_id} doesn't seem to be a valid pytorch model. Cannot convert") - check_final_model(model_id, folder, token=api.token) - else: - operations, errors = convert_generic(model_id, folder, filenames, token=api.token) - - if operations: - new_pr = api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=pr_title, - commit_description=COMMIT_DESCRIPTION, - create_pr=True, - ) - print(f"Pr created at {new_pr.pr_url}") - else: - print("No files to convert") - finally: - shutil.rmtree(folder) - return new_pr, errors - - -if __name__ == "__main__": - DESCRIPTION = """ - Simple utility tool to convert automatically some weights on the hub to `safetensors` format. - It is PyTorch exclusive for now. - It works by downloading the weights (PT), converting them locally, and uploading them back - as a PR on the hub. - """ - parser = argparse.ArgumentParser(description=DESCRIPTION) - parser.add_argument( - "model_id", - type=str, - help="The name of the model on the hub to convert. E.g. `gpt2` or `facebook/wav2vec2-base-960h`", - ) - parser.add_argument( - "--force", - action="store_true", - help="Create the PR even if it already exists of if the model was already converted.", - ) - parser.add_argument( - "-y", - action="store_true", - help="Ignore safety prompt", - ) - args = parser.parse_args() - model_id = args.model_id - api = HfApi() - if args.y: - txt = "y" - else: - txt = input( - "This conversion script will unpickle a pickled file, which is inherently unsafe. If you do not trust this file, we invite you to use" - " https://huggingface.co/spaces/safetensors/convert or google colab or other hosted solution to avoid potential issues with this file." - " Continue [Y/n] ?" - ) - if txt.lower() in {"", "y"}: - try: - commit_info, errors = convert(api, model_id, force=args.force) - string = f""" -### Success 🔥 -Yay! 
This model was successfully converted and a PR was open using your token, here: -[{commit_info.pr_url}]({commit_info.pr_url}) - """ - if errors: - string += "\nErrors during conversion:\n" - string += "\n".join( - f"Error while converting {filename}: {e}, skipped conversion" for filename, e in errors - ) - print(string) - except Exception as e: - print( - f""" -### Error 😢😢😢 - -{e} - """ - ) - else: - print(f"Answer was `{txt}` aborting.") diff --git a/spaces/safetensors/convert2/app.py b/spaces/safetensors/convert2/app.py deleted file mode 100644 index 8a7a260246bccac0e09e4ee0a32eadeee470716c..0000000000000000000000000000000000000000 --- a/spaces/safetensors/convert2/app.py +++ /dev/null @@ -1,94 +0,0 @@ -import csv -from datetime import datetime -import os -from typing import Optional -import gradio as gr - -from convert import convert -from huggingface_hub import HfApi, Repository - - -DATASET_REPO_URL = "https://huggingface.co/datasets/safetensors/conversions" -DATA_FILENAME = "data.csv" -DATA_FILE = os.path.join("data", DATA_FILENAME) - -HF_TOKEN = os.environ.get("HF_TOKEN") - -repo: Optional[Repository] = None -# if HF_TOKEN: -# repo = Repository(local_dir="data", clone_from=DATASET_REPO_URL, token=HF_TOKEN) - - -def run(token: str, model_id: str) -> str: - if token == "" or model_id == "": - return """ - ### Invalid input 🐞 - - Please fill a token and model_id. - """ - try: - api = HfApi(token=token) - is_private = api.model_info(repo_id=model_id).private - print("is_private", is_private) - - commit_info = convert(api=api, model_id=model_id) - print("[commit_info]", commit_info) - - # save in a (public) dataset: - if repo is not None and not is_private: - repo.git_pull(rebase=True) - print("pulled") - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter( - csvfile, fieldnames=["model_id", "pr_url", "time"] - ) - writer.writerow( - { - "model_id": model_id, - "pr_url": commit_info.pr_url, - "time": str(datetime.now()), - } - ) - commit_url = repo.push_to_hub() - print("[dataset]", commit_url) - - return f""" - ### Success 🔥 - - Yay! This model was successfully converted and a PR was open using your token, here: - - [{commit_info.pr_url}]({commit_info.pr_url}) - """ - except Exception as e: - return f""" - ### Error 😢😢😢 - - {e} - """ - - -DESCRIPTION = """ -The steps are the following: - -- Paste a read-access token from hf.co/settings/tokens. Read access is enough given that we will open a PR against the source repo. -- Input a model id from the Hub -- Click "Submit" -- That's it! You'll get feedback if it works or not, and if it worked, you'll get the URL of the opened PR 🔥 - -⚠️ For now only `pytorch_model.bin` files are supported but we'll extend in the future. 
-""" - -demo = gr.Interface( - title="Convert any model to Safetensors and open a PR", - description=DESCRIPTION, - allow_flagging="never", - article="Check out the [Safetensors repo on GitHub](https://github.com/huggingface/safetensors)", - inputs=[ - gr.Text(max_lines=1, label="your_hf_token"), - gr.Text(max_lines=1, label="model_id"), - ], - outputs=[gr.Markdown(label="output")], - fn=run, -).queue() - -demo.launch() diff --git a/spaces/sanchit-gandhi/whisper-language-id/app.py b/spaces/sanchit-gandhi/whisper-language-id/app.py deleted file mode 100644 index ee54608a2c778409175c009e2dd109e40e777169..0000000000000000000000000000000000000000 --- a/spaces/sanchit-gandhi/whisper-language-id/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import torch -import torch.nn.functional as F - -from transformers import WhisperForConditionalGeneration, WhisperProcessor -from transformers.models.whisper.tokenization_whisper import LANGUAGES -from transformers.pipelines.audio_utils import ffmpeg_read - -import gradio as gr - - -model_id = "openai/whisper-large-v2" - -device = "cuda" if torch.cuda.is_available() else "cpu" - -processor = WhisperProcessor.from_pretrained(model_id) -model = WhisperForConditionalGeneration.from_pretrained(model_id) -model.eval() -model.to(device) - -sampling_rate = processor.feature_extractor.sampling_rate - -bos_token_id = processor.tokenizer.all_special_ids[-106] -decoder_input_ids = torch.tensor([bos_token_id]).to(device) - - -def process_audio_file(file): - with open(file, "rb") as f: - inputs = f.read() - - audio = ffmpeg_read(inputs, sampling_rate) - return audio - - -def transcribe(Microphone, File_Upload): - warn_output = "" - if (Microphone is not None) and (File_Upload is not None): - warn_output = "WARNING: You've uploaded an audio file and used the microphone. 
" \ - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - file = Microphone - - elif (Microphone is None) and (File_Upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - elif Microphone is not None: - file = Microphone - else: - file = File_Upload - - audio_data = process_audio_file(file) - - input_features = processor(audio_data, return_tensors="pt").input_features - - with torch.no_grad(): - logits = model.forward(input_features.to(device), decoder_input_ids=decoder_input_ids).logits - - pred_ids = torch.argmax(logits, dim=-1) - probability = F.softmax(logits, dim=-1).max() - - lang_ids = processor.decode(pred_ids[0]) - - lang_ids = lang_ids.lstrip("<|").rstrip("|>") - language = LANGUAGES.get(lang_ids, "not detected") - - return language.capitalize(), probability.cpu().numpy() - - -iface = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type='filepath', optional=True), - gr.inputs.Audio(source="upload", type='filepath', optional=True), - ], - outputs=[ - gr.outputs.Textbox(label="Language"), - gr.Number(label="Probability"), - ], - layout="horizontal", - theme="huggingface", - title="Whisper Language Identification", - description="Demo for Language Identification using OpenAI's [Whisper Large V2](https://huggingface.co/openai/whisper-large-v2).", - allow_flagging='never', -) -iface.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/sayakpaul/sidd-denoising-maxim/maxim/layers.py b/spaces/sayakpaul/sidd-denoising-maxim/maxim/layers.py deleted file mode 100644 index e9fd870e335674cf6bd040bf27c9194d53dc4409..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/sidd-denoising-maxim/maxim/layers.py +++ /dev/null @@ -1,101 +0,0 @@ -import einops -import tensorflow as tf -from tensorflow.experimental import numpy as tnp -from tensorflow.keras import backend as K -from tensorflow.keras import layers - - -@tf.keras.utils.register_keras_serializable("maxim") -class BlockImages(layers.Layer): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def call(self, x, patch_size): - bs, h, w, num_channels = ( - K.int_shape(x)[0], - K.int_shape(x)[1], - K.int_shape(x)[2], - K.int_shape(x)[3], - ) - - grid_height, grid_width = h // patch_size[0], w // patch_size[1] - - x = einops.rearrange( - x, - "n (gh fh) (gw fw) c -> n (gh gw) (fh fw) c", - gh=grid_height, - gw=grid_width, - fh=patch_size[0], - fw=patch_size[1], - ) - - return x - - def get_config(self): - config = super().get_config().copy() - return config - - -@tf.keras.utils.register_keras_serializable("maxim") -class UnblockImages(layers.Layer): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def call(self, x, grid_size, patch_size): - x = einops.rearrange( - x, - "n (gh gw) (fh fw) c -> n (gh fh) (gw fw) c", - gh=grid_size[0], - gw=grid_size[1], - fh=patch_size[0], - fw=patch_size[1], - ) - - return x - - def get_config(self): - config = super().get_config().copy() - return config - - -@tf.keras.utils.register_keras_serializable("maxim") -class SwapAxes(layers.Layer): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def call(self, x, axis_one, axis_two): - return tnp.swapaxes(x, axis_one, axis_two) - - def get_config(self): - config = super().get_config().copy() - return config - - -@tf.keras.utils.register_keras_serializable("maxim") -class Resizing(layers.Layer): - def __init__(self, height, width, antialias=True, method="bilinear", 
**kwargs): - super().__init__(**kwargs) - self.height = height - self.width = width - self.antialias = antialias - self.method = method - - def call(self, x): - return tf.image.resize( - x, - size=(self.height, self.width), - antialias=self.antialias, - method=self.method, - ) - - def get_config(self): - config = super().get_config().copy() - config.update( - { - "height": self.height, - "width": self.width, - "antialias": self.antialias, - "method": self.method, - } - ) - return config diff --git a/spaces/scedlatioru/img-to-music/example/Fabulous - Angelas True Colors Download For Pc [torrent Full].md b/spaces/scedlatioru/img-to-music/example/Fabulous - Angelas True Colors Download For Pc [torrent Full].md deleted file mode 100644 index 97aae87fe9d2e1e2191478e16ae7b8c922dd8eca..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Fabulous - Angelas True Colors Download For Pc [torrent Full].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Fabulous - Angela's True Colors Download For Pc [torrent Full]


    Download Filehttps://gohhs.com/2uEzOQ



    - -October 1, 2019 - About this game. After blowing up the fashion world at New York Fashion Week, Angela wants more! Next stop: HOLLYWOOD! She uses the new "Magic Screen" to become an actress, singer and model in a world where fame brings fame. But when Angela meets the "tough" Hollywood guys who are not averse to trying out her new world, everything changes. Myth is the original action game from the creators of The Escapists. Play as Angela, the famous model and host of the show on the TeenVision channel, who is trying to regain popularity and success in Hollywood. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/The.Ultimate.Fake.ID.Guide.2010.Version.8 !!HOT!!.md b/spaces/scedlatioru/img-to-music/example/The.Ultimate.Fake.ID.Guide.2010.Version.8 !!HOT!!.md deleted file mode 100644 index d8b8aff002b539fbf0b15c908a7437d2b8b33a9d..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/The.Ultimate.Fake.ID.Guide.2010.Version.8 !!HOT!!.md +++ /dev/null @@ -1,10 +0,0 @@ - -

    Topfakeid and IDtop are no longer active, possibly owing to recent news about tighter regulations in fake-id-manufacturing. This guide will be updated with a link to a reader-submitted ID list. YMMV.

    -

    If you need help and you have a problem call our expert. They will guide you on the step by step process of getting a fake id and then we can make it legal after the case is done. We use quality products and services and that's a guarantee.

    -

    The.Ultimate.Fake.ID.Guide.2010.Version.8


    Download File »»» https://gohhs.com/2uEAaf



    -

    The ultimate fake id was always the best at getting ids, however, they can be on the costly side. They look very real. The hologram on the id is very accurate. Both foil and hologram layers are very easy to apply. The lamination is great. Their machines are very fast. You will get your ids quickly, however, they can be expensive.

    -

    The ultimate fake id has been the best at everything for me for the past two years. They offer us a decent amount of ids by just us having to deal with them exclusively. The cards are all good with no abrupt areas. Their laminators are very good as well. Their holograms look very real on the ids we have seen. The foil laminate on the ids are also very high quality.

    -

    I have fakes at IDTop and laminated IDs (see above) - I do not provide custom printed fakes for liars who want to look like a real driver's license. Instead, a list of all authentic license types for each state is available here:

    -

    Id top for me has been the best service in the game when it comes to fake ids. They are affordable, while having a decent amount of ids at their disposal. We were promised that our ids would be ready within 15 days. The hologram looks real on the id's, however, there are a few minor issues with the foil laminate. It can be seen in the photos but it isn't shown on the sample id we received.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/sdhsdhk/bingo111/src/components/ui/separator.tsx b/spaces/sdhsdhk/bingo111/src/components/ui/separator.tsx deleted file mode 100644 index 6c55e0b2ca8e2436658a06748aadbff7cd700db0..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/components/ui/separator.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SeparatorPrimitive from '@radix-ui/react-separator' - -import { cn } from '@/lib/utils' - -const Separator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->( - ( - { className, orientation = 'horizontal', decorative = true, ...props }, - ref - ) => ( - - ) -) -Separator.displayName = SeparatorPrimitive.Root.displayName - -export { Separator } diff --git a/spaces/shaneweisz/AutoCounterspeech/app.py b/spaces/shaneweisz/AutoCounterspeech/app.py deleted file mode 100644 index 676a31e8bd724af2fae02d98df63168e7c11fa13..0000000000000000000000000000000000000000 --- a/spaces/shaneweisz/AutoCounterspeech/app.py +++ /dev/null @@ -1,29 +0,0 @@ -from response_generation import ResponseGenerator -import gradio as gr - -DEFAULT_MODEL = "shaneweisz/DialoGPT-finetuned-gab-multiCONAN" -DECODING_CONFIG = {"max_new_tokens": 100, "min_new_tokens": 20, "no_repeat_ngram_size": 5, "num_beams": 10} - -TITLE = "Automatic Generation of Counterspeech to Fight Hate Speech" -DESCRIPTION = """ -Enter a hate speech comment (or select one of the provided examples below), click Submit, and see if the system generates an appropriate counterspeech response. -""" - -ARTICLE = f""" -This system has been built by [Shane Weisz](https://shaneweisz.com) for his research project on _Automating Counterspeech in Dialogue Systems_ as part of the [MPhil in Machine Learning and Machine Intelligence](https://www.mlmi.eng.cam.ac.uk/) at Cambridge University. The project is supervised by [Dr Marcus Tomalin](https://www.crassh.cam.ac.uk/about/people/marcus-tomalin/) and forms part of the [Giving Voice to Digital Democracies](https://www.crassh.cam.ac.uk/research/projects-centres/giving-voice-to-digital-democracies/) project on the _The Social Impact of Artificially Intelligent Communications Technology_. -

    -The system is built by fine-tuning [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium#:~:text=DialoGPT%20is%20a%20SOTA%20large,single%2Dturn%20conversation%20Turing%20test) on the [MultiCONAN](https://github.com/marcoguerini/CONAN#Multitarget-CONAN) dataset, a dataset comprising a set of hate speech inputs and appropriate [counterspeech](https://dangerousspeech.org/counterspeech/) responses produced under the supervision of trained NGO operators from [Stop Hate UK](https://www.stophateuk.org/). -

    -**Model:** {DEFAULT_MODEL}
    -**Decoding parameters:** {DECODING_CONFIG} -

    -_Please note: This system is a prototype and cannot be guaranteed to always generate appropriate responses. Any inappropriate responses expressed by the system should not be construed as reflective of the views or values of the researchers._ -""" - -model = ResponseGenerator(DEFAULT_MODEL, DECODING_CONFIG) - -def respond(input): - return model.respond(input) - -demo = gr.Interface(fn=respond, inputs="text", outputs="text", examples=["Muslims are all terrorists", "Jews are stingy and only care about money", "Damn feminists trying to take over the world. Can't women just accept their place?"], cache_examples = False, title = TITLE, description = DESCRIPTION, article = ARTICLE) -demo.launch() diff --git a/spaces/shgao/EditAnything/utils/sketch_helpers.py b/spaces/shgao/EditAnything/utils/sketch_helpers.py deleted file mode 100644 index e71551158b3ae3d3f7f2bff1a3976c50ccb16a24..0000000000000000000000000000000000000000 --- a/spaces/shgao/EditAnything/utils/sketch_helpers.py +++ /dev/null @@ -1,84 +0,0 @@ -import numpy as np -import cv2 -from PIL import Image -from skimage.color import rgb2lab -from skimage.color import lab2rgb -from sklearn.cluster import KMeans - - -def count_high_freq_colors(image): - im = image.getcolors(maxcolors=1024 * 1024) - sorted_colors = sorted(im, key=lambda x: x[0], reverse=True) - - freqs = [c[0] for c in sorted_colors] - mean_freq = sum(freqs) / len(freqs) - - high_freq_colors = [c for c in sorted_colors if c[0] > max(2, mean_freq * 1.25)] - return high_freq_colors - - -def get_high_freq_colors(image, similarity_threshold=30): - image_copy = image.copy() - high_freq_colors = count_high_freq_colors(image) - # Check for similar colors and replace the lower frequency color with the higher frequency color in the image - for i, (freq1, color1) in enumerate(high_freq_colors): - for j, (freq2, color2) in enumerate(high_freq_colors): - if (color_distance(color1, color2) < similarity_threshold) or ( - color_distance(color1, opaque_color_on_white(color2, 0.5)) < 5): - if (freq2 > freq1): - replace_color(image_copy, color1, color2) - - high_freq_colors = count_high_freq_colors(image_copy) - print(high_freq_colors) - return [high_freq_colors, image_copy] - - -def color_quantization(image, color_frequency_list): - # Convert the color frequency list to a set of unique colors - unique_colors = set([color for _, color in color_frequency_list]) - - # Create a mask for the image with True where the color is in the unique colors set - mask = np.any(np.all(image.reshape(-1, 1, 3) == np.array(list(unique_colors)), axis=2), axis=1).reshape( - image.shape[:2]) - - # Create a new image with all pixels set to white - new_image = np.full_like(image, 255) - - # Copy the pixels from the original image that have a color in the color frequency list - new_image[mask] = image[mask] - return new_image - - -def create_binary_matrix(img_arr, target_color): - # Create mask of pixels with target color - mask = np.all(img_arr == target_color, axis=-1) - - # Convert mask to binary matrix - binary_matrix = mask.astype(int) - from datetime import datetime - binary_file_name = f'mask-{datetime.now().timestamp()}.png' - cv2.imwrite(binary_file_name, binary_matrix * 255) - - # binary_matrix = torch.from_numpy(binary_matrix).unsqueeze(0).unsqueeze(0) - return binary_file_name - - -def color_distance(color1, color2): - return sum((a - b) ** 2 for a, b in zip(color1, color2)) ** 0.5 - - -def replace_color(image, old_color, new_color): - pixels = image.load() - width, height = image.size - for x in 
range(width): - for y in range(height): - if pixels[x, y] == old_color: - pixels[x, y] = new_color - - -def opaque_color_on_white(color, a): - r, g, b = color - opaque_red = int((1 - a) * 255 + a * r) - opaque_green = int((1 - a) * 255 + a * g) - opaque_blue = int((1 - a) * 255 + a * b) - return (opaque_red, opaque_green, opaque_blue) diff --git a/spaces/simonduerr/diffdock/esm/esm/modules.py b/spaces/simonduerr/diffdock/esm/esm/modules.py deleted file mode 100644 index dc7b1ae2ef4caa1f42dc400ed9a7fcc33ca348ad..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/esm/esm/modules.py +++ /dev/null @@ -1,418 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .multihead_attention import MultiheadAttention # noqa -from .axial_attention import ColumnSelfAttention, RowSelfAttention - - -def gelu(x): - """Implementation of the gelu activation function. - - For information: OpenAI GPT's gelu is slightly different - (and gives slightly different results): - 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))) - """ - return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))) - - -def symmetrize(x): - "Make layer symmetric in final two dimensions, used for contact prediction." - return x + x.transpose(-1, -2) - - -def apc(x): - "Perform average product correct, used for contact prediction." - a1 = x.sum(-1, keepdims=True) - a2 = x.sum(-2, keepdims=True) - a12 = x.sum((-1, -2), keepdims=True) - - avg = a1 * a2 - avg.div_(a12) # in-place to reduce memory - normalized = x - avg - return normalized - - -class ESM1LayerNorm(nn.Module): - def __init__(self, hidden_size, eps=1e-12, affine=True): - """Construct a layernorm layer in the TF style (eps inside the sqrt).""" - super().__init__() - self.hidden_size = (hidden_size,) if isinstance(hidden_size, int) else tuple(hidden_size) - self.eps = eps - self.affine = bool(affine) - if self.affine: - self.weight = nn.Parameter(torch.ones(hidden_size)) - self.bias = nn.Parameter(torch.zeros(hidden_size)) - else: - self.weight, self.bias = None, None - - def forward(self, x): - dims = tuple(-(i + 1) for i in range(len(self.hidden_size))) - means = x.mean(dims, keepdim=True) - x_zeromean = x - means - variances = x_zeromean.pow(2).mean(dims, keepdim=True) - x = x_zeromean / torch.sqrt(variances + self.eps) - if self.affine: - x = (self.weight * x) + self.bias - return x - - -try: - from apex.normalization import FusedLayerNorm as _FusedLayerNorm - - class ESM1bLayerNorm(_FusedLayerNorm): - @torch.jit.unused - def forward(self, x): - if not x.is_cuda: - return super().forward(x) - else: - with torch.cuda.device(x.device): - return super().forward(x) - -except ImportError: - from torch.nn import LayerNorm as ESM1bLayerNorm - - -class TransformerLayer(nn.Module): - """Transformer layer block.""" - - def __init__( - self, - embed_dim, - ffn_embed_dim, - attention_heads, - add_bias_kv=True, - use_esm1b_layer_norm=False, - use_rotary_embeddings: bool = False, - ): - super().__init__() - self.embed_dim = embed_dim - self.ffn_embed_dim = ffn_embed_dim - self.attention_heads = attention_heads - self.use_rotary_embeddings = use_rotary_embeddings - self._init_submodules(add_bias_kv, use_esm1b_layer_norm) - - def _init_submodules(self, add_bias_kv, use_esm1b_layer_norm): 
- BertLayerNorm = ESM1bLayerNorm if use_esm1b_layer_norm else ESM1LayerNorm - - self.self_attn = MultiheadAttention( - self.embed_dim, - self.attention_heads, - add_bias_kv=add_bias_kv, - add_zero_attn=False, - use_rotary_embeddings=self.use_rotary_embeddings, - ) - self.self_attn_layer_norm = BertLayerNorm(self.embed_dim) - - self.fc1 = nn.Linear(self.embed_dim, self.ffn_embed_dim) - self.fc2 = nn.Linear(self.ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = BertLayerNorm(self.embed_dim) - - def forward( - self, x, self_attn_mask=None, self_attn_padding_mask=None, need_head_weights=False - ): - residual = x - x = self.self_attn_layer_norm(x) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - need_weights=True, - need_head_weights=need_head_weights, - attn_mask=self_attn_mask, - ) - x = residual + x - - residual = x - x = self.final_layer_norm(x) - x = gelu(self.fc1(x)) - x = self.fc2(x) - x = residual + x - - return x, attn - - -class AxialTransformerLayer(nn.Module): - """Implements an Axial MSA Transformer block.""" - - def __init__( - self, - embedding_dim: int = 768, - ffn_embedding_dim: int = 3072, - num_attention_heads: int = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - max_tokens_per_msa: int = 2**14, - ) -> None: - super().__init__() - - # Initialize parameters - self.embedding_dim = embedding_dim - self.dropout_prob = dropout - - row_self_attention = RowSelfAttention( - embedding_dim, - num_attention_heads, - dropout=dropout, - max_tokens_per_msa=max_tokens_per_msa, - ) - - column_self_attention = ColumnSelfAttention( - embedding_dim, - num_attention_heads, - dropout=dropout, - max_tokens_per_msa=max_tokens_per_msa, - ) - - feed_forward_layer = FeedForwardNetwork( - embedding_dim, - ffn_embedding_dim, - activation_dropout=activation_dropout, - max_tokens_per_msa=max_tokens_per_msa, - ) - - self.row_self_attention = self.build_residual(row_self_attention) - self.column_self_attention = self.build_residual(column_self_attention) - self.feed_forward_layer = self.build_residual(feed_forward_layer) - - def build_residual(self, layer: nn.Module): - return NormalizedResidualBlock( - layer, - self.embedding_dim, - self.dropout_prob, - ) - - def forward( - self, - x: torch.Tensor, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - need_head_weights: bool = False, - ): - """ - LayerNorm is applied either before or after the self-attention/ffn - modules similar to the original Transformer implementation. - """ - x, row_attn = self.row_self_attention( - x, - self_attn_mask=self_attn_mask, - self_attn_padding_mask=self_attn_padding_mask, - ) - x, column_attn = self.column_self_attention( - x, - self_attn_mask=self_attn_mask, - self_attn_padding_mask=self_attn_padding_mask, - ) - x = self.feed_forward_layer(x) - if need_head_weights: - return x, column_attn, row_attn - else: - return x - - -class LearnedPositionalEmbedding(nn.Embedding): - """ - This module learns positional embeddings up to a fixed maximum size. - Padding ids are ignored by either offsetting based on padding_idx - or by setting padding_idx to None and ensuring that the appropriate - position ids are passed to the forward function. 
- """ - - def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: int): - if padding_idx is not None: - num_embeddings_ = num_embeddings + padding_idx + 1 - else: - num_embeddings_ = num_embeddings - super().__init__(num_embeddings_, embedding_dim, padding_idx) - self.max_positions = num_embeddings - - def forward(self, input: torch.Tensor): - """Input is expected to be of size [bsz x seqlen].""" - if input.size(1) > self.max_positions: - raise ValueError( - f"Sequence length {input.size(1)} above maximum " - f" sequence length of {self.max_positions}" - ) - mask = input.ne(self.padding_idx).int() - positions = (torch.cumsum(mask, dim=1).type_as(mask) * mask).long() + self.padding_idx - return F.embedding( - positions, - self.weight, - self.padding_idx, - self.max_norm, - self.norm_type, - self.scale_grad_by_freq, - self.sparse, - ) - - -class SinusoidalPositionalEmbedding(nn.Module): - def __init__(self, embed_dim, padding_idx, learned=False): - super().__init__() - self.embed_dim = embed_dim - self.padding_idx = padding_idx - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - self.weights = None - - def forward(self, x): - bsz, seq_len = x.shape - max_pos = self.padding_idx + 1 + seq_len - if self.weights is None or max_pos > self.weights.size(0): - self.weights = self.get_embedding(max_pos) - self.weights = self.weights.type_as(self._float_tensor) - - positions = self.make_positions(x) - return self.weights.index_select(0, positions.view(-1)).view(bsz, seq_len, -1).detach() - - def make_positions(self, x): - mask = x.ne(self.padding_idx) - range_buf = torch.arange(x.size(1), device=x.device).expand_as(x) + self.padding_idx + 1 - positions = range_buf.expand_as(x) - return positions * mask.long() + self.padding_idx * (1 - mask.long()) - - def get_embedding(self, num_embeddings): - half_dim = self.embed_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb) - emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze(1) * emb.unsqueeze(0) - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(num_embeddings, -1) - if self.embed_dim % 2 == 1: - # zero pad - emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1) - if self.padding_idx is not None: - emb[self.padding_idx, :] = 0 - return emb - - -class RobertaLMHead(nn.Module): - """Head for masked language modeling.""" - - def __init__(self, embed_dim, output_dim, weight): - super().__init__() - self.dense = nn.Linear(embed_dim, embed_dim) - self.layer_norm = ESM1bLayerNorm(embed_dim) - self.weight = weight - self.bias = nn.Parameter(torch.zeros(output_dim)) - - def forward(self, features): - x = self.dense(features) - x = gelu(x) - x = self.layer_norm(x) - # project back to size of vocabulary with bias - x = F.linear(x, self.weight) + self.bias - return x - - -class ContactPredictionHead(nn.Module): - """Performs symmetrization, apc, and computes a logistic regression on the output features""" - - def __init__( - self, - in_features: int, - prepend_bos: bool, - append_eos: bool, - bias=True, - eos_idx: Optional[int] = None, - ): - super().__init__() - self.in_features = in_features - self.prepend_bos = prepend_bos - self.append_eos = append_eos - if append_eos and eos_idx is None: - raise ValueError("Using an alphabet with eos token, but no eos token was passed in.") - self.eos_idx = eos_idx - self.regression = nn.Linear(in_features, 1, bias) - self.activation = nn.Sigmoid() - - def forward(self, tokens, attentions): - # remove 
eos token attentions - if self.append_eos: - eos_mask = tokens.ne(self.eos_idx).to(attentions) - eos_mask = eos_mask.unsqueeze(1) * eos_mask.unsqueeze(2) - attentions = attentions * eos_mask[:, None, None, :, :] - attentions = attentions[..., :-1, :-1] - # remove cls token attentions - if self.prepend_bos: - attentions = attentions[..., 1:, 1:] - batch_size, layers, heads, seqlen, _ = attentions.size() - attentions = attentions.view(batch_size, layers * heads, seqlen, seqlen) - - # features: B x C x T x T - attentions = attentions.to( - self.regression.weight.device - ) # attentions always float32, may need to convert to float16 - attentions = apc(symmetrize(attentions)) - attentions = attentions.permute(0, 2, 3, 1) - return self.activation(self.regression(attentions).squeeze(3)) - - -class NormalizedResidualBlock(nn.Module): - def __init__( - self, - layer: nn.Module, - embedding_dim: int, - dropout: float = 0.1, - ): - super().__init__() - self.embedding_dim = embedding_dim - - self.layer = layer - self.dropout_module = nn.Dropout( - dropout, - ) - self.layer_norm = ESM1bLayerNorm(self.embedding_dim) - - def forward(self, x, *args, **kwargs): - residual = x - x = self.layer_norm(x) - outputs = self.layer(x, *args, **kwargs) - if isinstance(outputs, tuple): - x, *out = outputs - else: - x = outputs - out = None - - x = self.dropout_module(x) - x = residual + x - - if out is not None: - return (x,) + tuple(out) - else: - return x - - -class FeedForwardNetwork(nn.Module): - def __init__( - self, - embedding_dim: int, - ffn_embedding_dim: int, - activation_dropout: float = 0.1, - max_tokens_per_msa: int = 2**14, - ): - super().__init__() - self.embedding_dim = embedding_dim - self.ffn_embedding_dim = ffn_embedding_dim - self.max_tokens_per_msa = max_tokens_per_msa - self.activation_fn = nn.GELU() - self.activation_dropout_module = nn.Dropout( - activation_dropout, - ) - self.fc1 = nn.Linear(embedding_dim, ffn_embedding_dim) - self.fc2 = nn.Linear(ffn_embedding_dim, embedding_dim) - - def forward(self, x): - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - return x diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Parking Multiplayer The Ultimate Simulation Game for Windows 11 Users.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Parking Multiplayer The Ultimate Simulation Game for Windows 11 Users.md deleted file mode 100644 index 5bf39eb6828f77fadcd286c76b83d602492920b7..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Parking Multiplayer The Ultimate Simulation Game for Windows 11 Users.md +++ /dev/null @@ -1,102 +0,0 @@ - -

    Car Parking Multiplayer: How to Download and Play on PC Windows 11

    -

    Introduction

    -

    Do you love driving and parking games? Do you want to experience a realistic and immersive open-world simulation? Do you want to play with thousands of other players online and have fun? If you answered yes to any of these questions, then you should try Car Parking Multiplayer, a game that offers more than just parking.

    -

    Car Parking Multiplayer is a game that supports an open-world multiplayer mode, car tuning, a police mode, and free walking, so you can even jump out of the car and explore on foot. There are several areas to explore in the game, and you can play either single-player mode or online mode if you want a more chaotic (and fun) scene.

    -

    car parking multiplayer download pc windows 11


    Download > https://ssurll.com/2uNYca



    -

    But how can you play this game on your PC Windows 11? Is there a way to enjoy the game on a bigger screen and with better controls? In this article, we will show you how to download and play Car Parking Multiplayer on PC Windows 11 using two methods: an Android emulator or a web browser. We will also cover some of the game's features and share tips to help you get started.

    -

    How to download and play Car Parking Multiplayer on PC Windows 11

    -

    Method 1: Using an Android emulator

    -

    An Android emulator is a piece of software that allows you to run Android applications on your PC Windows 11. It simulates an Android device on your computer and lets you access the Google Play Store and other Android features. There are many Android emulators available online, such as BlueStacks, LDPlayer, Nox, and KOPlayer. You can choose any one of them according to your preference and compatibility.

    -

    Step 1: Download and install an Android emulator

    -

    The first step is to download and install an Android emulator on your PC Windows 11. You can go to the official website of the emulator that you want to use and follow the instructions there. For example, if you want to use BlueStacks, you can go to [www.bluestacks.com](https://www.bluestacks.com) and click on the Download button. Then, run the installer file and follow the steps to complete the installation.

    -

    Step 2: Download the APK/XAPK file of Car Parking Multiplayer

    -

The next step is to download the APK/XAPK file of Car Parking Multiplayer. This is the file that contains the game data and allows you to install it on your emulator. You can find this file on various websites online, such as https://appsonwindows.com/apk/419703/. Make sure that you download it from a trusted source and save it to an easy-to-find location on your computer.

    -

    Step 3: Install and launch Car Parking Multiplayer on the emulator

    -

    The final step is to install and launch Car Parking Multiplayer on the emulator. To do this, you need to drag and drop the APK/XAPK file onto the emulator window or use the built-in file manager to locate and install it. Once the installation is done, you will see the game icon on the emulator home screen. Click on it and start playing Car Parking Multiplayer on your PC Windows 11.
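If drag and drop does not work on your emulator, most Android emulators can also receive a plain APK over adb, the Android debug bridge. The short Python sketch below shows that route purely as an illustration; it is not part of the game or of any emulator's documentation. It assumes adb is installed and on your PATH, that ADB debugging is enabled in the emulator, and that the emulator listens on 127.0.0.1:5555 (the port varies between emulators, so check your emulator's settings). The APK file name is a placeholder, and XAPK bundles usually need the emulator's own installer instead.

```python
# Illustrative sketch only: sideload a plain .apk into a running emulator via adb.
# Assumptions: adb is on PATH, ADB debugging is enabled in the emulator, and the
# emulator listens on 127.0.0.1:5555 (check your emulator's settings).
import subprocess

APK_PATH = "car-parking-multiplayer.apk"  # placeholder file name
EMULATOR_ADDR = "127.0.0.1:5555"          # assumed default; varies by emulator


def sideload(apk_path: str, emulator_addr: str) -> None:
    # Attach adb to the emulator instance, then (re)install the package.
    subprocess.run(["adb", "connect", emulator_addr], check=True)
    subprocess.run(["adb", "-s", emulator_addr, "install", "-r", apk_path], check=True)


if __name__ == "__main__":
    sideload(APK_PATH, EMULATOR_ADDR)
```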

    -

    Method 2: Using a web browser

    -

    Another method to play Car Parking Multiplayer on PC Windows 11 is to use a web browser. This method does not require you to download or install anything on your computer, but it may not offer the same performance and features as the emulator method. However, it is still a convenient and easy way to enjoy the game online.

    -


    -

    Step 1: Open a web browser on your PC Windows 11

    -

    The first step is to open a web browser on your PC Windows 11. You can use any browser that you like, such as Chrome, Firefox, Edge, etc. Make sure that you have a stable internet connection and that your browser supports HTML5 and WebGL technologies.

    -

    Step 2: Go to the official website of Car Parking Multiplayer

    -

The next step is to go to the official website of Car Parking Multiplayer. You can do this by typing https://car-parking-multiplayer.com in the address bar of your browser. This will take you to the homepage of the game, where you can see some information and screenshots about it.

    -

    Step 3: Click on the Play Now button and enjoy the game

    -

    The final step is to click on the Play Now button and enjoy the game. This will launch the game in a new tab or window of your browser and you can start playing it right away. You can use your mouse and keyboard to control your car and interact with other players online.

    -

    Car Parking Multiplayer game features

    -

    Car Parking Multiplayer is a game that offers more than just parking. It has many features that make it fun and realistic, such as:

    -

    Multiplayer open world mode

    -

    This is the main mode of the game where you can join thousands of other players online and explore different areas of the open world. You can choose from various cars, such as sports cars, trucks, buses, etc., and drive them around freely. You can also chat with other players, join races, perform stunts, and more.

    -

    Car customization

    -

    This is a feature that allows you to customize your car according to your preference and needs. You can change the color, wheels, suspension, engine, turbo, etc., of your car and make it look unique and cool. You can also add stickers, decals, spoilers, neon lights, etc., to your car and show off your style.

    -

    High-quality open world

    -

This is a feature that makes the game realistic and immersive. The game has high-quality graphics and sound effects that create a lifelike environment for you to enjoy. The game also has a day-night cycle, a weather system, and a traffic system that add more realism and variety to the game.

    -

    Interesting gameplay

    -

    This is a feature that makes the game fun and challenging. The game has different modes and missions that you can play and complete. For example, you can play parking mode where you have to park your car in different scenarios and situations. You can also play police mode where you have to chase or escape from other players who are breaking the law. You can also play free walking mode where you can jump out of your car and walk around.

    -

    Car Parking Multiplayer game tips

    -

    Car Parking Multiplayer is a game that requires some skills and strategies to play well. Here are some tips that will help you get started:

    -

    Learn the basics of parking and driving

    -

    This is a tip that will help you master the game mechanics and controls. You should learn how to park your car properly and avoid hitting obstacles or other cars. You should also learn how to drive your car smoothly and safely without crashing or damaging it. You should also learn how to use the different features of your car, such as lights, horn, indicators, etc.

    -

    Explore the different areas and modes

    -

    This is a tip that will help you discover more content and fun in the game. You should explore the different areas of the open world and see what they have to offer. You should also try out the different modes and missions of the game and see what they challenge you with. You should also experiment with different cars and customizations and see how they affect your performance.

    -

    Customize your car to suit your style and needs

    -

    This is a tip that will help you enhance your car and make it more enjoyable to drive. You should customize your car according to your style and needs. You can change the color, wheels, suspension, engine, turbo, etc., of your car and make it look unique and cool. You can also add stickers, decals, spoilers, neon lights, etc., to your car and show off your style. You can also adjust the settings of your car, such as steering sensitivity, brake force, etc., to make it more comfortable and responsive.

    -

    Interact with other players and have fun

    -

    This is a tip that will help you make the most of the multiplayer mode and have fun. You should interact with other players online and chat with them, join races, perform stunts, and more. You can also cooperate or compete with them in different modes and missions. You can also make friends or enemies with them and create your own stories.

    -

    Conclusion

    -

    Car Parking Multiplayer is a game that offers more than just parking. It is a game that supports open-world multiplayer mode, car tuning, police mode, and free walking. It is a game that has high-quality graphics and sound effects that create a realistic and immersive environment. It is a game that has different modes and missions that offer fun and challenge. It is a game that you can play on your PC Windows 11 using an Android emulator or a web browser.

    -

    If you are looking for a game that will give you a realistic and immersive driving and parking experience, then you should try Car Parking Multiplayer. You can download and play it on your PC Windows 11 using the methods that we have shown you in this article. You can also use the features and tips that we have given you to enhance your gameplay and have more fun.

    -

    So what are you waiting for? Download Car Parking Multiplayer now and enjoy the game on your PC Windows 11!

    -

    FAQs

    -

    Here are some frequently asked questions about Car Parking Multiplayer:

    -

    Q: Is Car Parking Multiplayer free to play?

    -

    A: Yes, Car Parking Multiplayer is free to play. However, it contains some in-app purchases that you can buy to get more coins, cars, customizations, etc.

    -

    Q: Is Car Parking Multiplayer safe to play?

    -

    A: Yes, Car Parking Multiplayer is safe to play. However, you should be careful when downloading the APK/XAPK file of the game from third-party sources. Make sure that you download it from a trusted source and scan it for viruses before installing it.

    -

    Q: How can I update Car Parking Multiplayer on PC Windows 11?

    -

    A: If you are using an Android emulator to play Car Parking Multiplayer on PC Windows 11, you can update the game by downloading the latest APK/XAPK file of the game from the internet and installing it on your emulator. If you are using a web browser to play Car Parking Multiplayer on PC Windows 11, you can update the game by refreshing the browser page or clearing the browser cache.

    -

    Q: How can I contact the developers of Car Parking Multiplayer?

    -

A: If you have any questions, feedback, or suggestions about Car Parking Multiplayer, you can contact the developers of the game by sending an email to support@olzhass.com or by visiting their Facebook page at https://www.facebook.com/CarParkingMultiplayer.

    -

    Q: How can I get more coins in Car Parking Multiplayer?

    -

    A: There are several ways to get more coins in Car Parking Multiplayer. You can get coins by completing missions, winning races, watching ads, etc. You can also buy coins with real money through in-app purchases.

    -
    -
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Criminal Case The Conspiracy Mod APK - Latest Version with Unlimited Coins and Energy.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Criminal Case The Conspiracy Mod APK - Latest Version with Unlimited Coins and Energy.md
deleted file mode 100644
index c444b624dece42c04a5be75b92adf901142fa77c..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Criminal Case The Conspiracy Mod APK - Latest Version with Unlimited Coins and Energy.md
+++ /dev/null
@@ -1,107 +0,0 @@
-

    Criminal Case: The Conspiracy Mod APK Latest Version

    -

    If you are a fan of hidden object games and crime-solving mysteries, you might want to check out Criminal Case: The Conspiracy, a captivating adventure game from Pretty Simple. In this game, you join the Police of Grimsborough once again to solve a series of murder cases in different crime scenes. You will have to investigate clues, interrogate suspects, analyze evidence, and catch the killers. But what if you want to enjoy the game without any limitations or interruptions? That's where Criminal Case: The Conspiracy mod APK comes in handy. In this article, we will tell you what this mod APK is, what are its benefits, how to download and install it, and how to play it. Read on to find out more.

    -

    What is Criminal Case: The Conspiracy?

    -

    Criminal Case: The Conspiracy is a hidden object, adventure game that was released in 2018 by Pretty Simple, a French game developer. It is the fifth season of the popular Criminal Case series, which has over 100 million downloads on Google Play Store. The game is set in Grimsborough, a fictional city that is plagued by crime and corruption. You play as a detective who works with a team of other investigators to solve various murder cases. Each case consists of several chapters, where you have to explore different locations, find hidden objects, collect clues, interrogate witnesses and suspects, and analyze evidence. At the end of each case, you have to arrest the killer and bring them to justice.

    -

    criminal case the conspiracy mod apk latest version


    Download >>> https://ssurll.com/2uNUXe



    -

    The game features stunning graphics, immersive sound effects, engaging storyline, and challenging puzzles. You can also customize your character's appearance, interact with other players, join a team, and compete with others in leaderboards and tournaments. The game is free to play, but some items can be purchased with real money. You can also watch ads to get extra energy or hints.

    -

    What is a mod APK?

    -

    A mod APK is a modified version of an original APK file, which is the format used for installing applications on Android devices. A mod APK can alter or enhance some features of the original app, such as removing ads, unlocking premium content, adding unlimited resources, or changing the gameplay. A mod APK can be created by anyone who has the skills and tools to do so, but not all mod APKs are safe or legal to use. Some mod APKs may contain malware or viruses that can harm your device or steal your personal information. Some mod APKs may also violate the terms of service or copyright laws of the original app developer.

    -

    Therefore, before downloading or installing any mod APK, you should always do some research and check the reviews and ratings of other users. You should also be aware of the risks and consequences of using a mod APK, such as losing your progress, getting banned from the game, or facing legal action.

    -

    What are the benefits of using Criminal Case: The Conspiracy mod APK?

    -

    If you decide to use Criminal Case: The Conspiracy mod APK, you can enjoy some benefits that are not available in the original version of the game. Here are some of them:

    -
      -
    • Unlimited energy: Energy is required to play any scene in the game. Normally, you have a maximum of 110 energy points that regenerate over time or can be refilled by buying snacks, watching ads, or using real money. With the mod APK, you can have unlimited energy and play as much as you want without waiting or spending money.
    • Instant analysis: Analysis is the process of examining the evidence you collect from the crime scenes. Normally, you have to wait for a certain amount of time or use stars to speed up the analysis. With the mod APK, you can skip the waiting time and get the results instantly.
    • No ads: Ads are annoying and distracting, especially when you are trying to focus on solving a case. With the mod APK, you can remove all the ads from the game and enjoy a smoother and more pleasant gaming experience.
    -

    These are just some of the benefits of using Criminal Case: The Conspiracy mod APK. There may be more features depending on the version and source of the mod APK you download.

    -


    -

    How to download and install Criminal Case: The Conspiracy mod APK?

    -

    If you want to try Criminal Case: The Conspiracy mod APK, you will need to follow these steps:

    -
      -
1. Find a reliable source: As we mentioned earlier, not all mod APKs are safe or legal to use. You will need to find a trustworthy website that offers the latest version of Criminal Case: The Conspiracy mod APK. You can search online or ask for recommendations from other players who have used it before. Make sure to read the reviews and ratings of the website and the mod APK before downloading it.
2. Download the mod APK file: Once you find a reliable source, you can download the mod APK file to your device. You may need to enable the option of "Unknown sources" in your device settings to allow the installation of apps from sources other than Google Play Store.
3. Install the mod APK file: After downloading the mod APK file, you can tap on it and follow the instructions to install it on your device. You may need to uninstall the original version of Criminal Case: The Conspiracy if you have it installed already.
4. Launch the game and enjoy: Once the installation is complete, you can launch the game and start playing with the mod features. You may need to grant some permissions to the game to access your device's resources.
    -

    Note: The steps may vary slightly depending on the source and version of the mod APK you download. Always be careful and cautious when downloading or installing any mod APK.
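One practical way to be careful is to compare the file you downloaded against a checksum, if the site you downloaded it from publishes one. The Python sketch below is only an illustration of that idea; the file name and the expected hash are placeholders, not real values for this mod.

```python
# Illustration only: verify a downloaded APK against a published SHA-256 hash.
# The file name and EXPECTED_SHA256 below are placeholders, not real values.
import hashlib

APK_PATH = "criminal-case-the-conspiracy-mod.apk"
EXPECTED_SHA256 = "replace-with-the-checksum-published-by-the-download-site"


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        # Hash the file in chunks so large APKs do not need to fit in memory.
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("checksum matches" if actual == EXPECTED_SHA256 else f"checksum mismatch: {actual}")
```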

    -

    How to play Criminal Case: The Conspiracy mod APK?

    -

    Playing Criminal Case: The Conspiracy mod APK is similar to playing the original version of the game, except that you have some extra features that make it easier and more fun. Here are some tips and tricks on how to play the game effectively and enjoyably:

    -
      -
    • Choose your cases wisely: There are 56 cases in Criminal Case: The Conspiracy, divided into 10 districts. Each case has a different difficulty level, ranging from easy to hard. You can choose which case to play based on your preference and skill level. You can also replay any case you have already solved if you want to improve your score or find more clues.
    • Use your hints sparingly: Hints are helpful when you are stuck or can't find an object in a scene. However, they are limited and cost energy to use. You can get more hints by watching ads or using real money, but with the mod APK, you don't have to worry about that. However, using too many hints can make the game less challenging and rewarding. Try to use your hints sparingly and only when necessary.
    • Collect stars and coins: Stars and coins are two important currencies in Criminal Case: The Conspiracy. Stars are used to unlock new scenes, analyze evidence, interrogate suspects, and arrest killers. Coins are used to buy items, customize your character, join a team, and access bonus features. You can earn stars by completing scenes and coins by completing tasks, achievements, daily rewards, or watching ads. With the mod APK, you can have unlimited stars and coins and access everything in the game without any restrictions.
    • Interact with other players: Criminal Case: The Conspiracy is not only a solo game but also a social game. You can interact with other players by joining a team, sending and receiving gifts, chatting with them, helping them in their investigations, or competing with them in leaderboards and tournaments. You can also invite your friends to play with you and share your progress on social media. Playing with other players can make the game more fun and exciting.
    -

    Conclusion

    -

    Criminal Case: The Conspiracy is a captivating hidden object adventure game that will keep you hooked for hours. You can join the Police of Grimsborough and solve a series of murder cases in different crime scenes. You can also use Criminal Case: The Conspiracy mod APK to enjoy the game without any limitations or interruptions. You can have unlimited energy, instant analysis, no ads, and more. However, you should be careful and cautious when downloading or installing any mod APK, as some of them may be unsafe or illegal to use. You should also play the game with respect and fairness, and not abuse the mod features. If you are ready to test your detective skills and have some fun, you can download Criminal Case: The Conspiracy mod APK from a reliable source and start playing today.

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Criminal Case: The Conspiracy and its mod APK:

| Question | Answer |
| --- | --- |
| Is Criminal Case: The Conspiracy mod APK free to use? | Yes, Criminal Case: The Conspiracy mod APK is free to use, but you may need to download it from a third-party website that may not be secure or legal. |
| Will I lose my progress if I use Criminal Case: The Conspiracy mod APK? | No, you will not lose your progress if you use Criminal Case: The Conspiracy mod APK, as long as you use the same account and device that you used for the original version of the game. |
| Will I get banned from the game if I use Criminal Case: The Conspiracy mod APK? | Possibly, yes. Using any mod APK is against the terms of service of the game developer and may result in your account being suspended or terminated. You should use Criminal Case: The Conspiracy mod APK at your own risk and discretion. |
| Can I play Criminal Case: The Conspiracy mod APK offline? | No, you cannot play Criminal Case: The Conspiracy mod APK offline, as the game requires an internet connection to access its features and content. |
| Can I play Criminal Case: The Conspiracy mod APK on PC? | Yes, you can play Criminal Case: The Conspiracy mod APK on PC, but you will need to use an Android emulator that can run APK files on your computer. |

    -
    -
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stumble Guys Versi 0.40 Beta and Experience the Latest Features.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stumble Guys Versi 0.40 Beta and Experience the Latest Features.md
deleted file mode 100644
index 3355ae2c0fb2ed75052f7fbb8e450baf59791420..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stumble Guys Versi 0.40 Beta and Experience the Latest Features.md
+++ /dev/null
@@ -1,94 +0,0 @@
-

    Download Stumble Guys Versi 0.40 Mod Apk Beta: A Guide for Android Users

    -

    If you are looking for a fun and chaotic multiplayer party game, you should check out Stumble Guys. This game lets you race with up to 32 players online through various obstacle courses. You can run, jump, dash, and slide past your opponents and overcome different challenges until you reach the finish line. But what if you want to enjoy more features and advantages in the game? Well, you can download Stumble Guys versi 0.40 mod apk beta and get unlimited coins, skins, emotes, and more. In this article, we will tell you everything you need to know about this mod apk beta version and how to download it on your Android device.

    -

    download stumble guys versi 0.40 mod apk beta


    Download Zip ⚙⚙⚙ https://ssurll.com/2uO130



    -

    What is Stumble Guys?

    -

    Stumble Guys is an online battle royale party game developed by Scopely. It was released in October 2021 for Android and iOS devices. The game is inspired by popular TV shows like Wipeout and Takeshi's Castle, where contestants have to go through hilarious and wacky obstacle courses. The game has 17 unique levels that change every round, making each match unpredictable and exciting. You can also customize your character with different outfits and emotes that you can unlock or buy with coins. The game has a colorful and whacky design that appeals to players of all ages.

    -

    What is Mod Apk Beta?

    -

    Mod apk beta is a modified version of an application that is not officially released by the developer. It usually offers some extra features or advantages that are not available in the original version. For example, a mod apk beta may have unlimited resources, unlocked items, or removed ads. However, mod apk beta versions are not always safe or reliable, as they may contain viruses, malware, or bugs that can harm your device or compromise your privacy. Therefore, you should always be careful when downloading mod apk beta versions from unknown sources.

    -

    Why Download Stumble Guys Versi 0.40 Mod Apk Beta?

    -

    Stumble Guys versi 0.40 mod apk beta is one of the latest versions of the game that has been modified by some fans or hackers. It offers some amazing benefits that can make your gameplay more enjoyable and rewarding. Here are some of the reasons why you should download Stumble Guys versi 0.40 mod apk beta:

    -
      -
    • You can get unlimited coins that you can use to buy or unlock any outfit or emote in the game.
    • You can get unlimited skins that you can use to change the appearance of your character.
    • You can get unlimited emotes that you can use to express yourself or taunt your opponents in the game.
    • You can get access to all the levels and modes in the game without any restrictions.
    • You can get faster loading speed and smoother performance in the game.
    -

    How to Download Stumble Guys Versi 0.40 Mod Apk Beta?

    -

    If you are interested in downloading Stumble Guys versi 0.40 mod apk beta, you need to follow these steps:

    -


    -
      -
1. Go to a trusted website that provides the download link for Stumble Guys versi 0.40 mod apk beta. You can search for it on Google or use a link shared by a source you trust.
2. Click on the download button and wait for the file to be downloaded on your device.
3. Go to your device settings and enable the installation of apps from unknown sources. This will allow you to install the mod apk beta version on your device.
4. Go to your file manager and locate the downloaded file. Tap on it and follow the instructions to install it on your device.
5. Launch the game and enjoy the mod apk beta version with unlimited coins, skins, emotes, and more.
    -
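Before tapping the downloaded file, you can also do a quick sanity check that it really is an APK and not a renamed junk file. An APK is just a ZIP archive that contains AndroidManifest.xml and at least one classes.dex, so the small Python sketch below checks for exactly that. It is only an illustration, the file name is a placeholder, and passing this check does not prove the file is safe.

```python
# Illustration only: check that a downloaded file is at least a well-formed APK
# (a ZIP archive containing AndroidManifest.xml and a .dex file). This says
# nothing about whether the file is safe. The file name is a placeholder.
import zipfile

APK_PATH = "stumble-guys-0.40-beta-mod.apk"


def looks_like_apk(path: str) -> bool:
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        names = set(zf.namelist())
        return "AndroidManifest.xml" in names and any(n.endswith(".dex") for n in names)


if __name__ == "__main__":
    print("looks like an APK" if looks_like_apk(APK_PATH) else "not a valid APK")
```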

    How to Play Stumble Guys Versi 0.40 Mod Apk Beta?

    -

    Playing Stumble Guys versi 0.40 mod apk beta is not much different from playing the official version of the game. You can still join online matches with other players or create your own private room with your friends. You can also choose from different game modes, such as classic, team, or custom. The only difference is that you have more options and advantages in the mod apk beta version. Here are some tips and tricks on how to play and win the game with the mod apk beta version:

    -
      -
    • Use your coins wisely. You can buy or unlock any outfit or emote in the game, but you should also save some coins for future updates or new items.
    • Use your skins creatively. You can change your skin anytime in the game, so you can use it to confuse your opponents or blend in with the environment.
    • Use your emotes strategically. You can use your emotes to communicate with your teammates or taunt your enemies in the game. But be careful not to use them too much or at the wrong time, as they may distract you or expose you to danger.
    • Use your skills smartly. You can run, jump, dash, and slide in the game, but you should also know when and how to use them. For example, you can dash to avoid obstacles or catch up with other players, but you should also conserve your stamina and avoid dashing into traps or pitfalls.
    • Use your luck wisely. The game is based on random and chaotic events, so you never know what will happen next. Sometimes you may get lucky and find a shortcut or a power-up, but sometimes you may get unlucky and face a difficult challenge or a sabotage. You should always be prepared for anything and adapt to the situation quickly.
    -

    Conclusion

    -

    Stumble Guys versi 0.40 mod apk beta is a fun and exciting way to enjoy the game with more features and advantages. You can download it easily on your Android device and play it online with other players or with your friends. You can also customize your character with unlimited coins, skins, and emotes that you can get from the mod apk beta version. However, you should also be careful when downloading mod apk beta versions from unknown sources, as they may contain viruses, malware, or bugs that can harm your device or compromise your privacy. You should also respect the original developers of the game and support them by playing the official version of the game as well.

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Stumble Guys versi 0.40 mod apk beta:

    -
      -
    1. Is Stumble Guys versi 0.40 mod apk beta safe to download?
      -Stumble Guys versi 0.40 mod apk beta is not officially released by the developer of the game, so it may not be safe or reliable to download. It may contain viruses, malware, or bugs that can harm your device or compromise your privacy. Therefore, you should always be careful when downloading mod apk beta versions from unknown sources and scan them with an antivirus before installing them on your device.
2. Is Stumble Guys versi 0.40 mod apk beta compatible with my device?
      -Stumble Guys versi 0.40 mod apk beta is compatible with most Android devices that have Android 5.0 or higher operating system. However, some devices may not support the mod apk beta version due to different specifications or settings. Therefore, you should always check the compatibility of the mod apk beta version with your device before downloading it.
3. Can I play Stumble Guys versi 0.40 mod apk beta offline?
      -No, you cannot play Stumble Guys versi 0.40 mod apk beta offline. The game requires an internet connection to play online with other players or with your friends. You need to have a stable and fast internet connection to play the game smoothly and without any lag or interruption.
4. Can I play Stumble Guys versi 0.40 mod apk beta with my friends?
      -Yes, you can play Stumble Guys versi 0.40 mod apk beta with your friends. You can create your own private room and invite your friends to join you. You can also join other public rooms and play with other players from around the world. However, you should be aware that some players may not have the mod apk beta version and may report you for cheating or hacking the game. Therefore, you should be careful when playing with strangers and avoid using the mod apk beta version in a way that may ruin the fun or fairness of the game for others.
5. Can I update Stumble Guys versi 0.40 mod apk beta to the latest version?
      -No, you cannot update Stumble Guys versi 0.40 mod apk beta to the latest version. The mod apk beta version is not compatible with the official version of the game and may not work properly if you try to update it. You may also lose your progress, coins, skins, emotes, and other features that you got from the mod apk beta version. Therefore, you should always backup your data before updating the game and wait for a new mod apk beta version to be released by the modders or hackers.
    -

    I hope this article has helped you learn more about Stumble Guys versi 0.40 mod apk beta and how to download it on your Android device. If you have any questions or feedback, please leave a comment below. And if you enjoyed this article, please share it with your friends and family who may also be interested in playing this game. Thank you for reading and have fun!

    -
    -
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Drive with Traffic Full Real HUD and More in Extreme Car Driving Simulator.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Drive with Traffic Full Real HUD and More in Extreme Car Driving Simulator.md
deleted file mode 100644
index ebab48dd82a3b37b11676174524226564fc739f1..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Drive with Traffic Full Real HUD and More in Extreme Car Driving Simulator.md
+++ /dev/null
@@ -1,163 +0,0 @@
-
    -

    Extreme Car Driving Simulator 3D Download: How to Play the Best Car Simulator Game on Your PC or Mobile Device

    -

    Do you love driving fast cars and performing amazing stunts? Do you want to experience the thrill of racing and drifting in a realistic open world environment? If you answered yes, then you should try Extreme Car Driving Simulator, the best car simulator game for PC and mobile devices.

    -

    What is Extreme Car Driving Simulator?

    -

    Extreme Car Driving Simulator is a casual racing game developed by AxesInMotion Racing. It was released in 2014 and has since become one of the most popular car simulator games on the market. The game lets you drive, drift, and feel a variety of sports cars in a detailed 3D city. You can choose from different modes, such as free roam, checkpoint, traffic, or ghost mode, and explore the city at your own pace. You can also customize your car with different colors, wheels, and stickers.

    -

    extreme car driving simulator 3d download


Download File > https://ssurll.com/2uNWg4



    -

    Features of the game

    -

    Some of the features that make Extreme Car Driving Simulator stand out are:

    -
      -
    • Full real HUD including revs, gear, and speed.
    • ABS, TC, and ESP simulation. You can also turn them off.
    • Realistic car damage. Crash your car and see the effects.
    • Accurate physics. Feel the force of gravity and inertia as you drive.
    • Control your car with a steering wheel, accelerometer, or arrows.
    • Several different cameras. Switch between first-person, third-person, or top-down view.
    • Gamepad support. Play with your favorite controller.
    -

    How to download and install the game

    -

    The game is available for both PC and mobile devices. You can download it from various sources depending on your device and preference. Here are some of the options:

| Device | Source | Link |
| --- | --- | --- |
| PC | BlueStacks | Play Extreme Car Driving Simulator on PC - BlueStacks |
| Android | Google Play Store | Extreme Car Driving Simulator - Apps on Google Play |
| iOS | App Store | Extreme Car Driving Simulator on the App Store |
| Web browser | CrazyGames | Extreme Car Driving Simulator - CrazyGames |
    -

    How to play Extreme Car Driving Simulator on PC

    -

    Benefits of playing on PC

    -

    If you have a PC, you might want to consider playing Extreme Car Driving Simulator on it instead of your mobile device. Here are some of the benefits of playing on PC:

    -
      -
• Better graphics. Enjoy the stunning 3D graphics and smooth animations on a larger screen.
• Better performance. Avoid lagging, crashing, or overheating issues that might occur on mobile devices.
• Better control. Use your keyboard and mouse or a gamepad to control your car with more precision and comfort.
• Better sound. Hear the realistic engine sounds and background music with better quality and volume.
    -

    Steps to play on PC using BlueStacks

    -

    One of the easiest ways to play Extreme Car Driving Simulator on PC is to use BlueStacks, a popular Android emulator that lets you run mobile apps and games on your PC. Here are the steps to follow:

    -
      -
1. Download and install BlueStacks from the official BlueStacks website.
2. Launch BlueStacks and sign in with your Google account.
3. Search for Extreme Car Driving Simulator in the search bar.
4. Click on the game icon and install it from the Google Play Store.
5. Once installed, click on the game icon on the home screen to start playing.
    -

    How to play Extreme Car Driving Simulator on mobile devices

    -

    Benefits of playing on mobile devices

    -

    If you prefer to play Extreme Car Driving Simulator on your mobile device, you can also enjoy some benefits that are unique to this platform. Here are some of them:

    -
      -
    • Portability. Play the game anytime and anywhere you want, as long as you have your device and an internet connection.
    • Accessibility. Download the game for free from the Google Play Store or the App Store, depending on your device.
    • Simplicity. Use the touch screen to control your car with simple gestures and taps.
    • Variety. Choose from different cars and modes that are exclusive to the mobile version of the game.
    -

    Steps to play on Android using Google Play Store

    -

    If you have an Android device, you can download and play Extreme Car Driving Simulator from the Google Play Store. Here are the steps to follow:

    -
      -
1. Open the Google Play Store app on your device.
2. Search for Extreme Car Driving Simulator in the search bar.
3. Tap on the game icon and install it.
4. Once installed, tap on the game icon on your home screen or app drawer to start playing.
    -

    Steps to play on iOS using App Store

    -

    If you have an iOS device, you can download and play Extreme Car Driving Simulator from the App Store. Here are the steps to follow:

    -


    -
      -
1. Open the App Store app on your device.
2. Search for Extreme Car Driving Simulator in the search bar.
3. Tap on the game icon and install it.
4. Once installed, tap on the game icon on your home screen to start playing.
    -

    Tips and tricks to master the game

    -

    Now that you know how to download and play Extreme Car Driving Simulator on your PC or mobile device, you might want to learn some tips and tricks to master the game and have more fun. Here are some of them:

    -

    Choose the right car for each mode

    -

    The game offers a variety of cars that you can drive, each with different characteristics and performance. You can choose from sports cars, muscle cars, off-road vehicles, police cars, and more. Depending on the mode you are playing, you might want to choose a car that suits your style and preference. For example, if you are playing in free roam mode, you might want to choose a fast and agile car that can maneuver easily in the city. If you are playing in traffic mode, you might want to choose a sturdy and durable car that can withstand collisions with other vehicles.

    -

    Use the different camera angles

    -

    The game allows you to switch between different camera angles while driving. You can choose from first-person, third-person, or top-down view. Each camera angle has its own advantages and disadvantages. For example, if you want to feel more immersed in the game, you might want to use the first-person view. If you want to see more of your surroundings and avoid obstacles, you might want to use the third-person view. If you want to have a bird's eye view of the city and plan your route, you might want to use the top-down view. Experiment with different camera angles and find out which one works best for you.

    -

    Customize your controls

    -

    The game also lets you customize your controls according to your preference. You can choose from different options such as steering wheel, accelerometer, or arrows. You can also adjust the sensitivity and position of each control option.

    Customize your controls to suit your comfort and convenience. You can also use a gamepad if you are playing on PC or a compatible device.

    -

    Perform stunts and drifts

    -

    One of the most fun aspects of the game is performing stunts and drifts with your car. You can use the ramps, loops, bridges, and other structures in the city to launch your car into the air and do flips, spins, and rolls. You can also use the handbrake and the nitro boost to drift around corners and curves. Performing stunts and drifts will not only make you look cool, but also earn you more points and coins that you can use to unlock and upgrade new cars.

    -

    Conclusion

    -

    Extreme Car Driving Simulator is a game that will satisfy your need for speed and adrenaline. It is a game that lets you drive, race, and drift in a realistic 3D city with various cars and modes. You can download and play the game on your PC or mobile device, depending on your preference. You can also follow some tips and tricks to master the game and have more fun. If you are looking for a car simulator game that is easy to play but hard to put down, Extreme Car Driving Simulator is the game for you.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Extreme Car Driving Simulator:

    -
      -
    1. Is Extreme Car Driving Simulator free to play?

      Yes, Extreme Car Driving Simulator is free to play. However, it contains ads and in-app purchases that you can disable or avoid if you wish.

      -
2. How many cars are there in Extreme Car Driving Simulator?

      There are over 30 cars in Extreme Car Driving Simulator, ranging from sports cars, muscle cars, off-road vehicles, police cars, and more. You can unlock them by earning coins or buying them with real money.

      -
3. How many modes are there in Extreme Car Driving Simulator?

      There are four modes in Extreme Car Driving Simulator: free roam, checkpoint, traffic, and ghost mode. Each mode has its own objectives and challenges that you can complete for more points and coins.

      -
4. What are the minimum system requirements for Extreme Car Driving Simulator?

      The minimum system requirements for Extreme Car Driving Simulator are:

      -
        -
      • For PC: Windows 7 or higher, 4 GB RAM, 4 GB disk space, Intel or AMD processor.
      • For Android: Android 4.4 or higher, 1 GB RAM, 100 MB disk space.
      • For iOS: iOS 9.0 or higher, iPhone 5S or newer, iPad Air or newer.
      -
5. How can I contact the developer of Extreme Car Driving Simulator?

      You can contact the developer of Extreme Car Driving Simulator by emailing them at support@axesinmotion.com or visiting their website at AxesInMotion Racing - The best racing games for mobile devices.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/siya02/Konakni-TTS/ttsv/scripts/glow/train_glow.sh b/spaces/siya02/Konakni-TTS/ttsv/scripts/glow/train_glow.sh deleted file mode 100644 index f12939d5d4563de555bf49408fa7a27397e0dae3..0000000000000000000000000000000000000000 --- a/spaces/siya02/Konakni-TTS/ttsv/scripts/glow/train_glow.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash - -gender='male' - -config='../../config/glow/'$gender'.json' -modeldir='../../checkpoints/glow/'$gender -logdir='../../logs/glow/'$gender -init=1 # 1 if start from scratch. 0 if start from last checkpoint - - -#################################################### - -if [[ $init -eq 1 ]] -then - python ../../src/glow_tts/init.py -c $config -m $modeldir -l $logdir -fi -python ../../src/glow_tts/train.py -c $config -m $modeldir -l $logdir diff --git a/spaces/skyler36237/vits-uma-genshin-honkai/attentions.py b/spaces/skyler36237/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/skyler36237/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, 
proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." 
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/skylord/surubhi/app.py b/spaces/skylord/surubhi/app.py deleted file mode 100644 index 29298eddc17029169f94102b8629f150bc1af39c..0000000000000000000000000000000000000000 --- a/spaces/skylord/surubhi/app.py +++ /dev/null @@ -1,22 +0,0 @@ -#import gradio as gr - -#def greet(name): -# return "Hello " + name + "!!" - -#iface = gr.Interface(fn=greet, inputs="text", outputs="text") -#iface.launch() - -import gradio as gr - -def greet(name, is_morning, temperature): - salutation = "Good morning" if is_morning else "Good evening" - greeting = "%s %s. 
It is %s degrees today" % ( - salutation, name, temperature) - celsius = (temperature - 32) * 5 / 9 - return greeting, round(celsius, 2) - -iface = gr.Interface( - fn=greet, - inputs=["text", "checkbox", gr.inputs.Slider(0, 100)], - outputs=["text", "number"]) -iface.launch().launch(auth=("admin", "allow4321")) \ No newline at end of file diff --git a/spaces/sohojoe/soho-clip-embeddings-explorer/api_test.py b/spaces/sohojoe/soho-clip-embeddings-explorer/api_test.py deleted file mode 100644 index 30bd64a74d252abf819fd3553e4c7ea4cbf9e036..0000000000000000000000000000000000000000 --- a/spaces/sohojoe/soho-clip-embeddings-explorer/api_test.py +++ /dev/null @@ -1,83 +0,0 @@ -from gradio_client import Client -import time -import numpy as np - -import torch - -from api_helper import preprocess_image, encode_numpy_array -clip_image_size = 224 -num_steps = 1000 -test_image_url = "https://static.wixstatic.com/media/4d6b49_42b9435ce1104008b1b5f7a3c9bfcd69~mv2.jpg/v1/fill/w_454,h_333,fp_0.50_0.50,q_90/4d6b49_42b9435ce1104008b1b5f7a3c9bfcd69~mv2.jpg" - - -client = Client("http://127.0.0.1:7860/") - -print("do we have cuda", torch.cuda.is_available()) - -def test_text(): - result = client.predict( - "Howdy!", # str representing string value in 'Input' Textbox component - api_name="/text_to_embeddings" - ) - return(result) - -def test_image(): - result = client.predict( - test_image_url, # str representing filepath or URL to image in 'Image Prompt' Image component - api_name="/image_to_embeddings" - ) - return(result) - -def test_image_as_payload(payload): - result = client.predict( - payload, # image as string payload - api_name="/image_as_payload_to_embeddings" - ) - return(result) - -# performance test for text -start = time.time() -for i in range(num_steps): - test_text() -end = time.time() -average_time_seconds = (end - start) / num_steps -print("Average time for text: ", average_time_seconds, "s") -print("Average time for text: ", average_time_seconds * 1000, "ms") -print("Number of predictions per second for text: ", 1 / average_time_seconds) - -# performance test for image -start = time.time() -for i in range(num_steps): - test_image() -end = time.time() -average_time_seconds = (end - start) / num_steps -print("Average time for image: ", average_time_seconds, "s") -print("Average time for image: ", average_time_seconds * 1000, "ms") -print("Number of predictions per second for image: ", 1 / average_time_seconds) - - - -# download image from url -import requests -from PIL import Image -from io import BytesIO -response = requests.get(test_image_url) -input_image = Image.open(BytesIO(response.content)) -input_image = input_image.convert('RGB') -# convert image to numpy array -input_image = np.array(input_image) - -if input_image.shape[0] > clip_image_size or input_image.shape[1] > clip_image_size: - input_image = preprocess_image(input_image, clip_image_size) -payload = encode_numpy_array(input_image) - -# performance test for image as payload -start = time.time() -for i in range(num_steps): - test_image_as_payload(payload) -end = time.time() -average_time_seconds = (end - start) / num_steps -print("Average time for image as payload: ", average_time_seconds, "s") -print("Average time for image as payload: ", average_time_seconds * 1000, "ms") -print("Number of predictions per second for image as payload: ", 1 / average_time_seconds) - diff --git a/spaces/sophiamyang/Panel_PDF_QA/README.md b/spaces/sophiamyang/Panel_PDF_QA/README.md deleted file mode 100644 index 
ebe6d2a8b2b582e4e4a784f3ea7ac53b7e732ea6..0000000000000000000000000000000000000000 --- a/spaces/sophiamyang/Panel_PDF_QA/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Panel PDF QA -emoji: 📈 -colorFrom: pink -colorTo: red -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/stephenleo/stripnet/README.md b/spaces/stephenleo/stripnet/README.md deleted file mode 100644 index 6dd8aee5963effd8e8791210b500ee1df271c331..0000000000000000000000000000000000000000 --- a/spaces/stephenleo/stripnet/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: 'STriP Net: Semantic Similarity of Scientific Papers Network' -emoji: 🕸️ -colorFrom: red -colorTo: blue -sdk: streamlit -app_file: app.py -pinned: true ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/stomexserde/gpt4-ui/Examples/Amar Akbar Anthony Remake 2 Hd Tamil Movie Free Download High Quality.md b/spaces/stomexserde/gpt4-ui/Examples/Amar Akbar Anthony Remake 2 Hd Tamil Movie Free Download High Quality.md deleted file mode 100644 index 8888d3de4aafbcc9b93404cd654ea60e857e714a..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Amar Akbar Anthony Remake 2 Hd Tamil Movie Free Download High Quality.md +++ /dev/null @@ -1,17 +0,0 @@ -
    -

    Amar Akbar Anthony Remake 2: A Comedy Thriller You Don't Want to Miss

    -

    If you are looking for a fun and entertaining movie to watch, you should check out Amar Akbar Anthony Remake 2, the sequel to the 2015 Malayalam hit film Amar Akbar Anthony. This movie is a comedy thriller that follows the adventures of three friends who have different religious backgrounds and personalities. They get involved in a series of hilarious and dangerous situations that test their friendship and loyalty.

    -

    Amar Akbar Anthony Remake 2 is directed by Nadirshah, who also helmed the first film. The movie stars Prithviraj Sukumaran, Jayasurya, and Indrajith Sukumaran reprising their roles as Amar, Akbar, and Anthony respectively. The movie also features Namitha Pramod, Asif Ali, Ramesh Pisharody, and Dharmajan Bolgatty in supporting roles. The movie has a catchy soundtrack composed by Nadirshah himself, with lyrics by Santhosh Varma and Manu Manjith.

    -

    Amar Akbar Anthony Remake 2 hd tamil movie free download


Download: https://urlgoal.com/2uI6q4



    -

    The movie is a remake of the 1977 Bollywood classic Amar Akbar Anthony, which starred Amitabh Bachchan, Vinod Khanna, and Rishi Kapoor in the lead roles. The original film was a blockbuster that became a cult classic among Indian cinema lovers. The Malayalam remake was also a huge success, earning positive reviews from critics and audiences alike. The movie was praised for its comedy, action, and social message.

    -

    Amar Akbar Anthony Remake 2 is expected to release in Tamil soon, as Nadirshah has announced his plans to direct the Tamil version of the film. The movie will be produced by Suresh Balaje and George Pius under the banner of Wide Angle Creations. The cast and crew of the Tamil version are yet to be finalized.

    -

    If you want to watch Amar Akbar Anthony Remake 2 in HD quality for free, you can download it from our website. We provide you with the best and latest movies in various languages and genres. You can enjoy watching your favorite movies without any hassle or interruption. Just click on the link below and start downloading Amar Akbar Anthony Remake 2 hd tamil movie free download now!

    -Download Amar Akbar Anthony Remake 2 hd tamil movie free download here - -

    Amar Akbar Anthony Remake 2 is not just a comedy thriller, but also a movie that explores the themes of friendship, religion, and identity. The movie shows how the three friends overcome their differences and prejudices and stand by each other in times of trouble. The movie also portrays the diversity and harmony of Kerala's culture and society, where people of different faiths and backgrounds coexist peacefully.

    -

    The movie has some memorable scenes and dialogues that will make you laugh and think. For example, there is a scene where Amar, Akbar, and Anthony disguise themselves as a Hindu priest, a Muslim cleric, and a Christian pastor respectively to escape from some goons. There is also a scene where the three friends sing a song that celebrates their unity and diversity. The movie also has some thrilling action sequences and twists that will keep you on the edge of your seat.

    -

    -

    Amar Akbar Anthony Remake 2 is a movie that you should not miss if you are a fan of comedy, thriller, or Malayalam cinema. The movie is a perfect blend of humor, suspense, and emotion that will entertain you from start to finish. The movie is also a tribute to the original Amar Akbar Anthony, which is considered to be one of the best movies ever made in Indian cinema.

    -

    So what are you waiting for? Download Amar Akbar Anthony Remake 2 hd tamil movie free download from our website and enjoy watching this amazing movie with your friends and family. You will not regret it!

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Android Multi Tools V1.02b Tool.epub.md b/spaces/stomexserde/gpt4-ui/Examples/Android Multi Tools V1.02b Tool.epub.md deleted file mode 100644 index ca5f560a1f1a6a6ffd97d737e82e57dd6c5588fb..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Android Multi Tools V1.02b Tool.epub.md +++ /dev/null @@ -1,43 +0,0 @@ - -

    How to Use Android Multi Tools V1.02b Tool.epub to Unlock Your Android Device

    -

    If you have forgotten your Android device's pattern lock, pin lock, password, or face unlock, you might be looking for a way to reset it without losing your data. One of the best tools that can help you with this task is Android Multi Tools V1.02b Tool.epub. This tool is a free and easy-to-use program that allows you to perform various tasks on your Android device, such as removing the lock, bypassing the FRP (Factory Reset Protection), wiping data and cache, booting into different modes, checking device information, and more.

    -

    In this article, we will show you how to download and use Android Multi Tools V1.02b Tool.epub to unlock your Android device in a few simple steps. But before we start, let's see what are the features and benefits of this tool.

    -

    Android Multi Tools V1.02b Tool.epub


DOWNLOAD: https://urlgoal.com/2uIcj4



    - -

    Features and Benefits of Android Multi Tools V1.02b Tool.epub

    -
      -
    • It can remove any type of lock from your Android device, such as pattern lock, pin lock, password, or face unlock.
    • -
    • It can reset your Gmail password and bypass the FRP lock on your device.
    • -
    • It can wipe data and cache on your device in fastboot mode.
    • -
    • It can boot your device into different modes, such as fastboot mode, bootloader mode, or recovery mode.
    • -
    • It can check your device's software and hardware details, such as CPU architecture, Android version, RAM allocation, etc.
    • -
    • It can launch the command prompt on your PC with a single click.
    • -
    • It is compatible with any Android device and any Windows PC.
    • -
    • It is free and easy to use.
    • -
    - -

    How to Download and Install Android Multi Tools V1.02b Tool.epub

    -

    To use this tool, you will need to download and install it on your Windows PC. Here are the steps to do so:

    -
      -
    1. Go to https://androidmultitools.com/ and click on the direct download link or the mirror download link to download the tool.
    2. -
    3. Extract the downloaded file using any file extractor program.
    4. -
    5. Open the extracted folder and run the Android Multi Tools v1.02b.exe file as an administrator.
    6. -
    7. The tool will launch on your PC and show you a list of options.
    8. -
    - -

    How to Use Android Multi Tools V1.02b Tool.epub to Unlock Your Android Device

    -

    Now that you have installed the tool on your PC, you can use it to unlock your Android device. Here are the steps to do so:

    -

    -
      -
    1. Enable developer options and USB debugging on your Android device. To do this, go to Settings > About phone > Tap on Build number seven times > Go back to Settings > Developer options > Enable USB debugging.
    2. -
3. Connect your Android device to your PC using a USB cable (an optional connection check is sketched after these steps).
    4. -
    5. Select option 2 from the tool's menu: Reset Face/PIN Lock.
    6. -
    7. The tool will ask you to confirm your choice by pressing Y or N. Press Y and hit Enter.
    8. -
    9. The tool will start removing the lock from your device. Wait for a few seconds until it finishes.
    10. -
    11. Your device will reboot automatically and you will be able to access it without any lock.
    12. -
    - -
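The steps above assume that your PC can actually see the phone over USB debugging before the tool runs. As a quick sanity check, you can list connected devices first; the helper below is a hypothetical illustration and is not part of Android Multi Tools itself. It only assumes Google's standard `adb` tool (from the Android platform-tools package) is installed and on your PATH.

```python
import subprocess

def adb_device_connected() -> bool:
    """Return True if `adb devices` reports at least one device in the 'device' state."""
    try:
        out = subprocess.run(
            ["adb", "devices"], capture_output=True, text=True, check=True
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False  # adb is missing from PATH or failed to start
    # The first line is the "List of devices attached" header; real entries follow.
    entries = [line for line in out.splitlines()[1:] if line.strip()]
    return any(line.split()[-1] == "device" for line in entries)

if __name__ == "__main__":
    if adb_device_connected():
        print("Device detected - you can launch Android Multi Tools.")
    else:
        print("No device found - re-check USB debugging and the cable.")
```

If no device shows up, repeat step 1 and try another cable or USB port before starting the unlock process.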

    Conclusion

    -

    Android Multi Tools V1.02b Tool.epub is a handy tool that can help you unlock your Android device if you have forgotten your lock. It can also perform other tasks such as wiping data and cache, booting into different modes, checking device information, and more. It is free and easy to use and compatible with any Android device and any Windows PC. We hope this article has helped you learn how to use this tool to unlock your Android device. If you have any questions or feedback, feel free to leave a comment below.

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Cad Image Dll Irfanview 80 REPACK.md b/spaces/stomexserde/gpt4-ui/Examples/Cad Image Dll Irfanview 80 REPACK.md deleted file mode 100644 index 794d2208bffe194f3c0c2b3b915bd8134312d0f1..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Cad Image Dll Irfanview 80 REPACK.md +++ /dev/null @@ -1,81 +0,0 @@ - -

    CAD Image DLL IrfanView 80: What You Need to Know

    -

    If you are looking for a way to view CAD files on your computer without installing expensive software or compromising on quality, you may want to consider using CAD Image DLL with IrfanView. In this article, we will explain what CAD Image DLL is, what IrfanView is, and why they are useful for viewing CAD files. We will also show you how to install and use CAD Image DLL with IrfanView, how to troubleshoot some common issues, and where to find more resources and support.

    -

    Cad Image Dll Irfanview 80


    Download File ⚹⚹⚹ https://urlgoal.com/2uI7sq



    -

    What is CAD Image DLL?

    -

    CAD Image DLL is a plugin that allows IrfanView to read some rare image formats, including CAD formats such as DWG, DXF, HPGL, SVG, and CGM. These formats are commonly used for creating and storing vector graphics that represent technical drawings or designs. With CAD Image DLL installed in your IrfanView plugins folder, you can open and view these files in IrfanView as raster images. CAD Image DLL is developed by CADSoftTools, a company that specializes in CAD software and components.

    -

    What is IrfanView?

    -

    IrfanView is a popular image viewer and editor that supports many file formats and features. It is fast, compact, and easy to use. It can also perform basic editing tasks such as cropping, resizing, rotating, color correction, and effects. IrfanView is free for non-commercial use and has a large community of users and developers. It can be extended with plugins that add more functionality and support for more formats. IrfanView is developed by Irfan Skiljan, a software engineer from Bosnia and Herzegovina.

    -

    Why Use CAD Image DLL with IrfanView?

    -

    Benefits of CAD Image DLL

    -

    Using CAD Image DLL with IrfanView has several benefits, such as:

    -
      -
    • Compatibility: You can view CAD files on any Windows system without installing any other software or drivers. You can also view CAD files on other platforms using IrfanView with Wine or CrossOver.
    • -
    • Speed: You can open and view CAD files quickly and smoothly with IrfanView's optimized performance and interface. You can also batch convert CAD files to other formats using IrfanView's command line options or GUI.
    • -
    • Quality: You can view CAD files with high resolution and quality using CAD Image DLL's advanced rendering engine. You can also adjust the image quality settings such as anti-aliasing, smoothing, and dithering.
    • -
    • Convenience: You can view CAD files with ease using IrfanView's user-friendly features such as drag-and-drop, slideshow, thumbnail, fullscreen, zoom, rotate, print, and export. You can also customize IrfanView's appearance and behavior according to your preferences.
    • -
    -

    Limitations of CAD Image DLL

    -

    Using CAD Image DLL with IrfanView also has some limitations, such as:

    -

    -
      -
    • Rasterization: You can only view CAD files as raster images, not as vector graphics. This means that you cannot edit or modify the original CAD data or properties. You also cannot zoom in infinitely without losing quality.
    • -
    • Registration: You need to register CAD Image DLL with a valid license key to use it with IrfanView. The license key is based on your computer's hardware ID and cannot be transferred to another computer. You can request a free trial key or purchase a full key from the CADSoftTools website.
    • -
    • Licensing: You need to comply with the terms and conditions of the CAD Image DLL license agreement when using it with IrfanView. The license agreement states that you can only use CAD Image DLL for non-commercial purposes or for evaluation purposes for a limited period of time. You also cannot distribute or modify CAD Image DLL without permission from CADSoftTools.
    • -
    -

    How to Install and Use CAD Image DLL with IrfanView?

    -

    Installation Steps

    -

    To install and use CAD Image DLL with IrfanView, you need to follow these steps:

    -
      -
    1. Download: Download the latest version of CAD Image DLL from the CADSoftTools website. The file name is cadimage.dll.zip. You also need to download the latest version of IrfanView from the IrfanView website. The file name is iview80_setup.exe.
    2. -
3. Copy: Extract the cadimage.dll file from the zip archive and copy it to the Plugins folder in your IrfanView installation directory. The default location is C:\Program Files\IrfanView\Plugins. (A small scripted version of this step is sketched after these steps.)
    4. -
    5. Register: Run the reg_cadimage.bat file in the Plugins folder to register CAD Image DLL with your system. You need to have administrator rights to do this. You also need to enter your license key when prompted. If you don't have a license key, you can request a free trial key or purchase a full key from the CADSoftTools website.
    6. -
    7. Configure: Run IrfanView and go to Options > Properties/Settings > PlugIns > PlugIns 8BF Filters/DLLs > Add new PlugIn (DLL) path. Enter the path to the Plugins folder in your IrfanView installation directory and click OK. This will enable IrfanView to recognize CAD Image DLL as a plugin.
    8. -
    -
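If you set up IrfanView on several machines, the Copy step above can also be scripted. The snippet below is only a sketch under assumed paths (the default install directory mentioned in the Copy step and a hypothetical download location); copying into Program Files and running reg_cadimage.bat still require administrator rights, exactly as described above.

```python
import shutil
from pathlib import Path

# Assumed locations - adjust them to your own system.
plugins_dir = Path(r"C:\Program Files\IrfanView\Plugins")      # default from the Copy step
extracted_dll = Path(r"C:\Users\you\Downloads\cadimage.dll")   # where you unpacked the zip

# Copying into Program Files normally needs an elevated (administrator) prompt.
shutil.copy2(extracted_dll, plugins_dir / "cadimage.dll")
print("cadimage.dll copied.")
print("Now run", plugins_dir / "reg_cadimage.bat",
      "as administrator and enter your license key when prompted.")
```

The Register and Configure steps remain manual, since they need elevation and IrfanView's own settings dialog.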

    Usage Tips

    -

    To use CAD Image DLL with IrfanView, you can follow these tips:

    -
      -
• Opening files: To open a CAD file in IrfanView, you can use the File > Open menu, the Open button on the toolbar, or the drag-and-drop method. You can also use the File > Batch Conversion/Rename menu or the command line options to open multiple CAD files at once (a scripted batch-conversion sketch follows this list).
    • -
    • Zooming: To zoom in or out of a CAD file in IrfanView, you can use the View > Zoom menu, the Zoom buttons on the toolbar, or the mouse wheel. You can also use the View > Fit window to image menu or the F key to fit the image to the window size.
    • -
    • Rotating: To rotate a CAD file in IrfanView, you can use the Image > Rotate menu, the Rotate buttons on the toolbar, or the R and L keys. You can also use the Image > Auto adjust colors menu or the Shift + G key to adjust the colors of the image.
    • -
    • Printing: To print a CAD file in IrfanView, you can use the File > Print menu, the Print button on the toolbar, or the Ctrl + P key. You can also use the File > Print Preview menu or the P key to preview the print settings and layout.
    • -
    • Exporting: To export a CAD file in IrfanView, you can use the File > Save As menu, the Save button on the toolbar, or the S key. You can also use the File > Save for Web (plugin) menu or the Shift + S key to optimize and save the image for web publishing.
    • -
    -
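For the command-line route mentioned in the "Opening files" tip, conversions can be scripted as well. The loop below is only a sketch: it relies on IrfanView's documented /convert= switch, assumes example folders and the default 64-bit install path (adjust both to your system), and assumes CAD Image DLL is already registered so IrfanView can decode the DWG files.

```python
import subprocess
from pathlib import Path

# Assumed paths - change them to match your installation and folders.
IRFANVIEW = Path(r"C:\Program Files\IrfanView\i_view64.exe")
SRC_DIR = Path(r"C:\drawings")        # folder holding the .dwg files
OUT_DIR = Path(r"C:\drawings\png")    # destination for the rasterized copies

OUT_DIR.mkdir(parents=True, exist_ok=True)

for dwg in sorted(SRC_DIR.glob("*.dwg")):
    png = OUT_DIR / (dwg.stem + ".png")
    # /convert= loads the input file, saves it under the given name/format,
    # and closes IrfanView again.
    subprocess.run([str(IRFANVIEW), str(dwg), f"/convert={png}"], check=True)
    print(f"Converted {dwg.name} -> {png.name}")
```

The same loop works for DXF, HPGL, SVG, or CGM input; only the glob pattern and the output extension change.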

    How to Troubleshoot CAD Image DLL with IrfanView?

    -

    Common Errors and Solutions

    -

    Sometimes, you may encounter some errors when using CAD Image DLL with IrfanView. Here are some examples of common errors and solutions:

    - - - - - - -
| Error | Solution |
| --- | --- |
| CAD Image DLL is not found or not registered. | Make sure that you have copied cadimage.dll to your IrfanView plugins folder and run reg_cadimage.bat as administrator. If you still get this error, try reinstalling CAD Image DLL or contacting CADSoftTools support. |
| CAD Image DLL is not licensed or has expired. | Make sure that you have entered a valid license key when registering CAD Image DLL. If you don't have a license key, you can request a free trial key or purchase a full key from the CADSoftTools website. If you still get this error, try re-registering CAD Image DLL or contacting CADSoftTools support. |
| CAD file format is not supported by CAD Image DLL. | Make sure that you are trying to open a supported CAD file format such as DWG, DXF, HPGL, SVG, or CGM. If you are not sure about the file format, you can check it with a hex editor or a file identifier tool. If you still get this error, try updating CAD Image DLL to the latest version or contacting CADSoftTools support. |
| CAD file is corrupted or damaged. | Make sure that you have downloaded or copied the CAD file correctly and that it is not infected by a virus or malware. You can also try opening the file with another program that supports CAD formats such as AutoCAD or LibreCAD. If you still get this error, try repairing the file with a CAD repair tool or contacting the file creator or provider. |
    -

    Resources and Support

    -

    If you need more resources and support for using CAD Image DLL with IrfanView, you can check out these links:

    -
      -
    • CAD Image DLL documentation: This is the official documentation for CAD Image DLL that explains its features, functions, and parameters. You can find it here: [CAD Image DLL documentation].
    • -
    • CAD Image DLL forum: This is the official forum for CAD Image DLL users and developers where you can ask questions, share feedback, and report bugs. You can find it here: [CAD Image DLL forum].
    • -
    • CAD Image DLL FAQs: This is a list of frequently asked questions and answers about CAD Image DLL that covers common topics such as installation, licensing, and usage. You can find it here: [CAD Image DLL FAQs].
    • -
    • IrfanView website: This is the official website for IrfanView that provides downloads, updates, plugins, and information. You can find it here: [IrfanView website].
    • -
    • IrfanView support: This is the official support page for IrfanView that provides contact details, FAQs, forums, and tutorials. You can find it here: [IrfanView support].
    • -
    -

    Conclusion

    -

    In conclusion, CAD Image DLL is a plugin that allows IrfanView to read some rare image formats, including CAD formats such as DWG, DXF, HPGL, SVG, and CGM. It is a useful tool for viewing CAD files on your computer without installing expensive software or compromising on quality. However, it also has some limitations such as rasterization, registration, and licensing. To use CAD Image DLL with IrfanView, you need to install and register it with a valid license key, configure it with IrfanView's settings, and follow some usage tips. If you encounter any errors or issues, you can troubleshoot them with some common solutions or seek more resources and support from the official websites and forums.

    -

    We hope that this article has helped you understand what CAD Image DLL IrfanView 80 is and how to use it. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

    -

    FAQs

    -

    Here are some FAQs related to the topic of this article:

    -
      -
    1. Q: How much does CAD Image DLL cost?
    2. -
    3. A: CAD Image DLL costs $125 for a single-user license or $625 for a site license. You can also request a free trial key for 30 days from the CADSoftTools website.
    4. -
    5. Q: How do I update CAD Image DLL?
    6. -
    7. A: You can update CAD Image DLL by downloading the latest version from the CADSoftTools website and replacing the old cadimage.dll file in your IrfanView plugins folder. You don't need to re-register it if you have a valid license key.
    8. -
    9. Q: How do I uninstall CAD Image DLL?
    10. -
    11. A: You can uninstall CAD Image DLL by deleting the cadimage.dll file from your IrfanView plugins folder and running the unreg_cadimage.bat file in the same folder. You may also need to remove the plugin path from IrfanView's settings.
    12. -
    13. Q: Can I use CAD Image DLL with other programs?
    14. -
    15. A: Yes, you can use CAD Image DLL with other programs that support loading plugins or DLLs such as Photoshop or Paint.NET. However, you may need to adjust some settings or parameters according to the program's specifications.
    16. -
    17. Q: Can I edit CAD files with CAD Image DLL?
    18. -
    19. A: No, you cannot edit CAD files with CAD Image DLL. You can only view them as raster images. If you want to edit CAD files, you need to use a dedicated CAD software or converter.
    20. -

    b2dd77e56b
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dbphoenix Trading By Price Pdf Download.md b/spaces/stomexserde/gpt4-ui/Examples/Dbphoenix Trading By Price Pdf Download.md deleted file mode 100644 index a45972dcb458ef7af145d626ea9043db825f76e4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dbphoenix Trading By Price Pdf Download.md +++ /dev/null @@ -1,38 +0,0 @@ -
    -

    How to Learn Trading by Price from Dbphoenix's PDFs

    -

    If you are interested in learning how to trade by price, you may have come across the name Dbphoenix. He is a legendary member of the Trade2Win forum, where he has shared his insights and methods on trading by price for over a decade. He has also written several PDFs that explain the basics of trading by price, such as what is a chart, what is demand and supply, how to determine the trend of the market, and how to trade different time frames.

    -

    In this article, we will give you an overview of Dbphoenix's PDFs and how you can download them for free. We will also provide some tips on how to study them and apply them to your own trading.

    -

    Dbphoenix Trading By Price Pdf Download


    DOWNLOAD ····· https://urlgoal.com/2uI6wg



    -

    What are Dbphoenix's PDFs?

    -

    Dbphoenix's PDFs are documents that he has created and posted on the Trade2Win forum as part of his threads on trading by price. They cover various topics related to trading by price, such as:

    -
      -
    • What's A Chart? - This PDF explains what a chart is, how it represents the buying and selling behavior of investors, and how it creates patterns that can be used for trading.
    • -
    • Demand/Supply - This PDF explains what demand and supply are, how they affect price movements, and how to identify areas of demand and supply on a chart.
    • -
    • Determining the Trend of the Market - This PDF explains what a trend is, how to determine the direction and strength of the trend, and how to trade with the trend.
    • -
    • Trading Time Frames - This PDF explains how to trade different time frames, such as daily, weekly, monthly, etc., and how to align them with your trading objectives and style.
    • -
    -

    These PDFs are not meant to be comprehensive or definitive guides on trading by price. Rather, they are introductory and educational materials that aim to help traders understand the basic concepts and principles of trading by price. They are also meant to stimulate further learning and discussion among traders who want to improve their skills and knowledge.

    -

    How to Download Dbphoenix's PDFs?

    -

    Dbphoenix's PDFs are available for free download on the Trade2Win forum. You can find them by following these steps:

    -
      -
    1. Go to this thread on the Trade2Win forum.
    2. -
    3. Scroll down to the posts by Dbphoenix. You will see that he has attached his PDFs at the end of some of his posts.
    4. -
    5. Click on the PDFs that you want to download. They will open in a new tab or window.
    6. -
    7. Save the PDFs to your computer or device.
    8. -
    -

You can also find some of his other PDFs on Scribd.

    -

    -

    How to Study Dbphoenix's PDFs?

    -

    Downloading Dbphoenix's PDFs is only the first step in learning trading by price. You also need to study them carefully and apply them to your own trading. Here are some tips on how to do that:

    -
      -
    • Read each PDF multiple times until you understand the main ideas and concepts.
    • -
    • Take notes and highlight the key points and examples.
    • -
    • Compare Dbphoenix's charts with your own charts and look for similarities and differences.
    • -
    • Practice identifying areas of demand and supply, trends, patterns, etc. on your own charts.
    • -
    • Backtest and forward test your trading ideas based on trading by price.
    • -
    • Review your trades and results regularly and look for ways to improve.
    • -
    • Ask questions and seek feedback from other traders who use trading by price.
    • -
    -

Remember that trading by price is not a mechanical or rigid system. It is a flexible approach that you adapt to your own markets, time frames, and observations through practice and review.

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/FarCry 3 Black Box (SilverTorrent) The Game.md b/spaces/stomexserde/gpt4-ui/Examples/FarCry 3 Black Box (SilverTorrent) The Game.md deleted file mode 100644 index 0c1a3b636d4d34cffed283e24bc05c3e91abb2f8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/FarCry 3 Black Box (SilverTorrent) The Game.md +++ /dev/null @@ -1,13 +0,0 @@ -
    -

    FarCry 3 Black Box (SilverTorrent): A First-Person Shooter Game Set on a Lawless Island

    -

    FarCry 3 is a 2012 first-person shooter game developed by Ubisoft Montreal and published by Ubisoft. It is the third main installment in the Far Cry series, following Far Cry 2 (2008). The game is set on a tropical island between the Indian and Pacific Oceans, where the player controls Jason Brody, a tourist who is kidnapped by pirates and must escape and fight his way across the island.

    -

    The game features an open world environment that can be explored on foot or by various vehicles, such as cars, boats, hang gliders, and zip lines. The player can also use stealth, melee combat, firearms, and explosives to combat enemies and wildlife. The game also has a skill system that allows the player to unlock new abilities and customize their character. The game also has a multiplayer mode that includes co-operative and competitive modes.

    -

    FarCry 3 Black Box (SilverTorrent) the game


Download: https://urlgoal.com/2uIaKD



    -

    FarCry 3 Black Box (SilverTorrent) is a repack version of the game that reduces the file size from 15 GB to 4.7 GB. It also includes a separate file for the movie that can be skipped if desired. The repack version was uploaded by srkfan on 1337x.to[^2^], a torrent site that provides verified torrents for various media. The repack version has received positive feedback from users who praised its quality and performance.

    -

    FarCry 3 Black Box (SilverTorrent) is a great option for gamers who want to experience the thrilling and immersive gameplay of Far Cry 3 without downloading a large file. The game has received critical acclaim for its story, characters, graphics, gameplay, and soundtrack. It has also won several awards, such as the BAFTA Games Award for Best Action Game and the Golden Joystick Award for Game of the Year.

    - -

    One of the main features of Far Cry 3 is its gameplay, which offers a lot of freedom and variety to the player. The game allows the player to approach each mission and situation in different ways, depending on their preferred playstyle. For example, the player can choose to stealthily infiltrate an enemy outpost, use a sniper rifle from a distance, or go in guns blazing with a flamethrower. The game also encourages exploration and discovery, as the player can find hidden items, secrets, and side quests throughout the island.

    -

    The game also has a dynamic and reactive environment that responds to the player's actions. For instance, the player can set fire to vegetation and watch it spread across the area, creating chaos and distraction. The player can also interact with various animals that roam the island, such as tigers, bears, sharks, and crocodiles. Some animals can be hunted for resources or used as allies against enemies. The game also has a day-night cycle and a weather system that affect the gameplay and atmosphere.

    -

    Another feature of Far Cry 3 is its multiplayer mode, which includes both co-operative and competitive modes. The co-op mode is a separate campaign that follows four characters who are trying to escape from the island after a heist gone wrong. The co-op mode has six missions that can be played online or offline with up to four players. The competitive mode is a traditional online multiplayer mode that features various modes and maps based on the island setting. The competitive mode also has a map editor that allows players to create and share their own maps with other players.

    7b8c122e87
    -
    -
    \ No newline at end of file diff --git a/spaces/subhc/Guess-What-Moves/mask_former/modeling/heads/per_pixel_baseline.py b/spaces/subhc/Guess-What-Moves/mask_former/modeling/heads/per_pixel_baseline.py deleted file mode 100644 index a99f508e7b4a87ada0af6f10209f10edefa7e412..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/mask_former/modeling/heads/per_pixel_baseline.py +++ /dev/null @@ -1,243 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -from typing import Callable, Dict, List, Optional, Tuple, Union - -import fvcore.nn.weight_init as weight_init -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, get_norm -from detectron2.modeling import SEM_SEG_HEADS_REGISTRY - -from ..transformer.transformer_predictor import TransformerPredictor -from .pixel_decoder import build_pixel_decoder - - -@SEM_SEG_HEADS_REGISTRY.register() -class PerPixelBaselineHead(nn.Module): - - _version = 2 - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - version = local_metadata.get("version", None) - if version is None or version < 2: - logger = logging.getLogger(__name__) - # Do not warn if train from scratch - scratch = True - logger = logging.getLogger(__name__) - for k in list(state_dict.keys()): - newk = k - if "sem_seg_head" in k and not k.startswith(prefix + "predictor"): - newk = k.replace(prefix, prefix + "pixel_decoder.") - # logger.warning(f"{k} ==> {newk}") - if newk != k: - state_dict[newk] = state_dict[k] - del state_dict[k] - scratch = False - - if not scratch: - logger.warning( - f"Weight format of {self.__class__.__name__} have changed! " - "Please upgrade your models. Applying automatic conversion now ..." - ) - - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - num_classes: int, - pixel_decoder: nn.Module, - loss_weight: float = 1.0, - ignore_value: int = -1, - ): - """ - NOTE: this interface is experimental. - Args: - input_shape: shapes (channels and stride) of the input features - num_classes: number of classes to predict - pixel_decoder: the pixel decoder module - loss_weight: loss weight - ignore_value: category id to be ignored during training. 
- """ - super().__init__() - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] - feature_strides = [v.stride for k, v in input_shape] - feature_channels = [v.channels for k, v in input_shape] - - self.ignore_value = ignore_value - self.common_stride = 4 - self.loss_weight = loss_weight - - self.pixel_decoder = pixel_decoder - self.predictor = Conv2d( - self.pixel_decoder.mask_dim, num_classes, kernel_size=1, stride=1, padding=0 - ) - weight_init.c2_msra_fill(self.predictor) - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - return { - "input_shape": { - k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - }, - "ignore_value": cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE, - "num_classes": cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES, - "pixel_decoder": build_pixel_decoder(cfg, input_shape), - "loss_weight": cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT, - } - - def forward(self, features, targets=None): - """ - Returns: - In training, returns (None, dict of losses) - In inference, returns (CxHxW logits, {}) - """ - x = self.layers(features) - if self.training: - return None, self.losses(x, targets) - else: - x = F.interpolate( - x, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - return x, {} - - def layers(self, features): - x, _ = self.pixel_decoder.forward_features(features) - x = self.predictor(x) - return x - - def losses(self, predictions, targets): - predictions = predictions.float() # https://github.com/pytorch/pytorch/issues/48163 - predictions = F.interpolate( - predictions, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - loss = F.cross_entropy( - predictions, targets, reduction="mean", ignore_index=self.ignore_value - ) - losses = {"loss_sem_seg": loss * self.loss_weight} - return losses - - -@SEM_SEG_HEADS_REGISTRY.register() -class PerPixelBaselinePlusHead(PerPixelBaselineHead): - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - version = local_metadata.get("version", None) - if version is None or version < 2: - # Do not warn if train from scratch - scratch = True - logger = logging.getLogger(__name__) - for k in list(state_dict.keys()): - newk = k - if "sem_seg_head" in k and not k.startswith(prefix + "predictor"): - newk = k.replace(prefix, prefix + "pixel_decoder.") - logger.debug(f"{k} ==> {newk}") - if newk != k: - state_dict[newk] = state_dict[k] - del state_dict[k] - scratch = False - - if not scratch: - logger.warning( - f"Weight format of {self.__class__.__name__} have changed! " - "Please upgrade your models. Applying automatic conversion now ..." - ) - - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - # extra parameters - transformer_predictor: nn.Module, - transformer_in_feature: str, - deep_supervision: bool, - # inherit parameters - num_classes: int, - pixel_decoder: nn.Module, - loss_weight: float = 1.0, - ignore_value: int = -1, - ): - """ - NOTE: this interface is experimental. 
- Args: - input_shape: shapes (channels and stride) of the input features - transformer_predictor: the transformer decoder that makes prediction - transformer_in_feature: input feature name to the transformer_predictor - deep_supervision: whether or not to add supervision to the output of - every transformer decoder layer - num_classes: number of classes to predict - pixel_decoder: the pixel decoder module - loss_weight: loss weight - ignore_value: category id to be ignored during training. - """ - super().__init__( - input_shape, - num_classes=num_classes, - pixel_decoder=pixel_decoder, - loss_weight=loss_weight, - ignore_value=ignore_value, - ) - - del self.predictor - - self.predictor = transformer_predictor - self.transformer_in_feature = transformer_in_feature - self.deep_supervision = deep_supervision - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = super().from_config(cfg, input_shape) - ret["transformer_in_feature"] = cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE - if cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE == "transformer_encoder": - in_channels = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - else: - in_channels = input_shape[ret["transformer_in_feature"]].channels - ret["transformer_predictor"] = TransformerPredictor( - cfg, in_channels, mask_classification=False - ) - ret["deep_supervision"] = cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION - return ret - - def forward(self, features, targets=None): - """ - Returns: - In training, returns (None, dict of losses) - In inference, returns (CxHxW logits, {}) - """ - x, aux_outputs = self.layers(features) - if self.training: - if self.deep_supervision: - losses = self.losses(x, targets) - for i, aux_output in enumerate(aux_outputs): - losses["loss_sem_seg" + f"_{i}"] = self.losses( - aux_output["pred_masks"], targets - )["loss_sem_seg"] - return None, losses - else: - return None, self.losses(x, targets) - else: - x = F.interpolate( - x, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - return x, {} - - def layers(self, features): - mask_features, transformer_encoder_features = self.pixel_decoder.forward_features(features) - if self.transformer_in_feature == "transformer_encoder": - assert ( - transformer_encoder_features is not None - ), "Please use the TransformerEncoderPixelDecoder." - predictions = self.predictor(transformer_encoder_features, mask_features) - else: - predictions = self.predictor(features[self.transformer_in_feature], mask_features) - if self.deep_supervision: - return predictions["pred_masks"], predictions["aux_outputs"] - else: - return predictions["pred_masks"], None diff --git a/spaces/subhc/Guess-What-Moves/mask_former/modeling/transformer/__init__.py b/spaces/subhc/Guess-What-Moves/mask_former/modeling/transformer/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/mask_former/modeling/transformer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/EstiNet Network Simulator.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/EstiNet Network Simulator.md deleted file mode 100644 index 914967776896ff5319009f320488dd765e9e301b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/EstiNet Network Simulator.md +++ /dev/null @@ -1,6 +0,0 @@ -

    EstiNet Network Simulator


Download Zip: https://cinurl.com/2uEYzV



- -January 12, 2564 BE - The EstiNet network simulator/emulator is based on NCTUns, which has been used for network research and publications since 2002. □ NCTU-1B is the first instance of the NCTU Network Simulator and has been used for research and publication since 2002. □ NCTU-1C is the second instance of the NCTU Network emulator, released in 2004. □ NCTU 2B is the second copy of the NCTU Network emulator, released in 2006. The NCTU network is a simulation system that uses IBM PC hardware with TCP/IP drivers installed on the system. The NCTU simulator system runs under the Windows 3.1/95/NT operating system. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/models_onnx.py b/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class 
ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - 
"""SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - 
else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - 
padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/datasets/builders/__init__.py b/spaces/t110-ai-admin/InspectLens/video_llama/datasets/builders/__init__.py deleted file mode 100644 index 0b160d0b8ad5793e368d8b2d26ff9829fa3ddd9a..0000000000000000000000000000000000000000 --- a/spaces/t110-ai-admin/InspectLens/video_llama/datasets/builders/__init__.py +++ /dev/null @@ -1,77 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from video_llama.datasets.builders.base_dataset_builder import load_dataset_config -from video_llama.datasets.builders.image_text_pair_builder import ( - CCSBUBuilder, - LaionBuilder, - CCSBUAlignBuilder -) -from video_llama.datasets.builders.video_caption_builder import WebvidBuilder -from video_llama.common.registry import registry -from video_llama.datasets.builders.instruct_builder import WebvidInstruct_Builder,LlavaInstruct_Builder -__all__ = [ - "CCSBUBuilder", - "LaionBuilder", - "CCSBUAlignBuilder", - "WebvidBuilder", - "LlavaInstruct_Builder", - "WebvidInstruct_Builder" - -] - - -def load_dataset(name, cfg_path=None, vis_path=None, data_type=None): - """ - Example - - >>> dataset = load_dataset("coco_caption", cfg=None) - >>> splits = dataset.keys() - >>> print([len(dataset[split]) for split in splits]) - - """ - if cfg_path is None: - cfg = None - else: - cfg = load_dataset_config(cfg_path) - - try: - builder = registry.get_builder_class(name)(cfg) - except TypeError: - print( - f"Dataset {name} not found. Available datasets:\n" - + ", ".join([str(k) for k in dataset_zoo.get_names()]) - ) - exit(1) - - if vis_path is not None: - if data_type is None: - # use default data type in the config - data_type = builder.config.data_type - - assert ( - data_type in builder.config.build_info - ), f"Invalid data_type {data_type} for {name}." 
- - builder.config.build_info.get(data_type).storage = vis_path - - dataset = builder.build_datasets() - return dataset - - -class DatasetZoo: - def __init__(self) -> None: - self.dataset_zoo = { - k: list(v.DATASET_CONFIG_DICT.keys()) - for k, v in sorted(registry.mapping["builder_name_mapping"].items()) - } - - def get_names(self): - return list(self.dataset_zoo.keys()) - - -dataset_zoo = DatasetZoo() diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/models/Qformer.py b/spaces/t110-ai-admin/InspectLens/video_llama/models/Qformer.py deleted file mode 100644 index 4902165ec6574d89f04cbeb2141b018278324ca6..0000000000000000000000000000000000000000 --- a/spaces/t110-ai-admin/InspectLens/video_llama/models/Qformer.py +++ /dev/null @@ -1,1217 +0,0 @@ -""" -Adapted from salesforce@LAVIS. Below is the original copyright: - * Copyright (c) 2023, salesforce.com, inc. - * All rights reserved. - * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li - * Based on huggingface code base - * https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/bert -""" - -import math -import os -import warnings -from dataclasses import dataclass -from typing import Optional, Tuple, Dict, Any - -import torch -from torch import Tensor, device, dtype, nn -import torch.utils.checkpoint -from torch import nn -from torch.nn import CrossEntropyLoss -import torch.nn.functional as F - -from transformers.activations import ACT2FN -from transformers.file_utils import ( - ModelOutput, -) -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - NextSentencePredictorOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import logging -from transformers.models.bert.configuration_bert import BertConfig - -logger = logging.get_logger(__name__) - - -class BertEmbeddings(nn.Module): - """Construct the embeddings from word and position embeddings.""" - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding( - config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id - ) - self.position_embeddings = nn.Embedding( - config.max_position_embeddings, config.hidden_size - ) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer( - "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)) - ) - self.position_embedding_type = getattr( - config, "position_embedding_type", "absolute" - ) - - self.config = config - - def forward( - self, - input_ids=None, - position_ids=None, - query_embeds=None, - past_key_values_length=0, - ): - if input_ids is not None: - seq_length = input_ids.size()[1] - else: - seq_length = 0 - - if position_ids is None: - position_ids = self.position_ids[ - :, past_key_values_length : seq_length + 
past_key_values_length - ].clone() - - if input_ids is not None: - embeddings = self.word_embeddings(input_ids) - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings = embeddings + position_embeddings - - if query_embeds is not None: - embeddings = torch.cat((query_embeds, embeddings), dim=1) - else: - embeddings = query_embeds - - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class BertSelfAttention(nn.Module): - def __init__(self, config, is_cross_attention): - super().__init__() - self.config = config - if config.hidden_size % config.num_attention_heads != 0 and not hasattr( - config, "embedding_size" - ): - raise ValueError( - "The hidden size (%d) is not a multiple of the number of attention " - "heads (%d)" % (config.hidden_size, config.num_attention_heads) - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - if is_cross_attention: - self.key = nn.Linear(config.encoder_width, self.all_head_size) - self.value = nn.Linear(config.encoder_width, self.all_head_size) - else: - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = getattr( - config, "position_embedding_type", "absolute" - ) - if ( - self.position_embedding_type == "relative_key" - or self.position_embedding_type == "relative_key_query" - ): - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding( - 2 * config.max_position_embeddings - 1, self.attention_head_size - ) - self.save_attention = False - - def save_attn_gradients(self, attn_gradients): - self.attn_gradients = attn_gradients - - def get_attn_gradients(self): - return self.attn_gradients - - def save_attention_map(self, attention_map): - self.attention_map = attention_map - - def get_attention_map(self): - return self.attention_map - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + ( - self.num_attention_heads, - self.attention_head_size, - ) - x = x.view(*new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. 
- is_cross_attention = encoder_hidden_states is not None - - if is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - mixed_query_layer = self.query(hidden_states) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if ( - self.position_embedding_type == "relative_key" - or self.position_embedding_type == "relative_key_query" - ): - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange( - seq_length, dtype=torch.long, device=hidden_states.device - ).view(-1, 1) - position_ids_r = torch.arange( - seq_length, dtype=torch.long, device=hidden_states.device - ).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding( - distance + self.max_position_embeddings - 1 - ) - positional_embedding = positional_embedding.to( - dtype=query_layer.dtype - ) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum( - "bhld,lrd->bhlr", query_layer, positional_embedding - ) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum( - "bhld,lrd->bhlr", query_layer, positional_embedding - ) - relative_position_scores_key = torch.einsum( - "bhrd,lrd->bhlr", key_layer, positional_embedding - ) - attention_scores = ( - attention_scores - + relative_position_scores_query - + relative_position_scores_key - ) - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.Softmax(dim=-1)(attention_scores) - - if is_cross_attention and self.save_attention: - self.save_attention_map(attention_probs) - attention_probs.register_hook(self.save_attn_gradients) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs_dropped = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - - context_layer = torch.matmul(attention_probs_dropped, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(*new_context_layer_shape) - - outputs = ( - (context_layer, attention_probs) if output_attentions else (context_layer,) - ) - - outputs = outputs + (past_key_value,) - return outputs - - -class BertSelfOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertAttention(nn.Module): - def __init__(self, config, is_cross_attention=False): - super().__init__() - self.self = BertSelfAttention(config, is_cross_attention) - self.output = BertSelfOutput(config) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, - self.self.num_attention_heads, - self.self.attention_head_size, - self.pruned_heads, - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = ( - self.self.attention_head_size * self.self.num_attention_heads - ) - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - self_outputs = self.self( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - ) - attention_output = self.output(self_outputs[0], hidden_states) - - outputs = (attention_output,) + self_outputs[ - 1: - ] # add attentions if we output them - return outputs - - -class BertIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -class BertOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = 
self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertLayer(nn.Module): - def __init__(self, config, layer_num): - super().__init__() - self.config = config - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = BertAttention(config) - self.layer_num = layer_num - if ( - self.config.add_cross_attention - and layer_num % self.config.cross_attention_freq == 0 - ): - self.crossattention = BertAttention( - config, is_cross_attention=self.config.add_cross_attention - ) - self.has_cross_attention = True - else: - self.has_cross_attention = False - self.intermediate = BertIntermediate(config) - self.output = BertOutput(config) - - self.intermediate_query = BertIntermediate(config) - self.output_query = BertOutput(config) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - query_length=0, - ): - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - self_attn_past_key_value = ( - past_key_value[:2] if past_key_value is not None else None - ) - self_attention_outputs = self.attention( - hidden_states, - attention_mask, - head_mask, - output_attentions=output_attentions, - past_key_value=self_attn_past_key_value, - ) - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:-1] - - present_key_value = self_attention_outputs[-1] - - if query_length > 0: - query_attention_output = attention_output[:, :query_length, :] - - if self.has_cross_attention: - assert ( - encoder_hidden_states is not None - ), "encoder_hidden_states must be given for cross-attention layers" - cross_attention_outputs = self.crossattention( - query_attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - output_attentions=output_attentions, - ) - query_attention_output = cross_attention_outputs[0] - outputs = ( - outputs + cross_attention_outputs[1:-1] - ) # add cross attentions if we output attention weights - - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk_query, - self.chunk_size_feed_forward, - self.seq_len_dim, - query_attention_output, - ) - if attention_output.shape[1] > query_length: - layer_output_text = apply_chunking_to_forward( - self.feed_forward_chunk, - self.chunk_size_feed_forward, - self.seq_len_dim, - attention_output[:, query_length:, :], - ) - layer_output = torch.cat([layer_output, layer_output_text], dim=1) - else: - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk, - self.chunk_size_feed_forward, - self.seq_len_dim, - attention_output, - ) - outputs = (layer_output,) + outputs - - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - def feed_forward_chunk_query(self, attention_output): - intermediate_output = self.intermediate_query(attention_output) - layer_output = self.output_query(intermediate_output, attention_output) - return layer_output - - -class BertEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList( - [BertLayer(config, i) for i in range(config.num_hidden_layers)] - 
) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=False, - output_hidden_states=False, - return_dict=True, - query_length=0, - ): - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = ( - () if output_attentions and self.config.add_cross_attention else None - ) - - next_decoder_cache = () if use_cache else None - - for i in range(self.config.num_hidden_layers): - layer_module = self.layer[i] - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if getattr(self.config, "gradient_checkpointing", False) and self.training: - - if use_cache: - logger.warn( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module( - *inputs, past_key_value, output_attentions, query_length - ) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - query_length, - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -class BertPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states): - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
- first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -class BertPredictionHeadTransform(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - if isinstance(config.hidden_act, str): - self.transform_act_fn = ACT2FN[config.hidden_act] - else: - self.transform_act_fn = config.hidden_act - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.transform_act_fn(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -class BertLMPredictionHead(nn.Module): - def __init__(self, config): - super().__init__() - self.transform = BertPredictionHeadTransform(config) - - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. - self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - - # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` - self.decoder.bias = self.bias - - def forward(self, hidden_states): - hidden_states = self.transform(hidden_states) - hidden_states = self.decoder(hidden_states) - return hidden_states - - -class BertOnlyMLMHead(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertLMPredictionHead(config) - - def forward(self, sequence_output): - prediction_scores = self.predictions(sequence_output) - return prediction_scores - - -class BertPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = BertConfig - base_model_prefix = "bert" - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Embedding)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - -class BertModel(BertPreTrainedModel): - """ - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added between the self-attention layers, following the architecture described in `Attention is - all you need `__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, - Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. - argument and :obj:`add_cross_attention` set to :obj:`True`; an :obj:`encoder_hidden_states` is then expected as an - input to the forward pass. 
- """ - - def __init__(self, config, add_pooling_layer=False): - super().__init__(config) - self.config = config - - self.embeddings = BertEmbeddings(config) - - self.encoder = BertEncoder(config) - - self.pooler = BertPooler(config) if add_pooling_layer else None - - self.init_weights() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - def get_extended_attention_mask( - self, - attention_mask: Tensor, - input_shape: Tuple[int], - device: device, - is_decoder: bool, - has_query: bool = False, - ) -> Tensor: - """ - Makes broadcastable attention and causal masks so that future and masked tokens are ignored. - - Arguments: - attention_mask (:obj:`torch.Tensor`): - Mask with ones indicating tokens to attend to, zeros for tokens to ignore. - input_shape (:obj:`Tuple[int]`): - The shape of the input to the model. - device: (:obj:`torch.device`): - The device of the input to the model. - - Returns: - :obj:`torch.Tensor` The extended attention mask, with a the same dtype as :obj:`attention_mask.dtype`. - """ - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - if attention_mask.dim() == 3: - extended_attention_mask = attention_mask[:, None, :, :] - elif attention_mask.dim() == 2: - # Provided a padding mask of dimensions [batch_size, seq_length] - # - if the model is a decoder, apply a causal mask in addition to the padding mask - # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length] - if is_decoder: - batch_size, seq_length = input_shape - - seq_ids = torch.arange(seq_length, device=device) - causal_mask = ( - seq_ids[None, None, :].repeat(batch_size, seq_length, 1) - <= seq_ids[None, :, None] - ) - - # add a prefix ones mask to the causal mask - # causal and attention masks must have same type with pytorch version < 1.3 - causal_mask = causal_mask.to(attention_mask.dtype) - - if causal_mask.shape[1] < attention_mask.shape[1]: - prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1] - if has_query: # UniLM style attention mask - causal_mask = torch.cat( - [ - torch.zeros( - (batch_size, prefix_seq_len, seq_length), - device=device, - dtype=causal_mask.dtype, - ), - causal_mask, - ], - axis=1, - ) - causal_mask = torch.cat( - [ - torch.ones( - (batch_size, causal_mask.shape[1], prefix_seq_len), - device=device, - dtype=causal_mask.dtype, - ), - causal_mask, - ], - axis=-1, - ) - extended_attention_mask = ( - causal_mask[:, None, :, :] * attention_mask[:, None, None, :] - ) - else: - extended_attention_mask = attention_mask[:, None, None, :] - else: - raise ValueError( - "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format( - input_shape, attention_mask.shape - ) - ) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. 
- # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - extended_attention_mask = extended_attention_mask.to( - dtype=self.dtype - ) # fp16 compatibility - extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 - return extended_attention_mask - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - is_decoder=False, - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). - """ - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - - # use_cache = use_cache if use_cache is not None else self.config.use_cache - - if input_ids is None: - assert ( - query_embeds is not None - ), "You have to specify query_embeds when input_ids is None" - - # past_key_values_length - past_key_values_length = ( - past_key_values[0][0].shape[2] - self.config.query_length - if past_key_values is not None - else 0 - ) - - query_length = query_embeds.shape[1] if query_embeds is not None else 0 - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - query_embeds=query_embeds, - past_key_values_length=past_key_values_length, - ) - - input_shape = embedding_output.size()[:-1] - batch_size, seq_length = input_shape - device = embedding_output.device - - if attention_mask is None: - attention_mask = torch.ones( - ((batch_size, seq_length + past_key_values_length)), device=device - ) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. 
- if is_decoder: - extended_attention_mask = self.get_extended_attention_mask( - attention_mask, - input_ids.shape, - device, - is_decoder, - has_query=(query_embeds is not None), - ) - else: - extended_attention_mask = self.get_extended_attention_mask( - attention_mask, input_shape, device, is_decoder - ) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if encoder_hidden_states is not None: - if type(encoder_hidden_states) == list: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[ - 0 - ].size() - else: - ( - encoder_batch_size, - encoder_sequence_length, - _, - ) = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - - if type(encoder_attention_mask) == list: - encoder_extended_attention_mask = [ - self.invert_attention_mask(mask) for mask in encoder_attention_mask - ] - elif encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask( - encoder_attention_mask - ) - else: - encoder_extended_attention_mask = self.invert_attention_mask( - encoder_attention_mask - ) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - query_length=query_length, - ) - sequence_output = encoder_outputs[0] - pooled_output = ( - self.pooler(sequence_output) if self.pooler is not None else None - ) - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -class BertLMHeadModel(BertPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - self.init_weights() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - past_key_values=None, - use_cache=True, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - 
return_logits=False, - is_decoder=True, - reduction="mean", - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in - ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are - ignored (masked), the loss is only computed for the tokens with labels n ``[0, ..., config.vocab_size]`` - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). 
- Returns: - Example:: - >>> from transformers import BertTokenizer, BertLMHeadModel, BertConfig - >>> import torch - >>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased') - >>> config = BertConfig.from_pretrained("bert-base-cased") - >>> model = BertLMHeadModel.from_pretrained('bert-base-cased', config=config) - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - >>> prediction_logits = outputs.logits - """ - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - if labels is not None: - use_cache = False - if past_key_values is not None: - query_embeds = None - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - query_embeds=query_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - is_decoder=is_decoder, - ) - - sequence_output = outputs[0] - if query_embeds is not None: - sequence_output = outputs[0][:, query_embeds.shape[1] :, :] - - prediction_scores = self.cls(sequence_output) - - if return_logits: - return prediction_scores[:, :-1, :].contiguous() - - lm_loss = None - if labels is not None: - # we are doing next-token prediction; shift prediction scores and input ids by one - shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() - labels = labels[:, 1:].contiguous() - loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1) - lm_loss = loss_fct( - shifted_prediction_scores.view(-1, self.config.vocab_size), - labels.view(-1), - ) - if reduction == "none": - lm_loss = lm_loss.view(prediction_scores.size(0), -1).sum(1) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((lm_loss,) + output) if lm_loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=lm_loss, - logits=prediction_scores, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - def prepare_inputs_for_generation( - self, input_ids, query_embeds, past=None, attention_mask=None, **model_kwargs - ): - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_ids.shape) - query_mask = input_ids.new_ones(query_embeds.shape[:-1]) - attention_mask = torch.cat([query_mask, attention_mask], dim=-1) - - # cut decoder_input_ids if past is used - if past is not None: - input_ids = input_ids[:, -1:] - - return { - "input_ids": input_ids, - "query_embeds": query_embeds, - "attention_mask": attention_mask, - "past_key_values": past, - "encoder_hidden_states": model_kwargs.get("encoder_hidden_states", None), - "encoder_attention_mask": model_kwargs.get("encoder_attention_mask", None), - "is_decoder": True, - } - - def _reorder_cache(self, past, beam_idx): - reordered_past = () - for layer_past in past: - reordered_past += ( - tuple( - past_state.index_select(0, beam_idx) for past_state in layer_past - ), - ) - return reordered_past - - -class BertForMaskedLM(BertPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, 
config): - super().__init__(config) - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - self.init_weights() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - return_logits=False, - is_decoder=False, - ): - r""" - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ..., - config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored - (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` - """ - - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - query_embeds=query_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - is_decoder=is_decoder, - ) - - if query_embeds is not None: - sequence_output = outputs[0][:, query_embeds.shape[1] :, :] - prediction_scores = self.cls(sequence_output) - - if return_logits: - return prediction_scores - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() # -100 index = padding token - masked_lm_loss = loss_fct( - prediction_scores.view(-1, self.config.vocab_size), labels.view(-1) - ) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ( - ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - ) - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/t13718236382/web-ui/_next/static/chunks/7c806026.0dff11a0e0d35bd9.js b/spaces/t13718236382/web-ui/_next/static/chunks/7c806026.0dff11a0e0d35bd9.js deleted file mode 100644 index 2a8f6bbc921aad3f724e484ceb49ccaebb5069af..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/web-ui/_next/static/chunks/7c806026.0dff11a0e0d35bd9.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[372],{96758:function(t,h,a){a.d(h,{MDG:function(){return r},MUM:function(){return n}});var c=a(83270);function n(t){return(0,c.w_)({tag:"svg",attr:{viewBox:"0 0 24 24"},child:[{tag:"path",attr:{d:"M11 16h2V7h3l-4-5-4 5h3z"}},{tag:"path",attr:{d:"M5 22h14c1.103 0 2-.897 2-2v-9c0-1.103-.897-2-2-2h-4v2h4v9H5v-9h4V9H5c-1.103 0-2 .897-2 2v9c0 1.103.897 2 2 2z"}}]})(t)}function r(t){return(0,c.w_)({tag:"svg",attr:{viewBox:"0 0 24 24"},child:[{tag:"path",attr:{d:"m12 18 4-5h-3V2h-2v11H8z"}},{tag:"path",attr:{d:"M19 9h-4v2h4v9H5v-9h4V9H5c-1.103 0-2 .897-2 2v9c0 1.103.897 2 2 2h14c1.103 0 2-.897 2-2v-9c0-1.103-.897-2-2-2z"}}]})(t)}}}]); \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Bkpps3 Bin Ofw !!INSTALL!!.md b/spaces/terfces0erbo/CollegeProjectV2/Bkpps3 Bin Ofw !!INSTALL!!.md deleted file 
mode 100644 index fb3a427a3a9c7364a99c79d52744bc0ccfee4849..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Bkpps3 Bin Ofw !!INSTALL!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    bkpps3 bin ofw


    DOWNLOAD →→→ https://bytlly.com/2uGlow



    -
    -Paste your homebrew (Habib QA Taggle, SEN Enabler) apps on USB root folder. Put lower firmware PUP file on PS3>UPDATE folder. Plug your USB drive on ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Card Recovery 610 Crack By Mmb Download.md b/spaces/terfces0erbo/CollegeProjectV2/Card Recovery 610 Crack By Mmb Download.md deleted file mode 100644 index 048411819ad1123d05356d697a14388fbc806762..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Card Recovery 610 Crack By Mmb Download.md +++ /dev/null @@ -1,11 +0,0 @@ - -

    cardrecovery crack program is a powerful photo recovery software. it provides recovery of files on windows from memory cards, sd and cf cards. it has an easy to use interface that provides instant access to the recovery process. you can even recover photos from memory card without formatting.

    -

    Card Recovery 610 Crack By Mmb Download


    DOWNLOAD 🔗 https://bytlly.com/2uGkBL



    -

    cardrecovery presents a straightforward interface. simply start the recovery process by selecting the source of your photos, then select the type of photo to be recovered, and then start your recovery.

    -

    it is a very easy-to-use program with a friendly interface. using this tool you can easily recover pictures on your camera memory card that have been accidentally deleted.

    -

    disk drill is a powerful photo recovery software that can recover your lost photos on your camera memory card, as well as your lost files from hard drives, memory cards and usb flash drives. it also has advanced features such as the ability to scan for viruses and other hidden files.

    -

    disk drill is a powerful data recovery software that can recover photos from a memory card, internal hard drive, usb flash drive, and other devices. it can also recover your lost photos, and your deleted files from a memory card that has been accidentally deleted.

    -

    -

    the cloud-based card recovery software will monitor file changes on your drive, and automatically recover the deleted or lost files. it scans the entire drive in order to locate the lost files. when it sees a file that it can recover, it saves it to the specified folder. you can recover photos and videos from usb flash drives, memory cards, digital cameras, and other removable media. the software is a high performance file recovery software that can recover files from hard disks, memory cards, and usb flash drives, including digital camera memory cards, sd cards, compactflash cards, and memory stick cards. it makes the data recovery process easy and less time-consuming.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Circuit Wizard Release Code Keygengolkes.md b/spaces/terfces0erbo/CollegeProjectV2/Circuit Wizard Release Code Keygengolkes.md deleted file mode 100644 index c267bba4ed43bc6eb5aced385dd78d71d0500f8e..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Circuit Wizard Release Code Keygengolkes.md +++ /dev/null @@ -1,6 +0,0 @@ -

    circuit wizard release code keygengolkes


    Download File ★★★★★ https://bytlly.com/2uGixF



    -
    -This box is a note. You can add and remove as many boxes as you want. Boxes can be used to display things like location info, store hours, ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/HACK Sidify Spotify Music Converter 1.2.8 -19 SeuPirate [BEST].md b/spaces/terfces0erbo/CollegeProjectV2/HACK Sidify Spotify Music Converter 1.2.8 -19 SeuPirate [BEST].md deleted file mode 100644 index 2148bb17ffd8432fc1f17d0cb7ee1a6455e984f1..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HACK Sidify Spotify Music Converter 1.2.8 -19 SeuPirate [BEST].md +++ /dev/null @@ -1,6 +0,0 @@ -

    HACK Sidify Spotify Music Converter 1.2.8 -19 SeuPirate


    DOWNLOAD ✑ ✑ ✑ https://bytlly.com/2uGj0N



    - - 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Honestech VHS to DVD 4.0 Serial Keygen A Simple and Easy Way to Digitize Your VHS Collection.md b/spaces/tialenAdioni/chat-gpt-api/logs/Honestech VHS to DVD 4.0 Serial Keygen A Simple and Easy Way to Digitize Your VHS Collection.md deleted file mode 100644 index a2587381dfdea0b6d8f9bf1f97afeb572be14ad8..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Honestech VHS to DVD 4.0 Serial Keygen A Simple and Easy Way to Digitize Your VHS Collection.md +++ /dev/null @@ -1,109 +0,0 @@ - -

    Honestech Vhs To DVD 4.0 Serial Keygen: How to Convert Your Old Tapes to Digital Format

    -

    If you have a collection of old VHS tapes that you want to preserve and enjoy on your computer or DVD player, you may need a software that can help you convert them to digital format. One of the software that can do this is Honestech Vhs To DVD 4.0, a product that can capture video from any analog source and burn it to DVD or save it as a file. However, to use this software, you need a serial keygen that can activate it and unlock its full features. In this article, we will explain what Honestech Vhs To DVD 4.0 serial keygen is, how it works, and how to use it.

    -

    honestech vhs to dvd 4.0 serial keygen


    Download Ziphttps://urlcod.com/2uKa1i



    -

    What is Honestech Vhs To DVD 4.0 serial keygen?

    -

    Honestech Vhs To DVD 4.0 serial keygen is a software that can generate a serial key for Honestech Vhs To DVD 4.0 software. It is created by expert coders who have experience in creating hacks, cracks and keygens for different types of software and games. Honestech Vhs To DVD 4.0 serial keygen is the original and real serial key generator for Honestech Vhs To DVD 4.0 software, and it is 100% fully working to completely activate and update your copy compatible with the latest PC.

    -

    How does Honestech Vhs To DVD 4.0 serial keygen work?

    -

    Honestech Vhs To DVD 4.0 serial keygen works by bypassing the activation process of Honestech Vhs To DVD 4.0 software and providing a virtual key that doesn't need any further purchase of the product to activate. It saves a lot of money of the users of Honestech Vhs To DVD 4.0 software by simply installing it into their system by downloading from the below given download button. Honestech Vhs To DVD 4.0 serial keygen can also update your copy of Honestech Vhs To DVD 4.0 software with the latest features and patches.

    -

    How to use Honestech Vhs To DVD 4.0 serial keygen to activate Honestech Vhs To DVD 4.0 software?

    -

    To use Honestech Vhs To DVD 4.0 serial keygen to activate Honestech Vhs To DVD 4.0 software, you need to follow these steps:

    -
      -
    1. Download and install Honestech Vhs To DVD 4.0 software from the official website or any other source.
    2. Download Honestech Vhs To DVD 4.0 serial keygen from the below given download button.
    3. Run Honestech Vhs To DVD 4.0 serial keygen as administrator.
    4. Select Honestech Vhs To DVD 4.0 from the product list.
    5. Click on the Generate button to generate a serial key.
    6. Copy the serial key and paste it in the activation window of Honestech Vhs To DVD 4.0 software.
    7. Click on Next and follow the instructions to complete the activation process.
    8. Enjoy using Honestech Vhs To DVD 4.0 software with full features.
    -

    Honestech Vhs To DVD 4.0 serial keygen is a reliable and safe tool that can activate Honestech Vhs To DVD 4.0 software without any hassle. It is easy to use and compatible with the latest PC. However, it is recommended to use it only for educational purposes and not for commercial use. Honestech Vhs To DVD 4.0 serial keygen is not affiliated with or endorsed by Honestech in any way.

    -

    What are the features of Honestech Vhs To DVD 4.0 software?

    -

    Honestech Vhs To DVD 4.0 software is a video conversion solution that can capture video from any analog source and burn it to DVD or save it as a file. It can also edit and enhance the video quality and add transitions, titles, music and effects. Some of the features of Honestech Vhs To DVD 4.0 software are:

    -
      -
    • Easy Wizard Mode: This mode guides you through the video conversion process step by step. You can choose from three options: VHS to DVD, VHS to PC, or PC to DVD. You can also adjust the recording time and quality settings.
    • Advanced Mode: This mode gives you more control over the video conversion process. You can capture video from any analog source, such as VCR, camcorder, DVD player, or TV tuner. You can also edit and enhance the video using various tools, such as trim, split, merge, crop, rotate, color correction, noise reduction, and more. You can also add transitions, titles, music and effects to your video.
    • Audio Recording: This feature allows you to record audio from any analog source, such as cassette tapes, vinyl records, or radio. You can also edit and enhance the audio using various tools, such as noise reduction, equalizer, and more. You can also burn the audio to CD or save it as a file.
    • Blu-ray Support: This feature allows you to burn your video to Blu-ray discs or save it as a Blu-ray folder or ISO file. You can also choose from various Blu-ray menu templates and customize them with your own images and music.
    • HD Editing: This feature allows you to edit and enhance your HD video captured from HD camcorders or other HD sources. You can also convert your HD video to standard definition video for DVD burning or file saving.
    -

    These are some of the features of Honestech Vhs To DVD 4.0 software that make it a powerful and easy to use video conversion solution.

    -

    What are the reviews of Honestech Vhs To DVD 4.0 software?

    -

    Honestech Vhs To DVD 4.0 software is a video conversion solution that has received mixed reviews from users who have tried it. Some users have praised its features, performance, and ease of use, while others have complained about its limitations, errors, and quality issues. Here are some of the reviews of Honestech Vhs To DVD 4.0 software from different sources:

    -

    -
      -
    • Amazon.com: This website has two versions of Honestech Vhs To DVD 4.0 software: Honestech Vhs To DVD 4.0 Hd and Honest Technologies Vhs To DVD 4.0 Deluxe. The former has a rating of 2.5 out of 5 stars based on 3 customer reviews, while the latter has a rating of 3 out of 5 stars based on 4 customer reviews. Some of the positive comments include: \"Solid video conversion device\", \"Easy to use\", and \"Works great\". Some of the negative comments include: \"Audio sync\", \"If you're doing more than 15 clips, get something better than this!\", and \"Good hardware/bad software\".
    • -
    • Newegg.com: This website has Honestech Vhs To DVD 4.0 Deluxe software with a rating of 3 out of 5 eggs based on 1 customer review. The reviewer said: \"I bought this product to convert my old VHS tapes to DVDs. It works well for that purpose. The software is easy to use and the hardware is simple to install. The quality of the DVDs is not great, but acceptable for old tapes. The main problem I have with this product is that it does not work well with Windows Vista 64-bit. It crashes frequently and sometimes freezes my computer. I contacted the customer support and they told me to update the drivers, but that did not help much. I would recommend this product only if you have Windows XP or Vista 32-bit.\"
    • -
    • YouTube.com: This website has two videos related to Honestech Vhs To DVD 4.0 software: How to use honestech VHS to DVD 4.0 Deluxe and honestech VHS to DVD 4.0 Deluxe. The former has a view count of over 78,000 and a like/dislike ratio of 67/16, while the latter has a view count of over 12,000 and a like/dislike ratio of 16/6. Some of the positive comments include: \"Thank you for this video\", \"Very helpful\", and \"Great product\". Some of the negative comments include: \"Doesn't work\", \"Poor quality\", and \"Waste of money\".
    • -
    -

    These are some of the reviews of Honestech Vhs To DVD 4.0 software from different sources that show its advantages and disadvantages.

    -

    What are the alternatives to Honestech Vhs To DVD 4.0 software?

    -

    Honestech Vhs To DVD 4.0 software is not the only video conversion solution that can capture video from any analog source and burn it to DVD or save it as a file. There are other software that can do the same or similar tasks with different features and prices. Some of the alternatives to Honestech Vhs To DVD 4.0 software are:

    -
      -
    • Golden Videos VHS to DVD Converter: This software is a free alternative to Honestech Vhs To DVD 4.0 software that can convert your old VHS tapes to DVDs or digital files using your PC. It has video restoration tools to keep your movies looking their best, and can burn directly to DVD or save as a file. It also supports HD video and Blu-ray burning.
    • -
    • Roxio Easy VHS to DVD: This software is a paid alternative to Honestech Vhs To DVD 4.0 software that can create DVD movies from your VHS tapes and Hi8 or V8 home videos. It has a simple interface that guides you through the video conversion process step by step. It also has video editing tools to trim, cut, split, and add transitions, titles, and music to your videos.
    • -
    • Cyberlink Power2Go: This software is a paid alternative to Honestech Vhs To DVD 4.0 software that can burn your video to Blu-ray discs or save it as a Blu-ray folder or ISO file. It has a drag-and-drop interface that makes it easy to add files and folders to your disc project. It also has video editing tools to enhance, trim, and add effects to your videos.
    • -
    -

    These are some of the alternatives to Honestech Vhs To DVD 4.0 software that can help you convert your old analog video to digital format.

    -

    How to download Honestech Vhs To DVD 4.0 software?

    -

    Honestech Vhs To DVD 4.0 software is a video conversion solution that can be downloaded from various sources online. However, not all sources are reliable and safe, and some may contain viruses, malware, or fake serial keygens that can harm your computer or steal your personal information. Therefore, it is important to download Honestech Vhs To DVD 4.0 software from trusted and official sources only. Here are some of the sources where you can download Honestech Vhs To DVD 4.0 software:

    -
      -
    • Official website: The official website of Honestech Vhs To DVD 4.0 software is www.honestech.com, where you can find information about the product, its features, system requirements, and customer support. You can also purchase the software online and download it directly from the website after completing the payment process. You will receive a confirmation email with a serial key that you can use to activate the software.
    • -
    • Software Informer: Software Informer is a website that provides information and reviews about various software products, as well as download links from official sources. You can find Honestech Vhs To DVD 4.0 software on Software Informer at https://honestech-vhs-to-dvd-deluxe.software.informer.com/4.0/, where you can read user comments and ratings, see screenshots, and download the software from the official website.
    • -
    • Internet Archive: Internet Archive is a website that preserves and provides access to digital content, such as books, music, videos, and software. You can find Honestech Vhs To DVD 4.0 software on Internet Archive at https://archive.org/details/manualzilla-id-6794156, where you can download the software for free as a part of a manual for using the product.
    • -
    -

    These are some of the sources where you can download Honestech Vhs To DVD 4.0 software safely and legally.

    -

    Conclusion

    -

    Honestech Vhs To DVD 4.0 Serial Keygen is a software that can activate Honestech Vhs To DVD 4.0 software and enable users to use its full features without any limitations. Honestech Vhs To DVD 4.0 software is a video conversion solution that can capture video from any analog source and burn it to DVD or save it as a file. It also has features such as video editing, audio recording, Blu-ray support, and HD editing. Honestech Vhs To DVD 4.0 software has received mixed reviews from users who have tried it, and it has some alternatives that can do similar tasks with different features and prices. Honestech Vhs To DVD 4.0 software can be downloaded from various sources online, but it is important to download it from trusted and official sources only. Honestech Vhs To DVD 4.0 Serial Keygen is a reliable and safe tool that can activate Honestech Vhs To DVD 4.0 software without any hassle.

    679dcb208e
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/720p Dual Audio Movies Galti Sirf Tumhari TOP.md b/spaces/tioseFevbu/cartoon-converter/scripts/720p Dual Audio Movies Galti Sirf Tumhari TOP.md deleted file mode 100644 index 14851f394f9e7e8bf07ec80d0ed75ad9586316d0..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/720p Dual Audio Movies Galti Sirf Tumhari TOP.md +++ /dev/null @@ -1,17 +0,0 @@ -
    -

    Galti Sirf Tumhari: A Thriller Film Starring Poonam Pandey and Navi Bhangu

    -

    Galti Sirf Tumhari (GST) is a 2017 Hindi thriller film directed by Suryakant Tyagi and produced by Sarika S. Sanjot. The film stars Poonam Pandey, Navi Bhangu, Manisha Thakur and others in the lead roles. The film revolves around a group of friends who fall in love with each other, but soon find themselves in a web of deceit and revenge.

    -

    The film was released on YouTube by Shemaroo in 2019 and has garnered over 229K views as of April 2023. The film is available in 720p resolution and dual audio (Hindi and English) for the viewers who prefer to watch it in different languages. The film has a runtime of 1 hour and 36 minutes and is rated 18+ for its adult content and violence.

    -

    720p Dual Audio Movies Galti Sirf Tumhari


    DOWNLOAD >>>>> https://urlcod.com/2uHvKS



    -

    If you are looking for a thrilling and suspenseful film to watch online, you can check out GST - Galti Sirf Tumhari on YouTube[^1^]. You can also listen to the soundtrack of the film on SoundCloud[^2^] [^3^]. The film has received mixed reviews from the critics and the audience, but it is worth a watch for the fans of Poonam Pandey and Navi Bhangu.

    - -

    The plot of GST - Galti Sirf Tumhari follows the lives of Shyam (Navi Bhangu), a successful businessman, and his wife Priya (Poonam Pandey), a model and actress. They have a happy marriage until Priya gets involved with a film director named Raj (Sunil Thappa), who promises to make her a star. Raj seduces Priya and convinces her to leave Shyam for him. However, Raj has a hidden agenda and plans to blackmail Priya with a sex tape he secretly recorded.

    -

    Meanwhile, Shyam's friends also face troubles in their love lives. Ravi (Ravi Yadav) is a flirtatious photographer who cheats on his girlfriend Manisha (Manisha Thakur) with various models. Manisha finds out about his infidelity and decides to take revenge on him. She hires a hitman to kill Ravi, but things go wrong when the hitman accidentally kills Ravi's brother instead. Manisha then becomes the prime suspect in the murder case.

    -

    The film takes a twist when Shyam discovers Priya's affair with Raj and decides to confront them. He finds out that Raj is not only cheating on Priya with another actress, but also has a criminal record of extortion and rape. Shyam decides to expose Raj's crimes and save Priya from his clutches. However, he faces a lot of obstacles and dangers along the way. Will Shyam be able to reunite with Priya? Will Manisha be able to prove her innocence? Will Raj get away with his evil deeds? Watch GST - Galti Sirf Tumhari to find out.

    - -

    GST - Galti Sirf Tumhari is a film that tries to explore the dark side of love and relationships. The film has a lot of twists and turns that keep the viewers hooked till the end. The film also has some bold scenes and dialogues that may appeal to some sections of the audience. However, the film also suffers from a weak script, poor direction, and mediocre performances. The film fails to create an impact or deliver a message.

    -

    The film has received mostly negative reviews from the critics and the audience. The film has been criticized for its poor execution, lack of originality, and lack of logic. The film has also been accused of being a cheap publicity stunt by Poonam Pandey, who is known for her controversial and provocative acts. The film has been rated 4/10 by IMDb[^1^] and 0/5 by Bollywood Hungama[^2^]. The film has also been ignored by the box office and has failed to recover its budget of 20 million rupees.

    -

    GST - Galti Sirf Tumhari is a film that could have been a decent thriller if it had a better script, direction, and acting. The film has some potential but it is wasted by its poor execution and lack of substance. The film is not recommended for anyone who is looking for a quality cinema experience. The film is only for those who are fans of Poonam Pandey and Navi Bhangu and who do not mind watching a low-budget and low-quality film.

    -

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/AUTODATA 8.45 Crack FULL Crack.md b/spaces/tioseFevbu/cartoon-converter/scripts/AUTODATA 8.45 Crack FULL Crack.md deleted file mode 100644 index e3b1478d1e788e3fd568c653ef47a6162328dbb8..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/AUTODATA 8.45 Crack FULL Crack.md +++ /dev/null @@ -1,135 +0,0 @@ - -

    AUTODATA 8.45 Crack FULL crack: Everything You Need to Know

    -

    If you are looking for a comprehensive and reliable software for automotive diagnostics, repair, and maintenance, you might have heard of AUTODATA. This software is one of the most popular and widely used tools in the automotive industry, providing access to a vast database of technical information, diagrams, wiring schematics, service schedules, diagnostic trouble codes, and more.

    -

    However, AUTODATA is not a cheap software, and it requires a subscription fee to use its full features. That's why some people resort to using cracks, which are modified versions of the software that bypass the security measures and allow unlimited access without paying anything. But is this a good idea? And how can you get and use AUTODATA 8.45 Crack FULL crack?

    -

    AUTODATA 8.45 Crack FULL crack


    Download ……… https://urlcod.com/2uHx6d



    -

    In this article, we will answer these questions and more, giving you everything you need to know about AUTODATA 8.45 Crack FULL crack. We will explain what AUTODATA is and why you need it, what a crack is and why you need it, how to download and install AUTODATA 8.45 Crack FULL crack, how to use it, and what are the potential risks and benefits of using it. We will also provide you with some frequently asked questions at the end of the article.

    -

    What is AUTODATA and why do you need it?

    -

    AUTODATA is a software that provides technical information for automotive professionals. It covers over 40,000 vehicle models from over 80 manufacturers worldwide, including cars, motorcycles, trucks, buses, vans, and more. It offers data on engine management systems, ABS, airbags, immobilizers, climate control systems, service indicators, wiring diagrams, component testing procedures, diagnostic trouble codes, service schedules, torque settings, repair times, key programming instructions, wheel alignment data, tire pressure monitoring systems, and more.

    -

    AUTODATA is designed to help automotive technicians diagnose, repair, and maintain vehicles faster and easier. It allows them to access the latest information from the manufacturers, follow the recommended procedures and standards, avoid common mistakes and errors, save time and money on repairs, improve customer satisfaction and loyalty, and stay updated on the latest technologies and trends in the automotive industry.

    -

    AUTODATA features and benefits

    -

    Some of the main features and benefits of AUTODATA are:

    -
      -
    • It provides accurate and up-to-date information on over 40,000 vehicle models from over 80 manufacturers worldwide.
    • It covers all aspects of automotive diagnostics, repair, and maintenance, including engine management systems, ABS, airbags, immobilizers, climate control systems, service indicators, wiring diagrams, component testing procedures, diagnostic trouble codes, service schedules, torque settings, repair times, key programming instructions, wheel alignment data, tire pressure monitoring systems, and more.
    • It has a user-friendly interface that allows easy navigation through the data.
    • It has a search function that allows finding the information by vehicle model or system.
    • It has a print function that allows printing the information for reference or documentation purposes.
    • It has an update function that allows downloading the latest data from the internet.
    • It has a support function that allows contacting the technical support team for assistance or feedback.
    -

    AUTODATA system requirements and compatibility

    -

    To run AUTODATA, you need a computer that meets the following system requirements:

    - Operating system: Windows 7, 8, or 10 (32-bit or 64-bit)
    - Processor: Intel Pentium 4 or AMD Athlon 64 or higher
    - Memory: 2 GB RAM or more
    - Hard disk space: 10 GB or more
    - Internet connection: Broadband or higher
    - Screen resolution: 1024 x 768 or higher
    - DVD drive: Required for installation

    AUTODATA is compatible with most diagnostic tools and interfaces that use the OBD-II protocol, such as ELM327, KWP2000, CAN, J1850, ISO9141, and more. However, some features may not work with some vehicles or systems, depending on the manufacturer's specifications and limitations. You can check the compatibility of your vehicle and diagnostic tool with AUTODATA by using the online compatibility checker.
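    The OBD-II support mentioned above is worth a brief illustration. OBD-II is a simple request/response protocol, so a generic adapter such as an ELM327 can be driven from a few lines of code. The sketch below is only a generic example of talking to such an adapter, not part of AUTODATA or its API; the serial port name, baud rate, and the use of the third-party pyserial package are assumptions you would adjust for your own hardware.

    ```python
    # Minimal sketch: read stored diagnostic trouble codes (DTCs) through an
    # ELM327-style OBD-II serial adapter. Port name and baud rate are assumptions.
    import serial  # third-party package: pyserial

    def send(conn, cmd):
        """Send one command and return the adapter's reply up to the '>' prompt."""
        conn.write((cmd + "\r").encode("ascii"))
        reply = conn.read_until(b">")  # an ELM327 terminates every response with '>'
        return reply.decode("ascii", errors="ignore").strip()

    with serial.Serial("COM3", 38400, timeout=2) as obd:  # hypothetical port name
        print(send(obd, "ATZ"))    # reset the adapter
        print(send(obd, "ATE0"))   # turn command echo off
        print(send(obd, "ATSP0"))  # auto-detect the vehicle's OBD protocol
        print(send(obd, "03"))     # OBD-II mode 03: request stored trouble codes
    ```

    An application such as AUTODATA would sit on top of this kind of low-level exchange, mapping the returned codes to its database of fault descriptions.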

    What is a crack and why do you need it?

    -

    A crack is a modified version of a software that bypasses the security measures and allows unlimited access without paying anything. A crack can be a file, a program, a code, a patch, a keygen, or a combination of these. A crack can be used to activate, register, unlock, or hack a software.

    -

    Some people use cracks to get access to paid software for free, either because they cannot afford it, they do not want to pay for it, they want to test it before buying it, they want to use it for educational purposes, or they want to use it for illegal purposes. However, using cracks is not without risks and consequences.

    -

    -

    Crack definition and types

    -

    There are different types of cracks, depending on how they work and what they do. Some of the most common types are:

    -
      -
    • Activation crack: A crack that activates a software by generating a valid serial number or license key.
    • -
    • Registration crack: A crack that registers a software by entering a fake name and email address.
    • -
    • Unlock crack: A crack that unlocks a software by removing the trial period or the feature limitations.
    • -
    • Hack crack: A crack that hacks a software by changing its code or behavior.
    • -
    • Patch crack: A crack that patches a software by replacing or modifying some of its files.
    • -
    • Keygen crack: A crack that generates a keygen, which is a program that creates serial numbers or license keys for a software.
    • -
    -

    Crack advantages and disadvantages

    -

    Some of the advantages of using cracks are:

    -
      -
    • You can get access to paid software for free.
    • -
    • You can use the full features of the software without any restrictions.
    • -
    • You can use the software offline without any internet connection.
    • -
    • You can use the software on multiple devices without any limit.
    • -
    -

    Some of the disadvantages of using cracks are:

    -
      -
    • You can get infected with malware, viruses, spyware, ransomware, trojans, worms, or other malicious programs that can harm your computer or steal your data.
    • -
    • You can get exposed to legal issues, such as lawsuits, fines, penalties, or even jail time for violating the intellectual property rights of the software developers or distributors.
    • -
    • You can get banned from online services, such as updates, support, forums, communities, or multiplayer modes that require authentication or verification from the software servers.
    • -
    • You can get poor performance, stability, compatibility, or functionality issues with the software due to bugs, errors, glitches, crashes, and similar problems.
    • You can get outdated information or data from the software due to lack of updates or patches from the official sources.
    • -

    Crack legality and ethics

    -

    Using cracks is illegal and unethical in most countries and jurisdictions. It violates the intellectual property rights of the software developers or distributors, who invest time, money, and effort to create and distribute the software. It also deprives them of the revenue and profit they deserve for their work. It also harms the software industry and the economy, as it reduces the incentive and motivation for innovation and development.

    -

    Using cracks is also unfair and disrespectful to the software users who pay for the software legitimately and follow the terms and conditions of the license agreement. It also creates an unfair competition and a distorted market for the software products and services.

    -

    Using cracks is also risky and irresponsible for yourself and others, as it exposes you to potential malware, legal issues, performance issues, or data loss. It also compromises the quality and reliability of the software, as it prevents you from getting the latest updates, patches, or support from the official sources.

    -

    How to download and install AUTODATA 8.45 Crack FULL crack?

    -

    If you still want to download and install AUTODATA 8.45 Crack FULL crack, despite the risks and consequences, you need to follow these steps:

    -

    Download sources and links

    -

    There are many websites and platforms that offer AUTODATA 8.45 Crack FULL crack for free download, such as torrent sites, file-sharing sites, crack sites, or forums. However, not all of them are trustworthy or safe. Some of them may contain fake, corrupted, or infected files that can damage your computer or steal your data.

    -

    To avoid these problems, you need to be careful and selective when choosing where to download AUTODATA 8.45 Crack FULL crack from. You need to check the reputation, reviews, ratings, comments, feedback, or testimonials of the download sources and links before clicking on them. You also need to use a reliable antivirus or anti-malware program to scan the downloaded files before opening or running them.

    -

    Here are some examples of download sources and links that claim to offer AUTODATA 8.45 Crack FULL crack for free:

    -
      -
    • [AUTODATA 8.45 Crack FULL crack torrent]
    • -
    • [AUTODATA 8.45 Crack FULL crack direct download]
    • -
    • [AUTODATA 8.45 Crack FULL crack keygen]
    • -
    -

    Note: We do not endorse or recommend any of these download sources or links. Use them at your own risk.

    Installation steps and tips

    -

    After downloading AUTODATA 8.45 Crack FULL crack from a reliable source, you need to install it on your computer. Here are the steps and tips to do so:

    -
      -
    1. Extract the downloaded file using a program like WinRAR or 7-Zip.
    2. -
    3. Run the setup.exe file as administrator and follow the instructions on the screen.
    4. -
    5. Choose the destination folder where you want to install AUTODATA.
    6. -
    7. Wait for the installation process to complete.
    8. -
    9. Copy the crack file from the crack folder and paste it into the installation folder, replacing the original file.
    10. -
    11. Run AUTODATA as administrator and enjoy the full features.
    12. -
    -

    Note: You may need to disable your antivirus or firewall temporarily during the installation or activation process, as they may interfere with the crack or detect it as a threat. However, this may also expose your computer to malware or viruses, so be careful and cautious.

    -

    Troubleshooting and errors

    -

    Sometimes, you may encounter some problems or errors when downloading, installing, or using AUTODATA 8.45 Crack FULL crack. Some of the common issues and solutions are:

    -
      -
    • The download link is broken or expired: Try to find another download source or link that works.
    • -
    • The downloaded file is corrupted or incomplete: Try to download the file again or use a different program to extract it.
    • -
    • The installation process fails or freezes: Try to run the setup.exe file as administrator or in compatibility mode.
    • -
    • The crack file is missing or invalid: Try to download the crack file again or use a different crack version.
    • -
    • The software does not run or crashes: Try to run the software as administrator or in compatibility mode.
    • -
    • The software does not recognize your vehicle or diagnostic tool: Try to update your vehicle or diagnostic tool firmware or drivers, or check the compatibility with AUTODATA.
    • -
    -

    If none of these solutions work, you may need to contact the technical support team of the download source or link, or look for online forums or communities where other users may have faced similar issues and found solutions. However, do not expect any official support from AUTODATA, as they do not endorse or authorize the use of cracks.

    How to use AUTODATA 8.45 Crack FULL crack?

    -

    After installing and activating AUTODATA 8.45 Crack FULL crack, you can start using it to diagnose, repair, and maintain your vehicles. Here are some tips and instructions on how to use it:

    -

    User interface and navigation

    -

    When you launch AUTODATA, you will see the main screen, which consists of the following elements:

    -
      -
    • The menu bar, which contains the options for file, edit, view, tools, help, and exit.
    • -
    • The toolbar, which contains the icons for search, print, update, support, and settings.
    • -
    • The vehicle selection panel, which allows you to select the vehicle model or system you want to work on.
    • -
    • The data display panel, which shows the information and data related to the selected vehicle model or system.
    • -
    -

    To navigate through the data, you can use the following methods:

    -
      -
    • Use the search function to find the information by entering a keyword or a phrase.
    • -
    • Use the tree view to browse the information by expanding or collapsing the categories and subcategories.
    • -
    • Use the tabs to switch between different types of information, such as technical data, diagrams, procedures, codes, schedules, etc.
    • -
    • Use the links to jump to related information or external sources.
    • -
    -

    Basic functions and operations

    -

    To perform basic functions and operations with AUTODATA, you can use the following methods:

    -
      -
    • Use the print function to print the information for reference or documentation purposes. You can choose to print the whole page, the selected area, or the current tab.
    • -
    • Use the update function to download the latest data from the internet. You can choose to update automatically or manually.
    • -
    • Use the support function to contact the technical support team for assistance or feedback. You can choose to send an email or call a phone number.
    • -
    • Use the settings function to customize your preferences and options for AUTODATA. You can choose to change the language, units, fonts, colors, etc.
    • -
    -

    Advanced features and settings

    -

    To use advanced features and settings with AUTODATA, you can use the following methods:

    -
      -
    • Use the diagnostic tool interface function to connect your diagnostic tool or interface with AUTODATA. You can choose to use a USB cable or a Bluetooth connection.
    • -
    • Use the diagnostic trouble code function to read and clear the diagnostic trouble codes from your vehicle's system. You can also get detailed information and solutions for each code.
    • -
    • Use the component testing function to test and measure various components of your vehicle's system. You can also get step-by-step instructions and diagrams for each component.
    • -
    • Use the service schedule function to check and follow the recommended service schedule for your vehicle. You can also get reminders and alerts for each service interval.
    • -
    -

    Conclusion and FAQs

    -

    In conclusion, AUTODATA 8.45 Crack FULL crack is a modified version of AUTODATA that allows you to access its full features without paying anything. However, using it is illegal, unethical, risky, and irresponsible. It violates the intellectual property rights of AUTODATA developers and distributors, harms the software industry and economy, exposes you to malware and legal issues, and compromises the quality and reliability of AUTODATA. Therefore, we do not recommend or encourage you to use it. Instead, we suggest you to buy a legitimate subscription of AUTODATA from its official website or authorized dealers.

    -

    If you have any questions about AUTODATA 8.45 Crack FULL crack or AUTODATA in general, you may find some answers in these frequently asked questions:

    -

    Q: Is AUTODATA 8.45 Crack FULL crack safe?

    -

    A: No, it is not safe. It may contain malware or viruses that can harm your computer or steal your data. It may also expose you to legal issues or performance issues with AUTODATA.

    -

    Q: Is AUTODATA 8.45 Crack FULL crack legal?

    -

    A: No, it is not legal. It violates the intellectual property rights of AUTODATA developers and distributors. It also deprives them of their revenue and profit for their work. It may also result in lawsuits, fines, penalties, or even jail time for you.

    -

    Q: Is AUTODATA 8.45 Crack FULL crack ethical?

    -

    A: No, it is not ethical. It is unfair and disrespectful to AUTODATA developers and distributors who invest time, money, and effort to create and distribute AUTOD ATA. It also harms the software industry and economy, as it reduces the incentive and motivation for innovation and development. It also creates an unfair competition and a distorted market for the software products and services.

    -

    Q: Is AUTODATA 8.45 Crack FULL crack worth it?

    -

    A: No, it is not worth it. It may seem like a good deal at first, but it comes with many risks and consequences that outweigh the benefits. It may damage your computer or data, get you into legal trouble, or cause you to lose trust and reputation with your customers or peers. It may also prevent you from getting the best out of AUTODATA, as it may not work properly or have outdated information or data.

    -

    Q: How can I get a legitimate subscription of AUTODATA?

    -

    A: You can get a legitimate subscription of AUTODATA from its official website or authorized dealers. You can choose from different plans and options that suit your needs and budget. You can also get a free trial or a demo version of AUTODATA to test it before buying it. By getting a legitimate subscription of AUTODATA, you can enjoy the full features and benefits of AUTODATA, as well as the updates, patches, support, and security from the official sources.

    -

    Q: How can I learn more about AUTODATA?

    -

    A: You can learn more about AUTODATA by visiting its official website or following its social media accounts. You can also watch some videos or read some articles or blogs that showcase or review AUTODATA. You can also join some online forums or communities where other AUTODATA users share their experiences, tips, tricks, or questions about AUTODATA.

    -

    I hope this article has been helpful and informative for you. If you have any comments, suggestions, or feedback, please feel free to share them with me. Thank you for reading and have a great day!

    b2dd77e56b
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Free Ebook Pdf Electronic Communication By Dennis Roddy And John 93 ((INSTALL)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Free Ebook Pdf Electronic Communication By Dennis Roddy And John 93 ((INSTALL)).md deleted file mode 100644 index 437f2fd300c4b01a01b34949465fbaa9b6dd9ca6..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Free Ebook Pdf Electronic Communication By Dennis Roddy And John 93 ((INSTALL)).md +++ /dev/null @@ -1,22 +0,0 @@ -
    -

    How to Download Free Ebook PDF of Electronic Communication by Dennis Roddy and John 93

    -

    Electronic communication is a field of engineering that deals with the transmission and reception of information using various devices and systems. It covers topics such as analog and digital modulation, radio waves, antennas, fiber optics, satellite communication, cellular networks, and more.

    -

    If you are looking for a comprehensive and accessible textbook on electronic communication, you might want to check out Electronic Communication by Dennis Roddy and John 93. This book provides a clear and concise introduction to the principles and applications of electronic communication, with numerous examples, exercises, and problems. It also includes a CD-ROM that contains simulation software and interactive tutorials.

    -

    Free Ebook Pdf Electronic Communication By Dennis Roddy And John 93


    Download File 🆓 https://urlcod.com/2uHyTs



    -

    But how can you get a free ebook PDF of this book? Well, there are several ways to do that. Here are some of them:

    -
      -
    • Search for the book on online libraries or repositories that offer free ebooks. Some examples are Open Library, Project Gutenberg, Internet Archive, and Google Books. You might need to create an account or sign in to access some of these sites.
    • -
    • Look for the book on file-sharing platforms or torrent sites that allow users to upload and download files for free. Some examples are 4shared, Z-Library, Library Genesis, and The Pirate Bay. However, be careful when using these sites as they might contain viruses, malware, or illegal content.
    • -
    • Ask for the book on online forums or communities that share ebooks or academic resources. Some examples are Reddit, Quora, Stack Exchange, and Facebook Groups. You might need to follow some rules or guidelines to request or receive the book.
    • -
    -

    However, before you download any free ebook PDF of Electronic Communication by Dennis Roddy and John 93, you should be aware of the possible risks and consequences. Downloading free ebooks might violate the copyright laws or the terms of service of the original publishers or authors. You might also face legal actions or penalties if you are caught downloading or distributing pirated ebooks. Therefore, you should always respect the intellectual property rights of the creators and support them by buying their books legally.

    -

    If you want to learn more about electronic communication or other related topics, you can also visit some of the following websites:

    -

    -
      -
    • Electronics Tutorials: A website that offers free tutorials on various aspects of electronics and communication.
    • -
    • All About Circuits: A website that provides a free online textbook on communications systems.
    • -
    • Electronics Hub: A website that features articles and projects on communication systems and technologies.
    • -
    -

    I hope this article was helpful for you. If you have any questions or feedback, please leave a comment below.

    e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/tags.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/tags.py deleted file mode 100644 index 9a3d25a71c75c975291cf987001ecd6882d6417d..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/tags.py +++ /dev/null @@ -1,487 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import logging -import platform -import sys -import sysconfig -from importlib.machinery import EXTENSION_SUFFIXES -from typing import ( - Dict, - FrozenSet, - Iterable, - Iterator, - List, - Optional, - Sequence, - Tuple, - Union, - cast, -) - -from . import _manylinux, _musllinux - -logger = logging.getLogger(__name__) - -PythonVersion = Sequence[int] -MacVersion = Tuple[int, int] - -INTERPRETER_SHORT_NAMES: Dict[str, str] = { - "python": "py", # Generic. - "cpython": "cp", - "pypy": "pp", - "ironpython": "ip", - "jython": "jy", -} - - -_32_BIT_INTERPRETER = sys.maxsize <= 2 ** 32 - - -class Tag: - """ - A representation of the tag triple for a wheel. - - Instances are considered immutable and thus are hashable. Equality checking - is also supported. - """ - - __slots__ = ["_interpreter", "_abi", "_platform", "_hash"] - - def __init__(self, interpreter: str, abi: str, platform: str) -> None: - self._interpreter = interpreter.lower() - self._abi = abi.lower() - self._platform = platform.lower() - # The __hash__ of every single element in a Set[Tag] will be evaluated each time - # that a set calls its `.disjoint()` method, which may be called hundreds of - # times when scanning a page of links for packages with tags matching that - # Set[Tag]. Pre-computing the value here produces significant speedups for - # downstream consumers. - self._hash = hash((self._interpreter, self._abi, self._platform)) - - @property - def interpreter(self) -> str: - return self._interpreter - - @property - def abi(self) -> str: - return self._abi - - @property - def platform(self) -> str: - return self._platform - - def __eq__(self, other: object) -> bool: - if not isinstance(other, Tag): - return NotImplemented - - return ( - (self._hash == other._hash) # Short-circuit ASAP for perf reasons. - and (self._platform == other._platform) - and (self._abi == other._abi) - and (self._interpreter == other._interpreter) - ) - - def __hash__(self) -> int: - return self._hash - - def __str__(self) -> str: - return f"{self._interpreter}-{self._abi}-{self._platform}" - - def __repr__(self) -> str: - return f"<{self} @ {id(self)}>" - - -def parse_tag(tag: str) -> FrozenSet[Tag]: - """ - Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances. - - Returning a set is required due to the possibility that the tag is a - compressed tag set. 
- """ - tags = set() - interpreters, abis, platforms = tag.split("-") - for interpreter in interpreters.split("."): - for abi in abis.split("."): - for platform_ in platforms.split("."): - tags.add(Tag(interpreter, abi, platform_)) - return frozenset(tags) - - -def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]: - value = sysconfig.get_config_var(name) - if value is None and warn: - logger.debug( - "Config variable '%s' is unset, Python ABI tag may be incorrect", name - ) - return value - - -def _normalize_string(string: str) -> str: - return string.replace(".", "_").replace("-", "_") - - -def _abi3_applies(python_version: PythonVersion) -> bool: - """ - Determine if the Python version supports abi3. - - PEP 384 was first implemented in Python 3.2. - """ - return len(python_version) > 1 and tuple(python_version) >= (3, 2) - - -def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]: - py_version = tuple(py_version) # To allow for version comparison. - abis = [] - version = _version_nodot(py_version[:2]) - debug = pymalloc = ucs4 = "" - with_debug = _get_config_var("Py_DEBUG", warn) - has_refcount = hasattr(sys, "gettotalrefcount") - # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled - # extension modules is the best option. - # https://github.com/pypa/pip/issues/3383#issuecomment-173267692 - has_ext = "_d.pyd" in EXTENSION_SUFFIXES - if with_debug or (with_debug is None and (has_refcount or has_ext)): - debug = "d" - if py_version < (3, 8): - with_pymalloc = _get_config_var("WITH_PYMALLOC", warn) - if with_pymalloc or with_pymalloc is None: - pymalloc = "m" - if py_version < (3, 3): - unicode_size = _get_config_var("Py_UNICODE_SIZE", warn) - if unicode_size == 4 or ( - unicode_size is None and sys.maxunicode == 0x10FFFF - ): - ucs4 = "u" - elif debug: - # Debug builds can also load "normal" extension modules. - # We can also assume no UCS-4 or pymalloc requirement. - abis.append(f"cp{version}") - abis.insert( - 0, - "cp{version}{debug}{pymalloc}{ucs4}".format( - version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4 - ), - ) - return abis - - -def cpython_tags( - python_version: Optional[PythonVersion] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a CPython interpreter. - - The tags consist of: - - cp-- - - cp-abi3- - - cp-none- - - cp-abi3- # Older Python versions down to 3.2. - - If python_version only specifies a major version then user-provided ABIs and - the 'none' ABItag will be used. - - If 'abi3' or 'none' are specified in 'abis' then they will be yielded at - their normal position and not at the beginning. - """ - if not python_version: - python_version = sys.version_info[:2] - - interpreter = f"cp{_version_nodot(python_version[:2])}" - - if abis is None: - if len(python_version) > 1: - abis = _cpython_abis(python_version, warn) - else: - abis = [] - abis = list(abis) - # 'abi3' and 'none' are explicitly handled later. 
- for explicit_abi in ("abi3", "none"): - try: - abis.remove(explicit_abi) - except ValueError: - pass - - platforms = list(platforms or platform_tags()) - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - if _abi3_applies(python_version): - yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms) - yield from (Tag(interpreter, "none", platform_) for platform_ in platforms) - - if _abi3_applies(python_version): - for minor_version in range(python_version[1] - 1, 1, -1): - for platform_ in platforms: - interpreter = "cp{version}".format( - version=_version_nodot((python_version[0], minor_version)) - ) - yield Tag(interpreter, "abi3", platform_) - - -def _generic_abi() -> Iterator[str]: - abi = sysconfig.get_config_var("SOABI") - if abi: - yield _normalize_string(abi) - - -def generic_tags( - interpreter: Optional[str] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a generic interpreter. - - The tags consist of: - - -- - - The "none" ABI will be added if it was not explicitly provided. - """ - if not interpreter: - interp_name = interpreter_name() - interp_version = interpreter_version(warn=warn) - interpreter = "".join([interp_name, interp_version]) - if abis is None: - abis = _generic_abi() - platforms = list(platforms or platform_tags()) - abis = list(abis) - if "none" not in abis: - abis.append("none") - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - - -def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]: - """ - Yields Python versions in descending order. - - After the latest version, the major-only version will be yielded, and then - all previous versions of that major version. - """ - if len(py_version) > 1: - yield f"py{_version_nodot(py_version[:2])}" - yield f"py{py_version[0]}" - if len(py_version) > 1: - for minor in range(py_version[1] - 1, -1, -1): - yield f"py{_version_nodot((py_version[0], minor))}" - - -def compatible_tags( - python_version: Optional[PythonVersion] = None, - interpreter: Optional[str] = None, - platforms: Optional[Iterable[str]] = None, -) -> Iterator[Tag]: - """ - Yields the sequence of tags that are compatible with a specific version of Python. - - The tags consist of: - - py*-none- - - -none-any # ... if `interpreter` is provided. - - py*-none-any - """ - if not python_version: - python_version = sys.version_info[:2] - platforms = list(platforms or platform_tags()) - for version in _py_interpreter_range(python_version): - for platform_ in platforms: - yield Tag(version, "none", platform_) - if interpreter: - yield Tag(interpreter, "none", "any") - for version in _py_interpreter_range(python_version): - yield Tag(version, "none", "any") - - -def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str: - if not is_32bit: - return arch - - if arch.startswith("ppc"): - return "ppc" - - return "i386" - - -def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]: - formats = [cpu_arch] - if cpu_arch == "x86_64": - if version < (10, 4): - return [] - formats.extend(["intel", "fat64", "fat32"]) - - elif cpu_arch == "i386": - if version < (10, 4): - return [] - formats.extend(["intel", "fat32", "fat"]) - - elif cpu_arch == "ppc64": - # TODO: Need to care about 32-bit PPC for ppc64 through 10.2? 
- if version > (10, 5) or version < (10, 4): - return [] - formats.append("fat64") - - elif cpu_arch == "ppc": - if version > (10, 6): - return [] - formats.extend(["fat32", "fat"]) - - if cpu_arch in {"arm64", "x86_64"}: - formats.append("universal2") - - if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}: - formats.append("universal") - - return formats - - -def mac_platforms( - version: Optional[MacVersion] = None, arch: Optional[str] = None -) -> Iterator[str]: - """ - Yields the platform tags for a macOS system. - - The `version` parameter is a two-item tuple specifying the macOS version to - generate platform tags for. The `arch` parameter is the CPU architecture to - generate platform tags for. Both parameters default to the appropriate value - for the current system. - """ - version_str, _, cpu_arch = platform.mac_ver() - if version is None: - version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2]))) - else: - version = version - if arch is None: - arch = _mac_arch(cpu_arch) - else: - arch = arch - - if (10, 0) <= version and version < (11, 0): - # Prior to Mac OS 11, each yearly release of Mac OS bumped the - # "minor" version number. The major version was always 10. - for minor_version in range(version[1], -1, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=10, minor=minor_version, binary_format=binary_format - ) - - if version >= (11, 0): - # Starting with Mac OS 11, each yearly release bumps the major version - # number. The minor versions are now the midyear updates. - for major_version in range(version[0], 10, -1): - compat_version = major_version, 0 - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=major_version, minor=0, binary_format=binary_format - ) - - if version >= (11, 0): - # Mac OS 11 on x86_64 is compatible with binaries from previous releases. - # Arm64 support was introduced in 11.0, so no Arm binaries from previous - # releases exist. - # - # However, the "universal2" binary format can have a - # macOS version earlier than 11.0 when the x86_64 part of the binary supports - # that version of macOS. 
- if arch == "x86_64": - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - else: - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_format = "universal2" - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - - -def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]: - linux = _normalize_string(sysconfig.get_platform()) - if is_32bit: - if linux == "linux_x86_64": - linux = "linux_i686" - elif linux == "linux_aarch64": - linux = "linux_armv7l" - _, arch = linux.split("_", 1) - yield from _manylinux.platform_tags(linux, arch) - yield from _musllinux.platform_tags(arch) - yield linux - - -def _generic_platforms() -> Iterator[str]: - yield _normalize_string(sysconfig.get_platform()) - - -def platform_tags() -> Iterator[str]: - """ - Provides the platform tags for this installation. - """ - if platform.system() == "Darwin": - return mac_platforms() - elif platform.system() == "Linux": - return _linux_platforms() - else: - return _generic_platforms() - - -def interpreter_name() -> str: - """ - Returns the name of the running interpreter. - """ - name = sys.implementation.name - return INTERPRETER_SHORT_NAMES.get(name) or name - - -def interpreter_version(*, warn: bool = False) -> str: - """ - Returns the version of the running interpreter. - """ - version = _get_config_var("py_version_nodot", warn=warn) - if version: - version = str(version) - else: - version = _version_nodot(sys.version_info[:2]) - return version - - -def _version_nodot(version: PythonVersion) -> str: - return "".join(map(str, version)) - - -def sys_tags(*, warn: bool = False) -> Iterator[Tag]: - """ - Returns the sequence of tag triples for the running interpreter. - - The order of the sequence corresponds to priority order for the - interpreter, from most to least important. - """ - - interp_name = interpreter_name() - if interp_name == "cp": - yield from cpython_tags(warn=warn) - else: - yield from generic_tags() - - if interp_name == "pp": - yield from compatible_tags(interpreter="pp3") - else: - yield from compatible_tags() diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/jaraco/context.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/jaraco/context.py deleted file mode 100644 index 87a4e3dca299c4201ac50f6ef589dc73f1c45576..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/jaraco/context.py +++ /dev/null @@ -1,213 +0,0 @@ -import os -import subprocess -import contextlib -import functools -import tempfile -import shutil -import operator - - -@contextlib.contextmanager -def pushd(dir): - orig = os.getcwd() - os.chdir(dir) - try: - yield dir - finally: - os.chdir(orig) - - -@contextlib.contextmanager -def tarball_context(url, target_dir=None, runner=None, pushd=pushd): - """ - Get a tarball, extract it, change to that directory, yield, then - clean up. - `runner` is the function to invoke commands. - `pushd` is a context manager for changing the directory. 
- """ - if target_dir is None: - target_dir = os.path.basename(url).replace('.tar.gz', '').replace('.tgz', '') - if runner is None: - runner = functools.partial(subprocess.check_call, shell=True) - # In the tar command, use --strip-components=1 to strip the first path and - # then - # use -C to cause the files to be extracted to {target_dir}. This ensures - # that we always know where the files were extracted. - runner('mkdir {target_dir}'.format(**vars())) - try: - getter = 'wget {url} -O -' - extract = 'tar x{compression} --strip-components=1 -C {target_dir}' - cmd = ' | '.join((getter, extract)) - runner(cmd.format(compression=infer_compression(url), **vars())) - with pushd(target_dir): - yield target_dir - finally: - runner('rm -Rf {target_dir}'.format(**vars())) - - -def infer_compression(url): - """ - Given a URL or filename, infer the compression code for tar. - """ - # cheat and just assume it's the last two characters - compression_indicator = url[-2:] - mapping = dict(gz='z', bz='j', xz='J') - # Assume 'z' (gzip) if no match - return mapping.get(compression_indicator, 'z') - - -@contextlib.contextmanager -def temp_dir(remover=shutil.rmtree): - """ - Create a temporary directory context. Pass a custom remover - to override the removal behavior. - """ - temp_dir = tempfile.mkdtemp() - try: - yield temp_dir - finally: - remover(temp_dir) - - -@contextlib.contextmanager -def repo_context(url, branch=None, quiet=True, dest_ctx=temp_dir): - """ - Check out the repo indicated by url. - - If dest_ctx is supplied, it should be a context manager - to yield the target directory for the check out. - """ - exe = 'git' if 'git' in url else 'hg' - with dest_ctx() as repo_dir: - cmd = [exe, 'clone', url, repo_dir] - if branch: - cmd.extend(['--branch', branch]) - devnull = open(os.path.devnull, 'w') - stdout = devnull if quiet else None - subprocess.check_call(cmd, stdout=stdout) - yield repo_dir - - -@contextlib.contextmanager -def null(): - yield - - -class ExceptionTrap: - """ - A context manager that will catch certain exceptions and provide an - indication they occurred. - - >>> with ExceptionTrap() as trap: - ... raise Exception() - >>> bool(trap) - True - - >>> with ExceptionTrap() as trap: - ... pass - >>> bool(trap) - False - - >>> with ExceptionTrap(ValueError) as trap: - ... raise ValueError("1 + 1 is not 3") - >>> bool(trap) - True - - >>> with ExceptionTrap(ValueError) as trap: - ... raise Exception() - Traceback (most recent call last): - ... - Exception - - >>> bool(trap) - False - """ - - exc_info = None, None, None - - def __init__(self, exceptions=(Exception,)): - self.exceptions = exceptions - - def __enter__(self): - return self - - @property - def type(self): - return self.exc_info[0] - - @property - def value(self): - return self.exc_info[1] - - @property - def tb(self): - return self.exc_info[2] - - def __exit__(self, *exc_info): - type = exc_info[0] - matches = type and issubclass(type, self.exceptions) - if matches: - self.exc_info = exc_info - return matches - - def __bool__(self): - return bool(self.type) - - def raises(self, func, *, _test=bool): - """ - Wrap func and replace the result with the truth - value of the trap (True if an exception occurred). - - First, give the decorator an alias to support Python 3.8 - Syntax. - - >>> raises = ExceptionTrap(ValueError).raises - - Now decorate a function that always fails. - - >>> @raises - ... def fail(): - ... 
raise ValueError('failed') - >>> fail() - True - """ - - @functools.wraps(func) - def wrapper(*args, **kwargs): - with ExceptionTrap(self.exceptions) as trap: - func(*args, **kwargs) - return _test(trap) - - return wrapper - - def passes(self, func): - """ - Wrap func and replace the result with the truth - value of the trap (True if no exception). - - First, give the decorator an alias to support Python 3.8 - Syntax. - - >>> passes = ExceptionTrap(ValueError).passes - - Now decorate a function that always fails. - - >>> @passes - ... def fail(): - ... raise ValueError('failed') - - >>> fail() - False - """ - return self.raises(func, _test=operator.not_) - - -class suppress(contextlib.suppress, contextlib.ContextDecorator): - """ - A version of contextlib.suppress with decorator support. - - >>> @suppress(KeyError) - ... def key_error(): - ... {}[''] - >>> key_error() - """ diff --git a/spaces/tomofi/CRAFT-TrOCR/README.md b/spaces/tomofi/CRAFT-TrOCR/README.md deleted file mode 100644 index 07a5d3702f0294d75b840b2bcb92a1147be8940d..0000000000000000000000000000000000000000 --- a/spaces/tomofi/CRAFT-TrOCR/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: CRAFT OCR -emoji: 👁 -colorFrom: pink -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/tomofi/MMOCR/docs/en/tools.md b/spaces/tomofi/MMOCR/docs/en/tools.md deleted file mode 100644 index f42cef2471890976807a34101e548734d5439fd3..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/docs/en/tools.md +++ /dev/null @@ -1,32 +0,0 @@ -# Useful Tools - -We provide some useful tools under `mmocr/tools` directory. - -## Publish a Model - -Before you upload a model to AWS, you may want to -(1) convert the model weights to CPU tensors, (2) delete the optimizer states and -(3) compute the hash of the checkpoint file and append the hash id to the filename. These functionalities could be achieved by `tools/publish_model.py`. - -```shell -python tools/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME} -``` - -For example, - -```shell -python tools/publish_model.py work_dirs/psenet/latest.pth psenet_r50_fpnf_sbn_1x_20190801.pth -``` - -The final output filename will be `psenet_r50_fpnf_sbn_1x_20190801-{hash id}.pth`. - - -## Convert txt annotation to lmdb format -Sometimes, loading a large txt annotation file with multiple workers can cause OOM (out of memory) error. You can convert the file into lmdb format using `tools/data/utils/txt2lmdb.py` and use LmdbLoader in your config to avoid this issue. 
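A dataset section that consumes the converted annotations might look roughly like the sketch below. This is only an illustration: `LmdbLoader` and the exact parser fields depend on the MMOCR version and dataset layout in use.

```python
# Hypothetical recognition dataset config reading the lmdb annotations
# produced by txt2lmdb.py; adjust paths and parser fields to your setup.
train = dict(
    type='OCRDataset',
    img_prefix='data/mixture/Syn90k',
    ann_file='data/mixture/Syn90k/label.lmdb',
    loader=dict(
        type='LmdbLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=False)
```

The conversion command itself is: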
-```bash -python tools/data/utils/txt2lmdb.py -i -o -``` -For example, -```bash -python tools/data/utils/txt2lmdb.py -i data/mixture/Syn90k/label.txt -o data/mixture/Syn90k/label.lmdb -``` diff --git a/spaces/trysem/image-matting-app/ppmatting/models/layers/gca_module.py b/spaces/trysem/image-matting-app/ppmatting/models/layers/gca_module.py deleted file mode 100644 index ba8654efc9bd24de2e127393ad8338d21964e4a5..0000000000000000000000000000000000000000 --- a/spaces/trysem/image-matting-app/ppmatting/models/layers/gca_module.py +++ /dev/null @@ -1,211 +0,0 @@ -# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# The gca code was heavily based on https://github.com/Yaoyi-Li/GCA-Matting -# and https://github.com/open-mmlab/mmediting - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F - -from paddleseg.cvlibs import param_init - - -class GuidedCxtAtten(nn.Layer): - def __init__(self, - out_channels, - guidance_channels, - kernel_size=3, - stride=1, - rate=2): - super().__init__() - - self.kernel_size = kernel_size - self.rate = rate - self.stride = stride - self.guidance_conv = nn.Conv2D( - in_channels=guidance_channels, - out_channels=guidance_channels // 2, - kernel_size=1) - - self.out_conv = nn.Sequential( - nn.Conv2D( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=1, - bias_attr=False), - nn.BatchNorm(out_channels)) - - self.init_weight() - - def init_weight(self): - param_init.xavier_uniform(self.guidance_conv.weight) - param_init.constant_init(self.guidance_conv.bias, value=0.0) - param_init.xavier_uniform(self.out_conv[0].weight) - param_init.constant_init(self.out_conv[1].weight, value=1e-3) - param_init.constant_init(self.out_conv[1].bias, value=0.0) - - def forward(self, img_feat, alpha_feat, unknown=None, softmax_scale=1.): - - img_feat = self.guidance_conv(img_feat) - img_feat = F.interpolate( - img_feat, scale_factor=1 / self.rate, mode='nearest') - - # process unknown mask - unknown, softmax_scale = self.process_unknown_mask(unknown, img_feat, - softmax_scale) - - img_ps, alpha_ps, unknown_ps = self.extract_feature_maps_patches( - img_feat, alpha_feat, unknown) - - self_mask = self.get_self_correlation_mask(img_feat) - - # split tensors by batch dimension; tuple is returned - img_groups = paddle.split(img_feat, 1, axis=0) - img_ps_groups = paddle.split(img_ps, 1, axis=0) - alpha_ps_groups = paddle.split(alpha_ps, 1, axis=0) - unknown_ps_groups = paddle.split(unknown_ps, 1, axis=0) - scale_groups = paddle.split(softmax_scale, 1, axis=0) - groups = (img_groups, img_ps_groups, alpha_ps_groups, unknown_ps_groups, - scale_groups) - - y = [] - - for img_i, img_ps_i, alpha_ps_i, unknown_ps_i, scale_i in zip(*groups): - # conv for compare - similarity_map = self.compute_similarity_map(img_i, img_ps_i) - - gca_score = self.compute_guided_attention_score( - similarity_map, unknown_ps_i, scale_i, self_mask) - - yi = self.propagate_alpha_feature(gca_score, alpha_ps_i) - - y.append(yi) 
- - y = paddle.concat(y, axis=0) # back to the mini-batch - y = paddle.reshape(y, alpha_feat.shape) - - y = self.out_conv(y) + alpha_feat - - return y - - def extract_feature_maps_patches(self, img_feat, alpha_feat, unknown): - - # extract image feature patches with shape: - # (N, img_h*img_w, img_c, img_ks, img_ks) - img_ks = self.kernel_size - img_ps = self.extract_patches(img_feat, img_ks, self.stride) - - # extract alpha feature patches with shape: - # (N, img_h*img_w, alpha_c, alpha_ks, alpha_ks) - alpha_ps = self.extract_patches(alpha_feat, self.rate * 2, self.rate) - - # extract unknown mask patches with shape: (N, img_h*img_w, 1, 1) - unknown_ps = self.extract_patches(unknown, img_ks, self.stride) - unknown_ps = unknown_ps.squeeze(axis=2) # squeeze channel dimension - unknown_ps = unknown_ps.mean(axis=[2, 3], keepdim=True) - - return img_ps, alpha_ps, unknown_ps - - def extract_patches(self, x, kernel_size, stride): - n, c, _, _ = x.shape - x = self.pad(x, kernel_size, stride) - x = F.unfold(x, [kernel_size, kernel_size], strides=[stride, stride]) - x = paddle.transpose(x, (0, 2, 1)) - x = paddle.reshape(x, (n, -1, c, kernel_size, kernel_size)) - - return x - - def pad(self, x, kernel_size, stride): - left = (kernel_size - stride + 1) // 2 - right = (kernel_size - stride) // 2 - pad = (left, right, left, right) - return F.pad(x, pad, mode='reflect') - - def compute_guided_attention_score(self, similarity_map, unknown_ps, scale, - self_mask): - # scale the correlation with predicted scale factor for known and - # unknown area - unknown_scale, known_scale = scale[0] - out = similarity_map * ( - unknown_scale * paddle.greater_than(unknown_ps, - paddle.to_tensor([0.])) + - known_scale * paddle.less_equal(unknown_ps, paddle.to_tensor([0.]))) - # mask itself, self-mask only applied to unknown area - out = out + self_mask * unknown_ps - gca_score = F.softmax(out, axis=1) - - return gca_score - - def propagate_alpha_feature(self, gca_score, alpha_ps): - - alpha_ps = alpha_ps[0] # squeeze dim 0 - if self.rate == 1: - gca_score = self.pad(gca_score, kernel_size=2, stride=1) - alpha_ps = paddle.transpose(alpha_ps, (1, 0, 2, 3)) - out = F.conv2d(gca_score, alpha_ps) / 4. - else: - out = F.conv2d_transpose( - gca_score, alpha_ps, stride=self.rate, padding=1) / 4. 
- - return out - - def compute_similarity_map(self, img_feat, img_ps): - img_ps = img_ps[0] # squeeze dim 0 - # convolve the feature to get correlation (similarity) map - img_ps_normed = img_ps / paddle.clip(self.l2_norm(img_ps), 1e-4) - img_feat = F.pad(img_feat, (1, 1, 1, 1), mode='reflect') - similarity_map = F.conv2d(img_feat, img_ps_normed) - - return similarity_map - - def get_self_correlation_mask(self, img_feat): - _, _, h, w = img_feat.shape - self_mask = F.one_hot( - paddle.reshape(paddle.arange(h * w), (h, w)), - num_classes=int(h * w)) - - self_mask = paddle.transpose(self_mask, (2, 0, 1)) - self_mask = paddle.reshape(self_mask, (1, h * w, h, w)) - - return self_mask * (-1e4) - - def process_unknown_mask(self, unknown, img_feat, softmax_scale): - - n, _, h, w = img_feat.shape - - if unknown is not None: - unknown = unknown.clone() - unknown = F.interpolate( - unknown, scale_factor=1 / self.rate, mode='nearest') - unknown_mean = unknown.mean(axis=[2, 3]) - known_mean = 1 - unknown_mean - unknown_scale = paddle.clip( - paddle.sqrt(unknown_mean / known_mean), 0.1, 10) - known_scale = paddle.clip( - paddle.sqrt(known_mean / unknown_mean), 0.1, 10) - softmax_scale = paddle.concat([unknown_scale, known_scale], axis=1) - else: - unknown = paddle.ones([n, 1, h, w]) - softmax_scale = paddle.reshape( - paddle.to_tensor([softmax_scale, softmax_scale]), (1, 2)) - softmax_scale = paddle.expand(softmax_scale, (n, 2)) - - return unknown, softmax_scale - - @staticmethod - def l2_norm(x): - x = x**2 - x = x.sum(axis=[1, 2, 3], keepdim=True) - return paddle.sqrt(x) diff --git a/spaces/ttt246/brain/Brain/src/model/train_model.py b/spaces/ttt246/brain/Brain/src/model/train_model.py deleted file mode 100644 index 7db9e385b0a36f985f5da692764cdf4dd2c88635..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Brain/src/model/train_model.py +++ /dev/null @@ -1,24 +0,0 @@ -"""train model: -{ - "id": "String", - "data": [{"page_content": "String", "timestamp": 0}], - "status": "created | updated | deleted", -}""" - -from Brain.src.model.requests.request_model import Train - - -class TrainModel: - def __init__(self, train_data: Train): - self.id = train_data.id - self.data = train_data.data - self.status = TrainStatus.UPDATED - - -"""train status: created | updated | deleted""" - - -class TrainStatus: - CREATED = "created" - UPDATED = "updated" - DELETED = "deleted" diff --git a/spaces/tykimos/TarotGPT/app.py b/spaces/tykimos/TarotGPT/app.py deleted file mode 100644 index 256bfac05eed5aaf111cf4b23a3e960988cf9fbf..0000000000000000000000000000000000000000 --- a/spaces/tykimos/TarotGPT/app.py +++ /dev/null @@ -1,41 +0,0 @@ -import openai -import streamlit as st -from streamlit_pills import pills - -st.subheader("AI Assistant : Streamlit + OpenAI: `stream` *argument*") - -# You can also use radio buttons instead -selected = pills("", ["질문", "건강"], ["🎈", "🌈"]) - -user_input = st.text_input("You: ",placeholder = "고민이 무엇입니까?", key="input") -api_key = st.text_input("api_key: ",placeholder = "OpenAI API Key", key="api_key") - -if st.button("Submit", type="primary"): - st.markdown("----") - res_box = st.empty() - - openai.api_key = api_key - - report = [] - # Looping over the response - - for resp in openai.Completion.create( - model="text-davinci-003", - prompt="당신은 타로 전문가입니다. \n1. 하단의 [질문]을 이해한 후 [타로카드목록]에서 임의의 3장의 카드를 고릅니다.\n2. 
[질문]에 맞게 고른 카드를 순서대로 해석한 뒤, 종합적으로 해석한 내용을 깊이있고 친절하게 작성합니다.\n\n[질문]: \"" + user_input + "\"\n\n[타로카드목록] \n{카드 번호}, {카드 이름}, {카드 링크}\n1, The Fool, 9/90/RWS_Tarot_00_Fool \n2, The Magician, d/de/RWS_Tarot_01_Magician \n3, The High Priestess, 8/88/RWS_Tarot_02_High_Priestess \n4, The Empress, d/d2/RWS_Tarot_03_Empress \n5, The Emperor, c/c3/RWS_Tarot_04_Emperor \n6, The Hierophant, 8/8d/RWS_Tarot_05_Hierophant \n7, The Lovers, 3/3a/TheLovers \n8, The Chariot, 9/9b/RWS_Tarot_07_Chariot \n9, Strength, f/f5/RWS_Tarot_08_Strength \n10, The Hermit, 4/4d/RWS_Tarot_09_Hermit \n11, Wheel of Fortune, 3/3c/RWS_Tarot_10_Wheel_of_Fortune \n12, Justice, e/e0/RWS_Tarot_11_Justice \n13, The Hanged Man, 2/2b/RWS_Tarot_12_Hanged_Man \n14, Death, d/d7/RWS_Tarot_13_Death \n15, Temperance, f/f8/RWS_Tarot_14_Temperance \n16, The Devil, 5/55/RWS_Tarot_15_Devil\n17, The Tower, 5/53/RWS_Tarot_16_Tower \n18, The Star, d/db/RWS_Tarot_17_Star \n19, The Moon, 7/7f/RWS_Tarot_18_Moon \n20, The Sun, 1/17/RWS_Tarot_19_Sun \n21, Judgment, d/dd/RWS_Tarot_20_Judgement \n22, The World, f/ff/RWS_Tarot_21_World 2\n3, Ace of Wands, 1/11/Wands01 \n24, Two of Wands, 0/0f/Wands02 \n25, Three of Wands, f/ff/Wands03 \n26, Four of Wands, a/a4/Wands04 \n27, Five of Wands, 9/9d/Wands05 \n28, Six of Wands, 3/3b/Wands06 \n29, Seven of Wands, e/e4/Wands07 \n30, Eight of Wands, 6/6b/Wands08 \n31, Nine of Wands, /4/4d/Tarot_Nine_of_Wands \n32, Ten of Wands, 0/0b/Wands10 \n33, Page of Wands, 6/6a/Wands11 \n34, Knight of Wands, 1/16/Wands12 \n35, Queen of Wands, 0/0d/Wands13 \n36, King of Wands, c/ce/Wands14 \n37, Ace of Cups, 3/36/Cups01 \n38, Two of Cups, f/f8/Cups02 \n39, Three of Cups, 7/7a/Cups03 \n40, Four of Cups, 3/35/Cups04 \n41, Five of Cups, d/d7/Cups05 \n42, Six of Cups, 1/17/Cups06 \n43, Seven of Cups, a/ae/Cups07 \n44, Eight of Cups, 6/60/Cups08 \n45, Nine of Cups, 2/24/Cups09 \n46, Ten of Cups, 8/84/Cups10 \n47, Page of Cups, a/ad/Cups11 \n48, Knight of Cups, f/fa/Cups12 \n49, Queen of Cups, 6/62/Cups13 \n50, King of Cups, 0/04/Cups14 \n51, Ace of Swords, 1/1a/Swords01 \n52, Two of Swords, 9/9e/Swords02 \n53, Three of Swords, 0/02/Swords03 \n54, Four of Swords, b/bf/Swords04 \n55, Five of Swords, 2/23/Swords05 \n56, Six of Swords, 2/29/Swords06 \n57, Seven of Swords, 3/34/Swords07 \n58, Eight of Swords, a/a7/Swords08 \n59, Nine of Swords, 2/2f/Swords09 \n60, Ten of Swords, d/d4/Swords10 \n61, Page of Swords, 4/4c/Swords11 \n62, Knight of Swords, b/b0/Swords12 \n63, Queen of Swords, d/d4/Swords13 \n64, King of Swords, 3/33/Swords14 \n65, Ace of Pentacles, f/fd/Pents01 \n66, Two of Pentacles, 9/9f/Pents02 \n67, Three of Pentacles, 4/42/Pents03 \n68, Four of Pentacles, 3/35/Pents04 \n69, Five of Pentacles, 9/96/Pents05 \n70, Six of Pentacles, a/a6/Pents06 \n71, Seven of Pentacles, 6/6a/Pents07 \n72, Eight of Pentacles, 4/49/Pents08 \n73, Nine of Pentacles, f/f0/Pents09 \n74, Ten of Pentacles, 4/42/Pents10 \n75, Page of Pentacles, e/ec/Pents11 \n76, Knight of Pentacles, d/d5/Pents12 \n77, Queen of Pentacles, 8/88/Pents13 \n78, King of Pentacles, 1/1c/Pents14\n\n해석:", - temperature=0.3, - max_tokens=1607, - top_p=1, - frequency_penalty=0, - presence_penalty=0, - stream = True - ): - # join method to concatenate the elements of the list - # into a single string, - # then strip out any empty strings - report.append(resp.choices[0].text) - result = "".join(report).strip() - #result = result.replace("\n", "") - res_box = None - res_box.markdown({result}) - -st.markdown("----") \ No newline at end of file diff --git 
a/spaces/uSerNameDDHL/bingo/src/components/chat-notification.tsx b/spaces/uSerNameDDHL/bingo/src/components/chat-notification.tsx deleted file mode 100644 index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/src/components/chat-notification.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
    - 你已达到每日最大发送消息次数,请更换账号或隔一天后重试 -
    - ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
    - 当前话题已中止,请点 - 重新开始 - 开启新的对话 -
    - ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
    -
    -
    -
    -
    - error - {getAction(message.error, () => bot.resetConversation())} -
    -
    -
    -
    -
    - ) -} diff --git a/spaces/vaibhavarduino/anime-plus/e4e/training/coach.py b/spaces/vaibhavarduino/anime-plus/e4e/training/coach.py deleted file mode 100644 index 4c99da79e699c9362e02c289cd1425848d331d0b..0000000000000000000000000000000000000000 --- a/spaces/vaibhavarduino/anime-plus/e4e/training/coach.py +++ /dev/null @@ -1,437 +0,0 @@ -import os -import random -import matplotlib -import matplotlib.pyplot as plt - -matplotlib.use('Agg') - -import torch -from torch import nn, autograd -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.nn.functional as F - -from utils import common, train_utils -from criteria import id_loss, moco_loss -from configs import data_configs -from datasets.images_dataset import ImagesDataset -from criteria.lpips.lpips import LPIPS -from models.psp import pSp -from models.latent_codes_pool import LatentCodesPool -from models.discriminator import LatentCodesDiscriminator -from models.encoders.psp_encoders import ProgressiveStage -from training.ranger import Ranger - -random.seed(0) -torch.manual_seed(0) - - -class Coach: - def __init__(self, opts, prev_train_checkpoint=None): - self.opts = opts - - self.global_step = 0 - - self.device = 'cuda:0' - self.opts.device = self.device - # Initialize network - self.net = pSp(self.opts).to(self.device) - - # Initialize loss - if self.opts.lpips_lambda > 0: - self.lpips_loss = LPIPS(net_type=self.opts.lpips_type).to(self.device).eval() - if self.opts.id_lambda > 0: - if 'ffhq' in self.opts.dataset_type or 'celeb' in self.opts.dataset_type: - self.id_loss = id_loss.IDLoss().to(self.device).eval() - else: - self.id_loss = moco_loss.MocoLoss(opts).to(self.device).eval() - self.mse_loss = nn.MSELoss().to(self.device).eval() - - # Initialize optimizer - self.optimizer = self.configure_optimizers() - - # Initialize discriminator - if self.opts.w_discriminator_lambda > 0: - self.discriminator = LatentCodesDiscriminator(512, 4).to(self.device) - self.discriminator_optimizer = torch.optim.Adam(list(self.discriminator.parameters()), - lr=opts.w_discriminator_lr) - self.real_w_pool = LatentCodesPool(self.opts.w_pool_size) - self.fake_w_pool = LatentCodesPool(self.opts.w_pool_size) - - # Initialize dataset - self.train_dataset, self.test_dataset = self.configure_datasets() - self.train_dataloader = DataLoader(self.train_dataset, - batch_size=self.opts.batch_size, - shuffle=True, - num_workers=int(self.opts.workers), - drop_last=True) - self.test_dataloader = DataLoader(self.test_dataset, - batch_size=self.opts.test_batch_size, - shuffle=False, - num_workers=int(self.opts.test_workers), - drop_last=True) - - # Initialize logger - log_dir = os.path.join(opts.exp_dir, 'logs') - os.makedirs(log_dir, exist_ok=True) - self.logger = SummaryWriter(log_dir=log_dir) - - # Initialize checkpoint dir - self.checkpoint_dir = os.path.join(opts.exp_dir, 'checkpoints') - os.makedirs(self.checkpoint_dir, exist_ok=True) - self.best_val_loss = None - if self.opts.save_interval is None: - self.opts.save_interval = self.opts.max_steps - - if prev_train_checkpoint is not None: - self.load_from_train_checkpoint(prev_train_checkpoint) - prev_train_checkpoint = None - - def load_from_train_checkpoint(self, ckpt): - print('Loading previous training data...') - self.global_step = ckpt['global_step'] + 1 - self.best_val_loss = ckpt['best_val_loss'] - self.net.load_state_dict(ckpt['state_dict']) - - if self.opts.keep_optimizer: - self.optimizer.load_state_dict(ckpt['optimizer']) - if 
self.opts.w_discriminator_lambda > 0: - self.discriminator.load_state_dict(ckpt['discriminator_state_dict']) - self.discriminator_optimizer.load_state_dict(ckpt['discriminator_optimizer_state_dict']) - if self.opts.progressive_steps: - self.check_for_progressive_training_update(is_resume_from_ckpt=True) - print(f'Resuming training from step {self.global_step}') - - def train(self): - self.net.train() - if self.opts.progressive_steps: - self.check_for_progressive_training_update() - while self.global_step < self.opts.max_steps: - for batch_idx, batch in enumerate(self.train_dataloader): - loss_dict = {} - if self.is_training_discriminator(): - loss_dict = self.train_discriminator(batch) - x, y, y_hat, latent = self.forward(batch) - loss, encoder_loss_dict, id_logs = self.calc_loss(x, y, y_hat, latent) - loss_dict = {**loss_dict, **encoder_loss_dict} - self.optimizer.zero_grad() - loss.backward() - self.optimizer.step() - - # Logging related - if self.global_step % self.opts.image_interval == 0 or ( - self.global_step < 1000 and self.global_step % 25 == 0): - self.parse_and_log_images(id_logs, x, y, y_hat, title='images/train/faces') - if self.global_step % self.opts.board_interval == 0: - self.print_metrics(loss_dict, prefix='train') - self.log_metrics(loss_dict, prefix='train') - - # Validation related - val_loss_dict = None - if self.global_step % self.opts.val_interval == 0 or self.global_step == self.opts.max_steps: - val_loss_dict = self.validate() - if val_loss_dict and (self.best_val_loss is None or val_loss_dict['loss'] < self.best_val_loss): - self.best_val_loss = val_loss_dict['loss'] - self.checkpoint_me(val_loss_dict, is_best=True) - - if self.global_step % self.opts.save_interval == 0 or self.global_step == self.opts.max_steps: - if val_loss_dict is not None: - self.checkpoint_me(val_loss_dict, is_best=False) - else: - self.checkpoint_me(loss_dict, is_best=False) - - if self.global_step == self.opts.max_steps: - print('OMG, finished training!') - break - - self.global_step += 1 - if self.opts.progressive_steps: - self.check_for_progressive_training_update() - - def check_for_progressive_training_update(self, is_resume_from_ckpt=False): - for i in range(len(self.opts.progressive_steps)): - if is_resume_from_ckpt and self.global_step >= self.opts.progressive_steps[i]: # Case checkpoint - self.net.encoder.set_progressive_stage(ProgressiveStage(i)) - if self.global_step == self.opts.progressive_steps[i]: # Case training reached progressive step - self.net.encoder.set_progressive_stage(ProgressiveStage(i)) - - def validate(self): - self.net.eval() - agg_loss_dict = [] - for batch_idx, batch in enumerate(self.test_dataloader): - cur_loss_dict = {} - if self.is_training_discriminator(): - cur_loss_dict = self.validate_discriminator(batch) - with torch.no_grad(): - x, y, y_hat, latent = self.forward(batch) - loss, cur_encoder_loss_dict, id_logs = self.calc_loss(x, y, y_hat, latent) - cur_loss_dict = {**cur_loss_dict, **cur_encoder_loss_dict} - agg_loss_dict.append(cur_loss_dict) - - # Logging related - self.parse_and_log_images(id_logs, x, y, y_hat, - title='images/test/faces', - subscript='{:04d}'.format(batch_idx)) - - # For first step just do sanity test on small amount of data - if self.global_step == 0 and batch_idx >= 4: - self.net.train() - return None # Do not log, inaccurate in first batch - - loss_dict = train_utils.aggregate_loss_dict(agg_loss_dict) - self.log_metrics(loss_dict, prefix='test') - self.print_metrics(loss_dict, prefix='test') - - self.net.train() - return 
loss_dict - - def checkpoint_me(self, loss_dict, is_best): - save_name = 'best_model.pt' if is_best else 'iteration_{}.pt'.format(self.global_step) - save_dict = self.__get_save_dict() - checkpoint_path = os.path.join(self.checkpoint_dir, save_name) - torch.save(save_dict, checkpoint_path) - with open(os.path.join(self.checkpoint_dir, 'timestamp.txt'), 'a') as f: - if is_best: - f.write( - '**Best**: Step - {}, Loss - {:.3f} \n{}\n'.format(self.global_step, self.best_val_loss, loss_dict)) - else: - f.write('Step - {}, \n{}\n'.format(self.global_step, loss_dict)) - - def configure_optimizers(self): - params = list(self.net.encoder.parameters()) - if self.opts.train_decoder: - params += list(self.net.decoder.parameters()) - else: - self.requires_grad(self.net.decoder, False) - if self.opts.optim_name == 'adam': - optimizer = torch.optim.Adam(params, lr=self.opts.learning_rate) - else: - optimizer = Ranger(params, lr=self.opts.learning_rate) - return optimizer - - def configure_datasets(self): - if self.opts.dataset_type not in data_configs.DATASETS.keys(): - Exception('{} is not a valid dataset_type'.format(self.opts.dataset_type)) - print('Loading dataset for {}'.format(self.opts.dataset_type)) - dataset_args = data_configs.DATASETS[self.opts.dataset_type] - transforms_dict = dataset_args['transforms'](self.opts).get_transforms() - train_dataset = ImagesDataset(source_root=dataset_args['train_source_root'], - target_root=dataset_args['train_target_root'], - source_transform=transforms_dict['transform_source'], - target_transform=transforms_dict['transform_gt_train'], - opts=self.opts) - test_dataset = ImagesDataset(source_root=dataset_args['test_source_root'], - target_root=dataset_args['test_target_root'], - source_transform=transforms_dict['transform_source'], - target_transform=transforms_dict['transform_test'], - opts=self.opts) - print("Number of training samples: {}".format(len(train_dataset))) - print("Number of test samples: {}".format(len(test_dataset))) - return train_dataset, test_dataset - - def calc_loss(self, x, y, y_hat, latent): - loss_dict = {} - loss = 0.0 - id_logs = None - if self.is_training_discriminator(): # Adversarial loss - loss_disc = 0. 
- dims_to_discriminate = self.get_dims_to_discriminate() if self.is_progressive_training() else \ - list(range(self.net.decoder.n_latent)) - - for i in dims_to_discriminate: - w = latent[:, i, :] - fake_pred = self.discriminator(w) - loss_disc += F.softplus(-fake_pred).mean() - loss_disc /= len(dims_to_discriminate) - loss_dict['encoder_discriminator_loss'] = float(loss_disc) - loss += self.opts.w_discriminator_lambda * loss_disc - - if self.opts.progressive_steps and self.net.encoder.progressive_stage.value != 18: # delta regularization loss - total_delta_loss = 0 - deltas_latent_dims = self.net.encoder.get_deltas_starting_dimensions() - - first_w = latent[:, 0, :] - for i in range(1, self.net.encoder.progressive_stage.value + 1): - curr_dim = deltas_latent_dims[i] - delta = latent[:, curr_dim, :] - first_w - delta_loss = torch.norm(delta, self.opts.delta_norm, dim=1).mean() - loss_dict[f"delta{i}_loss"] = float(delta_loss) - total_delta_loss += delta_loss - loss_dict['total_delta_loss'] = float(total_delta_loss) - loss += self.opts.delta_norm_lambda * total_delta_loss - - if self.opts.id_lambda > 0: # Similarity loss - loss_id, sim_improvement, id_logs = self.id_loss(y_hat, y, x) - loss_dict['loss_id'] = float(loss_id) - loss_dict['id_improve'] = float(sim_improvement) - loss += loss_id * self.opts.id_lambda - if self.opts.l2_lambda > 0: - loss_l2 = F.mse_loss(y_hat, y) - loss_dict['loss_l2'] = float(loss_l2) - loss += loss_l2 * self.opts.l2_lambda - if self.opts.lpips_lambda > 0: - loss_lpips = self.lpips_loss(y_hat, y) - loss_dict['loss_lpips'] = float(loss_lpips) - loss += loss_lpips * self.opts.lpips_lambda - loss_dict['loss'] = float(loss) - return loss, loss_dict, id_logs - - def forward(self, batch): - x, y = batch - x, y = x.to(self.device).float(), y.to(self.device).float() - y_hat, latent = self.net.forward(x, return_latents=True) - if self.opts.dataset_type == "cars_encode": - y_hat = y_hat[:, :, 32:224, :] - return x, y, y_hat, latent - - def log_metrics(self, metrics_dict, prefix): - for key, value in metrics_dict.items(): - self.logger.add_scalar('{}/{}'.format(prefix, key), value, self.global_step) - - def print_metrics(self, metrics_dict, prefix): - print('Metrics for {}, step {}'.format(prefix, self.global_step)) - for key, value in metrics_dict.items(): - print('\t{} = '.format(key), value) - - def parse_and_log_images(self, id_logs, x, y, y_hat, title, subscript=None, display_count=2): - im_data = [] - for i in range(display_count): - cur_im_data = { - 'input_face': common.log_input_image(x[i], self.opts), - 'target_face': common.tensor2im(y[i]), - 'output_face': common.tensor2im(y_hat[i]), - } - if id_logs is not None: - for key in id_logs[i]: - cur_im_data[key] = id_logs[i][key] - im_data.append(cur_im_data) - self.log_images(title, im_data=im_data, subscript=subscript) - - def log_images(self, name, im_data, subscript=None, log_latest=False): - fig = common.vis_faces(im_data) - step = self.global_step - if log_latest: - step = 0 - if subscript: - path = os.path.join(self.logger.log_dir, name, '{}_{:04d}.jpg'.format(subscript, step)) - else: - path = os.path.join(self.logger.log_dir, name, '{:04d}.jpg'.format(step)) - os.makedirs(os.path.dirname(path), exist_ok=True) - fig.savefig(path) - plt.close(fig) - - def __get_save_dict(self): - save_dict = { - 'state_dict': self.net.state_dict(), - 'opts': vars(self.opts) - } - # save the latent avg in state_dict for inference if truncation of w was used during training - if self.opts.start_from_latent_avg: - 
save_dict['latent_avg'] = self.net.latent_avg - - if self.opts.save_training_data: # Save necessary information to enable training continuation from checkpoint - save_dict['global_step'] = self.global_step - save_dict['optimizer'] = self.optimizer.state_dict() - save_dict['best_val_loss'] = self.best_val_loss - if self.opts.w_discriminator_lambda > 0: - save_dict['discriminator_state_dict'] = self.discriminator.state_dict() - save_dict['discriminator_optimizer_state_dict'] = self.discriminator_optimizer.state_dict() - return save_dict - - def get_dims_to_discriminate(self): - deltas_starting_dimensions = self.net.encoder.get_deltas_starting_dimensions() - return deltas_starting_dimensions[:self.net.encoder.progressive_stage.value + 1] - - def is_progressive_training(self): - return self.opts.progressive_steps is not None - -# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Discriminator ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # - - def is_training_discriminator(self): - return self.opts.w_discriminator_lambda > 0 - - @staticmethod - def discriminator_loss(real_pred, fake_pred, loss_dict): - real_loss = F.softplus(-real_pred).mean() - fake_loss = F.softplus(fake_pred).mean() - - loss_dict['d_real_loss'] = float(real_loss) - loss_dict['d_fake_loss'] = float(fake_loss) - - return real_loss + fake_loss - - @staticmethod - def discriminator_r1_loss(real_pred, real_w): - grad_real, = autograd.grad( - outputs=real_pred.sum(), inputs=real_w, create_graph=True - ) - grad_penalty = grad_real.pow(2).reshape(grad_real.shape[0], -1).sum(1).mean() - - return grad_penalty - - @staticmethod - def requires_grad(model, flag=True): - for p in model.parameters(): - p.requires_grad = flag - - def train_discriminator(self, batch): - loss_dict = {} - x, _ = batch - x = x.to(self.device).float() - self.requires_grad(self.discriminator, True) - - with torch.no_grad(): - real_w, fake_w = self.sample_real_and_fake_latents(x) - real_pred = self.discriminator(real_w) - fake_pred = self.discriminator(fake_w) - loss = self.discriminator_loss(real_pred, fake_pred, loss_dict) - loss_dict['discriminator_loss'] = float(loss) - - self.discriminator_optimizer.zero_grad() - loss.backward() - self.discriminator_optimizer.step() - - # r1 regularization - d_regularize = self.global_step % self.opts.d_reg_every == 0 - if d_regularize: - real_w = real_w.detach() - real_w.requires_grad = True - real_pred = self.discriminator(real_w) - r1_loss = self.discriminator_r1_loss(real_pred, real_w) - - self.discriminator.zero_grad() - r1_final_loss = self.opts.r1 / 2 * r1_loss * self.opts.d_reg_every + 0 * real_pred[0] - r1_final_loss.backward() - self.discriminator_optimizer.step() - loss_dict['discriminator_r1_loss'] = float(r1_final_loss) - - # Reset to previous state - self.requires_grad(self.discriminator, False) - - return loss_dict - - def validate_discriminator(self, test_batch): - with torch.no_grad(): - loss_dict = {} - x, _ = test_batch - x = x.to(self.device).float() - real_w, fake_w = self.sample_real_and_fake_latents(x) - real_pred = self.discriminator(real_w) - fake_pred = self.discriminator(fake_w) - loss = self.discriminator_loss(real_pred, fake_pred, loss_dict) - loss_dict['discriminator_loss'] = float(loss) - return loss_dict - - def sample_real_and_fake_latents(self, x): - sample_z = torch.randn(self.opts.batch_size, 512, device=self.device) - real_w = self.net.decoder.get_latent(sample_z) - fake_w = self.net.encoder(x) - if self.is_progressive_training(): # When progressive training, feed only unique w's - 
dims_to_discriminate = self.get_dims_to_discriminate() - fake_w = fake_w[:, dims_to_discriminate, :] - if self.opts.use_w_pool: - real_w = self.real_w_pool.query(real_w) - fake_w = self.fake_w_pool.query(fake_w) - if fake_w.ndim == 3: - fake_w = fake_w[:, 0, :] - return real_w, fake_w diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/sam/model.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/sam/model.md deleted file mode 100644 index 7d924d4a93d8c50f2536174f46ef236091915bda..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/sam/model.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -description: Learn about the Ultralytics VIT SAM model for object detection and how it can help streamline your computer vision workflow. Check out the documentation for implementation details and examples. -keywords: Ultralytics, VIT, SAM, object detection, computer vision, deep learning, implementation, examples ---- - -## SAM ---- -### ::: ultralytics.vit.sam.model.SAM -
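A usage sketch to accompany the reference above; the import path, weight file name, and sample image are assumptions and may differ between `ultralytics` releases:

```python
# Hypothetical minimal usage of the SAM wrapper; assumes the ultralytics
# package is installed and a 'sam_b.pt' checkpoint is available locally.
from ultralytics import SAM

model = SAM('sam_b.pt')
results = model.predict('bus.jpg')  # segment objects in the example image
```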

    \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/data/dataloaders/__init__.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/data/dataloaders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/vanderbilt-dsi/free-speech-app/free_speech_app/DataLoadDb.py b/spaces/vanderbilt-dsi/free-speech-app/free_speech_app/DataLoadDb.py deleted file mode 100644 index 6b867d9c61b15f0fac99e056d644e87d0ef373ed..0000000000000000000000000000000000000000 --- a/spaces/vanderbilt-dsi/free-speech-app/free_speech_app/DataLoadDb.py +++ /dev/null @@ -1,50 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: ../nbs/free-speech-stores.ipynb. - -# %% auto 0 -__all__ = ['setup_openai_api_key', 'setup_db'] - -# %% ../nbs/free-speech-stores.ipynb 4 -# libraries required for functionality -import os -from getpass import getpass - -from langchain.chains import RetrievalQA -from langchain.llms import OpenAI -from langchain.prompts import PromptTemplate -from langchain.document_loaders import UnstructuredFileLoader -from langchain.document_loaders.merge import MergedDataLoader -from langchain.text_splitter import CharacterTextSplitter -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import Chroma - -# %% ../nbs/free-speech-stores.ipynb 12 -def setup_openai_api_key(): - openai_api_key = getpass() - os.environ["OPENAI_API_KEY"] = openai_api_key - -# %% ../nbs/free-speech-stores.ipynb 15 -import nltk -nltk.download('averaged_perceptron_tagger') - -# %% ../nbs/free-speech-stores.ipynb 27 -def setup_db(local_path, hub_path, chunk_size=1000, chunk_overlap=5): - file_list = os.listdir(local_path) - - # set up loaders - loaders_list = [] - for file_path in file_list: - file_path = local_path + file_path - loaders_list.append(UnstructuredFileLoader(file_path)) - - loader_all = MergedDataLoader(loaders=[loader for loader in loaders_list]) - - # Split and embed docs - documents = loader_all.load() - text_splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap) - texts = text_splitter.split_documents(documents) - embeddings = OpenAIEmbeddings() - - # Replace dataset path with relevant dataset name - counterspeech-resources or hatespeech-background - db = DeepLake.from_documents(texts, dataset_path=hub_path, embedding=embeddings, overwrite=True) - - return diff --git a/spaces/vinthony/SadTalker/src/utils/text2speech.py b/spaces/vinthony/SadTalker/src/utils/text2speech.py deleted file mode 100644 index 00d165b6cc7774fd200929aafa0ff3b15916111e..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/utils/text2speech.py +++ /dev/null @@ -1,20 +0,0 @@ -import os -import tempfile -from TTS.api import TTS - - -class TTSTalker(): - def __init__(self) -> None: - model_name = TTS.list_models()[0] - self.tts = TTS(model_name) - - def test(self, text, language='en'): - - tempf = tempfile.NamedTemporaryFile( - delete = False, - suffix = ('.'+'wav'), - ) - - self.tts.tts_to_file(text, speaker=self.tts.speakers[0], language=language, file_path=tempf.name) - - return tempf.name \ No newline at end of file diff --git a/spaces/vkganesan/AdaIN/train.py b/spaces/vkganesan/AdaIN/train.py deleted file mode 100644 index 4f55e0d69d48622ff2ba739626e221a62e9dfa7f..0000000000000000000000000000000000000000 --- a/spaces/vkganesan/AdaIN/train.py +++ /dev/null @@ -1,144 +0,0 @@ 
-from net import StyleTransfer -import torch -import torch.nn as nn -from pathlib import Path -import torchvision -import torch.utils.data as data -import torchvision.transforms as transforms -import matplotlib.pyplot as plt -import torch.multiprocessing -from utils import * -import argparse -from tqdm import tqdm -from tensorboardX import SummaryWriter -from decoder import decoder as Decoder -from encoder import encoder as Encoder -from PIL import Image, ImageFile - -class FlatFolderDataset(data.Dataset): - def __init__(self, root, transform): - super(FlatFolderDataset, self).__init__() - self.root = root - self.paths = list(Path(self.root).glob('*')) - self.transform = transform - - def __getitem__(self, index): - path = self.paths[index] - img = Image.open(str(path)).convert('RGB') - img = self.transform(img) - return img - - def __len__(self): - return len(self.paths) - - def name(self): - return 'FlatFolderDataset' - -def main(): - torch.multiprocessing.set_sharing_strategy('file_system') - - # Set the path to the dataset directory - content_dataset_dir = '../../content-dataset/images/images' - style_dataset_dir = '../../style-dataset/images' - - - def train_transform(): - transform_list = [ - transforms.Resize(size=(512, 512)), - transforms.RandomCrop(256), - transforms.ToTensor() - ] - return transforms.Compose(transform_list) - - - - - parser = argparse.ArgumentParser() - # Basic options - parser.add_argument('--content_dir', default=content_dataset_dir, type=str, - help='Directory path to a batch of content images') - parser.add_argument('--style_dir', default=style_dataset_dir, type=str, - help='Directory path to a batch of style images') - parser.add_argument('--encoder', type=str, default='./vgg_normalised.pth') - - # training options - parser.add_argument('--save_dir', default='../saved-models', - help='Directory to save the model') - parser.add_argument('--log_dir', default='./logs', - help='Directory to save the log') - parser.add_argument('--lr', type=float, default=1e-4) - parser.add_argument('--lr_decay', type=float, default=5e-5) - parser.add_argument('--max_iter', type=int, default=8000) - parser.add_argument('--batch_size', type=int, default=8) - parser.add_argument('--style_weight', type=float, default=10.0) - parser.add_argument('--content_weight', type=float, default=1.0) - parser.add_argument('--n_threads', type=int, default=8) - parser.add_argument('--save_model_interval', type=int, default=500) - parser.add_argument('--save-image-interval', type=int, default=50) - args = parser.parse_args() - - - - - device = torch.device('mps') - save_dir = Path(args.save_dir) - save_dir.mkdir(exist_ok=True, parents=True) - log_dir = Path(args.log_dir) - log_dir.mkdir(exist_ok=True, parents=True) - writer = SummaryWriter(log_dir=str(log_dir)) - - - decoder = Decoder - encoder = Encoder - - encoder.load_state_dict(torch.load(args.encoder)) - encoder = nn.Sequential(*list(encoder.children())[:31]) - network = StyleTransfer(encoder, decoder) - network.train() - network.to(device) - - content_dataset = FlatFolderDataset(args.content_dir, transform=train_transform()) - style_dataset = FlatFolderDataset(args.style_dir, transform=train_transform()) - - print(len(content_dataset), len(style_dataset)) - - content_iter = iter(data.DataLoader( - content_dataset, batch_size=args.batch_size, - num_workers=args.n_threads)) - style_iter = iter(data.DataLoader( - style_dataset, batch_size=args.batch_size, - num_workers=args.n_threads)) - optimizer = 
torch.optim.Adam(network.decoder.parameters(), lr=args.lr) - - - for batch in tqdm(range(args.max_iter)): - adjust_learning_rate(optimizer, batch, args.lr_decay, args.lr) - content_images = next(content_iter).to(device) - style_images = next(style_iter).to(device) - final_image, s_loss, c_loss = network(content_images, style_images) - c_loss = args.content_weight * c_loss - s_loss = args.style_weight * s_loss - total_loss = c_loss + s_loss - - optimizer.zero_grad() - total_loss.backward() - optimizer.step() - - writer.add_scalar('loss_content', c_loss.item(), batch + 1) - writer.add_scalar('loss_style', s_loss.item(), batch + 1) - - if (batch + 1) % args.save_model_interval == 0 or (batch + 1) == args.max_iter: - state_dict = network.decoder.state_dict() - for key in state_dict.keys(): - state_dict[key] = state_dict[key].to(torch.device('cpu')) - torch.save(state_dict, save_dir / - 'decoder_iter_{:d}.pth.tar'.format(batch + 1)) - - if (batch + 1) % args.save_image_interval == 0: - print_img = torch.cat((content_images[:1], style_images[:1], final_image[:1]), 3).detach().cpu() - concat_img(print_img, batch) - writer.close() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/vorstcavry/vits-models-1/utils.py b/spaces/vorstcavry/vits-models-1/utils.py deleted file mode 100644 index e19cac39c57f213bbf6f1435ab48fe7948a1b17b..0000000000000000000000000000000000000000 --- a/spaces/vorstcavry/vits-models-1/utils.py +++ /dev/null @@ -1,501 +0,0 @@ -import os -import glob -import re -import sys -import argparse -import logging -import json -import subprocess -import random - -import librosa -import numpy as np -from scipy.io.wavfile import read -import torch -from torch.nn import functional as F -from modules.commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -# def normalize_f0(f0, random_scale=True): -# f0_norm = f0.clone() # create a copy of the input Tensor -# batch_size, _, frame_length = f0_norm.shape -# for i in range(batch_size): -# means = torch.mean(f0_norm[i, 0, :]) -# if random_scale: -# factor = random.uniform(0.8, 1.2) -# else: -# factor = 1 -# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor -# return f0_norm -# def normalize_f0(f0, random_scale=True): -# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True) -# if random_scale: -# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device) -# else: -# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device) -# f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) -# return f0_norm -def normalize_f0(f0, x_mask, uv, random_scale=True): - # calculate means based on x_mask - uv_sum = torch.sum(uv, dim=1, keepdim=True) - uv_sum[uv_sum == 0] = 9999 - means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum - - if random_scale: - factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device) - else: - factor = torch.ones(f0.shape[0], 1).to(f0.device) - # normalize f0 based on means and factor - f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) - if torch.isnan(f0_norm).any(): - exit(0) - return f0_norm * x_mask - - -def plot_data_to_numpy(x, y): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = 
logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - plt.plot(x) - plt.plot(y) - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - - -def interpolate_f0(f0): - ''' - 对F0进行插值处理 - ''' - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - -def compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import parselmouth - x = wav_numpy - if p_len is None: - p_len = x.shape[0]//hop_length - else: - assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error" - time_step = hop_length / sampling_rate * 1000 - f0_min = 50 - f0_max = 1100 - f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - return f0 - -def resize_f0(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import pyworld - if p_len is None: - p_len = wav_numpy.shape[0]//hop_length - f0, t = pyworld.dio( - wav_numpy.astype(np.double), - fs=sampling_rate, - f0_ceil=800, - frame_period=1000 * hop_length / sampling_rate, - ) - f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return resize_f0(f0, p_len) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(): - vec_path = "hubert/checkpoint_best_legacy_500.pt" - print("load model(s) from {}".format(vec_path)) - from fairseq import checkpoint_utils - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - model = models[0] - model.eval() - return model - -def get_hubert_content(hmodel, wav_16k_tensor): - feats = 
wav_16k_tensor - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav_16k_tensor.device), - "padding_mask": padding_mask.to(wav_16k_tensor.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = hmodel.extract_features(**inputs) - feats = hmodel.final_proj(logits[0]) - return feats.transpose(1, 2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "dec" in k or "disc" in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. 
Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = 
os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -def repeat_expand_2d(content, target_len): - # content : [h, t] - - src_len = content.shape[-1] - target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device) - temp = torch.arange(src_len+1) * target_len / src_len - current_pos = 0 - for i in range(target_len): - if i < temp[current_pos+1]: - target[:, i] = content[:, current_pos] - else: - current_pos += 1 - target[:, i] = content[:, current_pos] - - return target - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/ema.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/ema.py deleted file mode 100644 index 15c7e68088f019802a59e7ae41cc1fe0c7f28f96..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/ema.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...parallel import is_module_wrapper -from ..hooks.hook import HOOKS, Hook - - -@HOOKS.register_module() -class EMAHook(Hook): - r"""Exponential Moving Average Hook. - - Use Exponential Moving Average on all parameters of model in training - process. All parameters have a ema backup, which update by the formula - as below. EMAHook takes priority over EvalHook and CheckpointSaverHook. - - .. 
math:: - - \text{Xema\_{t+1}} = (1 - \text{momentum}) \times - \text{Xema\_{t}} + \text{momentum} \times X_t - - Args: - momentum (float): The momentum used for updating ema parameter. - Defaults to 0.0002. - interval (int): Update ema parameter every interval iteration. - Defaults to 1. - warm_up (int): During first warm_up steps, we may use smaller momentum - to update ema parameters more slowly. Defaults to 100. - resume_from (str): The checkpoint path. Defaults to None. - """ - - def __init__(self, - momentum=0.0002, - interval=1, - warm_up=100, - resume_from=None): - assert isinstance(interval, int) and interval > 0 - self.warm_up = warm_up - self.interval = interval - assert momentum > 0 and momentum < 1 - self.momentum = momentum**interval - self.checkpoint = resume_from - - def before_run(self, runner): - """To resume model with it's ema parameters more friendly. - - Register ema parameter as ``named_buffer`` to model - """ - model = runner.model - if is_module_wrapper(model): - model = model.module - self.param_ema_buffer = {} - self.model_parameters = dict(model.named_parameters(recurse=True)) - for name, value in self.model_parameters.items(): - # "." is not allowed in module's buffer name - buffer_name = f"ema_{name.replace('.', '_')}" - self.param_ema_buffer[name] = buffer_name - model.register_buffer(buffer_name, value.data.clone()) - self.model_buffers = dict(model.named_buffers(recurse=True)) - if self.checkpoint is not None: - runner.resume(self.checkpoint) - - def after_train_iter(self, runner): - """Update ema parameter every self.interval iterations.""" - curr_step = runner.iter - # We warm up the momentum considering the instability at beginning - momentum = min(self.momentum, - (1 + curr_step) / (self.warm_up + curr_step)) - if curr_step % self.interval != 0: - return - for name, parameter in self.model_parameters.items(): - buffer_name = self.param_ema_buffer[name] - buffer_parameter = self.model_buffers[buffer_name] - buffer_parameter.mul_(1 - momentum).add_(momentum, parameter.data) - - def after_train_epoch(self, runner): - """We load parameter values from ema backup to model before the - EvalHook.""" - self._swap_ema_parameters() - - def before_train_epoch(self, runner): - """We recover model's parameter from ema backup after last epoch's - EvalHook.""" - self._swap_ema_parameters() - - def _swap_ema_parameters(self): - """Swap the parameter of model with parameter in ema_buffer.""" - for name, value in self.model_parameters.items(): - temp = value.data.clone() - ema_buffer = self.model_buffers[self.param_ema_buffer[name]] - value.data.copy_(ema_buffer.data) - ema_buffer.data.copy_(temp) diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/memory.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/memory.py deleted file mode 100644 index 70cf9a838fb314e3bd3c07aadbc00921a81e83ed..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/memory.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class EmptyCacheHook(Hook): - - def __init__(self, before_epoch=False, after_epoch=True, after_iter=False): - self._before_epoch = before_epoch - self._after_epoch = after_epoch - self._after_iter = after_iter - - def after_iter(self, runner): - if self._after_iter: - torch.cuda.empty_cache() - - def before_epoch(self, runner): - if self._before_epoch: - torch.cuda.empty_cache() - - def after_epoch(self, runner): - if self._after_epoch: - torch.cuda.empty_cache() diff --git a/spaces/vuu10/EnzRank/Streamlit/main.py b/spaces/vuu10/EnzRank/Streamlit/main.py deleted file mode 100644 index cc77adcc5c572695fe732af8bcccf95f307c1794..0000000000000000000000000000000000000000 --- a/spaces/vuu10/EnzRank/Streamlit/main.py +++ /dev/null @@ -1,205 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -import re -from PIL import Image -import webbrowser - -from rdkit import Chem -from rdkit.Chem import AllChem -from rdkit.Chem import Draw -from rdkit.Chem import rdChemReactions as Reactions - -import tensorflow as tf -from tensorflow import keras -from keras.preprocessing import sequence -from keras.utils import pad_sequences -import keras -from keras import backend as K -from keras.models import load_model -import argparse -import h5py -import pdb - - -seq_rdic = ['A', 'I', 'L', 'V', 'F', 'W', 'Y', 'N', 'C', 'Q', 'M', - 'S', 'T', 'D', 'E', 'R', 'H', 'K', 'G', 'P', 'O', 'U', 'X', 'B', 'Z'] -seq_dic = {w: i+1 for i, w in enumerate(seq_rdic)} - - -@st.cache(allow_output_mutation=True) -def encodeSeq(seq, seq_dic): - if pd.isnull(seq): - return [0] - else: - return [seq_dic[aa] for aa in seq] - - -@st.cache(allow_output_mutation=True) -def load_modelfile(model_string): - loaded_model = tf.keras.models.load_model(model_string) - return loaded_model - - -@st.cache(allow_output_mutation=True) -def prot_feature_gen_from_str_input(prot_input_str, prot_len=2500): - Prot_ID = prot_input_str.split(':')[0] - Prot_seq = prot_input_str.split(':')[1] - prot_dataframe = pd.DataFrame( - {'Protein_ID': Prot_ID, 'Sequence': Prot_seq}, index=[0]) - prot_dataframe.set_index('Protein_ID') - - prot_dataframe["encoded_sequence"] = prot_dataframe.Sequence.map( - lambda a: encodeSeq(a, seq_dic)) - prot_feature = pad_sequences( - prot_dataframe["encoded_sequence"].values, prot_len) - - return prot_feature, Prot_ID - - -@st.cache(allow_output_mutation=True) -def mol_feature_gen_from_str_input(mol_str, kegg_id_flag, kegg_df): - - if kegg_id_flag == 1: - KEGG_ID = mol_str - kegg_id_loc = kegg_df.index[kegg_df.Compound_ID == KEGG_ID][0] - KEGG_ID_info = kegg_df.loc[kegg_id_loc] - KEGG_ID_info_df = KEGG_ID_info.to_frame().T.set_index('Compound_ID') - - final_return = KEGG_ID_info_df - final_id = KEGG_ID - - else: - try: - mol_ID = mol_str.split(':')[0] - mol_smiles = mol_str.split(':')[1] - mol = Chem.MolFromSmiles(mol_smiles) - fp1 = AllChem.GetMorganFingerprintAsBitVect( - mol, useChirality=True, radius=2, nBits=2048) - fp_list = list(np.array(fp1).astype(float)) - fp_str = list(map(str, fp_list)) - mol_fp = '\t'.join(fp_str) - - mol_dict = {} - mol_dict['Compound_ID'] = mol_ID - mol_dict['Smiles'] = mol_smiles - mol_dict['morgan_fp_r2'] = mol_fp - - mol_info_df = pd.DataFrame(mol_dict, index=[0]) - mol_info_df = mol_info_df.set_index('Compound_ID') - - final_return = mol_info_df - final_id = mol_ID - - except Exception as error: - print('Something wrong with molecule input string...' 
+ repr(error)) - - return final_return, final_id - - -@st.cache(allow_output_mutation=True) -def act_df_gen_mol_feature(mol_id, prot_id): - act_df = pd.DataFrame( - {'Protein_ID': prot_id, 'Compound_ID': mol_id}, index=[0]) - - return act_df - - -@st.cache(allow_output_mutation=True) -def compound_feature_gen_df_input(act_df, comp_df, comp_len=2048, comp_vec='morgan_fp_r2'): - act_df = pd.merge(act_df, comp_df, left_on='Compound_ID', right_index=True) - comp_feature = np.stack(act_df[comp_vec].map(lambda fp: fp.split("\t"))) - comp_feature = comp_feature.astype('float') - return comp_feature - - -@st.cache(allow_output_mutation=True) -def model_prediction(compound_feature, enz_feature, model): - prediction_vals = model.predict([compound_feature, enz_feature]) - - return prediction_vals[0][0] - - -# loaded_model = load_modelfile('./../CNN_results/model_final.model') - -# KEGG_compound_read = pd.read_csv('./../CNN_data/Final_test/kegg_compound.csv', index_col = 'Compound_ID') -# kegg_df = KEGG_compound_read.reset_index() - - -def main(): - graph = tf.compat.v1.get_default_graph() - ld_model = tf.keras.models.load_model('./../CNN_results_split_final/Final_model.model') - - KEGG_compound_read = pd.read_csv('./../CNN_data/Final_test/kegg_compound.csv', index_col = 'Compound_ID') - kegg_df = KEGG_compound_read.reset_index() - - - # def img_to_bytes(img_path): - # img_bytes = Path(img_path).read_bytes() - # encoded = base64.b64encode(img_bytes).decode() - # return encoded - # # st.title('dGPredictor') - - # header_html = "" - - # st.markdown( - # header_html, unsafe_allow_html=True, - # ) - - - st.image('./header.png', use_column_width=True) - - st.subheader('Enzyme-Substrate Activity Predictor ') - - st.subheader('Enzyme sequence') - st.caption('Please follow the input format show in the text box--> id:Sequence') - - enz_str = st.text_input('', value="A0A4P8WFA8:MTKRVLVTGGAGFLGSHLCERLLSEGHEVICLDNFGSGRRKNIKEFEDHPSFKVNDRDVRISESLPSVDRIYHLASRASPADFTQFPVNIALANTQGTRRLLDQARACDARMVFASTSEVYGDPKVHPQPETYTGNVNIRGARGCYDESKRFGETLTVAYQRKYDVDARTVRIFNTYGPRMRPDDGRVVPTFVTQALRGDDLTIYGDGEQTRSFCYVDDLIEGLISLMRVDNPEHNVYNIGKENERTIKELAYEVLGLTDTESDIVYEPLPEDDPGQRRPDITRAKTELDWEPKISLREGLEDTITYFDN") - - # url = 'https://www.genome.jp/dbget-bin/www_bget?rn:R00801' - # if st.button('KEformat example'): - # webbrowser.open_new_tab(url) - - st.subheader('Substrate ') - st.caption('Please follow the input format show in the text box--> KEGG id or click the checkbox') - - comp_str = st.text_input('', value="C00149") - if st.checkbox('If you are entering smiles string along with KEGG ID'): - add_info = st.text_area('Additional information (id: Smiles):', "C00149:O[C@@H](CC([O-])=O)C([O-])=O") - else: - add_info = '' - - if st.button("Predict"): - # if session_state.button_search: -# st.subheader('Enzyme-Substrate activity score') - with st.spinner('Calculating...'): - try: -# st.write('I am inside') - prot_feature, prot_id = prot_feature_gen_from_str_input(enz_str) - if len(add_info) == 0: - kegg_id_flag = 1 - comp_feature, comp_id = mol_feature_gen_from_str_input(comp_str, kegg_id_flag, kegg_df) - else: - kegg_id_flag = 0 - comp_feature, comp_id = mol_feature_gen_from_str_input(add_info, kegg_id_flag, kegg_df) - - act_dataframe = act_df_gen_mol_feature(comp_id, prot_id) -# st.write(act_dataframe) - compound_feature = compound_feature_gen_df_input(act_dataframe, comp_feature) -# st.write(compound_feature) - - except Exception as e: - st.write('Error somewhere...' 
+ repr(e)) - -# st.write(compound_feature) -# st.write(prot_feature) -# keras.backend.clear_session() - - y = ld_model.predict([compound_feature, prot_feature]) - - subheaderstring = 'EnzRank Score for '+ prot_id + '-' + comp_id + ' pair:' - st.subheader(subheaderstring) - st.write(str(y[0][0])) - -if __name__ == '__main__': - main() diff --git a/spaces/whitphx/gradio-static-test/dist/assets/dsv-576afacd.js b/spaces/whitphx/gradio-static-test/dist/assets/dsv-576afacd.js deleted file mode 100644 index 832d450961d23fb14b577c045f0c24c61e74c4e6..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/dsv-576afacd.js +++ /dev/null @@ -1,6 +0,0 @@ -var D={},A={},E=34,m=10,R=13;function I(r){return new Function("d","return {"+r.map(function(t,e){return JSON.stringify(t)+": d["+e+'] || ""'}).join(",")+"}")}function B(r,t){var e=I(r);return function(a,c){return t(e(a),c,r)}}function F(r){var t=Object.create(null),e=[];return r.forEach(function(a){for(var c in a)c in t||e.push(t[c]=c)}),e}function f(r,t){var e=r+"",a=e.length;return a9999?"+"+f(r,6):f(r,4)}function S(r){var t=r.getUTCHours(),e=r.getUTCMinutes(),a=r.getUTCSeconds(),c=r.getUTCMilliseconds();return isNaN(r)?"Invalid Date":L(r.getUTCFullYear())+"-"+f(r.getUTCMonth()+1,2)+"-"+f(r.getUTCDate(),2)+(c?"T"+f(t,2)+":"+f(e,2)+":"+f(a,2)+"."+f(c,3)+"Z":a?"T"+f(t,2)+":"+f(e,2)+":"+f(a,2)+"Z":e||t?"T"+f(t,2)+":"+f(e,2)+"Z":"")}function Z(r){var t=new RegExp('["'+r+` -\r]`),e=r.charCodeAt(0);function a(n,o){var s,i,u=c(n,function(h,l){if(s)return s(h,l-1);i=h,s=o?B(h,o):I(h)});return u.columns=i||[],u}function c(n,o){var s=[],i=n.length,u=0,h=0,l,v=i<=0,C=!1;n.charCodeAt(i-1)===m&&--i,n.charCodeAt(i-1)===R&&--i;function w(){if(v)return A;if(C)return C=!1,D;var j,d=u,p;if(n.charCodeAt(d)===E){for(;u++=i?v=!0:(p=n.charCodeAt(u++))===m?C=!0:p===R&&(C=!0,n.charCodeAt(u)===m&&++u),n.slice(d+1,j-1).replace(/""/g,'"')}for(;u ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' 
-] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/xdecoder/Demo/xdecoder/architectures/registry.py b/spaces/xdecoder/Demo/xdecoder/architectures/registry.py deleted file mode 100644 index 940e4560f7d052aed4915187410266ab5a4cb4d0..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/xdecoder/architectures/registry.py +++ /dev/null @@ -1,13 +0,0 @@ -_model_entrypoints = {} - -def register_model(fn): - module_name_split = fn.__module__.split('.') - model_name = module_name_split[-1] - _model_entrypoints[model_name] = fn - return fn - -def model_entrypoints(model_name): - return _model_entrypoints[model_name] - -def is_model(model_name): - return model_name in _model_entrypoints \ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/data/sampler.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/data/sampler.py deleted file mode 100644 index f69b3e02a7f111bc88595dae7a6fe64b25e0703d..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/data/sampler.py +++ /dev/null @@ -1,245 +0,0 @@ -from __future__ import division, absolute_import -import copy -import numpy as np -import random -from collections import defaultdict -from torch.utils.data.sampler import Sampler, RandomSampler, SequentialSampler - -AVAI_SAMPLERS = [ - 'RandomIdentitySampler', 'SequentialSampler', 'RandomSampler', - 'RandomDomainSampler', 'RandomDatasetSampler' -] - - -class RandomIdentitySampler(Sampler): - """Randomly samples N identities each with K instances. - - Args: - data_source (list): contains tuples of (img_path(s), pid, camid, dsetid). - batch_size (int): batch size. - num_instances (int): number of instances per identity in a batch. 
- """ - - def __init__(self, data_source, batch_size, num_instances): - if batch_size < num_instances: - raise ValueError( - 'batch_size={} must be no less ' - 'than num_instances={}'.format(batch_size, num_instances) - ) - - self.data_source = data_source - self.batch_size = batch_size - self.num_instances = num_instances - self.num_pids_per_batch = self.batch_size // self.num_instances - self.index_dic = defaultdict(list) - for index, items in enumerate(data_source): - pid = items[1] - self.index_dic[pid].append(index) - self.pids = list(self.index_dic.keys()) - assert len(self.pids) >= self.num_pids_per_batch - - # estimate number of examples in an epoch - # TODO: improve precision - self.length = 0 - for pid in self.pids: - idxs = self.index_dic[pid] - num = len(idxs) - if num < self.num_instances: - num = self.num_instances - self.length += num - num % self.num_instances - - def __iter__(self): - batch_idxs_dict = defaultdict(list) - - for pid in self.pids: - idxs = copy.deepcopy(self.index_dic[pid]) - if len(idxs) < self.num_instances: - idxs = np.random.choice( - idxs, size=self.num_instances, replace=True - ) - random.shuffle(idxs) - batch_idxs = [] - for idx in idxs: - batch_idxs.append(idx) - if len(batch_idxs) == self.num_instances: - batch_idxs_dict[pid].append(batch_idxs) - batch_idxs = [] - - avai_pids = copy.deepcopy(self.pids) - final_idxs = [] - - while len(avai_pids) >= self.num_pids_per_batch: - selected_pids = random.sample(avai_pids, self.num_pids_per_batch) - for pid in selected_pids: - batch_idxs = batch_idxs_dict[pid].pop(0) - final_idxs.extend(batch_idxs) - if len(batch_idxs_dict[pid]) == 0: - avai_pids.remove(pid) - - return iter(final_idxs) - - def __len__(self): - return self.length - - -class RandomDomainSampler(Sampler): - """Random domain sampler. - - We consider each camera as a visual domain. - - How does the sampling work: - 1. Randomly sample N cameras (based on the "camid" label). - 2. From each camera, randomly sample K images. - - Args: - data_source (list): contains tuples of (img_path(s), pid, camid, dsetid). - batch_size (int): batch size. - n_domain (int): number of cameras to sample in a batch. - """ - - def __init__(self, data_source, batch_size, n_domain): - self.data_source = data_source - - # Keep track of image indices for each domain - self.domain_dict = defaultdict(list) - for i, items in enumerate(data_source): - camid = items[2] - self.domain_dict[camid].append(i) - self.domains = list(self.domain_dict.keys()) - - # Make sure each domain can be assigned an equal number of images - if n_domain is None or n_domain <= 0: - n_domain = len(self.domains) - assert batch_size % n_domain == 0 - self.n_img_per_domain = batch_size // n_domain - - self.batch_size = batch_size - self.n_domain = n_domain - self.length = len(list(self.__iter__())) - - def __iter__(self): - domain_dict = copy.deepcopy(self.domain_dict) - final_idxs = [] - stop_sampling = False - - while not stop_sampling: - selected_domains = random.sample(self.domains, self.n_domain) - - for domain in selected_domains: - idxs = domain_dict[domain] - selected_idxs = random.sample(idxs, self.n_img_per_domain) - final_idxs.extend(selected_idxs) - - for idx in selected_idxs: - domain_dict[domain].remove(idx) - - remaining = len(domain_dict[domain]) - if remaining < self.n_img_per_domain: - stop_sampling = True - - return iter(final_idxs) - - def __len__(self): - return self.length - - -class RandomDatasetSampler(Sampler): - """Random dataset sampler. - - How does the sampling work: - 1. 
Randomly sample N datasets (based on the "dsetid" label). - 2. From each dataset, randomly sample K images. - - Args: - data_source (list): contains tuples of (img_path(s), pid, camid, dsetid). - batch_size (int): batch size. - n_dataset (int): number of datasets to sample in a batch. - """ - - def __init__(self, data_source, batch_size, n_dataset): - self.data_source = data_source - - # Keep track of image indices for each dataset - self.dataset_dict = defaultdict(list) - for i, items in enumerate(data_source): - dsetid = items[3] - self.dataset_dict[dsetid].append(i) - self.datasets = list(self.dataset_dict.keys()) - - # Make sure each dataset can be assigned an equal number of images - if n_dataset is None or n_dataset <= 0: - n_dataset = len(self.datasets) - assert batch_size % n_dataset == 0 - self.n_img_per_dset = batch_size // n_dataset - - self.batch_size = batch_size - self.n_dataset = n_dataset - self.length = len(list(self.__iter__())) - - def __iter__(self): - dataset_dict = copy.deepcopy(self.dataset_dict) - final_idxs = [] - stop_sampling = False - - while not stop_sampling: - selected_datasets = random.sample(self.datasets, self.n_dataset) - - for dset in selected_datasets: - idxs = dataset_dict[dset] - selected_idxs = random.sample(idxs, self.n_img_per_dset) - final_idxs.extend(selected_idxs) - - for idx in selected_idxs: - dataset_dict[dset].remove(idx) - - remaining = len(dataset_dict[dset]) - if remaining < self.n_img_per_dset: - stop_sampling = True - - return iter(final_idxs) - - def __len__(self): - return self.length - - -def build_train_sampler( - data_source, - train_sampler, - batch_size=32, - num_instances=4, - num_cams=1, - num_datasets=1, - **kwargs -): - """Builds a training sampler. - - Args: - data_source (list): contains tuples of (img_path(s), pid, camid). - train_sampler (str): sampler name (default: ``RandomSampler``). - batch_size (int, optional): batch size. Default is 32. - num_instances (int, optional): number of instances per identity in a - batch (when using ``RandomIdentitySampler``). Default is 4. - num_cams (int, optional): number of cameras to sample in a batch (when using - ``RandomDomainSampler``). Default is 1. - num_datasets (int, optional): number of datasets to sample in a batch (when - using ``RandomDatasetSampler``). Default is 1. 
- """ - assert train_sampler in AVAI_SAMPLERS, \ - 'train_sampler must be one of {}, but got {}'.format(AVAI_SAMPLERS, train_sampler) - - if train_sampler == 'RandomIdentitySampler': - sampler = RandomIdentitySampler(data_source, batch_size, num_instances) - - elif train_sampler == 'RandomDomainSampler': - sampler = RandomDomainSampler(data_source, batch_size, num_cams) - - elif train_sampler == 'RandomDatasetSampler': - sampler = RandomDatasetSampler(data_source, batch_size, num_datasets) - - elif train_sampler == 'SequentialSampler': - sampler = SequentialSampler(data_source) - - elif train_sampler == 'RandomSampler': - sampler = RandomSampler(data_source) - - return sampler diff --git a/spaces/xiaoxin1111/vits-uma-genshin-honkai/app.py b/spaces/xiaoxin1111/vits-uma-genshin-honkai/app.py deleted file mode 100644 index 870f60f95837adc9ff6da405126836797727a1a8..0000000000000000000000000000000000000000 --- a/spaces/xiaoxin1111/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,123 +0,0 @@ -# coding=utf-8 -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 10000: - return f"输入文字过长!{len(text)}>10000", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - 
document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "#
    VITS语音在线合成demo\n" - "
    主要有赛马娘,原神中文,原神日语,崩坏3的音色
    " - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3]) - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() diff --git a/spaces/ybelkada/detoxified-lms/style.css b/spaces/ybelkada/detoxified-lms/style.css deleted file mode 100644 index e5adbf3c59f92c16bd67a0c572d6b08cd670e31a..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/detoxified-lms/style.css +++ /dev/null @@ -1,14 +0,0 @@ -h1 { - text-align: center; - } - img#overview { - display: block; - margin: auto; - max-width: 1000px; - max-height: 600px; - } - img#visitor-badge { - display: block; - margin: auto; - } - \ No newline at end of file diff --git a/spaces/ybelkada/i-like-flan-ul2/README.md b/spaces/ybelkada/i-like-flan-ul2/README.md deleted file mode 100644 index 39b593f2d74e0256b11bd01d5a9510dee9df649e..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/i-like-flan-ul2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: I Like Flan UL2 -emoji: 🍮 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ybelkada/interfacegan_pp/models/pggan_tf_official/metrics/frechet_inception_distance.py b/spaces/ybelkada/interfacegan_pp/models/pggan_tf_official/metrics/frechet_inception_distance.py deleted file mode 100644 index 565bd36e8f587a5ceec441710f6fdae2ce14fe99..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/pggan_tf_official/metrics/frechet_inception_distance.py +++ /dev/null @@ -1,281 +0,0 @@ -#!/usr/bin/env python3 -# -# Copyright 2017 Martin Heusel -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Adapted from the original implementation by Martin Heusel. -# Source https://github.com/bioinf-jku/TTUR/blob/master/fid.py - -''' Calculates the Frechet Inception Distance (FID) to evalulate GANs. - -The FID metric calculates the distance between two distributions of images. -Typically, we have summary statistics (mean & covariance matrix) of one -of these distributions, while the 2nd distribution is given by a GAN. - -When run as a stand-alone program, it compares the distribution of -images that are stored as PNG/JPEG at a specified location with a -distribution given by summary statistics (in pickle format). - -The FID is calculated by assuming that X_1 and X_2 are the activations of -the pool_3 layer of the inception net for generated samples and real world -samples respectivly. - -See --help to see further details. -''' - -from __future__ import absolute_import, division, print_function -import numpy as np -import scipy as sp -import os -import gzip, pickle -import tensorflow as tf -from scipy.misc import imread -import pathlib -import urllib - - -class InvalidFIDException(Exception): - pass - - -def create_inception_graph(pth): - """Creates a graph from saved GraphDef file.""" - # Creates graph from saved graph_def.pb. - with tf.gfile.FastGFile( pth, 'rb') as f: - graph_def = tf.GraphDef() - graph_def.ParseFromString( f.read()) - _ = tf.import_graph_def( graph_def, name='FID_Inception_Net') -#------------------------------------------------------------------------------- - - -# code for handling inception net derived from -# https://github.com/openai/improved-gan/blob/master/inception_score/model.py -def _get_inception_layer(sess): - """Prepares inception net for batched usage and returns pool_3 layer. """ - layername = 'FID_Inception_Net/pool_3:0' - pool3 = sess.graph.get_tensor_by_name(layername) - ops = pool3.graph.get_operations() - for op_idx, op in enumerate(ops): - for o in op.outputs: - shape = o.get_shape() - if shape._dims is not None: - shape = [s.value for s in shape] - new_shape = [] - for j, s in enumerate(shape): - if s == 1 and j == 0: - new_shape.append(None) - else: - new_shape.append(s) - try: - o._shape = tf.TensorShape(new_shape) - except ValueError: - o._shape_val = tf.TensorShape(new_shape) # EDIT: added for compatibility with tensorflow 1.6.0 - return pool3 -#------------------------------------------------------------------------------- - - -def get_activations(images, sess, batch_size=50, verbose=False): - """Calculates the activations of the pool_3 layer for all images. - - Params: - -- images : Numpy array of dimension (n_images, hi, wi, 3). The values - must lie between 0 and 256. - -- sess : current session - -- batch_size : the images numpy array is split into batches with batch size - batch_size. A reasonable batch size depends on the disposable hardware. - -- verbose : If set to True and parameter out_step is given, the number of calculated - batches is reported. - Returns: - -- A numpy array of dimension (num images, 2048) that contains the - activations of the given tensor when feeding inception with the query tensor. 
- """ - inception_layer = _get_inception_layer(sess) - d0 = images.shape[0] - if batch_size > d0: - print("warning: batch size is bigger than the data size. setting batch size to data size") - batch_size = d0 - n_batches = d0//batch_size - n_used_imgs = n_batches*batch_size - pred_arr = np.empty((n_used_imgs,2048)) - for i in range(n_batches): - if verbose: - print("\rPropagating batch %d/%d" % (i+1, n_batches), end="", flush=True) - start = i*batch_size - end = start + batch_size - batch = images[start:end] - pred = sess.run(inception_layer, {'FID_Inception_Net/ExpandDims:0': batch}) - pred_arr[start:end] = pred.reshape(batch_size,-1) - if verbose: - print(" done") - return pred_arr -#------------------------------------------------------------------------------- - - -def calculate_frechet_distance(mu1, sigma1, mu2, sigma2): - """Numpy implementation of the Frechet Distance. - The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1) - and X_2 ~ N(mu_2, C_2) is - d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)). - - Params: - -- mu1 : Numpy array containing the activations of the pool_3 layer of the - inception net ( like returned by the function 'get_predictions') - -- mu2 : The sample mean over activations of the pool_3 layer, precalcualted - on an representive data set. - -- sigma2: The covariance matrix over activations of the pool_3 layer, - precalcualted on an representive data set. - - Returns: - -- dist : The Frechet Distance. - - Raises: - -- InvalidFIDException if nan occures. - """ - m = np.square(mu1 - mu2).sum() - #s = sp.linalg.sqrtm(np.dot(sigma1, sigma2)) # EDIT: commented out - s, _ = sp.linalg.sqrtm(np.dot(sigma1, sigma2), disp=False) # EDIT: added - dist = m + np.trace(sigma1+sigma2 - 2*s) - #if np.isnan(dist): # EDIT: commented out - # raise InvalidFIDException("nan occured in distance calculation.") # EDIT: commented out - #return dist # EDIT: commented out - return np.real(dist) # EDIT: added -#------------------------------------------------------------------------------- - - -def calculate_activation_statistics(images, sess, batch_size=50, verbose=False): - """Calculation of the statistics used by the FID. - Params: - -- images : Numpy array of dimension (n_images, hi, wi, 3). The values - must lie between 0 and 255. - -- sess : current session - -- batch_size : the images numpy array is split into batches with batch size - batch_size. A reasonable batch size depends on the available hardware. - -- verbose : If set to True and parameter out_step is given, the number of calculated - batches is reported. - Returns: - -- mu : The mean over samples of the activations of the pool_3 layer of - the incption model. - -- sigma : The covariance matrix of the activations of the pool_3 layer of - the incption model. - """ - act = get_activations(images, sess, batch_size, verbose) - mu = np.mean(act, axis=0) - sigma = np.cov(act, rowvar=False) - return mu, sigma -#------------------------------------------------------------------------------- - - -#------------------------------------------------------------------------------- -# The following functions aren't needed for calculating the FID -# they're just here to make this module work as a stand-alone script -# for calculating FID scores -#------------------------------------------------------------------------------- -def check_or_download_inception(inception_path): - ''' Checks if the path to the inception file is valid, or downloads - the file if it is not present. 
''' - INCEPTION_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz' - if inception_path is None: - inception_path = '/tmp' - inception_path = pathlib.Path(inception_path) - model_file = inception_path / 'classify_image_graph_def.pb' - if not model_file.exists(): - print("Downloading Inception model") - from urllib import request - import tarfile - fn, _ = request.urlretrieve(INCEPTION_URL) - with tarfile.open(fn, mode='r') as f: - f.extract('classify_image_graph_def.pb', str(model_file.parent)) - return str(model_file) - - -def _handle_path(path, sess): - if path.endswith('.npz'): - f = np.load(path) - m, s = f['mu'][:], f['sigma'][:] - f.close() - else: - path = pathlib.Path(path) - files = list(path.glob('*.jpg')) + list(path.glob('*.png')) - x = np.array([imread(str(fn)).astype(np.float32) for fn in files]) - m, s = calculate_activation_statistics(x, sess) - return m, s - - -def calculate_fid_given_paths(paths, inception_path): - ''' Calculates the FID of two paths. ''' - inception_path = check_or_download_inception(inception_path) - - for p in paths: - if not os.path.exists(p): - raise RuntimeError("Invalid path: %s" % p) - - create_inception_graph(str(inception_path)) - with tf.Session() as sess: - sess.run(tf.global_variables_initializer()) - m1, s1 = _handle_path(paths[0], sess) - m2, s2 = _handle_path(paths[1], sess) - fid_value = calculate_frechet_distance(m1, s1, m2, s2) - return fid_value - - -if __name__ == "__main__": - from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter - parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter) - parser.add_argument("path", type=str, nargs=2, - help='Path to the generated images or to .npz statistic files') - parser.add_argument("-i", "--inception", type=str, default=None, - help='Path to Inception model (will be downloaded if not provided)') - parser.add_argument("--gpu", default="", type=str, - help='GPU to use (leave blank for CPU only)') - args = parser.parse_args() - os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu - fid_value = calculate_fid_given_paths(args.path, args.inception) - print("FID: ", fid_value) - -#---------------------------------------------------------------------------- -# EDIT: added - -class API: - def __init__(self, num_images, image_shape, image_dtype, minibatch_size): - import config - self.network_dir = os.path.join(config.result_dir, '_inception_fid') - self.network_file = check_or_download_inception(self.network_dir) - self.sess = tf.get_default_session() - create_inception_graph(self.network_file) - - def get_metric_names(self): - return ['FID'] - - def get_metric_formatting(self): - return ['%-10.4f'] - - def begin(self, mode): - assert mode in ['warmup', 'reals', 'fakes'] - self.activations = [] - - def feed(self, mode, minibatch): - act = get_activations(minibatch.transpose(0,2,3,1), self.sess, batch_size=minibatch.shape[0]) - self.activations.append(act) - - def end(self, mode): - act = np.concatenate(self.activations) - mu = np.mean(act, axis=0) - sigma = np.cov(act, rowvar=False) - if mode in ['warmup', 'reals']: - self.mu_real = mu - self.sigma_real = sigma - fid = calculate_frechet_distance(mu, sigma, self.mu_real, self.sigma_real) - return [fid] - -#---------------------------------------------------------------------------- diff --git a/spaces/yellowdolphin/happywhale-demo/utils.py b/spaces/yellowdolphin/happywhale-demo/utils.py deleted file mode 100644 index 8ef637d560077d1ef42f5d8c827d1da3f8f4391b..0000000000000000000000000000000000000000 
--- a/spaces/yellowdolphin/happywhale-demo/utils.py +++ /dev/null @@ -1,323 +0,0 @@ -import math -import json - -import numpy as np -import tensorflow as tf -import tfimm -import efficientnet.tfkeras as efnv1 -import keras_efficientnet_v2 as efnv2 -import tensorflow_hub as hub - - -embedding_size = 1024 -n_images = 51033 + 27956 - - -class DotDict(dict): - """dot.notation access to dictionary attributes - - Reference: - https://stackoverflow.com/questions/2352181/how-to-use-a-dot-to-access-members-of-dictionary/23689767#23689767 - """ - __getattr__ = dict.get # returns None if missing key, don't use getattr() with default! - __setattr__ = dict.__setitem__ - __delattr__ = dict.__delitem__ - - -def get_cfg(json_file): - json_file = str(json_file) - config_dict = json.load(open(json_file)) - return DotDict(config_dict) - - -def get_embeddings(img, embed_model): - inp = img[None, ...] - embeddings = embed_model.predict(inp, verbose=1, batch_size=1, workers=4, use_multiprocessing=True) - return embeddings - - -# Train embeddings have to be re-ordered: embeddings were concatenated (train, valid) -# in the training notebook and the valid fold is different for each ensemble model. -FOLDS = 10 -shards, n_total = [], 0 -for fold in range(10): - n_img = 5104 if fold <= 2 else 5103 - shards.append(list(range(n_total, n_total + n_img))) - n_total += n_img -assert n_total == 51033 - - -def get_train_idx(use_fold): - "Return embedding index that restores the order of images in the tfrec files." - train_folds = [i for i in range(10) if i % FOLDS != use_fold] - valid_folds = [i for i in range(10) if i % FOLDS == use_fold] - folds = train_folds + valid_folds - - # order of saved embeddings (train + valid) - train_idx = [] - for fold in folds: - train_idx.append(shards[fold]) - train_idx = np.concatenate(train_idx) - - return np.argsort(train_idx) - - -def get_comp_embeddings(emb_files, use_folds): - "Load embeddings for competition images [n_images, embedding_size]" - comp_embeddings = [] - - for npz_file, use_fold in zip(emb_files, use_folds): - # Get embeddings for all competition images - d = np.load(str(npz_file)) - comp_train_emb = d['train'] - comp_test_emb = d['test'] - - # Restore original order of comp_train_emb, targets (use targets as fingerprint-check) - comp_train_idx = get_train_idx(use_fold) - comp_train_emb = comp_train_emb[comp_train_idx, :] - comp_embs = np.concatenate([comp_train_emb, comp_test_emb], axis=0) - assert comp_embs.shape == (n_images, embedding_size) - - # Normalize embeddings - comp_embs_norms = np.linalg.norm(comp_embs, axis=1) - print("comp_embs norm:", comp_embs_norms.min(), "...", comp_embs_norms.max()) - comp_embs /= comp_embs_norms[:, None] - - comp_embeddings.append(comp_embs) - - return np.concatenate(comp_embeddings, axis=1) - - -def get_test_embedding(image, embed_models, sizes): - test_embedding = [] - - for embed_model, size in zip(embed_models, sizes): - # Get model input - scaled_image = tf.image.resize(image, size) - scaled_image = tf.cast(scaled_image, tf.float32) / 255.0 - - # Get embedding for test image - test_emb = get_embeddings(scaled_image, embed_model) # shape: [1, embedding_size] - assert test_emb.shape == (1, embedding_size) - - # Normalize embeddings - test_emb_norm = np.linalg.norm(test_emb, axis=1) - test_emb /= test_emb_norm[:, None] - - test_embedding.append(test_emb) - - return np.concatenate(test_embedding, axis=1) # [1, embedding_size] - - -def p2logit(x): - return np.log(x / (1 - x)) - - -def sigmoid(x): - return 1 / (1 + np.exp(-x)) - - 
-def get_confidence(similarity, threshold): - "Calculate confidence in known/unknown prediction" - if similarity <= 0: - return 0 - logit_sim = p2logit(similarity) - logit_threshold = p2logit(threshold) - return sigmoid(abs(logit_sim - logit_threshold)) - - -class ArcMarginProductSubCenter(tf.keras.layers.Layer): - ''' - Implements large margin arc distance. - - References: - https://arxiv.org/pdf/1801.07698.pdf - https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution/ - https://github.com/haqishen/Google-Landmark-Recognition-2020-3rd-Place-Solution/ - - Sub-center version: - for k > 1, the embedding layer can learn k sub-centers per class - ''' - def __init__(self, n_classes, s=30, m=0.50, k=3, easy_margin=False, - ls_eps=0.0, **kwargs): - - super(ArcMarginProductSubCenter, self).__init__(**kwargs) - - self.n_classes = n_classes - self.s = s - self.m = m - self.k = k - self.ls_eps = ls_eps - self.easy_margin = easy_margin - self.cos_m = tf.math.cos(m) - self.sin_m = tf.math.sin(m) - self.th = tf.math.cos(math.pi - m) - self.mm = tf.math.sin(math.pi - m) * m - - def get_config(self): - - config = super().get_config().copy() - config.update({ - 'n_classes': self.n_classes, - 's': self.s, - 'm': self.m, - 'k': self.k, - 'ls_eps': self.ls_eps, - 'easy_margin': self.easy_margin, - }) - return config - - def build(self, input_shape): - super(ArcMarginProductSubCenter, self).build(input_shape[0]) - - self.W = self.add_weight( - name='W', - shape=(int(input_shape[0][-1]), self.n_classes * self.k), - initializer='glorot_uniform', - dtype='float32', - trainable=True) - - def call(self, inputs): - X, y = inputs - y = tf.cast(y, dtype=tf.int32) - cosine_all = tf.matmul( - tf.math.l2_normalize(X, axis=1), - tf.math.l2_normalize(self.W, axis=0) - ) - if self.k > 1: - cosine_all = tf.reshape(cosine_all, [-1, self.n_classes, self.k]) - cosine = tf.math.reduce_max(cosine_all, axis=2) - else: - cosine = cosine_all - sine = tf.math.sqrt(1.0 - tf.math.pow(cosine, 2)) - phi = cosine * self.cos_m - sine * self.sin_m - if self.easy_margin: - phi = tf.where(cosine > 0, phi, cosine) - else: - phi = tf.where(cosine > self.th, phi, cosine - self.mm) - one_hot = tf.cast( - tf.one_hot(y, depth=self.n_classes), - dtype=cosine.dtype - ) - if self.ls_eps > 0: - one_hot = (1 - self.ls_eps) * one_hot + self.ls_eps / self.n_classes - - output = (one_hot * phi) + ((1.0 - one_hot) * cosine) - output *= self.s - return output - - -TFHUB = { - 'hub_efnv2s': "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/feature_vector/2", - 'hub_efnv2m': "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_m/feature_vector/2", - 'hub_efnv2l': "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_l/feature_vector/2", - 'hub_efnv2xl': "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_xl/feature_vector/2", - 'bit_m-r50x1': "https://tfhub.dev/google/bit/m-r50x1/1", - 'bit_m-r50x3': "https://tfhub.dev/google/bit/m-r50x3/1", - 'bit_m-r101x1': "https://tfhub.dev/google/bit/m-r101x1/1", - 'bit_m-r101x3': "https://tfhub.dev/google/bit/m-r101x3/1", - 'bit_m-r152x4': "https://tfhub.dev/google/bit/m-r152x4/1", -} - - -def get_model(cfg): - aux_arcface = False # Chris Deotte suggested this - if cfg.head == 'arcface': - head = ArcMarginProductSubCenter - else: - assert False, "INVALID HEAD" - - if cfg.adaptive_margin: - raise NotImplementedError - - if cfg.arch_name.startswith('efnv1'): - EFN = {'efnv1b0': efnv1.EfficientNetB0, 'efnv1b1': efnv1.EfficientNetB1, - 'efnv1b2': 
efnv1.EfficientNetB2, 'efnv1b3': efnv1.EfficientNetB3, - 'efnv1b4': efnv1.EfficientNetB4, 'efnv1b5': efnv1.EfficientNetB5, - 'efnv1b6': efnv1.EfficientNetB6, 'efnv1b7': efnv1.EfficientNetB7} - - if cfg.arch_name.startswith('efnv2'): - EFN = {'efnv2s': efnv2.EfficientNetV2S, 'efnv2m': efnv2.EfficientNetV2M, - 'efnv2l': efnv2.EfficientNetV2L, 'efnv2xl': efnv2.EfficientNetV2XL} - - with tf.distribute.get_strategy().scope(): - - margin = head( - n_classes=cfg.N_CLASSES, - s=30, - m=0.3, - k=cfg.subcenters or 1, - easy_margin=False, - name=f'head/{cfg.head}', - dtype='float32') - - inp = tf.keras.layers.Input(shape=[*cfg.IMAGE_SIZE, 3], name='inp1') - label = tf.keras.layers.Input(shape=(), name='inp2') - if aux_arcface: - label2 = tf.keras.layers.Input(shape=(), name='inp3') - - if cfg.arch_name.startswith('efnv1'): - x = EFN[cfg.arch_name](weights=cfg.pretrained, include_top=False)(inp) - if cfg.pool == 'flatten': - embed = tf.keras.layers.Flatten()(x) - elif cfg.pool == 'fc': - embed = tf.keras.layers.Flatten()(x) - embed = tf.keras.layers.Dropout(0.1)(embed) - embed = tf.keras.layers.Dense(1024)(embed) - elif cfg.pool == 'concat': - embed = tf.keras.layers.concatenate([tf.keras.layers.GlobalAveragePooling2D()(x), - tf.keras.layers.GlobalAveragePooling2D()(x)]) - elif cfg.pool == 'max': - embed = tf.keras.layers.GlobalMaxPooling2D()(x) - else: - embed = tf.keras.layers.GlobalAveragePooling2D()(x) - - elif cfg.arch_name.startswith('efnv2'): - x = EFN[cfg.arch_name](input_shape=(None, None, 3), num_classes=0, - pretrained=cfg.pretrained)(inp) - if cfg.pool == 'flatten': - embed = tf.keras.layers.Flatten()(x) - elif cfg.pool == 'fc': - embed = tf.keras.layers.Flatten()(x) - embed = tf.keras.layers.Dropout(0.1)(embed) - embed = tf.keras.layers.Dense(1024)(embed) - elif cfg.pool == 'concat': - embed = tf.keras.layers.concatenate([tf.keras.layers.GlobalAveragePooling2D()(x), - tf.keras.layers.GlobalAveragePooling2D()(x)]) - elif cfg.pool == 'max': - embed = tf.keras.layers.GlobalMaxPooling2D()(x) - else: - embed = tf.keras.layers.GlobalAveragePooling2D()(x) - - elif cfg.arch_name in TFHUB: - # tfhub models cannot be modified => Pooling cannot be changed! - url = TFHUB[cfg.arch_name] - model = hub.KerasLayer(url, trainable=True) - embed = model(inp) - assert cfg.pool in [None, False, 'avg', ''], 'tfhub model, no custom pooling supported!' 
- - elif cfg.arch_name in tfimm.list_models(pretrained="timm"): - embed = tfimm.create_model(cfg.arch_name, pretrained=None, nb_classes=0)(inp) - - if len(cfg.dropout_ps) > 0: - # Chris Deotte posted model code without Dropout/FC1 after pooling - embed = tf.keras.layers.Dropout(cfg.dropout_ps[0])(embed) - embed = tf.keras.layers.Dense(1024)(embed) # tunable embedding size - embed = tf.keras.layers.BatchNormalization()(embed) # missing in public notebooks - x = margin([embed, label]) - - output = tf.keras.layers.Softmax(dtype='float32', name='arc' if cfg.aux_loss else None)(x) - - if cfg.aux_loss: - aux_features = tf.keras.layers.Dense(cfg.n_species)(embed) - aux_output = tf.keras.layers.Softmax(dtype='float32', name='aux')(aux_features) - inputs = [inp, label, label2] if (cfg.aux_loss and aux_arcface) else [inp, label] - outputs = (output, aux_output) if cfg.aux_loss else [output] - - model = tf.keras.models.Model(inputs=inputs, outputs=outputs) - embed_model = tf.keras.models.Model(inputs=inp, outputs=embed) - - if cfg.FREEZE_BATCH_NORM: - raise NotImplementedError - - return model, embed_model diff --git a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py b/spaces/ygangang/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py deleted file mode 100644 index 667f96e1ded35d48f163f37e21d1ed8ff191aac3..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py +++ /dev/null @@ -1,186 +0,0 @@ -# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py # noqa:E501 - -import torch -from torch.autograd import Function -from torch.nn import functional as F - -try: - from . import upfirdn2d_ext -except ImportError: - import os - BASICSR_JIT = os.getenv('BASICSR_JIT') - if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - upfirdn2d_ext = load( - 'upfirdn2d', - sources=[ - os.path.join(module_path, 'src', 'upfirdn2d.cpp'), - os.path.join(module_path, 'src', 'upfirdn2d_kernel.cu'), - ], - ) - - -class UpFirDn2dBackward(Function): - - @staticmethod - def forward(ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_ext.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_ext.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], - # ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]) - - return gradgrad_out, 
None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_ext.upfirdn2d(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - if input.device.type == 'cpu': - out = upfirdn2d_native(input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1]) - else: - out = UpFirDn2d.apply(input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])) - - return out - - -def upfirdn2d_native(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad(out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]) - out = out[:, max(-pad_y0, 0):out.shape[1] - max(-pad_y1, 0), max(-pad_x0, 0):out.shape[2] - max(-pad_x1, 0), :, ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape([-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/yigekeqing/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/yigekeqing/QQsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000 --- a/spaces/yigekeqing/QQsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. 
-@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. -@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. -for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. 
- -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/modeling_tf_deit.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/modeling_tf_deit.py deleted file mode 100644 index efd25788b0330b06de313ed53d1db69c0ef05bd4..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/modeling_tf_deit.py +++ /dev/null @@ -1,1000 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Facebook AI Research (FAIR) and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" TensorFlow DeiT model.""" - - -from __future__ import annotations - -import collections.abc -import math -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import tensorflow as tf - -from ...activations_tf import get_tf_activation -from ...modeling_tf_outputs import ( - TFBaseModelOutput, - TFBaseModelOutputWithPooling, - TFImageClassifierOutput, - TFMaskedImageModelingOutput, -) -from ...modeling_tf_utils import ( - TFPreTrainedModel, - TFSequenceClassificationLoss, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import shape_list, stable_softmax -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_deit import DeiTConfig - - -logger = logging.get_logger(__name__) - -# General docstring -_CONFIG_FOR_DOC = "DeiTConfig" - -# Base docstring -_CHECKPOINT_FOR_DOC = "facebook/deit-base-distilled-patch16-224" -_EXPECTED_OUTPUT_SHAPE = [1, 198, 768] - -# Image classification docstring -_IMAGE_CLASS_CHECKPOINT = "facebook/deit-base-distilled-patch16-224" -_IMAGE_CLASS_EXPECTED_OUTPUT = "tabby, tabby cat" - - -TF_DEIT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "facebook/deit-base-distilled-patch16-224", - # See all DeiT models at https://huggingface.co/models?filter=deit -] - - -@dataclass -class TFDeiTForImageClassificationWithTeacherOutput(ModelOutput): - """ - Output type of [`DeiTForImageClassificationWithTeacher`]. - - Args: - logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Prediction scores as the average of the cls_logits and distillation logits. - cls_logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the - class token). - distillation_logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the - distillation token). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus - the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in - the self-attention heads. 
- """ - - logits: tf.Tensor = None - cls_logits: tf.Tensor = None - distillation_logits: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - - -class TFDeiTEmbeddings(tf.keras.layers.Layer): - """ - Construct the CLS token, distillation token, position and patch embeddings. Optionally, also the mask token. - """ - - def __init__(self, config: DeiTConfig, use_mask_token: bool = False, **kwargs) -> None: - super().__init__(**kwargs) - self.config = config - self.use_mask_token = use_mask_token - self.patch_embeddings = TFDeiTPatchEmbeddings(config=config, name="patch_embeddings") - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob, name="dropout") - - def build(self, input_shape: tf.TensorShape): - self.cls_token = self.add_weight( - shape=(1, 1, self.config.hidden_size), - initializer=tf.keras.initializers.zeros(), - trainable=True, - name="cls_token", - ) - self.distillation_token = self.add_weight( - shape=(1, 1, self.config.hidden_size), - initializer=tf.keras.initializers.zeros(), - trainable=True, - name="distillation_token", - ) - self.mask_token = None - if self.use_mask_token: - self.mask_token = self.add_weight( - shape=(1, 1, self.config.hidden_size), - initializer=tf.keras.initializers.zeros(), - trainable=True, - name="mask_token", - ) - num_patches = self.patch_embeddings.num_patches - self.position_embeddings = self.add_weight( - shape=(1, num_patches + 2, self.config.hidden_size), - initializer=tf.keras.initializers.zeros(), - trainable=True, - name="position_embeddings", - ) - super().build(input_shape) - - def call( - self, pixel_values: tf.Tensor, bool_masked_pos: tf.Tensor | None = None, training: bool = False - ) -> tf.Tensor: - embeddings = self.patch_embeddings(pixel_values) - batch_size, seq_length, _ = shape_list(embeddings) - - if bool_masked_pos is not None: - mask_tokens = tf.tile(self.mask_token, [batch_size, seq_length, 1]) - # replace the masked visual tokens by mask_tokens - mask = tf.expand_dims(bool_masked_pos, axis=-1) - mask = tf.cast(mask, dtype=mask_tokens.dtype) - embeddings = embeddings * (1.0 - mask) + mask_tokens * mask - - cls_tokens = tf.repeat(self.cls_token, repeats=batch_size, axis=0) - distillation_tokens = tf.repeat(self.distillation_token, repeats=batch_size, axis=0) - embeddings = tf.concat((cls_tokens, distillation_tokens, embeddings), axis=1) - embeddings = embeddings + self.position_embeddings - embeddings = self.dropout(embeddings, training=training) - return embeddings - - -class TFDeiTPatchEmbeddings(tf.keras.layers.Layer): - """ - This class turns `pixel_values` of shape `(batch_size, num_channels, height, width)` into the initial - `hidden_states` (patch embeddings) of shape `(batch_size, seq_length, hidden_size)` to be consumed by a - Transformer. 
- """ - - def __init__(self, config: DeiTConfig, **kwargs) -> None: - super().__init__(**kwargs) - image_size, patch_size = config.image_size, config.patch_size - num_channels, hidden_size = config.num_channels, config.hidden_size - - image_size = image_size if isinstance(image_size, collections.abc.Iterable) else (image_size, image_size) - patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size) - num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) - self.image_size = image_size - self.patch_size = patch_size - self.num_channels = num_channels - self.num_patches = num_patches - - self.projection = tf.keras.layers.Conv2D( - hidden_size, kernel_size=patch_size, strides=patch_size, name="projection" - ) - - def call(self, pixel_values: tf.Tensor) -> tf.Tensor: - batch_size, height, width, num_channels = shape_list(pixel_values) - if tf.executing_eagerly() and num_channels != self.num_channels: - raise ValueError( - "Make sure that the channel dimension of the pixel values match with the one set in the configuration." - ) - if tf.executing_eagerly() and (height != self.image_size[0] or width != self.image_size[1]): - raise ValueError( - f"Input image size ({height}*{width}) doesn't match model ({self.image_size[0]}*{self.image_size[1]})." - ) - x = self.projection(pixel_values) - batch_size, height, width, num_channels = shape_list(x) - x = tf.reshape(x, (batch_size, height * width, num_channels)) - return x - - -# Copied from transformers.models.vit.modeling_tf_vit.TFViTSelfAttention with ViT->DeiT -class TFDeiTSelfAttention(tf.keras.layers.Layer): - def __init__(self, config: DeiTConfig, **kwargs): - super().__init__(**kwargs) - - if config.hidden_size % config.num_attention_heads != 0: - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number " - f"of attention heads ({config.num_attention_heads})" - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - self.sqrt_att_head_size = math.sqrt(self.attention_head_size) - - self.query = tf.keras.layers.Dense( - units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query" - ) - self.key = tf.keras.layers.Dense( - units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key" - ) - self.value = tf.keras.layers.Dense( - units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value" - ) - self.dropout = tf.keras.layers.Dropout(rate=config.attention_probs_dropout_prob) - - def transpose_for_scores(self, tensor: tf.Tensor, batch_size: int) -> tf.Tensor: - # Reshape from [batch_size, seq_length, all_head_size] to [batch_size, seq_length, num_attention_heads, attention_head_size] - tensor = tf.reshape(tensor=tensor, shape=(batch_size, -1, self.num_attention_heads, self.attention_head_size)) - - # Transpose the tensor from [batch_size, seq_length, num_attention_heads, attention_head_size] to [batch_size, num_attention_heads, seq_length, attention_head_size] - return tf.transpose(tensor, perm=[0, 2, 1, 3]) - - def call( - self, - hidden_states: tf.Tensor, - head_mask: tf.Tensor, - output_attentions: bool, - training: bool = False, - ) -> Tuple[tf.Tensor]: - batch_size = shape_list(hidden_states)[0] - mixed_query_layer = self.query(inputs=hidden_states) - 
mixed_key_layer = self.key(inputs=hidden_states) - mixed_value_layer = self.value(inputs=hidden_states) - query_layer = self.transpose_for_scores(mixed_query_layer, batch_size) - key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) - value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) - - # Take the dot product between "query" and "key" to get the raw attention scores. - # (batch size, num_heads, seq_len_q, seq_len_k) - attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) - dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) - attention_scores = tf.divide(attention_scores, dk) - - # Normalize the attention scores to probabilities. - attention_probs = stable_softmax(logits=attention_scores, axis=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs = self.dropout(inputs=attention_probs, training=training) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = tf.multiply(attention_probs, head_mask) - - attention_output = tf.matmul(attention_probs, value_layer) - attention_output = tf.transpose(attention_output, perm=[0, 2, 1, 3]) - - # (batch_size, seq_len_q, all_head_size) - attention_output = tf.reshape(tensor=attention_output, shape=(batch_size, -1, self.all_head_size)) - outputs = (attention_output, attention_probs) if output_attentions else (attention_output,) - - return outputs - - -# Copied from transformers.models.vit.modeling_tf_vit.TFViTSelfOutput with ViT->DeiT -class TFDeiTSelfOutput(tf.keras.layers.Layer): - """ - The residual connection is defined in TFDeiTLayer instead of here (as is the case with other models), due to the - layernorm applied before each block. 
- """ - - def __init__(self, config: DeiTConfig, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def call(self, hidden_states: tf.Tensor, input_tensor: tf.Tensor, training: bool = False) -> tf.Tensor: - hidden_states = self.dense(inputs=hidden_states) - hidden_states = self.dropout(inputs=hidden_states, training=training) - - return hidden_states - - -# Copied from transformers.models.vit.modeling_tf_vit.TFViTAttention with ViT->DeiT -class TFDeiTAttention(tf.keras.layers.Layer): - def __init__(self, config: DeiTConfig, **kwargs): - super().__init__(**kwargs) - - self.self_attention = TFDeiTSelfAttention(config, name="attention") - self.dense_output = TFDeiTSelfOutput(config, name="output") - - def prune_heads(self, heads): - raise NotImplementedError - - def call( - self, - input_tensor: tf.Tensor, - head_mask: tf.Tensor, - output_attentions: bool, - training: bool = False, - ) -> Tuple[tf.Tensor]: - self_outputs = self.self_attention( - hidden_states=input_tensor, head_mask=head_mask, output_attentions=output_attentions, training=training - ) - attention_output = self.dense_output( - hidden_states=self_outputs[0], input_tensor=input_tensor, training=training - ) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - - return outputs - - -# Copied from transformers.models.vit.modeling_tf_vit.TFViTIntermediate with ViT->DeiT -class TFDeiTIntermediate(tf.keras.layers.Layer): - def __init__(self, config: DeiTConfig, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - units=config.intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = get_tf_activation(config.hidden_act) - else: - self.intermediate_act_fn = config.hidden_act - - def call(self, hidden_states: tf.Tensor) -> tf.Tensor: - hidden_states = self.dense(inputs=hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - - return hidden_states - - -# Copied from transformers.models.vit.modeling_tf_vit.TFViTOutput with ViT->DeiT -class TFDeiTOutput(tf.keras.layers.Layer): - def __init__(self, config: DeiTConfig, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def call(self, hidden_states: tf.Tensor, input_tensor: tf.Tensor, training: bool = False) -> tf.Tensor: - hidden_states = self.dense(inputs=hidden_states) - hidden_states = self.dropout(inputs=hidden_states, training=training) - hidden_states = hidden_states + input_tensor - - return hidden_states - - -class TFDeiTLayer(tf.keras.layers.Layer): - """This corresponds to the Block class in the timm implementation.""" - - def __init__(self, config: DeiTConfig, **kwargs): - super().__init__(**kwargs) - - self.attention = TFDeiTAttention(config, name="attention") - self.intermediate = TFDeiTIntermediate(config, name="intermediate") - self.deit_output = TFDeiTOutput(config, name="output") - - self.layernorm_before = tf.keras.layers.LayerNormalization( - epsilon=config.layer_norm_eps, name="layernorm_before" - ) - self.layernorm_after = tf.keras.layers.LayerNormalization( - 
epsilon=config.layer_norm_eps, name="layernorm_after" - ) - - def call( - self, - hidden_states: tf.Tensor, - head_mask: tf.Tensor, - output_attentions: bool, - training: bool = False, - ) -> Tuple[tf.Tensor]: - attention_outputs = self.attention( - # in DeiT, layernorm is applied before self-attention - input_tensor=self.layernorm_before(inputs=hidden_states, training=training), - head_mask=head_mask, - output_attentions=output_attentions, - training=training, - ) - attention_output = attention_outputs[0] - - # first residual connection - hidden_states = attention_output + hidden_states - - # in DeiT, layernorm is also applied after self-attention - layer_output = self.layernorm_after(inputs=hidden_states, training=training) - - intermediate_output = self.intermediate(hidden_states=layer_output, training=training) - - # second residual connection is done here - layer_output = self.deit_output( - hidden_states=intermediate_output, input_tensor=hidden_states, training=training - ) - outputs = (layer_output,) + attention_outputs[1:] # add attentions if we output them - - return outputs - - -# Copied from transformers.models.vit.modeling_tf_vit.TFViTEncoder with ViT->DeiT -class TFDeiTEncoder(tf.keras.layers.Layer): - def __init__(self, config: DeiTConfig, **kwargs): - super().__init__(**kwargs) - - self.layer = [TFDeiTLayer(config, name=f"layer_._{i}") for i in range(config.num_hidden_layers)] - - def call( - self, - hidden_states: tf.Tensor, - head_mask: tf.Tensor, - output_attentions: bool, - output_hidden_states: bool, - return_dict: bool, - training: bool = False, - ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor]]: - all_hidden_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_outputs = layer_module( - hidden_states=hidden_states, - head_mask=head_mask[i], - output_attentions=output_attentions, - training=training, - ) - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - # Add last layer - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) - - return TFBaseModelOutput( - last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions - ) - - -@keras_serializable -class TFDeiTMainLayer(tf.keras.layers.Layer): - config_class = DeiTConfig - - def __init__( - self, config: DeiTConfig, add_pooling_layer: bool = True, use_mask_token: bool = False, **kwargs - ) -> None: - super().__init__(**kwargs) - self.config = config - - self.embeddings = TFDeiTEmbeddings(config, use_mask_token=use_mask_token, name="embeddings") - self.encoder = TFDeiTEncoder(config, name="encoder") - - self.layernorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layernorm") - self.pooler = TFDeiTPooler(config, name="pooler") if add_pooling_layer else None - - def get_input_embeddings(self) -> TFDeiTPatchEmbeddings: - return self.embeddings.patch_embeddings - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - raise NotImplementedError - - def get_head_mask(self, head_mask): - if head_mask is not None: - raise NotImplementedError - else: - head_mask = [None] * self.config.num_hidden_layers - - return head_mask - - @unpack_inputs - def call( - self, - pixel_values: tf.Tensor | None = None, - bool_masked_pos: tf.Tensor | None = None, - head_mask: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor, ...]]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - # TF 2.0 image layers can't use NCHW format when running on CPU. - # (batch_size, num_channels, height, width) -> (batch_size, height, width, num_channels) - pixel_values = tf.transpose(pixel_values, (0, 2, 3, 1)) - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask) - - embedding_output = self.embeddings(pixel_values, bool_masked_pos=bool_masked_pos, training=training) - - encoder_outputs = self.encoder( - embedding_output, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = encoder_outputs[0] - sequence_output = self.layernorm(sequence_output, training=training) - pooled_output = self.pooler(sequence_output, training=training) if self.pooler is not None else None - - if not return_dict: - head_outputs = (sequence_output, pooled_output) if pooled_output is not None else (sequence_output,) - return head_outputs + encoder_outputs[1:] - - return TFBaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -# Copied from transformers.models.vit.modeling_tf_vit.TFViTPreTrainedModel with ViT->DeiT all-casing -class TFDeiTPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = DeiTConfig - base_model_prefix = "deit" - main_input_name = "pixel_values" - - -DEIT_START_DOCSTRING = r""" - This model is a TensorFlow - [tf.keras.layers.Layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer). Use it as a regular - TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior. - - Parameters: - config ([`DeiTConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. 
Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -DEIT_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`DeiTImageProcessor.__call__`] for details. - - head_mask (`tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare DeiT Model transformer outputting raw hidden-states without any specific head on top.", - DEIT_START_DOCSTRING, -) -class TFDeiTModel(TFDeiTPreTrainedModel): - def __init__( - self, config: DeiTConfig, add_pooling_layer: bool = True, use_mask_token: bool = False, **kwargs - ) -> None: - super().__init__(config, **kwargs) - - self.deit = TFDeiTMainLayer( - config, add_pooling_layer=add_pooling_layer, use_mask_token=use_mask_token, name="deit" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(DEIT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFBaseModelOutputWithPooling, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def call( - self, - pixel_values: tf.Tensor | None = None, - bool_masked_pos: tf.Tensor | None = None, - head_mask: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[Tuple, TFBaseModelOutputWithPooling]: - outputs = self.deit( - pixel_values=pixel_values, - bool_masked_pos=bool_masked_pos, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - return outputs - - -# Copied from transformers.models.vit.modeling_tf_vit.TFViTPooler with ViT->DeiT -class TFDeiTPooler(tf.keras.layers.Layer): - def __init__(self, config: DeiTConfig, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - units=config.hidden_size, - kernel_initializer=get_initializer(config.initializer_range), - activation="tanh", - name="dense", - ) - - def call(self, hidden_states: tf.Tensor) -> tf.Tensor: - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
- first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(inputs=first_token_tensor) - - return pooled_output - - -class TFDeitPixelShuffle(tf.keras.layers.Layer): - """TF layer implementation of torch.nn.PixelShuffle""" - - def __init__(self, upscale_factor: int, **kwargs) -> None: - super().__init__(**kwargs) - if not isinstance(upscale_factor, int) or upscale_factor < 2: - raise ValueError(f"upscale_factor must be an integer value >= 2 got {upscale_factor}") - self.upscale_factor = upscale_factor - - def call(self, x: tf.Tensor) -> tf.Tensor: - hidden_states = x - batch_size, _, _, num_input_channels = shape_list(hidden_states) - block_size_squared = self.upscale_factor**2 - output_depth = int(num_input_channels / block_size_squared) - # When the number of output channels >= 2, PyTorch's PixelShuffle and - # TF's depth_to_space differ in their output as the order of channels selected for combining - # is a permutation of the other c.f. - # https://stackoverflow.com/questions/68272502/tf-depth-to-space-not-same-as-torchs-pixelshuffle-when-output-channels-1 - permutation = tf.constant( - [[i + j * block_size_squared for i in range(block_size_squared) for j in range(output_depth)]] - ) - hidden_states = tf.gather(params=hidden_states, indices=tf.tile(permutation, [batch_size, 1]), batch_dims=-1) - hidden_states = tf.nn.depth_to_space(hidden_states, block_size=self.upscale_factor, data_format="NHWC") - return hidden_states - - -class TFDeitDecoder(tf.keras.layers.Layer): - def __init__(self, config: DeiTConfig, **kwargs) -> None: - super().__init__(**kwargs) - self.conv2d = tf.keras.layers.Conv2D( - filters=config.encoder_stride**2 * config.num_channels, kernel_size=1, name="0" - ) - self.pixel_shuffle = TFDeitPixelShuffle(config.encoder_stride, name="1") - - def call(self, inputs: tf.Tensor, training: bool = False) -> tf.Tensor: - hidden_states = inputs - hidden_states = self.conv2d(hidden_states) - hidden_states = self.pixel_shuffle(hidden_states) - return hidden_states - - -@add_start_docstrings( - "DeiT Model with a decoder on top for masked image modeling, as proposed in" - " [SimMIM](https://arxiv.org/abs/2111.09886).", - DEIT_START_DOCSTRING, -) -class TFDeiTForMaskedImageModeling(TFDeiTPreTrainedModel): - def __init__(self, config: DeiTConfig) -> None: - super().__init__(config) - - self.deit = TFDeiTMainLayer(config, add_pooling_layer=False, use_mask_token=True, name="deit") - self.decoder = TFDeitDecoder(config, name="decoder") - - @unpack_inputs - @add_start_docstrings_to_model_forward(DEIT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=TFMaskedImageModelingOutput, config_class=_CONFIG_FOR_DOC) - def call( - self, - pixel_values: tf.Tensor | None = None, - bool_masked_pos: tf.Tensor | None = None, - head_mask: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[tuple, TFMaskedImageModelingOutput]: - r""" - bool_masked_pos (`tf.Tensor` of type bool and shape `(batch_size, num_patches)`): - Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). 
- - Returns: - - Examples: - ```python - >>> from transformers import AutoImageProcessor, TFDeiTForMaskedImageModeling - >>> import tensorflow as tf - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") - >>> model = TFDeiTForMaskedImageModeling.from_pretrained("facebook/deit-base-distilled-patch16-224") - - >>> num_patches = (model.config.image_size // model.config.patch_size) ** 2 - >>> pixel_values = image_processor(images=image, return_tensors="tf").pixel_values - >>> # create random boolean mask of shape (batch_size, num_patches) - >>> bool_masked_pos = tf.cast(tf.random.uniform((1, num_patches), minval=0, maxval=2, dtype=tf.int32), tf.bool) - - >>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos) - >>> loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction - >>> list(reconstructed_pixel_values.shape) - [1, 3, 224, 224] - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.deit( - pixel_values, - bool_masked_pos=bool_masked_pos, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = outputs[0] - - # Reshape to (batch_size, height, width, num_channels) - sequence_output = sequence_output[:, 1:-1] - batch_size, sequence_length, num_channels = shape_list(sequence_output) - height = width = int(sequence_length**0.5) - sequence_output = tf.reshape(sequence_output, (batch_size, height, width, num_channels)) - - # Reconstruct pixel values - reconstructed_pixel_values = self.decoder(sequence_output, training=training) - # TF 2.0 image layers can't use NCHW format when running on CPU, so intermediate layers use NHWC, - # including the decoder. 
We transpose to compute the loss against the pixel values - # (batch_size, height, width, num_channels) -> (batch_size, num_channels, height, width) - reconstructed_pixel_values = tf.transpose(reconstructed_pixel_values, (0, 3, 1, 2)) - - masked_im_loss = None - if bool_masked_pos is not None: - size = self.config.image_size // self.config.patch_size - bool_masked_pos = tf.reshape(bool_masked_pos, (-1, size, size)) - mask = tf.repeat(bool_masked_pos, self.config.patch_size, 1) - mask = tf.repeat(mask, self.config.patch_size, 2) - mask = tf.expand_dims(mask, 1) - mask = tf.cast(mask, tf.float32) - - reconstruction_loss = tf.keras.losses.mean_absolute_error( - # Swap axes as metric calculation reduces over the final dimension - tf.transpose(pixel_values, (1, 2, 3, 0)), - tf.transpose(reconstructed_pixel_values, (1, 2, 3, 0)), - ) - reconstruction_loss = tf.expand_dims(reconstruction_loss, 0) - total_loss = tf.reduce_sum(reconstruction_loss * mask) - num_masked_pixels = (tf.reduce_sum(mask) + 1e-5) * self.config.num_channels - masked_im_loss = total_loss / num_masked_pixels - masked_im_loss = tf.reshape(masked_im_loss, (1,)) - - if not return_dict: - output = (reconstructed_pixel_values,) + outputs[1:] - return ((masked_im_loss,) + output) if masked_im_loss is not None else output - - return TFMaskedImageModelingOutput( - loss=masked_im_loss, - reconstruction=reconstructed_pixel_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - DeiT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of - the [CLS] token) e.g. for ImageNet. - """, - DEIT_START_DOCSTRING, -) -class TFDeiTForImageClassification(TFDeiTPreTrainedModel, TFSequenceClassificationLoss): - def __init__(self, config: DeiTConfig): - super().__init__(config) - - self.num_labels = config.num_labels - self.deit = TFDeiTMainLayer(config, add_pooling_layer=False, name="deit") - - # Classifier head - self.classifier = ( - tf.keras.layers.Dense(config.num_labels, name="classifier") - if config.num_labels > 0 - else tf.keras.layers.Activation("linear", name="classifier") - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(DEIT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=TFImageClassifierOutput, config_class=_CONFIG_FOR_DOC) - def call( - self, - pixel_values: tf.Tensor | None = None, - head_mask: tf.Tensor | None = None, - labels: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[tf.Tensor, TFImageClassifierOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, TFDeiTForImageClassification - >>> import tensorflow as tf - >>> from PIL import Image - >>> import requests - - >>> tf.keras.utils.set_random_seed(3) # doctest: +IGNORE_RESULT - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> # note: we are loading a TFDeiTForImageClassificationWithTeacher from the hub here, - >>> # so the head will be randomly initialized, hence the predictions will be random - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") - >>> model = TFDeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224") - - >>> inputs = image_processor(images=image, return_tensors="tf") - >>> outputs = model(**inputs) - >>> logits = outputs.logits - >>> # model predicts one of the 1000 ImageNet classes - >>> predicted_class_idx = tf.math.argmax(logits, axis=-1)[0] - >>> print("Predicted class:", model.config.id2label[int(predicted_class_idx)]) - Predicted class: little blue heron, Egretta caerulea - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.deit( - pixel_values, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = outputs[0] - - logits = self.classifier(sequence_output[:, 0, :]) - # we don't use the distillation token - - loss = None if labels is None else self.hf_compute_loss(labels, logits) - - if not return_dict: - output = (logits,) + outputs[1:] - return ((loss,) + output) if loss is not None else output - - return TFImageClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - DeiT Model transformer with image classification heads on top (a linear layer on top of the final hidden state of - the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. - - .. warning:: - - This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet - supported. 
- """, - DEIT_START_DOCSTRING, -) -class TFDeiTForImageClassificationWithTeacher(TFDeiTPreTrainedModel): - def __init__(self, config: DeiTConfig) -> None: - super().__init__(config) - - self.num_labels = config.num_labels - self.deit = TFDeiTMainLayer(config, add_pooling_layer=False, name="deit") - - # Classifier heads - self.cls_classifier = ( - tf.keras.layers.Dense(config.num_labels, name="cls_classifier") - if config.num_labels > 0 - else tf.keras.layers.Activation("linear", name="cls_classifier") - ) - self.distillation_classifier = ( - tf.keras.layers.Dense(config.num_labels, name="distillation_classifier") - if config.num_labels > 0 - else tf.keras.layers.Activation("linear", name="distillation_classifier") - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(DEIT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=TFDeiTForImageClassificationWithTeacherOutput, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def call( - self, - pixel_values: tf.Tensor | None = None, - head_mask: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[tuple, TFDeiTForImageClassificationWithTeacherOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.deit( - pixel_values, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = outputs[0] - - cls_logits = self.cls_classifier(sequence_output[:, 0, :]) - distillation_logits = self.distillation_classifier(sequence_output[:, 1, :]) - - # during inference, return the average of both classifier predictions - logits = (cls_logits + distillation_logits) / 2 - - if not return_dict: - output = (logits, cls_logits, distillation_logits) + outputs[1:] - return output - - return TFDeiTForImageClassificationWithTeacherOutput( - logits=logits, - cls_logits=cls_logits, - distillation_logits=distillation_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/oneformer/image_processing_oneformer.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/oneformer/image_processing_oneformer.py deleted file mode 100644 index 16f5013f154a50f2d870b044bba2810753130ef5..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/oneformer/image_processing_oneformer.py +++ /dev/null @@ -1,1323 +0,0 @@ -# coding=utf-8 -# Copyright 2022 SHI Labs and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Image processor class for OneFormer.""" - -import json -import warnings -from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, Union - -import numpy as np -from huggingface_hub import hf_hub_download - -from ...image_processing_utils import BaseImageProcessor, BatchFeature, get_size_dict -from ...image_transforms import ( - PaddingMode, - get_resize_output_image_size, - pad, - rescale, - resize, - to_channel_dimension_format, -) -from ...image_utils import ( - ChannelDimension, - ImageInput, - PILImageResampling, - get_image_size, - infer_channel_dimension_format, - is_scaled_image, - make_list_of_images, - to_numpy_array, - valid_images, -) -from ...utils import ( - IMAGENET_DEFAULT_MEAN, - IMAGENET_DEFAULT_STD, - TensorType, - is_torch_available, - is_torch_tensor, - logging, -) - - -logger = logging.get_logger(__name__) - - -if is_torch_available(): - import torch - from torch import nn - - -# Copied from transformers.models.detr.image_processing_detr.max_across_indices -def max_across_indices(values: Iterable[Any]) -> List[Any]: - """ - Return the maximum value across all indices of an iterable of values. - """ - return [max(values_i) for values_i in zip(*values)] - - -# Copied from transformers.models.detr.image_processing_detr.get_max_height_width -def get_max_height_width( - images: List[np.ndarray], input_data_format: Optional[Union[str, ChannelDimension]] = None -) -> List[int]: - """ - Get the maximum height and width across all images in a batch. - """ - if input_data_format is None: - input_data_format = infer_channel_dimension_format(images[0]) - - if input_data_format == ChannelDimension.FIRST: - _, max_height, max_width = max_across_indices([img.shape for img in images]) - elif input_data_format == ChannelDimension.LAST: - max_height, max_width, _ = max_across_indices([img.shape for img in images]) - else: - raise ValueError(f"Invalid channel dimension format: {input_data_format}") - return (max_height, max_width) - - -# Copied from transformers.models.detr.image_processing_detr.make_pixel_mask -def make_pixel_mask( - image: np.ndarray, output_size: Tuple[int, int], input_data_format: Optional[Union[str, ChannelDimension]] = None -) -> np.ndarray: - """ - Make a pixel mask for the image, where 1 indicates a valid pixel and 0 indicates padding. - - Args: - image (`np.ndarray`): - Image to make the pixel mask for. - output_size (`Tuple[int, int]`): - Output size of the mask. - """ - input_height, input_width = get_image_size(image, channel_dim=input_data_format) - mask = np.zeros(output_size, dtype=np.int64) - mask[:input_height, :input_width] = 1 - return mask - - -# Copied from transformers.models.detr.image_processing_detr.binary_mask_to_rle -def binary_mask_to_rle(mask): - """ - Converts given binary mask of shape `(height, width)` to the run-length encoding (RLE) format. - - Args: - mask (`torch.Tensor` or `numpy.array`): - A binary mask tensor of shape `(height, width)` where 0 denotes background and 1 denotes the target - segment_id or class_id. - Returns: - `List`: Run-length encoded list of the binary mask. Refer to COCO API for more information about the RLE - format. 
- """ - if is_torch_tensor(mask): - mask = mask.numpy() - - pixels = mask.flatten() - pixels = np.concatenate([[0], pixels, [0]]) - runs = np.where(pixels[1:] != pixels[:-1])[0] + 1 - runs[1::2] -= runs[::2] - return list(runs) - - -# Copied from transformers.models.detr.image_processing_detr.convert_segmentation_to_rle -def convert_segmentation_to_rle(segmentation): - """ - Converts given segmentation map of shape `(height, width)` to the run-length encoding (RLE) format. - - Args: - segmentation (`torch.Tensor` or `numpy.array`): - A segmentation map of shape `(height, width)` where each value denotes a segment or class id. - Returns: - `List[List]`: A list of lists, where each list is the run-length encoding of a segment / class id. - """ - segment_ids = torch.unique(segmentation) - - run_length_encodings = [] - for idx in segment_ids: - mask = torch.where(segmentation == idx, 1, 0) - rle = binary_mask_to_rle(mask) - run_length_encodings.append(rle) - - return run_length_encodings - - -# Copied from transformers.models.detr.image_processing_detr.remove_low_and_no_objects -def remove_low_and_no_objects(masks, scores, labels, object_mask_threshold, num_labels): - """ - Binarize the given masks using `object_mask_threshold`, it returns the associated values of `masks`, `scores` and - `labels`. - - Args: - masks (`torch.Tensor`): - A tensor of shape `(num_queries, height, width)`. - scores (`torch.Tensor`): - A tensor of shape `(num_queries)`. - labels (`torch.Tensor`): - A tensor of shape `(num_queries)`. - object_mask_threshold (`float`): - A number between 0 and 1 used to binarize the masks. - Raises: - `ValueError`: Raised when the first dimension doesn't match in all input tensors. - Returns: - `Tuple[`torch.Tensor`, `torch.Tensor`, `torch.Tensor`]`: The `masks`, `scores` and `labels` without the region - < `object_mask_threshold`. 
- """ - if not (masks.shape[0] == scores.shape[0] == labels.shape[0]): - raise ValueError("mask, scores and labels must have the same shape!") - - to_keep = labels.ne(num_labels) & (scores > object_mask_threshold) - - return masks[to_keep], scores[to_keep], labels[to_keep] - - -# Copied from transformers.models.detr.image_processing_detr.check_segment_validity -def check_segment_validity(mask_labels, mask_probs, k, mask_threshold=0.5, overlap_mask_area_threshold=0.8): - # Get the mask associated with the k class - mask_k = mask_labels == k - mask_k_area = mask_k.sum() - - # Compute the area of all the stuff in query k - original_area = (mask_probs[k] >= mask_threshold).sum() - mask_exists = mask_k_area > 0 and original_area > 0 - - # Eliminate disconnected tiny segments - if mask_exists: - area_ratio = mask_k_area / original_area - if not area_ratio.item() > overlap_mask_area_threshold: - mask_exists = False - - return mask_exists, mask_k - - -# Copied from transformers.models.detr.image_processing_detr.compute_segments -def compute_segments( - mask_probs, - pred_scores, - pred_labels, - mask_threshold: float = 0.5, - overlap_mask_area_threshold: float = 0.8, - label_ids_to_fuse: Optional[Set[int]] = None, - target_size: Tuple[int, int] = None, -): - height = mask_probs.shape[1] if target_size is None else target_size[0] - width = mask_probs.shape[2] if target_size is None else target_size[1] - - segmentation = torch.zeros((height, width), dtype=torch.int32, device=mask_probs.device) - segments: List[Dict] = [] - - if target_size is not None: - mask_probs = nn.functional.interpolate( - mask_probs.unsqueeze(0), size=target_size, mode="bilinear", align_corners=False - )[0] - - current_segment_id = 0 - - # Weigh each mask by its prediction score - mask_probs *= pred_scores.view(-1, 1, 1) - mask_labels = mask_probs.argmax(0) # [height, width] - - # Keep track of instances of each class - stuff_memory_list: Dict[str, int] = {} - for k in range(pred_labels.shape[0]): - pred_class = pred_labels[k].item() - should_fuse = pred_class in label_ids_to_fuse - - # Check if mask exists and large enough to be a segment - mask_exists, mask_k = check_segment_validity( - mask_labels, mask_probs, k, mask_threshold, overlap_mask_area_threshold - ) - - if mask_exists: - if pred_class in stuff_memory_list: - current_segment_id = stuff_memory_list[pred_class] - else: - current_segment_id += 1 - - # Add current object segment to final segmentation map - segmentation[mask_k] = current_segment_id - segment_score = round(pred_scores[k].item(), 6) - segments.append( - { - "id": current_segment_id, - "label_id": pred_class, - "was_fused": should_fuse, - "score": segment_score, - } - ) - if should_fuse: - stuff_memory_list[pred_class] = current_segment_id - - return segmentation, segments - - -# Copied from transformers.models.maskformer.image_processing_maskformer.convert_segmentation_map_to_binary_masks -def convert_segmentation_map_to_binary_masks( - segmentation_map: "np.ndarray", - instance_id_to_semantic_id: Optional[Dict[int, int]] = None, - ignore_index: Optional[int] = None, - reduce_labels: bool = False, -): - if reduce_labels and ignore_index is None: - raise ValueError("If `reduce_labels` is True, `ignore_index` must be provided.") - - if reduce_labels: - segmentation_map = np.where(segmentation_map == 0, ignore_index, segmentation_map - 1) - - # Get unique ids (class or instance ids based on input) - all_labels = np.unique(segmentation_map) - - # Drop background label if applicable - if ignore_index is not 
None: - all_labels = all_labels[all_labels != ignore_index] - - # Generate a binary mask for each object instance - binary_masks = [(segmentation_map == i) for i in all_labels] - binary_masks = np.stack(binary_masks, axis=0) # (num_labels, height, width) - - # Convert instance ids to class ids - if instance_id_to_semantic_id is not None: - labels = np.zeros(all_labels.shape[0]) - - for label in all_labels: - class_id = instance_id_to_semantic_id[label + 1 if reduce_labels else label] - labels[all_labels == label] = class_id - 1 if reduce_labels else class_id - else: - labels = all_labels - - return binary_masks.astype(np.float32), labels.astype(np.int64) - - -def get_oneformer_resize_output_image_size( - image: np.ndarray, - size: Union[int, Tuple[int, int], List[int], Tuple[int]], - max_size: Optional[int] = None, - default_to_square: bool = True, - input_data_format: Optional[Union[str, ChannelDimension]] = None, -) -> tuple: - """ - Computes the output size given the desired size. - - Args: - input_image (`np.ndarray`): - The input image. - size (`int`, `Tuple[int, int]`, `List[int]`, `Tuple[int]`): - The size of the output image. - max_size (`int`, *optional*): - The maximum size of the output image. - default_to_square (`bool`, *optional*, defaults to `True`): - Whether to default to square if no size is provided. - - Returns: - `Tuple[int, int]`: The output size. - """ - output_size = get_resize_output_image_size( - input_image=image, - size=size, - default_to_square=default_to_square, - max_size=max_size, - input_data_format=input_data_format, - ) - return output_size - - -def prepare_metadata(repo_path, class_info_file): - with open(hf_hub_download(repo_path, class_info_file, repo_type="dataset"), "r") as f: - class_info = json.load(f) - metadata = {} - class_names = [] - thing_ids = [] - for key, info in class_info.items(): - metadata[key] = info["name"] - class_names.append(info["name"]) - if info["isthing"]: - thing_ids.append(int(key)) - metadata["thing_ids"] = thing_ids - metadata["class_names"] = class_names - return metadata - - -class OneFormerImageProcessor(BaseImageProcessor): - r""" - Constructs a OneFormer image processor. The image processor can be used to prepare image(s), task input(s) and - optional text inputs and targets for the model. - - This image processor inherits from [`BaseImageProcessor`] which contains most of the main methods. Users should - refer to this superclass for more information regarding those methods. - - Args: - do_resize (`bool`, *optional*, defaults to `True`): - Whether to resize the input to a certain `size`. - size (`int`, *optional*, defaults to 800): - Resize the input to the given size. Only has an effect if `do_resize` is set to `True`. If size is a - sequence like `(width, height)`, output size will be matched to this. If size is an int, smaller edge of - the image will be matched to this number. i.e, if `height > width`, then image will be rescaled to `(size * - height / width, size)`. - resample (`int`, *optional*, defaults to `Resampling.BILINEAR`): - An optional resampling filter. This can be one of `PIL.Image.Resampling.NEAREST`, - `PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`, - `PIL.Image.Resampling.BICUBIC` or `PIL.Image.Resampling.LANCZOS`. Only has an effect if `do_resize` is set - to `True`. - do_rescale (`bool`, *optional*, defaults to `True`): - Whether to rescale the input to a certain `scale`. 
- rescale_factor (`float`, *optional*, defaults to `1/ 255`): - Rescale the input by the given factor. Only has an effect if `do_rescale` is set to `True`. - do_normalize (`bool`, *optional*, defaults to `True`): - Whether or not to normalize the input with mean and standard deviation. - image_mean (`int`, *optional*, defaults to `[0.485, 0.456, 0.406]`): - The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean. - image_std (`int`, *optional*, defaults to `[0.229, 0.224, 0.225]`): - The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the - ImageNet std. - ignore_index (`int`, *optional*): - Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels - denoted with 0 (background) will be replaced with `ignore_index`. - do_reduce_labels (`bool`, *optional*, defaults to `False`): - Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0 - is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). - The background label will be replaced by `ignore_index`. - repo_path (`str`, defaults to `shi-labs/oneformer_demo`, *optional*, defaults to `"shi-labs/oneformer_demo"`): - Dataset repository on huggingface hub containing the JSON file with class information for the dataset. - class_info_file (`str`, *optional*): - JSON file containing class information for the dataset. It is stored inside on the `repo_path` dataset - repository. - num_text (`int`, *optional*): - Number of text entries in the text input list. - """ - - model_input_names = ["pixel_values", "pixel_mask", "task_inputs"] - - def __init__( - self, - do_resize: bool = True, - size: Dict[str, int] = None, - resample: PILImageResampling = PILImageResampling.BILINEAR, - do_rescale: bool = True, - rescale_factor: float = 1 / 255, - do_normalize: bool = True, - image_mean: Union[float, List[float]] = None, - image_std: Union[float, List[float]] = None, - ignore_index: Optional[int] = None, - do_reduce_labels: bool = False, - repo_path: str = "shi-labs/oneformer_demo", - class_info_file: str = None, - num_text: Optional[int] = None, - **kwargs, - ): - if "max_size" in kwargs: - self._max_size = kwargs.pop("max_size") - else: - self._max_size = 1333 - - size = size if size is not None else {"shortest_edge": 800, "longest_edge": self._max_size} - size = get_size_dict(size, max_size=self._max_size, default_to_square=False) - - if "reduce_labels" in kwargs: - warnings.warn( - "The `reduce_labels` argument is deprecated and will be removed in v4.27. 
" - "Please use `do_reduce_labels` instead.", - FutureWarning, - ) - do_reduce_labels = kwargs.pop("reduce_labels") - - super().__init__(**kwargs) - self.do_resize = do_resize - self.size = size - self.resample = resample - self.do_rescale = do_rescale - self.rescale_factor = rescale_factor - self.do_normalize = do_normalize - self.image_mean = image_mean if image_mean is not None else IMAGENET_DEFAULT_MEAN - self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD - self.ignore_index = ignore_index - self.do_reduce_labels = do_reduce_labels - self.class_info_file = class_info_file - self.repo_path = repo_path - self.metadata = prepare_metadata(repo_path, class_info_file) - self.num_text = num_text - - def resize( - self, - image: np.ndarray, - size: Dict[str, int], - resample: PILImageResampling = PILImageResampling.BILINEAR, - data_format=None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> np.ndarray: - """ - Resize the image to the given size. Size can be min_size (scalar) or `(height, width)` tuple. If size is an - int, smaller edge of the image will be matched to this number. - """ - if "max_size" in kwargs: - warnings.warn( - "The `max_size` parameter is deprecated and will be removed in v4.27. " - "Please specify in `size['longest_edge'] instead`.", - FutureWarning, - ) - max_size = kwargs.pop("max_size") - else: - max_size = None - size = get_size_dict(size, max_size=max_size, default_to_square=False) - if "shortest_edge" in size and "longest_edge" in size: - size, max_size = size["shortest_edge"], size["longest_edge"] - elif "height" in size and "width" in size: - size = (size["height"], size["width"]) - max_size = None - else: - raise ValueError( - "Size must contain 'height' and 'width' keys or 'shortest_edge' and 'longest_edge' keys. Got" - f" {size.keys()}." - ) - size = get_oneformer_resize_output_image_size( - image=image, size=size, max_size=max_size, default_to_square=False, input_data_format=input_data_format - ) - image = resize( - image, size=size, resample=resample, data_format=data_format, input_data_format=input_data_format - ) - return image - - # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.rescale - def rescale( - self, - image: np.ndarray, - rescale_factor: float, - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ) -> np.ndarray: - """ - Rescale the image by the given factor. image = image * rescale_factor. - - Args: - image (`np.ndarray`): - Image to rescale. - rescale_factor (`float`): - The value to use for rescaling. - data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format for the output image. If unset, the channel dimension format of the input - image is used. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - input_data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format for the input image. If unset, is inferred from the input image. Can be - one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. 
- """ - return rescale(image, rescale_factor, data_format=data_format, input_data_format=input_data_format) - - # Copied from transformers.models.maskformer.image_processing_maskformer.MaskFormerImageProcessor.convert_segmentation_map_to_binary_masks - def convert_segmentation_map_to_binary_masks( - self, - segmentation_map: "np.ndarray", - instance_id_to_semantic_id: Optional[Dict[int, int]] = None, - ignore_index: Optional[int] = None, - reduce_labels: bool = False, - ): - reduce_labels = reduce_labels if reduce_labels is not None else self.reduce_labels - ignore_index = ignore_index if ignore_index is not None else self.ignore_index - return convert_segmentation_map_to_binary_masks( - segmentation_map=segmentation_map, - instance_id_to_semantic_id=instance_id_to_semantic_id, - ignore_index=ignore_index, - reduce_labels=reduce_labels, - ) - - def __call__(self, images, task_inputs=None, segmentation_maps=None, **kwargs) -> BatchFeature: - return self.preprocess(images, task_inputs=task_inputs, segmentation_maps=segmentation_maps, **kwargs) - - def _preprocess( - self, - image: ImageInput, - do_resize: bool = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_rescale: bool = None, - rescale_factor: float = None, - do_normalize: bool = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ): - if do_resize: - image = self.resize(image, size=size, resample=resample, input_data_format=input_data_format) - if do_rescale: - image = self.rescale(image, rescale_factor=rescale_factor, input_data_format=input_data_format) - if do_normalize: - image = self.normalize(image, mean=image_mean, std=image_std, input_data_format=input_data_format) - return image - - def _preprocess_image( - self, - image: ImageInput, - do_resize: bool = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_rescale: bool = None, - rescale_factor: float = None, - do_normalize: bool = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ) -> np.ndarray: - """Preprocesses a single image.""" - # All transformations expect numpy arrays. - image = to_numpy_array(image) - if is_scaled_image(image) and do_rescale: - logger.warning_once( - "It looks like you are trying to rescale already rescaled images. If the input" - " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again." 
- ) - if input_data_format is None: - input_data_format = infer_channel_dimension_format(image) - image = self._preprocess( - image=image, - do_resize=do_resize, - size=size, - resample=resample, - do_rescale=do_rescale, - rescale_factor=rescale_factor, - do_normalize=do_normalize, - image_mean=image_mean, - image_std=image_std, - input_data_format=input_data_format, - ) - if data_format is not None: - image = to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) - return image - - def _preprocess_mask( - self, - segmentation_map: ImageInput, - do_resize: bool = None, - size: Dict[str, int] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ) -> np.ndarray: - """Preprocesses a single mask.""" - segmentation_map = to_numpy_array(segmentation_map) - # Add channel dimension if missing - needed for certain transformations - if segmentation_map.ndim == 2: - added_channel_dim = True - segmentation_map = segmentation_map[None, ...] - input_data_format = ChannelDimension.FIRST - else: - added_channel_dim = False - if input_data_format is None: - input_data_format = infer_channel_dimension_format(segmentation_map, num_channels=1) - # TODO: (Amy) - # Remork segmentation map processing to include reducing labels and resizing which doesn't - # drop segment IDs > 255. - segmentation_map = self._preprocess( - image=segmentation_map, - do_resize=do_resize, - resample=PILImageResampling.NEAREST, - size=size, - do_rescale=False, - do_normalize=False, - input_data_format=input_data_format, - ) - # Remove extra channel dimension if added for processing - if added_channel_dim: - segmentation_map = segmentation_map.squeeze(0) - return segmentation_map - - def preprocess( - self, - images: ImageInput, - task_inputs: Optional[List[str]] = None, - segmentation_maps: Optional[ImageInput] = None, - instance_id_to_semantic_id: Optional[Dict[int, int]] = None, - do_resize: Optional[bool] = None, - size: Optional[Dict[str, int]] = None, - resample: PILImageResampling = None, - do_rescale: Optional[bool] = None, - rescale_factor: Optional[float] = None, - do_normalize: Optional[bool] = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - ignore_index: Optional[int] = None, - do_reduce_labels: Optional[bool] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - data_format: Union[str, ChannelDimension] = ChannelDimension.FIRST, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> BatchFeature: - if "pad_and_return_pixel_mask" in kwargs: - warnings.warn( - "The `pad_and_return_pixel_mask` argument is deprecated and will be removed in v4.27", - FutureWarning, - ) - if "reduce_labels" in kwargs: - warnings.warn( - "The `reduce_labels` argument is deprecated and will be removed in a v4.27. Please use" - " `do_reduce_labels` instead.", - FutureWarning, - ) - if do_reduce_labels is not None: - raise ValueError( - "You cannot use both `reduce_labels` and `do_reduce_labels` arguments. Please use" - " `do_reduce_labels` instead." 
- ) - do_reduce_labels = kwargs.pop("reduce_labels") - - if task_inputs is None: - # Default value - task_inputs = ["panoptic"] - - do_resize = do_resize if do_resize is not None else self.do_resize - size = size if size is not None else self.size - size = get_size_dict(size, default_to_square=False, max_size=self._max_size) - resample = resample if resample is not None else self.resample - do_rescale = do_rescale if do_rescale is not None else self.do_rescale - rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor - do_normalize = do_normalize if do_normalize is not None else self.do_normalize - image_mean = image_mean if image_mean is not None else self.image_mean - image_std = image_std if image_std is not None else self.image_std - ignore_index = ignore_index if ignore_index is not None else self.ignore_index - do_reduce_labels = do_reduce_labels if do_reduce_labels is not None else self.do_reduce_labels - - if do_resize is not None and size is None: - raise ValueError("If `do_resize` is True, `size` must be provided.") - - if do_rescale is not None and rescale_factor is None: - raise ValueError("If `do_rescale` is True, `rescale_factor` must be provided.") - - if do_normalize is not None and (image_mean is None or image_std is None): - raise ValueError("If `do_normalize` is True, `image_mean` and `image_std` must be provided.") - - if not valid_images(images): - raise ValueError( - "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, " - "torch.Tensor, tf.Tensor or jax.ndarray." - ) - - if segmentation_maps is not None and not valid_images(segmentation_maps): - raise ValueError( - "Invalid segmentation map type. Must be of type PIL.Image.Image, numpy.ndarray, " - "torch.Tensor, tf.Tensor or jax.ndarray." - ) - - images = make_list_of_images(images) - if segmentation_maps is not None: - segmentation_maps = make_list_of_images(segmentation_maps, expected_ndims=2) - - if segmentation_maps is not None and len(images) != len(segmentation_maps): - raise ValueError("Images and segmentation maps must have the same length.") - - images = [ - self._preprocess_image( - image, - do_resize=do_resize, - size=size, - resample=resample, - do_rescale=do_rescale, - rescale_factor=rescale_factor, - do_normalize=do_normalize, - image_mean=image_mean, - image_std=image_std, - data_format=data_format, - input_data_format=input_data_format, - ) - for image in images - ] - - if segmentation_maps is not None: - segmentation_maps = [ - self._preprocess_mask(segmentation_map, do_resize, size, input_data_format=input_data_format) - for segmentation_map in segmentation_maps - ] - encoded_inputs = self.encode_inputs( - images, - task_inputs, - segmentation_maps, - instance_id_to_semantic_id, - ignore_index, - do_reduce_labels, - return_tensors, - input_data_format=input_data_format, - ) - return encoded_inputs - - # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor._pad_image - def _pad_image( - self, - image: np.ndarray, - output_size: Tuple[int, int], - constant_values: Union[float, Iterable[float]] = 0, - data_format: Optional[ChannelDimension] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ) -> np.ndarray: - """ - Pad an image with zeros to the given size. 
- """ - input_height, input_width = get_image_size(image, channel_dim=input_data_format) - output_height, output_width = output_size - - pad_bottom = output_height - input_height - pad_right = output_width - input_width - padding = ((0, pad_bottom), (0, pad_right)) - padded_image = pad( - image, - padding, - mode=PaddingMode.CONSTANT, - constant_values=constant_values, - data_format=data_format, - input_data_format=input_data_format, - ) - return padded_image - - # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.pad - def pad( - self, - images: List[np.ndarray], - constant_values: Union[float, Iterable[float]] = 0, - return_pixel_mask: bool = True, - return_tensors: Optional[Union[str, TensorType]] = None, - data_format: Optional[ChannelDimension] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ) -> BatchFeature: - """ - Pads a batch of images to the bottom and right of the image with zeros to the size of largest height and width - in the batch and optionally returns their corresponding pixel mask. - - Args: - image (`np.ndarray`): - Image to pad. - constant_values (`float` or `Iterable[float]`, *optional*): - The value to use for the padding if `mode` is `"constant"`. - return_pixel_mask (`bool`, *optional*, defaults to `True`): - Whether to return a pixel mask. - return_tensors (`str` or `TensorType`, *optional*): - The type of tensors to return. Can be one of: - - Unset: Return a list of `np.ndarray`. - - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`. - - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`. - - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`. - - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`. - data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format of the image. If not provided, it will be the same as the input image. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format of the input image. If not provided, it will be inferred. 
- """ - pad_size = get_max_height_width(images, input_data_format=input_data_format) - - padded_images = [ - self._pad_image( - image, - pad_size, - constant_values=constant_values, - data_format=data_format, - input_data_format=input_data_format, - ) - for image in images - ] - data = {"pixel_values": padded_images} - - if return_pixel_mask: - masks = [ - make_pixel_mask(image=image, output_size=pad_size, input_data_format=input_data_format) - for image in images - ] - data["pixel_mask"] = masks - - return BatchFeature(data=data, tensor_type=return_tensors) - - def get_semantic_annotations(self, label, num_class_obj): - annotation_classes = label["classes"] - annotation_masks = label["masks"] - - texts = ["a semantic photo"] * self.num_text - classes = [] - masks = [] - - for idx in range(len(annotation_classes)): - class_id = annotation_classes[idx] - mask = annotation_masks[idx] - if not np.all(mask is False): - if class_id not in classes: - cls_name = self.metadata[str(class_id)] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - else: - idx = classes.index(class_id) - masks[idx] += mask - masks[idx] = np.clip(masks[idx], 0, 1) - - num = 0 - for i, cls_name in enumerate(self.metadata["class_names"]): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - masks = np.array(masks) - return classes, masks, texts - - def get_instance_annotations(self, label, num_class_obj): - annotation_classes = label["classes"] - annotation_masks = label["masks"] - - texts = ["an instance photo"] * self.num_text - classes = [] - masks = [] - - for idx in range(len(annotation_classes)): - class_id = annotation_classes[idx] - mask = annotation_masks[idx] - - if class_id in self.metadata["thing_ids"]: - if not np.all(mask is False): - cls_name = self.metadata[str(class_id)] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - - num = 0 - for i, cls_name in enumerate(self.metadata["class_names"]): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - masks = np.array(masks) - return classes, masks, texts - - def get_panoptic_annotations(self, label, num_class_obj): - annotation_classes = label["classes"] - annotation_masks = label["masks"] - - texts = ["an panoptic photo"] * self.num_text - classes = [] - masks = [] - - for idx in range(len(annotation_classes)): - class_id = annotation_classes[idx] - mask = annotation_masks[idx].data - if not np.all(mask is False): - cls_name = self.metadata[str(class_id)] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - - num = 0 - for i, cls_name in enumerate(self.metadata["class_names"]): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - masks = np.array(masks) - return classes, masks, texts - - def encode_inputs( - self, - pixel_values_list: List[ImageInput], - task_inputs: List[str], - segmentation_maps: ImageInput = None, - instance_id_to_semantic_id: Optional[Union[List[Dict[int, int]], Dict[int, int]]] = None, - ignore_index: Optional[int] = None, - reduce_labels: bool = False, - return_tensors: Optional[Union[str, TensorType]] = None, - 
input_data_format: Optional[Union[str, ChannelDimension]] = None, - ): - """ - Pad images up to the largest image in a batch and create a corresponding `pixel_mask`. - - OneFormer addresses semantic segmentation with a mask classification paradigm, thus input segmentation maps - will be converted to lists of binary masks and their respective labels. Let's see an example, assuming - `segmentation_maps = [[2,6,7,9]]`, the output will contain `mask_labels = - [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]` (four binary masks) and `class_labels = [2,6,7,9]`, the labels for - each mask. - - Args: - pixel_values_list (`List[ImageInput]`): - List of images (pixel values) to be padded. Each image should be a tensor of shape `(channels, height, - width)`. - - task_inputs (`List[str]`): - List of task values. - - segmentation_maps (`ImageInput`, *optional*): - The corresponding semantic segmentation maps with the pixel-wise annotations. - - (`bool`, *optional*, defaults to `True`): - Whether or not to pad images up to the largest image in a batch and create a pixel mask. - - If left to the default, will return a pixel mask that is: - - - 1 for pixels that are real (i.e. **not masked**), - - 0 for pixels that are padding (i.e. **masked**). - - instance_id_to_semantic_id (`List[Dict[int, int]]` or `Dict[int, int]`, *optional*): - A mapping between object instance ids and class ids. If passed, `segmentation_maps` is treated as an - instance segmentation map where each pixel represents an instance id. Can be provided as a single - dictionary with a global/dataset-level mapping or as a list of dictionaries (one per image), to map - instance ids in each image separately. - - return_tensors (`str` or [`~file_utils.TensorType`], *optional*): - If set, will return tensors instead of NumPy arrays. If set to `'pt'`, return PyTorch `torch.Tensor` - objects. - - input_data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format of the input image. If not provided, it will be inferred from the input - image. - - Returns: - [`BatchFeature`]: A [`BatchFeature`] with the following fields: - - - **pixel_values** -- Pixel values to be fed to a model. - - **pixel_mask** -- Pixel mask to be fed to a model (when `=True` or if `pixel_mask` is in - `self.model_input_names`). - - **mask_labels** -- Optional list of mask labels of shape `(labels, height, width)` to be fed to a model - (when `annotations` are provided). - - **class_labels** -- Optional list of class labels of shape `(labels)` to be fed to a model (when - `annotations` are provided). They identify the labels of `mask_labels`, e.g. the label of - `mask_labels[i][j]` if `class_labels[i][j]`. - - **text_inputs** -- Optional list of text string entries to be fed to a model (when `annotations` are - provided). They identify the binary masks present in the image. 
- """ - ignore_index = self.ignore_index if ignore_index is None else ignore_index - reduce_labels = self.do_reduce_labels if reduce_labels is None else reduce_labels - pixel_values_list = [to_numpy_array(pixel_values) for pixel_values in pixel_values_list] - - if input_data_format is None: - input_data_format = infer_channel_dimension_format(pixel_values_list[0]) - - pad_size = get_max_height_width(pixel_values_list, input_data_format=input_data_format) - encoded_inputs = self.pad( - pixel_values_list, return_tensors=return_tensors, input_data_format=input_data_format - ) - - annotations = None - if segmentation_maps is not None: - segmentation_maps = map(np.array, segmentation_maps) - annotations = [] - for idx, segmentation_map in enumerate(segmentation_maps): - # Use instance2class_id mapping per image - if isinstance(instance_id_to_semantic_id, list): - instance_id = instance_id_to_semantic_id[idx] - else: - instance_id = instance_id_to_semantic_id - # Use instance2class_id mapping per image - masks, classes = self.convert_segmentation_map_to_binary_masks( - segmentation_map, instance_id, ignore_index=ignore_index, reduce_labels=reduce_labels - ) - annotations.append({"masks": masks, "classes": classes}) - - if annotations is not None: - mask_labels = [] - class_labels = [] - text_inputs = [] - - num_class_obj = {} - for cls_name in self.metadata["class_names"]: - num_class_obj[cls_name] = 0 - - for i, label in enumerate(annotations): - task = task_inputs[i] - if task == "semantic": - classes, masks, texts = self.get_semantic_annotations(label, num_class_obj) - elif task == "instance": - classes, masks, texts = self.get_instance_annotations(label, num_class_obj) - elif task == "panoptic": - classes, masks, texts = self.get_panoptic_annotations(label, num_class_obj) - else: - raise ValueError(f"{task} was not expected, expected `semantic`, `instance` or `panoptic`") - - # we cannot batch them since they don't share a common class size - masks = [mask[None, ...] for mask in masks] - masks = [ - self._pad_image(image=mask, output_size=pad_size, constant_values=ignore_index) for mask in masks - ] - masks = np.concatenate(masks, axis=0) - mask_labels.append(torch.from_numpy(masks)) - class_labels.append(torch.from_numpy(classes).long()) - text_inputs.append(texts) - - encoded_inputs["mask_labels"] = mask_labels - encoded_inputs["class_labels"] = class_labels - encoded_inputs["text_inputs"] = text_inputs - - # This needs to be tokenized before sending to the model. - encoded_inputs["task_inputs"] = [f"the task is {task_input}" for task_input in task_inputs] - - return encoded_inputs - - # Copied from transformers.models.maskformer.image_processing_maskformer.MaskFormerImageProcessor.post_process_semantic_segmentation - def post_process_semantic_segmentation( - self, outputs, target_sizes: Optional[List[Tuple[int, int]]] = None - ) -> "torch.Tensor": - """ - Converts the output of [`MaskFormerForInstanceSegmentation`] into semantic segmentation maps. Only supports - PyTorch. - - Args: - outputs ([`MaskFormerForInstanceSegmentation`]): - Raw outputs of the model. - target_sizes (`List[Tuple[int, int]]`, *optional*): - List of length (batch_size), where each list item (`Tuple[int, int]]`) corresponds to the requested - final size (height, width) of each prediction. If left to None, predictions will not be resized. 
- Returns: - `List[torch.Tensor]`: - A list of length `batch_size`, where each item is a semantic segmentation map of shape (height, width) - corresponding to the target_sizes entry (if `target_sizes` is specified). Each entry of each - `torch.Tensor` correspond to a semantic class id. - """ - class_queries_logits = outputs.class_queries_logits # [batch_size, num_queries, num_classes+1] - masks_queries_logits = outputs.masks_queries_logits # [batch_size, num_queries, height, width] - - # Remove the null class `[..., :-1]` - masks_classes = class_queries_logits.softmax(dim=-1)[..., :-1] - masks_probs = masks_queries_logits.sigmoid() # [batch_size, num_queries, height, width] - - # Semantic segmentation logits of shape (batch_size, num_classes, height, width) - segmentation = torch.einsum("bqc, bqhw -> bchw", masks_classes, masks_probs) - batch_size = class_queries_logits.shape[0] - - # Resize logits and compute semantic segmentation maps - if target_sizes is not None: - if batch_size != len(target_sizes): - raise ValueError( - "Make sure that you pass in as many target sizes as the batch dimension of the logits" - ) - - semantic_segmentation = [] - for idx in range(batch_size): - resized_logits = torch.nn.functional.interpolate( - segmentation[idx].unsqueeze(dim=0), size=target_sizes[idx], mode="bilinear", align_corners=False - ) - semantic_map = resized_logits[0].argmax(dim=0) - semantic_segmentation.append(semantic_map) - else: - semantic_segmentation = segmentation.argmax(dim=1) - semantic_segmentation = [semantic_segmentation[i] for i in range(semantic_segmentation.shape[0])] - - return semantic_segmentation - - def post_process_instance_segmentation( - self, - outputs, - task_type: str = "instance", - is_demo: bool = True, - threshold: float = 0.5, - mask_threshold: float = 0.5, - overlap_mask_area_threshold: float = 0.8, - target_sizes: Optional[List[Tuple[int, int]]] = None, - return_coco_annotation: Optional[bool] = False, - ): - """ - Converts the output of [`OneFormerForUniversalSegmentationOutput`] into image instance segmentation - predictions. Only supports PyTorch. - - Args: - outputs ([`OneFormerForUniversalSegmentationOutput`]): - The outputs from [`OneFormerForUniversalSegmentationOutput`]. - task_type (`str`, *optional)*, defaults to "instance"): - The post processing depends on the task token input. If the `task_type` is "panoptic", we need to - ignore the stuff predictions. - is_demo (`bool`, *optional)*, defaults to `True`): - Whether the model is in demo mode. If true, use threshold to predict final masks. - threshold (`float`, *optional*, defaults to 0.5): - The probability score threshold to keep predicted instance masks. - mask_threshold (`float`, *optional*, defaults to 0.5): - Threshold to use when turning the predicted masks into binary values. - overlap_mask_area_threshold (`float`, *optional*, defaults to 0.8): - The overlap mask area threshold to merge or discard small disconnected parts within each binary - instance mask. - target_sizes (`List[Tuple]`, *optional*): - List of length (batch_size), where each list item (`Tuple[int, int]]`) corresponds to the requested - final size (height, width) of each prediction in batch. If left to None, predictions will not be - resized. - return_coco_annotation (`bool`, *optional)*, defaults to `False`): - Whether to return predictions in COCO format. 
- - Returns: - `List[Dict]`: A list of dictionaries, one per image, each dictionary containing two keys: - - **segmentation** -- a tensor of shape `(height, width)` where each pixel represents a `segment_id`, set - to `None` if no mask if found above `threshold`. If `target_sizes` is specified, segmentation is resized - to the corresponding `target_sizes` entry. - - **segments_info** -- A dictionary that contains additional information on each segment. - - **id** -- an integer representing the `segment_id`. - - **label_id** -- An integer representing the label / semantic class id corresponding to `segment_id`. - - **was_fused** -- a boolean, `True` if `label_id` was in `label_ids_to_fuse`, `False` otherwise. - Multiple instances of the same class / label were fused and assigned a single `segment_id`. - - **score** -- Prediction score of segment with `segment_id`. - """ - class_queries_logits = outputs.class_queries_logits # [batch_size, num_queries, num_classes+1] - masks_queries_logits = outputs.masks_queries_logits # [batch_size, num_queries, height, width] - - batch_size = class_queries_logits.shape[0] - num_queries = class_queries_logits.shape[1] - num_classes = class_queries_logits.shape[-1] - 1 - - # Loop over items in batch size - results: List[Dict[str, torch.Tensor]] = [] - - for i in range(batch_size): - # [Q, K] - scores = torch.nn.functional.softmax(class_queries_logits[i], dim=-1)[:, :-1] - labels = torch.arange(num_classes).unsqueeze(0).repeat(num_queries, 1).flatten(0, 1) - - # scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.num_queries, sorted=False) - scores_per_image, topk_indices = scores.flatten(0, 1).topk(num_queries, sorted=False) - labels_per_image = labels[topk_indices] - - topk_indices = torch.div(topk_indices, num_classes, rounding_mode="floor") - # mask_pred = mask_pred.unsqueeze(1).repeat(1, self.sem_seg_head.num_classes, 1).flatten(0, 1) - mask_pred = masks_queries_logits[i][topk_indices] - - # Only consider scores with confidence over [threshold] for demo - if is_demo: - keep = scores_per_image > threshold - scores_per_image = scores_per_image[keep] - labels_per_image = labels_per_image[keep] - mask_pred = mask_pred[keep] - - # if this is panoptic segmentation, we only keep the "thing" classes - if task_type == "panoptic": - keep = torch.zeros_like(scores_per_image).bool() - for i, lab in enumerate(labels_per_image): - keep[i] = lab in self.metadata["thing_ids"] - - scores_per_image = scores_per_image[keep] - labels_per_image = labels_per_image[keep] - mask_pred = mask_pred[keep] - - if mask_pred.shape[0] <= 0: - height, width = target_sizes[i] if target_sizes is not None else mask_pred.shape[1:] - segmentation = torch.zeros((height, width)) - 1 - results.append({"segmentation": segmentation, "segments_info": []}) - continue - - if "ade20k" in self.class_info_file and not is_demo and "instance" in task_type: - for i in range(labels_per_image.shape[0]): - labels_per_image[i] = self.metadata["thing_ids"].index(labels_per_image[i].item()) - - # Get segmentation map and segment information of batch item - target_size = target_sizes[i] if target_sizes is not None else None - segmentation, segments = compute_segments( - mask_pred, - scores_per_image, - labels_per_image, - mask_threshold, - overlap_mask_area_threshold, - set(), - target_size, - ) - - # Return segmentation map in run-length encoding (RLE) format - if return_coco_annotation: - segmentation = convert_segmentation_to_rle(segmentation) - - results.append({"segmentation": segmentation, 
"segments_info": segments}) - return results - - # Copied from transformers.models.maskformer.image_processing_maskformer.MaskFormerImageProcessor.post_process_panoptic_segmentation - def post_process_panoptic_segmentation( - self, - outputs, - threshold: float = 0.5, - mask_threshold: float = 0.5, - overlap_mask_area_threshold: float = 0.8, - label_ids_to_fuse: Optional[Set[int]] = None, - target_sizes: Optional[List[Tuple[int, int]]] = None, - ) -> List[Dict]: - """ - Converts the output of [`MaskFormerForInstanceSegmentationOutput`] into image panoptic segmentation - predictions. Only supports PyTorch. - - Args: - outputs ([`MaskFormerForInstanceSegmentationOutput`]): - The outputs from [`MaskFormerForInstanceSegmentation`]. - threshold (`float`, *optional*, defaults to 0.5): - The probability score threshold to keep predicted instance masks. - mask_threshold (`float`, *optional*, defaults to 0.5): - Threshold to use when turning the predicted masks into binary values. - overlap_mask_area_threshold (`float`, *optional*, defaults to 0.8): - The overlap mask area threshold to merge or discard small disconnected parts within each binary - instance mask. - label_ids_to_fuse (`Set[int]`, *optional*): - The labels in this state will have all their instances be fused together. For instance we could say - there can only be one sky in an image, but several persons, so the label ID for sky would be in that - set, but not the one for person. - target_sizes (`List[Tuple]`, *optional*): - List of length (batch_size), where each list item (`Tuple[int, int]]`) corresponds to the requested - final size (height, width) of each prediction in batch. If left to None, predictions will not be - resized. - - Returns: - `List[Dict]`: A list of dictionaries, one per image, each dictionary containing two keys: - - **segmentation** -- a tensor of shape `(height, width)` where each pixel represents a `segment_id`, set - to `None` if no mask if found above `threshold`. If `target_sizes` is specified, segmentation is resized - to the corresponding `target_sizes` entry. - - **segments_info** -- A dictionary that contains additional information on each segment. - - **id** -- an integer representing the `segment_id`. - - **label_id** -- An integer representing the label / semantic class id corresponding to `segment_id`. - - **was_fused** -- a boolean, `True` if `label_id` was in `label_ids_to_fuse`, `False` otherwise. - Multiple instances of the same class / label were fused and assigned a single `segment_id`. - - **score** -- Prediction score of segment with `segment_id`. - """ - - if label_ids_to_fuse is None: - logger.warning("`label_ids_to_fuse` unset. 
No instance will be fused.") - label_ids_to_fuse = set() - - class_queries_logits = outputs.class_queries_logits # [batch_size, num_queries, num_classes+1] - masks_queries_logits = outputs.masks_queries_logits # [batch_size, num_queries, height, width] - - batch_size = class_queries_logits.shape[0] - num_labels = class_queries_logits.shape[-1] - 1 - - mask_probs = masks_queries_logits.sigmoid() # [batch_size, num_queries, height, width] - - # Predicted label and score of each query (batch_size, num_queries) - pred_scores, pred_labels = nn.functional.softmax(class_queries_logits, dim=-1).max(-1) - - # Loop over items in batch size - results: List[Dict[str, TensorType]] = [] - - for i in range(batch_size): - mask_probs_item, pred_scores_item, pred_labels_item = remove_low_and_no_objects( - mask_probs[i], pred_scores[i], pred_labels[i], threshold, num_labels - ) - - # No mask found - if mask_probs_item.shape[0] <= 0: - height, width = target_sizes[i] if target_sizes is not None else mask_probs_item.shape[1:] - segmentation = torch.zeros((height, width)) - 1 - results.append({"segmentation": segmentation, "segments_info": []}) - continue - - # Get segmentation map and segment information of batch item - target_size = target_sizes[i] if target_sizes is not None else None - segmentation, segments = compute_segments( - mask_probs=mask_probs_item, - pred_scores=pred_scores_item, - pred_labels=pred_labels_item, - mask_threshold=mask_threshold, - overlap_mask_area_threshold=overlap_mask_area_threshold, - label_ids_to_fuse=label_ids_to_fuse, - target_size=target_size, - ) - - results.append({"segmentation": segmentation, "segments_info": segments}) - return results diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/dphubert/utils/__init__.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/dphubert/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py deleted file mode 100644 index 8f369a2afedb6c6e69fd52ff9a9a6b1cdf965937..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 4 # 100ep -> 400ep - -lr_multiplier.scheduler.milestones = [ - milestone * 4 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/config/config.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/config/config.py deleted file mode 100644 index 49a55b1bc87509e2bb24b902ae12c21d5aaeda81..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/config/config.py +++ /dev/null @@ -1,265 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import functools -import inspect -import logging -from fvcore.common.config import CfgNode as _CfgNode - -from detectron2.utils.file_io import PathManager - - -class CfgNode(_CfgNode): - """ - The same as `fvcore.common.config.CfgNode`, but different in: - - 1. Use unsafe yaml loading by default. - Note that this may lead to arbitrary code execution: you must not - load a config file from untrusted sources before manually inspecting - the content of the file. - 2. Support config versioning. - When attempting to merge an old config, it will convert the old config automatically. - - .. automethod:: clone - .. automethod:: freeze - .. automethod:: defrost - .. automethod:: is_frozen - .. automethod:: load_yaml_with_base - .. automethod:: merge_from_list - .. automethod:: merge_from_other_cfg - """ - - @classmethod - def _open_cfg(cls, filename): - return PathManager.open(filename, "r") - - # Note that the default value of allow_unsafe is changed to True - def merge_from_file(self, cfg_filename: str, allow_unsafe: bool = True) -> None: - """ - Load content from the given config file and merge it into self. - - Args: - cfg_filename: config filename - allow_unsafe: allow unsafe yaml syntax - """ - assert PathManager.isfile(cfg_filename), f"Config file '{cfg_filename}' does not exist!" - loaded_cfg = self.load_yaml_with_base(cfg_filename, allow_unsafe=allow_unsafe) - loaded_cfg = type(self)(loaded_cfg) - - # defaults.py needs to import CfgNode - from .defaults import _C - - latest_ver = _C.VERSION - assert ( - latest_ver == self.VERSION - ), "CfgNode.merge_from_file is only allowed on a config object of latest version!" - - logger = logging.getLogger(__name__) - - loaded_ver = loaded_cfg.get("VERSION", None) - if loaded_ver is None: - from .compat import guess_version - - loaded_ver = guess_version(loaded_cfg, cfg_filename) - assert loaded_ver <= self.VERSION, "Cannot merge a v{} config into a v{} config.".format( - loaded_ver, self.VERSION - ) - - if loaded_ver == self.VERSION: - self.merge_from_other_cfg(loaded_cfg) - else: - # compat.py needs to import CfgNode - from .compat import upgrade_config, downgrade_config - - logger.warning( - "Loading an old v{} config file '{}' by automatically upgrading to v{}. " - "See docs/CHANGELOG.md for instructions to update your files.".format( - loaded_ver, cfg_filename, self.VERSION - ) - ) - # To convert, first obtain a full config at an old version - old_self = downgrade_config(self, to_version=loaded_ver) - old_self.merge_from_other_cfg(loaded_cfg) - new_config = upgrade_config(old_self) - self.clear() - self.update(new_config) - - def dump(self, *args, **kwargs): - """ - Returns: - str: a yaml string representation of the config - """ - # to make it show up in docs - return super().dump(*args, **kwargs) - - -global_cfg = CfgNode() - - -def get_cfg() -> CfgNode: - """ - Get a copy of the default config. - - Returns: - a detectron2 CfgNode instance. - """ - from .defaults import _C - - return _C.clone() - - -def set_global_cfg(cfg: CfgNode) -> None: - """ - Let the global config point to the given cfg. - - Assume that the given "cfg" has the key "KEY", after calling - `set_global_cfg(cfg)`, the key can be accessed by: - :: - from detectron2.config import global_cfg - print(global_cfg.KEY) - - By using a hacky global config, you can access these configs anywhere, - without having to pass the config object or the values deep into the code. - This is a hacky feature introduced for quick prototyping / research exploration. 
- """ - global global_cfg - global_cfg.clear() - global_cfg.update(cfg) - - -def configurable(init_func=None, *, from_config=None): - """ - Decorate a function or a class's __init__ method so that it can be called - with a :class:`CfgNode` object using a :func:`from_config` function that translates - :class:`CfgNode` to arguments. - - Examples: - :: - # Usage 1: Decorator on __init__: - class A: - @configurable - def __init__(self, a, b=2, c=3): - pass - - @classmethod - def from_config(cls, cfg): # 'cfg' must be the first argument - # Returns kwargs to be passed to __init__ - return {"a": cfg.A, "b": cfg.B} - - a1 = A(a=1, b=2) # regular construction - a2 = A(cfg) # construct with a cfg - a3 = A(cfg, b=3, c=4) # construct with extra overwrite - - # Usage 2: Decorator on any function. Needs an extra from_config argument: - @configurable(from_config=lambda cfg: {"a: cfg.A, "b": cfg.B}) - def a_func(a, b=2, c=3): - pass - - a1 = a_func(a=1, b=2) # regular call - a2 = a_func(cfg) # call with a cfg - a3 = a_func(cfg, b=3, c=4) # call with extra overwrite - - Args: - init_func (callable): a class's ``__init__`` method in usage 1. The - class must have a ``from_config`` classmethod which takes `cfg` as - the first argument. - from_config (callable): the from_config function in usage 2. It must take `cfg` - as its first argument. - """ - - if init_func is not None: - assert ( - inspect.isfunction(init_func) - and from_config is None - and init_func.__name__ == "__init__" - ), "Incorrect use of @configurable. Check API documentation for examples." - - @functools.wraps(init_func) - def wrapped(self, *args, **kwargs): - try: - from_config_func = type(self).from_config - except AttributeError as e: - raise AttributeError( - "Class with @configurable must have a 'from_config' classmethod." - ) from e - if not inspect.ismethod(from_config_func): - raise TypeError("Class with @configurable must have a 'from_config' classmethod.") - - if _called_with_cfg(*args, **kwargs): - explicit_args = _get_args_from_config(from_config_func, *args, **kwargs) - init_func(self, **explicit_args) - else: - init_func(self, *args, **kwargs) - - return wrapped - - else: - if from_config is None: - return configurable # @configurable() is made equivalent to @configurable - assert inspect.isfunction( - from_config - ), "from_config argument of configurable must be a function!" - - def wrapper(orig_func): - @functools.wraps(orig_func) - def wrapped(*args, **kwargs): - if _called_with_cfg(*args, **kwargs): - explicit_args = _get_args_from_config(from_config, *args, **kwargs) - return orig_func(**explicit_args) - else: - return orig_func(*args, **kwargs) - - wrapped.from_config = from_config - return wrapped - - return wrapper - - -def _get_args_from_config(from_config_func, *args, **kwargs): - """ - Use `from_config` to obtain explicit arguments. 
- - Returns: - dict: arguments to be used for cls.__init__ - """ - signature = inspect.signature(from_config_func) - if list(signature.parameters.keys())[0] != "cfg": - if inspect.isfunction(from_config_func): - name = from_config_func.__name__ - else: - name = f"{from_config_func.__self__}.from_config" - raise TypeError(f"{name} must take 'cfg' as the first argument!") - support_var_arg = any( - param.kind in [param.VAR_POSITIONAL, param.VAR_KEYWORD] - for param in signature.parameters.values() - ) - if support_var_arg: # forward all arguments to from_config, if from_config accepts them - ret = from_config_func(*args, **kwargs) - else: - # forward supported arguments to from_config - supported_arg_names = set(signature.parameters.keys()) - extra_kwargs = {} - for name in list(kwargs.keys()): - if name not in supported_arg_names: - extra_kwargs[name] = kwargs.pop(name) - ret = from_config_func(*args, **kwargs) - # forward the other arguments to __init__ - ret.update(extra_kwargs) - return ret - - -def _called_with_cfg(*args, **kwargs): - """ - Returns: - bool: whether the arguments contain CfgNode and should be considered - forwarded to from_config. - """ - from omegaconf import DictConfig - - if len(args) and isinstance(args[0], (_CfgNode, DictConfig)): - return True - if isinstance(kwargs.pop("cfg", None), (_CfgNode, DictConfig)): - return True - # `from_config`'s first argument is forced to be "cfg". - # So the above check covers all cases. - return False diff --git a/spaces/ysharma/Talk_to_Multilingual_AI_WhisperBloomCoqui/README.md b/spaces/ysharma/Talk_to_Multilingual_AI_WhisperBloomCoqui/README.md deleted file mode 100644 index bec88902a86de09fcd74441ddab97995cff92fd5..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Talk_to_Multilingual_AI_WhisperBloomCoqui/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Talk To Multilingual AI WhisperBloomCoqui -emoji: 📉 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/imgs/convert_rgb_alpha.py b/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/imgs/convert_rgb_alpha.py deleted file mode 100644 index 12fc39b538eea20f4e3f17469765a1120852a27f..0000000000000000000000000000000000000000 --- a/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/imgs/convert_rgb_alpha.py +++ /dev/null @@ -1,14 +0,0 @@ -import matplotlib.pyplot as plt -import numpy as np - -rgb_file = 'fg-1-rgb.png' -alpha_file = 'fg-1-alpha.png' -output_file = 'fg-1-rgba.png' - -rgb = plt.imread(rgb_file) -alpha = plt.imread(alpha_file) - -print(rgb.shape, alpha.shape) - -rgba = np.concatenate([rgb[..., :3], alpha[..., 0:1]], axis=2) -plt.imsave(output_file, rgba) diff --git a/spaces/zakiu/Personal-TTS/app.py b/spaces/zakiu/Personal-TTS/app.py deleted file mode 100644 index 79b5567466577c4b4dc3078754bd03f1238adc14..0000000000000000000000000000000000000000 --- a/spaces/zakiu/Personal-TTS/app.py +++ /dev/null @@ -1,158 +0,0 @@ -import os -import gradio as gr -import random - -os.system("pip install kantts -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html") -os.system("pip install librosa==0.9.2") -os.system("pip install numpy==1.22.0") - -from modelscope.models.audio.tts import SambertHifigan -from modelscope.pipelines import pipeline -from modelscope.utils.constant import Tasks - -from 
voicefixer import VoiceFixer -voicefixer = VoiceFixer() - -# model_0 - -model_dir = os.path.abspath("./pretrain_work_dir") - -custom_infer_abs = { - 'voice_name': - 'F7', - 'am_ckpt': - os.path.join(model_dir, 'tmp_am', 'ckpt'), - 'am_config': - os.path.join(model_dir, 'tmp_am', 'config.yaml'), - 'voc_ckpt': - os.path.join(model_dir, 'orig_model', 'basemodel_16k', 'hifigan', 'ckpt'), - 'voc_config': - os.path.join(model_dir, 'orig_model', 'basemodel_16k', 'hifigan', - 'config.yaml'), - 'audio_config': - os.path.join(model_dir, 'data', 'audio_config.yaml'), - 'se_file': - os.path.join(model_dir, 'data', 'se', 'se.npy') -} -kwargs = {'custom_ckpt': custom_infer_abs} - -model_id = SambertHifigan(os.path.join(model_dir, "orig_model"), **kwargs) - -inference = pipeline(task=Tasks.text_to_speech, model=model_id) - -# model_1 - -model_dir1 = os.path.abspath("./jay/pretrain_work_dir") - -custom_infer_abs1 = { - 'voice_name': - 'F7', - 'am_ckpt': - os.path.join(model_dir1, 'tmp_am', 'ckpt'), - 'am_config': - os.path.join(model_dir1, 'tmp_am', 'config.yaml'), - 'voc_ckpt': - os.path.join(model_dir1, 'orig_model', 'basemodel_16k', 'hifigan', 'ckpt'), - 'voc_config': - os.path.join(model_dir1, 'orig_model', 'basemodel_16k', 'hifigan', - 'config.yaml'), - 'audio_config': - os.path.join(model_dir1, 'data', 'audio_config.yaml'), - 'se_file': - os.path.join(model_dir1, 'data', 'se', 'se.npy') -} -kwargs1 = {'custom_ckpt': custom_infer_abs1} - -model_id1 = SambertHifigan(os.path.join(model_dir1, "orig_model"), **kwargs1) - -inference1 = pipeline(task=Tasks.text_to_speech, model=model_id1) - - -# functions - -def infer(text): - output = inference(input=text) - filename = str(random.randint(1, 1000000000000)) - - with open(filename + "myfile.wav", mode='bx') as f: - f.write(output["output_wav"]) - return filename + "myfile.wav" - -def infer1(text): - output = inference1(input=text) - filename = str(random.randint(1, 1000000000000)) - - with open(filename + "file.wav", mode='bx') as f: - f.write(output["output_wav"]) - return filename + "file.wav" - -# upsample - -import numpy as np -import torch -from hifi_gan_bwe import BandwidthExtender -from scipy.io.wavfile import write - -MAX_LENGTH = 600.0 - -model = BandwidthExtender.from_pretrained("hifi-gan-bwe-10-42890e3-vctk-48kHz") - -def extend(audio): - fs, x = audio - x = x[:int(MAX_LENGTH * fs)] - x = x.astype(np.float32) / 32767.0 - if len(x.shape) == 1: - x = x[:, np.newaxis] - - with torch.no_grad(): - y = np.stack([model(torch.from_numpy(x), fs) for x in x.T]).T - y = (y * 32767.0).astype(np.int16) - fs = int(model.sample_rate) - write("upsample.wav", fs, y) - - return "upsample.wav" - -# denoise - -def inference_denoise(audio): - voicefixer.restore(input=audio, # input wav file path - output="output.wav", # output wav file path - cuda=False, # whether to use gpu acceleration - mode = int(0)) # You can try out mode 0, 1 to find out the best result - return 'output.wav' - - -app = gr.Blocks() - -with app: - gr.Markdown("#
    🥳🎶🎡 - KanTTS中文声音克隆
    ") - gr.Markdown("##
    🌊 - 更多精彩应用,敬请关注[滔滔AI](http://www.talktalkai.com);滔滔AI,为爱滔滔!💕
    ") - - with gr.Row(): - with gr.Column(): - inp = gr.Textbox(lines=5, label="请填写您想要转换的中文文本") - with gr.Row(): - btn = gr.Button("使用AI娜娜的声音", variant="primary") - btn1 = gr.Button("使用AI小杰的声音", variant="primary") - with gr.Column(): - with gr.Row(): - out = gr.Audio(label="为您生成的专属音频") - out1 = gr.Audio(label="更高采样率的专属音频", type="filepath") - out2 = gr.Audio(label="降噪后的高采样率音频", type="filepath") - with gr.Row(): - btn2 = gr.Button("一键提高采样率") - btn3 = gr.Button("一键降噪") - - btn.click(fn=infer, inputs=[inp], outputs=[out]) - btn1.click(fn=infer1, inputs=[inp], outputs=[out]) - btn2.click(fn=extend, inputs=[out], outputs=[out1]) - btn3.click(fn=inference_denoise, inputs=[out1], outputs=[out2]) - - gr.Markdown("###
    Note❗: Please do not generate content that could harm individuals or organizations; this program is intended for research, learning, and personal entertainment only.
    ") - gr.HTML(''' - - ''') -app.launch(show_error=True) \ No newline at end of file diff --git a/spaces/zhan66/vits-uma-genshin-honkai/attentions.py b/spaces/zhan66/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/zhan66/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in 
range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/zhang-wei-jian/docker/node_modules/tsscmp/README.md b/spaces/zhang-wei-jian/docker/node_modules/tsscmp/README.md deleted file mode 100644 index cba99d03700b1c5aadda61cee9cb04f9b0c27153..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/tsscmp/README.md +++ /dev/null @@ -1,48 +0,0 @@ -# Timing safe string compare using double HMAC - -[![Node.js Version](https://img.shields.io/node/v/tsscmp.svg?style=flat-square)](https://nodejs.org/en/download) -[![npm](https://img.shields.io/npm/v/tsscmp.svg?style=flat-square)](https://npmjs.org/package/tsscmp) -[![NPM Downloads](https://img.shields.io/npm/dm/tsscmp.svg?style=flat-square)](https://npmjs.org/package/tsscmp) -[![Build Status](https://img.shields.io/travis/suryagh/tsscmp/master.svg?style=flat-square)](https://travis-ci.org/suryagh/tsscmp) -[![Build Status](https://img.shields.io/appveyor/ci/suryagh/tsscmp/master.svg?style=flat-square&label=windows)](https://ci.appveyor.com/project/suryagh/tsscmp) -[![Dependency Status](http://img.shields.io/david/suryagh/tsscmp.svg?style=flat-square)](https://david-dm.org/suryagh/tsscmp) -[![npm-license](http://img.shields.io/npm/l/tsscmp.svg?style=flat-square)](LICENSE) - - -Prevents [timing attacks](http://codahale.com/a-lesson-in-timing-attacks/) using Brad Hill's -[Double HMAC pattern](https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2011/february/double-hmac-verification/) -to perform secure string comparison. Double HMAC avoids the timing atacks by blinding the -timing channel using random time per attempt comparison against iterative brute force attacks. - - -## Install - -``` -npm install tsscmp -``` -## Why -To compare secret values like **authentication tokens**, **passwords** or -**capability urls** so that timing information is not -leaked to the attacker. 
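The double-HMAC idea described above can be sketched in a few lines; the following is a minimal Python illustration of the pattern (the hash function and key size are assumptions here, and this is not the library's own JavaScript implementation):

```python
# Minimal sketch of the double-HMAC comparison pattern (illustrative only).
# Both values are blinded with HMAC under a fresh random key before comparing,
# so timing depends on the random MACs rather than on the secret's prefix.
import hashlib
import hmac
import os

def timing_safe_equal(known: str, supplied: str) -> bool:
    key = os.urandom(32)  # fresh random key per comparison (size is an assumption)
    mac_known = hmac.new(key, known.encode("utf-8"), hashlib.sha256).digest()
    mac_supplied = hmac.new(key, supplied.encode("utf-8"), hashlib.sha256).digest()
    return hmac.compare_digest(mac_known, mac_supplied)
```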
- -## Example - -```js -var timingSafeCompare = require('tsscmp'); - -var sessionToken = '127e6fbfe24a750e72930c'; -var givenToken = '127e6fbfe24a750e72930c'; - -if (timingSafeCompare(sessionToken, givenToken)) { - console.log('good token'); -} else { - console.log('bad token'); -} -``` -##License: -[MIT](LICENSE) - -**Credits to:** [@jsha](https://github.com/jsha) | -[@bnoordhuis](https://github.com/bnoordhuis) | -[@suryagh](https://github.com/suryagh) | - \ No newline at end of file diff --git a/spaces/zhenwusw/JoJoGAN/op/fused_bias_act.cpp b/spaces/zhenwusw/JoJoGAN/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/zhenwusw/JoJoGAN/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/zideliu/styledrop/configs/custom.py b/spaces/zideliu/styledrop/configs/custom.py deleted file mode 100644 index 0a4a1f8f41f089718ec95b0e87c7cdafede84e49..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/configs/custom.py +++ /dev/null @@ -1,83 +0,0 @@ -import ml_collections - - -def d(**kwargs): - """Helper of creating a config dict.""" - return ml_collections.ConfigDict(initial_dictionary=kwargs) - - -def get_config(): - config = ml_collections.ConfigDict() - - - config.seed = 1234 - config.z_shape = (8, 16, 16) - - config.autoencoder = d( - config_file='vq-f16-jax.yaml', - ) - config.data_path="data/one_style.json" - config.resume_root="assets/ckpts/cc3m-285000.ckpt" - config.adapter_path=None - config.sample_interval=True - config.train = d( - n_steps=1000, - batch_size=8, - log_interval=20, - eval_interval=100, - save_interval=100, - fid_interval=20000, - num_workers=8, - resampled=False, - ) - - config.optimizer = d( - name='adamw', - lr=0.0003, - weight_decay=0.03, - betas=(0.99, 0.99), - ) - - config.lr_scheduler = d( - name='customized', - warmup_steps=-1, # 5000 - ) - - config.nnet = d( - name='uvit_t2i_vq', - img_size=16, - codebook_size=1024, - in_chans=4, - embed_dim=1152, - depth=28, - num_heads=16, - mlp_ratio=4, - qkv_bias=False, - clip_dim=1280, - num_clip_token=77, - use_checkpoint=False, - skip=True, - d_prj=32,# Stage I: 32; Stage II: TODO - is_shared=False, # Stage I: False; Stage II: False - ) - - config.muse = d( - ignore_ind=-1, - smoothing=0.1, - gen_temp=4.5 - ) - - - config.sample = d( - sample_steps=36, - n_samples=50, - mini_batch_size=8, - cfg=True, - linear_inc_scale=True, - scale=10., - path='', - lambdaA=2.0, # Stage I: 2.0; Stage II: TODO - lambdaB=5.0, # Stage I: 5.0; Stage II: TODO - ) - - return config diff --git a/spaces/zomehwh/sovits-rudolf/hubert/__init__.py b/spaces/zomehwh/sovits-rudolf/hubert/__init__.py 
deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zwhe99/MAPS-mt/data/format_ask_kw.py b/spaces/zwhe99/MAPS-mt/data/format_ask_kw.py deleted file mode 100644 index 6a66e32e591f86ed51d323edb7f57d0ab91bc781..0000000000000000000000000000000000000000 --- a/spaces/zwhe99/MAPS-mt/data/format_ask_kw.py +++ /dev/null @@ -1,106 +0,0 @@ -import random -import os -from langcodes import Language -import argparse -from .trigger_sents import SUPPORT_LANGS, TRIGGER_SENTS - -KETWORDS = { - "en": [ - ["Stanford University", "School of Medicine"], - ["JAS 39C Gripen", "commercial flights"], - ["Barça", "Sevilla"], - ["Whitehall", "Downing Street", "Prime Minister's official residence"], - ["Yahoo!", "Microsoft"] - ], - "zh": [ - ["斯坦福大学", "医学院"], - ["JAS 39C 鹰狮战斗机", "商业航班"], - ["巴萨", "塞维利亚队"], - ["白厅", "唐宁街", "首相官邸"], - ["雅虎", "微软"], - ], - "de": [ - ["Stanford Universität", "Medizinische Fakultät"], - ["JAS 39C Gripen", "kommerzielle Flüge"], - ["Barça", "Sevilla"], - ["Whitehall", "Downing Straße", "offizielle Residenz des Premierministers"], - ["Yahoo!", "Microsoft"], - ], - "ja": [ - ["スタンフォード大学", "医学部"], - ["JAS 39C Gripen", "商用フライト"], - ["バルサ", "セビージャ"], - ["ホワイトホール", "ダウニングストリート", "首相官邸"], - ["ヤフー", "マイクロソフト"] - ], - "fr": [ - ["Université Stanford", "l'école de médecine"], - ["JAS 39C Gripen", "les vols commerciaux"], - ["Barça", "Sevilla"], - ["Whitehall", "Downing Street", "la résidence officielle du Premier ministre"], - ["Yahoo!", "Microsoft"] - ] -} - -demo_dict = {} -for src_lng in SUPPORT_LANGS: - for tgt_lng in SUPPORT_LANGS: - if src_lng == tgt_lng: - continue - else: - demo_dict[(src_lng, tgt_lng)] = [ - (tri_sent, ", ".join([f"{src_kw}={tgt_kw}" for src_kw, tgt_kw in zip(src_kw_lst, tgt_kw_lst)])) - for tri_sent, src_kw_lst, tgt_kw_lst in zip(TRIGGER_SENTS[src_lng], KETWORDS[src_lng], KETWORDS[tgt_lng]) - ] - -def parse_args(): - parser = argparse.ArgumentParser("", formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument('-w', "--workspace", type=str, default=os.path.join(os.path.dirname(os.path.abspath(__file__)), '..'), help="Workspace dir") - parser.add_argument('-tn', "--test-name", type=str, required=True, help="wmt22/wmt21/...") - parser.add_argument("--seed", type=int, default=0) - parser.add_argument('-s', "--src", type=str, required=True, help='source lang') - parser.add_argument('-t', "--tgt", type=str, required=True, help='target lang') - return parser.parse_args() - -def main(args): - workspace = args.workspace - data_dir=os.path.join(workspace, "data") - raw_dir=os.path.join(data_dir, "raw") - format_dir=os.path.join(data_dir, "format") - test_name = args.test_name - seed = args.seed - src = args.src - tgt = args.tgt - src_full = Language.make(language=src).display_name() - tgt_full = Language.make(language=tgt).display_name() - - # seed random - random.seed(seed) - - # read files - with open(os.path.join(raw_dir, f"{test_name}.{src}-{tgt}.{src}")) as test_src_f: - - test_src_lines = [l.strip() for l in test_src_f.readlines()] - out_file_path = os.path.join(format_dir, f"{test_name}.{src}-{tgt}.{src}.ask-kw") - - demos = demo_dict[(src, tgt)] - with open(out_file_path, 'w') as out_f: - for id, src_line in enumerate(test_src_lines): - all_items = demos + [(src_line, None)] - prompt_lst = [] - for it in all_items: - it_src, it_kw = it - s = f"Let's extract the keywords in the following {src_full} sentence, and then translate these keywords into {tgt_full}.\n" + \ - 
f"{src_full}: {it_src}\n" + \ - (f"Keyword Pairs: {it_kw}" if it_kw else "Keyword Pairs:") - prompt_lst.append(s) - - prompt = "\n\n".join(prompt_lst) - out_f.write( - f"{id:04}\n" - f"{prompt}\n\n\n" - ) - -if __name__ == "__main__": - args = parse_args() - main(args) \ No newline at end of file
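For reference, the few-shot prompt that main() assembles for each source sentence can be sketched as below. This is a self-contained illustration in which the demonstration and input sentences are hypothetical stand-ins (the real trigger sentences are imported from trigger_sents), while the keyword-pair format mirrors the KETWORDS entries above:

```python
# Sketch of the prompt format produced by main() for an en->zh request.
# The two sentences are made-up stand-ins; keyword pairs follow KETWORDS.
demos = [
    (
        "Stanford University School of Medicine announced a new study.",
        "Stanford University=斯坦福大学, School of Medicine=医学院",
    )
]
src_line = "Yahoo! and Microsoft announced a new partnership."

prompt_lst = []
for it_src, it_kw in demos + [(src_line, None)]:
    prompt_lst.append(
        "Let's extract the keywords in the following English sentence, "
        "and then translate these keywords into Chinese.\n"
        f"English: {it_src}\n"
        + (f"Keyword Pairs: {it_kw}" if it_kw else "Keyword Pairs:")
    )

print("\n\n".join(prompt_lst))
```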